Fix errors
* Fix home page name
* Remove untranslated
* Localize left panel in Getting Started
* Fix left menu button on mobile and add redirect
* Implement review suggestions

Co-authored-by: Anastasiya Kulyk <nastya.kulyk@gmail.com>
Co-authored-by: Maksym Vlasov <MaxymVlasov@users.noreply.github.com>

pull/20020/head
parent
e91001782c
commit
e29dbe476d
README-uk.md
@@ -3,7 +3,7 @@
[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)

Вітаємо! В цьому репозиторії міститься все необхідне для роботи над [вебсайтом і документацією Kubernetes](https://kubernetes.io/). Ми щасливі, що ви хочете зробити свій внесок!
Вітаємо! В цьому репозиторії міститься все необхідне для роботи над [сайтом і документацією Kubernetes](https://kubernetes.io/). Ми щасливі, що ви хочете зробити свій внесок!

## Внесок у документацію
@@ -21,11 +21,11 @@
## Запуск сайту локально за допомогою Docker

Для локального запуску вебсайту Kubernetes рекомендовано запустити спеціальний [Docker](https://docker.com)-образ, що містить генератор статичних сайтів [Hugo](https://gohugo.io).
Для локального запуску сайту Kubernetes рекомендовано запустити спеціальний [Docker](https://docker.com)-образ, що містить генератор статичних сайтів [Hugo](https://gohugo.io).

> Якщо ви працюєте під Windows, вам знадобиться ще декілька інструментів, які можна встановити за допомогою [Chocolatey](https://chocolatey.org). `choco install make`

> Якщо ви вважаєте кращим запустити вебсайт локально без використання Docker, дивіться пункт нижче [Запуск сайту локально за допомогою Hugo](#запуск-сайту-локально-зa-допомогою-hugo).
> Якщо ви вважаєте кращим запустити сайт локально без використання Docker, дивіться пункт нижче [Запуск сайту локально за допомогою Hugo](#запуск-сайту-локально-зa-допомогою-hugo).

Якщо у вас вже [запущений](https://www.docker.com/get-started) Docker, зберіть локальний Docker-образ `kubernetes-hugo`:
@@ -33,25 +33,25 @@
make docker-image
```

Після того, як образ зібрано, ви можете запустити вебсайт локально:
Після того, як образ зібрано, ви можете запустити сайт локально:

```bash
make docker-serve
```

Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
Відкрийте у своєму браузері http://localhost:1313, щоб побачити сайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує сайт відповідно до внесених змін і оновлює сторінку у браузері.

## Запуск сайту локально зa допомогою Hugo

Для інструкцій по установці Hugo дивіться [офіційну документацію](https://gohugo.io/getting-started/installing/). Обов’язково встановіть розширену версію Hugo, яка позначена змінною оточення `HUGO_VERSION` у файлі [`netlify.toml`](netlify.toml#L9).

Після установки Hugo запустіть вебсайт локально командою:
Після установки Hugo, запустіть сайт локально командою:

```bash
make serve
```

Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити вебсайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує вебсайт відповідно до внесених змін і оновлює сторінку у браузері.
Команда запустить локальний Hugo-сервер на порту 1313. Відкрийте у своєму браузері http://localhost:1313, щоб побачити сайт. По мірі того, як ви змінюєте вихідний код, Hugo актуалізує сайт відповідно до внесених змін і оновлює сторінку у браузері.

## Спільнота, обговорення, внесок і підтримка
@@ -68,4 +68,4 @@ make serve
## Дякуємо!

Долучення до спільноти - запорука успішного розвитку Kubernetes. Ми цінуємо ваш внесок у наш вебсайт і документацію!
Долучення до спільноти - запорука успішного розвитку Kubernetes. Ми цінуємо ваш внесок у наш сайт і документацію!
@@ -1,11 +1,11 @@
---
title: Case Studies
#title: Case Studies
title: Приклади використання
linkTitle: Case Studies
#linkTitle: Case Studies
linkTitle: Приклади використання
bigheader: Kubernetes User Case Studies
#bigheader: Kubernetes User Case Studies
bigheader: Приклади використання Kubernetes від користувачів.
abstract: A collection of users running Kubernetes in production.
#abstract: A collection of users running Kubernetes in production.
abstract: Підбірка користувачів, що використовують Kubernetes для робочих навантажень.
layout: basic
class: gridPage
@@ -26,7 +26,7 @@ weight: 40
<!--Once you've set your desired state, the *Kubernetes Control Plane* makes the cluster's current state match the desired state via the Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). To do so, Kubernetes performs a variety of tasks automatically--such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
-->
Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Генератора подій життєвого циклу Пода ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері:
Після того, як ви задали бажаний стан, *площина управління Kubernetes* приводить поточний стан кластера до бажаного за допомогою Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для цього Kubernetes автоматично виконує ряд задач: запускає або перезапускає контейнери, масштабує кількість реплік у певному застосунку тощо. Площина управління Kubernetes складається із набору процесів, що виконуються у вашому кластері:

<!--* The **Kubernetes Master** is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and [kube-scheduler](/docs/admin/kube-scheduler/).
* Each individual non-master node in your cluster runs two processes:
@@ -57,8 +57,8 @@ Kubernetes оперує певною кількістю абстракцій, щ
-->
До базових об'єктів Kubernetes належать:

* [Под *(Pod)*](/docs/concepts/workloads/pods/pod-overview/)
* [Сервіс *(Service)*](/docs/concepts/services-networking/service/)
* [Pod](/docs/concepts/workloads/pods/pod-overview/)
* [Service](/docs/concepts/services-networking/service/)
* [Volume](/docs/concepts/storage/volumes/)
* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
@@ -75,7 +75,7 @@ Kubernetes оперує певною кількістю абстракцій, щ
<!--## Kubernetes Control Plane
-->

## Площина управління Kubernetes (*Kubernetes Control Plane*)
## Площина управління Kubernetes (*Kubernetes Control Plane*) {#площина-управління-kubernetes}
@@ -1,623 +0,0 @@
---
title: Managing Compute Resources for Containers
content_template: templates/concept
weight: 20
feature:
  # title: Automatic bin packing
  title: Автоматичне пакування у контейнери
  # description: >
  #   Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
  description: >
    Автоматичне розміщення контейнерів з огляду на їхні потреби у ресурсах та інші обмеження, при цьому не поступаючись доступністю. Поєднання критичних і "найкращих з можливих" робочих навантажень для ефективнішого використання і більшого заощадження ресурсів.
---

{{% capture overview %}}

When you specify a [Pod](/docs/concepts/workloads/pods/pod/), you can optionally specify how much CPU and memory (RAM) each Container needs. When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when Containers have their limits specified, contention for resources on a node can be handled in a specified manner. For more details about the difference between requests and limits, see [Resource QoS](https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md).

{{% /capture %}}

{{% capture body %}}
## Resource types

*CPU* and *memory* are each a *resource type*. A resource type has a base unit. CPU is specified in units of cores, and memory is specified in units of bytes. If you're using Kubernetes v1.14 or newer, you can specify _huge page_ resources. Huge pages are a Linux-specific feature where the node kernel allocates blocks of memory that are much larger than the default page size.

For example, on a system where the default page size is 4KiB, you could specify a limit, `hugepages-2Mi: 80Mi`. If the container tries allocating over 40 2MiB huge pages (a total of 80 MiB), that allocation fails.
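The 80 Mi / 40-page arithmetic above can be sketched as follows (a minimal illustration; the helper name is invented, not a Kubernetes API):

```python
# Huge-page limits admit only whole pages: a hugepages-2Mi limit of 80Mi
# allows at most 40 pages of 2MiB each; a 41st page exceeds the limit.

MIB = 1024 * 1024

def max_huge_pages(limit_bytes: int, page_bytes: int) -> int:
    """Number of whole huge pages that fit inside the limit."""
    return limit_bytes // page_bytes

print(max_huge_pages(80 * MIB, 2 * MIB))  # 40
```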
{{< note >}}
You cannot overcommit `hugepages-*` resources. This is different from the `memory` and `cpu` resources.
{{< /note >}}

CPU and memory are collectively referred to as *compute resources*, or just *resources*. Compute resources are measurable quantities that can be requested, allocated, and consumed. They are distinct from [API resources](/docs/concepts/overview/kubernetes-api/). API resources, such as Pods and [Services](/docs/concepts/services-networking/service/), are objects that can be read and modified through the Kubernetes API server.
## Resource requests and limits of Pod and Container

Each Container of a Pod can specify one or more of the following:

* `spec.containers[].resources.limits.cpu`
* `spec.containers[].resources.limits.memory`
* `spec.containers[].resources.limits.hugepages-<size>`
* `spec.containers[].resources.requests.cpu`
* `spec.containers[].resources.requests.memory`
* `spec.containers[].resources.requests.hugepages-<size>`

Although requests and limits can only be specified on individual Containers, it is convenient to talk about Pod resource requests and limits. A *Pod resource request/limit* for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod.
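The summation rule above can be sketched in a few lines (illustrative only; the data layout and function name are made up, not the Kubernetes API):

```python
# A Pod's request/limit for a resource type is the sum over its Containers.

def pod_total(containers, field, resource):
    """Sum one resource (e.g. 'cpu' in millicores) across all containers."""
    return sum(c[field].get(resource, 0) for c in containers)

containers = [
    {"requests": {"cpu": 250, "memory": 64}, "limits": {"cpu": 500, "memory": 128}},
    {"requests": {"cpu": 250, "memory": 64}, "limits": {"cpu": 500, "memory": 128}},
]

print(pod_total(containers, "requests", "cpu"))   # 500 (millicores)
print(pod_total(containers, "limits", "memory"))  # 256 (MiB)
```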
## Meaning of CPU

Limits and requests for CPU resources are measured in *cpu* units. One cpu, in Kubernetes, is equivalent to:

- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 IBM vCPU
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading

Fractional requests are allowed. A Container with `spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much CPU as one that asks for 1 CPU. The expression `0.1` is equivalent to the expression `100m`, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing. A request with a decimal point, like `0.1`, is converted to `100m` by the API, and precision finer than `1m` is not allowed. For this reason, the form `100m` might be preferred.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
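A rough sketch of the decimal-to-millicpu conversion described above (the helper is invented for illustration; the real conversion happens inside the API server):

```python
from decimal import Decimal

def to_millicpu(cpu: str) -> int:
    """Convert '0.1' or '100m' to whole millicpu; finer than 1m is rejected."""
    value = Decimal(cpu[:-1]) if cpu.endswith("m") else Decimal(cpu) * 1000
    if value != value.to_integral_value():
        raise ValueError("precision finer than 1m is not allowed")
    return int(value)

print(to_millicpu("0.1"))   # 100
print(to_millicpu("100m"))  # 100
print(to_millicpu("0.5"))   # 500
```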
## Meaning of memory

Limits and requests for `memory` are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

```shell
128974848, 129e6, 129M, 123Mi
```
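The suffix arithmetic can be sketched like this (an illustrative parser, not the canonical Kubernetes quantity parser):

```python
# Decimal suffixes are powers of ten; the "i" variants are powers of two.
SUFFIXES = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15, "E": 10**18,
            "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40, "Pi": 2**50, "Ei": 2**60}

def to_bytes(quantity: str) -> int:
    # Try two-character suffixes ("Mi") before one-character ones ("M").
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(float(quantity[: -len(suffix)]) * SUFFIXES[suffix])
    return int(float(quantity))  # plain integer or scientific notation like 129e6

print(to_bytes("123Mi"))  # 128974848
print(to_bytes("129M"))   # 129000000
print(to_bytes("129e6"))  # 129000000
```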
Here's an example. The following Pod has two Containers. Each Container has a request of 0.25 cpu and 64MiB (2<sup>26</sup> bytes) of memory. Each Container has a limit of 0.5 cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128 MiB of memory, and a limit of 1 cpu and 256MiB of memory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
## How Pods with resource requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that even if actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
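The capacity check can be sketched as follows (a deliberately simplified model; real scheduling considers far more than this, and all names here are invented):

```python
# A pod "fits" only if, in every resource dimension, the requests already
# scheduled on the node plus the pod's own requests stay within capacity.

def fits(node_allocatable, scheduled_requests, pod_request):
    return all(
        scheduled_requests.get(r, 0) + pod_request.get(r, 0) <= cap
        for r, cap in node_allocatable.items()
    )

node = {"cpu_m": 1800, "memory_mi": 7299}
already = {"cpu_m": 680, "memory_mi": 920}

print(fits(node, already, {"cpu_m": 1000, "memory_mi": 512}))  # True
print(fits(node, already, {"cpu_m": 1200, "memory_mi": 512}))  # False: 1880m > 1800m
```

Note that the check uses *requests*, not observed usage, which is why a nearly idle node can still reject a Pod.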
## How Pods with resource limits are run

When the kubelet starts a Container of a Pod, it passes the CPU and memory limits to the container runtime.

When using Docker:

- The `spec.containers[].resources.requests.cpu` is converted to its core value, which is potentially fractional, and multiplied by 1024. The greater of this number or 2 is used as the value of the [`--cpu-shares`](https://docs.docker.com/engine/reference/run/#cpu-share-constraint) flag in the `docker run` command.

- The `spec.containers[].resources.limits.cpu` is converted to its millicore value and multiplied by 100. The resulting value is the total amount of CPU time that a container can use every 100ms. A container cannot use more than its share of CPU time during this interval.

{{< note >}}
The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
{{< /note >}}

- The `spec.containers[].resources.limits.memory` is converted to an integer, and used as the value of the [`--memory`](https://docs.docker.com/engine/reference/run/#/user-memory-constraints) flag in the `docker run` command.
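The CPU conversions above can be written out numerically (a sketch using the constants stated in the text; the function names are invented):

```python
def cpu_shares(request_cores: float) -> int:
    """--cpu-shares: request in cores * 1024, with a floor of 2."""
    return max(2, int(request_cores * 1024))

def cpu_quota_us(limit_millicores: int) -> int:
    """CPU time (microseconds) allowed per 100ms period: millicores * 100."""
    return limit_millicores * 100

print(cpu_shares(0.25))   # 256
print(cpu_shares(0.001))  # 2 (the floor applies)
print(cpu_quota_us(500))  # 50000, i.e. 50ms of CPU time every 100ms
```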
If a Container exceeds its memory limit, it might be terminated. If it is restartable, the kubelet will restart it, as with any other type of runtime failure.

If a Container exceeds its memory request, it is likely that its Pod will be evicted whenever the node runs out of memory.

A Container might or might not be allowed to exceed its CPU limit for extended periods of time. However, it will not be killed for excessive CPU usage.

To determine whether a Container cannot be scheduled or is being killed due to resource limits, see the [Troubleshooting](#troubleshooting) section.
## Monitoring compute resource usage

The resource usage of a Pod is reported as part of the Pod status.

If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md) is configured for your cluster, then Pod resource usage can be retrieved from the monitoring system.
## Troubleshooting

### My Pods are pending with event message failedScheduling

If the scheduler cannot find any node where a Pod can fit, the Pod remains unscheduled until a place can be found. An event is produced each time the scheduler fails to find a place for the Pod, like this:

```shell
kubectl describe pod frontend | grep -A 3 Events
```
```
Events:
  FirstSeen LastSeen   Count  From          SubobjectPath   Reason      Message
  36s   5s     6        {scheduler }              FailedScheduling  Failed for reason PodExceedsFreeCPU and possibly others
```

In the preceding example, the Pod named "frontend" fails to be scheduled due to insufficient CPU resource on the node. Similar error messages can also suggest failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod is pending with a message of this type, there are several things to try:

- Add more nodes to the cluster.
- Terminate unneeded Pods to make room for pending Pods.
- Check that the Pod is not larger than all the nodes. For example, if all the nodes have a capacity of `cpu: 1`, then a Pod with a request of `cpu: 1.1` will never be scheduled.
You can check node capacities and amounts allocated with the `kubectl describe nodes` command. For example:

```shell
kubectl describe nodes e2e-test-node-pool-4lw4
```
```
Name:            e2e-test-node-pool-4lw4
[ ... lines removed for clarity ...]
Capacity:
 cpu:      2
 memory:   7679792Ki
 pods:     110
Allocatable:
 cpu:      1800m
 memory:   7474992Ki
 pods:     110
[ ... lines removed for clarity ...]
Non-terminated Pods:        (5 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                  ------------  ----------  ---------------  -------------
  kube-system  fluentd-gcp-v1.38-28bv1               100m (5%)     0 (0%)      200Mi (2%)       200Mi (2%)
  kube-system  kube-dns-3297075139-61lj3             260m (13%)    0 (0%)      100Mi (1%)       170Mi (2%)
  kube-system  kube-proxy-e2e-test-...               100m (5%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  monitoring-influxdb-grafana-v4-z1m12  200m (10%)    200m (10%)  600Mi (8%)       600Mi (8%)
  kube-system  node-problem-detector-v0.1-fj7m3      20m (1%)      200m (10%)  20Mi (0%)        100Mi (1%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ------------    ----------    ---------------    -------------
  680m (34%)      400m (20%)    920Mi (12%)        1070Mi (14%)
```

In the preceding output, you can see that if a Pod requests more than 1120m CPUs or 6.23Gi of memory, it will not fit on the node.

By looking at the `Pods` section, you can see which Pods are taking up space on the node.

The amount of resources available to Pods is less than the node capacity, because system daemons use a portion of the available resources. The `allocatable` field of [NodeStatus](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodestatus-v1-core) gives the amount of resources that are available to Pods. For more information, see [Node Allocatable Resources](https://git.k8s.io/community/contributors/design-proposals/node/node-allocatable.md).

The [resource quota](/docs/concepts/policy/resource-quotas/) feature can be configured to limit the total amount of resources that can be consumed. If used in conjunction with namespaces, it can prevent one team from hogging all the resources.
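The 1120m and 6.23Gi headroom figures quoted earlier can be reproduced from the `kubectl describe nodes` output (a worked example, assuming headroom = allocatable minus already-requested):

```python
alloc_cpu_m = 1800        # Allocatable cpu: 1800m
alloc_mem_ki = 7474992    # Allocatable memory: 7474992Ki
req_cpu_m = 680           # CPU Requests: 680m
req_mem_ki = 920 * 1024   # Memory Requests: 920Mi, in Ki

print(alloc_cpu_m - req_cpu_m)                        # 1120 millicores of headroom
print(round((alloc_mem_ki - req_mem_ki) / 2**20, 2))  # 6.23 Gi of headroom
```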
### My Container is terminated

Your Container might get terminated because it is resource-starved. To check whether a Container is being killed because it is hitting a resource limit, call `kubectl describe pod` on the Pod of interest:

```shell
kubectl describe pod simmemleak-hra99
```
```
Name:                           simmemleak-hra99
Namespace:                      default
Image(s):                       saadali/simmemleak
Node:                           kubernetes-node-tf0f/10.240.216.66
Labels:                         name=simmemleak
Status:                         Running
Reason:
Message:
IP:                             10.244.2.75
Replication Controllers:        simmemleak (1/1 replicas created)
Containers:
  simmemleak:
    Image:  saadali/simmemleak
    Limits:
      cpu:                      100m
      memory:                   50Mi
    State:                      Running
      Started:                  Tue, 07 Jul 2015 12:54:41 -0700
    Last Termination State:     Terminated
      Exit Code:                1
      Started:                  Fri, 07 Jul 2015 12:54:30 -0700
      Finished:                 Fri, 07 Jul 2015 12:54:33 -0700
    Ready:                      False
    Restart Count:              5
Conditions:
  Type      Status
  Ready     False
Events:
  FirstSeen                        LastSeen                         Count  From                            SubobjectPath                      Reason     Message
  Tue, 07 Jul 2015 12:53:51 -0700  Tue, 07 Jul 2015 12:53:51 -0700  1      {scheduler }                                                       scheduled  Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
  Tue, 07 Jul 2015 12:53:51 -0700  Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}  implicitly required container POD  pulled     Pod container image "k8s.gcr.io/pause:0.8.0" already present on machine
  Tue, 07 Jul 2015 12:53:51 -0700  Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}  implicitly required container POD  created    Created with docker id 6a41280f516d
  Tue, 07 Jul 2015 12:53:51 -0700  Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}  implicitly required container POD  started    Started with docker id 6a41280f516d
  Tue, 07 Jul 2015 12:53:51 -0700  Tue, 07 Jul 2015 12:53:51 -0700  1      {kubelet kubernetes-node-tf0f}  spec.containers{simmemleak}        created    Created with docker id 87348f12526a
```

In the preceding example, the `Restart Count: 5` indicates that the `simmemleak` Container in the Pod was terminated and restarted five times.

You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status of previously terminated Containers:

```shell
kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
```
```
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
```

You can see that the Container was terminated because of `reason:OOM Killed`, where `OOM` stands for Out Of Memory.
## Local ephemeral storage
{{< feature-state state="beta" >}}

Kubernetes version 1.8 introduces a new resource, _ephemeral-storage_, for managing local ephemeral storage. In each Kubernetes node, the kubelet's root directory (`/var/lib/kubelet` by default) and log directory (`/var/log`) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.

This partition is "ephemeral" and applications cannot expect any performance SLAs (disk IOPS, for example) from this partition. Local ephemeral storage management only applies to the root partition; the optional partition for image layers and writable layers is out of scope.

{{< note >}}
If an optional runtime partition is used, the root partition will not hold any image layers or writable layers.
{{< /note >}}

### Requests and limits setting for local ephemeral storage

Each Container of a Pod can specify one or more of the following:

* `spec.containers[].resources.limits.ephemeral-storage`
* `spec.containers[].resources.requests.ephemeral-storage`

Limits and requests for `ephemeral-storage` are measured in bytes. You can express storage as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

```shell
128974848, 129e6, 129M, 123Mi
```

For example, the following Pod has two Containers. Each Container has a request of 2GiB of local ephemeral storage. Each Container has a limit of 4GiB of local ephemeral storage. Therefore, the Pod has a request of 4GiB of local ephemeral storage, and a limit of 8GiB of storage.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
  - name: wp
    image: wordpress
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
```

### How Pods with ephemeral-storage requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).

The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node.

### How Pods with ephemeral-storage limits run

For container-level isolation, if a Container's writable layer and log usage exceeds its storage limit, the Pod will be evicted. For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the limit, the Pod will be evicted.
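The pod-level check described above can be sketched as follows (names are invented for illustration; the real accounting is performed by the kubelet):

```python
# A pod exceeds its ephemeral-storage limit when the usage of all its
# containers plus its emptyDir volumes passes the pod-level limit.

def pod_over_limit(container_usage_bytes, emptydir_usage_bytes, pod_limit_bytes):
    return sum(container_usage_bytes) + emptydir_usage_bytes > pod_limit_bytes

GIB = 2**30
print(pod_over_limit([3 * GIB, 3 * GIB], 3 * GIB, 8 * GIB))  # True: 9GiB > 8GiB
print(pod_over_limit([2 * GIB, 2 * GIB], 1 * GIB, 8 * GIB))  # False: 5GiB fits
```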
### Monitoring ephemeral-storage consumption

When local ephemeral storage is used, it is monitored on an ongoing basis by the kubelet. The monitoring is performed by scanning each emptyDir volume, log directories, and writable layers on a periodic basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log directories or writable layers) may, at the cluster operator's option, be managed by use of [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html). Project quotas were originally implemented in XFS, and have more recently been ported to ext4fs. Project quotas can be used for both monitoring and enforcement; as of Kubernetes 1.16, they are available as alpha functionality for monitoring only.

Quotas are faster and more accurate than directory scanning. When a directory is assigned to a project, all files created under that directory are created in that project, and the kernel merely has to keep track of how many blocks are in use by files in that project. If a file is created and deleted but still has an open file descriptor, it continues to consume space. This space will be tracked by the quota, but will not be seen by a directory scan.

Kubernetes uses project IDs starting from 1048576. The IDs in use are registered in `/etc/projects` and `/etc/projid`. If project IDs in this range are used for other purposes on the system, those project IDs must be registered in `/etc/projects` and `/etc/projid` to prevent Kubernetes from using them.

To enable use of project quotas, the cluster operator must do the following:

* Enable the `LocalStorageCapacityIsolationFSQuotaMonitoring=true` feature gate in the kubelet configuration. This defaults to `false` in Kubernetes 1.16, so it must be explicitly set to `true`.

* Ensure that the root partition (or optional runtime partition) is built with project quotas enabled. All XFS filesystems support project quotas, but ext4 filesystems must be built specially.

* Ensure that the root partition (or optional runtime partition) is mounted with project quotas enabled.
#### Building and mounting filesystems with project quotas enabled

XFS filesystems require no special action when building; they are automatically built with project quotas enabled.

Ext4fs filesystems must be built with quotas enabled, then the quota feature must be enabled in the filesystem:

```shell
sudo mkfs.ext4 other_ext4fs_args... -E quotatype=prjquota /dev/block_device
sudo tune2fs -O project -Q prjquota /dev/block_device
```

To mount the filesystem, both ext4fs and XFS require the `prjquota` option set in `/etc/fstab`:

```
/dev/block_device	/var/kubernetes_data	defaults,prjquota	0	0
```
## Extended resources

Extended resources are fully-qualified resource names outside the `kubernetes.io` domain. They allow cluster operators to advertise, and users to consume, resources that are not built into Kubernetes.

There are two steps required to use Extended Resources. First, the cluster operator must advertise an Extended Resource. Second, users must request the Extended Resource in Pods.

### Managing extended resources

#### Node-level extended resources

Node-level extended resources are tied to nodes.

##### Device plugin managed resources
See [Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) for how to advertise device plugin managed resources on each node.

##### Other resources
To advertise a new node-level extended resource, the cluster operator can submit a `PATCH` HTTP request to the API server to specify the available quantity in the `status.capacity` for a node in the cluster. After this operation, the node's `status.capacity` will include a new resource. The `status.allocatable` field is updated automatically with the new resource asynchronously by the kubelet. Note that because the scheduler uses the node's `status.allocatable` value when evaluating Pod fitness, there may be a short delay between patching the node capacity with a new resource and the first Pod that requests the resource being scheduled on that node.
||||
|
||||
**Example:**

Here is an example showing how to use `curl` to form an HTTP request that
advertises five "example.com/foo" resources on node `k8s-node-1` whose master
is `k8s-master`.

```shell
curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1foo", "value": "5"}]' \
http://k8s-master:8080/api/v1/nodes/k8s-node-1/status
```

{{< note >}}
In the preceding request, `~1` is the encoding for the character `/`
in the patch path. The operation path value in JSON-Patch is interpreted as a
JSON-Pointer. For more details, see
[IETF RFC 6901, section 3](https://tools.ietf.org/html/rfc6901#section-3).
{{< /note >}}
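The `~1` escaping described in the note can be undone in one line. A minimal sketch of RFC 6901 reference-token decoding (a hypothetical helper, not part of any Kubernetes client library):

```python
# Sketch: decode a JSON-Pointer reference token per RFC 6901.
# "~1" encodes "/" and "~0" encodes "~".
def decode_token(token: str) -> str:
    # Replace "~1" first, then "~0", so "~01" decodes to "~1" and not "/".
    return token.replace("~1", "/").replace("~0", "~")

print(decode_token("example.com~1foo"))  # example.com/foo
```

The substitution order matters: doing `~0` first would turn `~01` into `/` instead of `~1`.
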

#### Cluster-level extended resources

Cluster-level extended resources are not tied to nodes. They are usually managed
by scheduler extenders, which handle the resource consumption and resource quota.

You can specify the extended resources that are handled by scheduler extenders
in the [scheduler policy
configuration](https://github.com/kubernetes/kubernetes/blob/release-1.10/pkg/scheduler/api/v1/types.go#L31).

**Example:**

The following configuration for a scheduler policy indicates that the
cluster-level extended resource "example.com/foo" is handled by the scheduler
extender.

- The scheduler sends a Pod to the scheduler extender only if the Pod requests
  "example.com/foo".
- The `ignoredByScheduler` field specifies that the scheduler does not check
  the "example.com/foo" resource in its `PodFitsResources` predicate.

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "<extender-endpoint>",
      "bindVerb": "bind",
      "managedResources": [
        {
          "name": "example.com/foo",
          "ignoredByScheduler": true
        }
      ]
    }
  ]
}
```

### Consuming extended resources

Users can consume extended resources in Pod specs just like CPU and memory.
The scheduler takes care of the resource accounting so that no more than the
available amount is simultaneously allocated to Pods.

The API server restricts quantities of extended resources to whole numbers.
Examples of _valid_ quantities are `3`, `3000m` and `3Ki`. Examples of
_invalid_ quantities are `0.5` and `1500m`.

{{< note >}}
Extended resources replace Opaque Integer Resources.
Users can use any domain name prefix other than `kubernetes.io`, which is reserved.
{{< /note >}}

To consume an extended resource in a Pod, include the resource name as a key
in the `spec.containers[].resources.limits` map in the container spec.

{{< note >}}
Extended resources cannot be overcommitted, so request and limit
must be equal if both are present in a container spec.
{{< /note >}}

A Pod is scheduled only if all of the resource requests are satisfied, including
CPU, memory and any extended resources. The Pod remains in the `PENDING` state
as long as the resource request cannot be satisfied.

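The whole-number rule above can be sketched as follows. This is an illustration only (not the API server's actual validation code), and it handles just the suffixes used in the examples:

```python
from fractions import Fraction

# Suffix factors for the quantities in the examples above; the real
# quantity grammar supports more suffixes (k, M, G, Mi, Gi, ...).
SUFFIXES = {"m": Fraction(1, 1000), "Ki": Fraction(1024), "Mi": Fraction(1024**2)}

def is_whole(quantity: str) -> bool:
    # A quantity is valid for an extended resource only if it reduces
    # to an integer once its suffix factor is applied.
    for suffix, factor in SUFFIXES.items():
        if quantity.endswith(suffix):
            value = Fraction(quantity[: -len(suffix)]) * factor
            break
    else:
        value = Fraction(quantity)
    return value.denominator == 1

print([q for q in ["3", "3000m", "3Ki", "0.5", "1500m"] if is_whole(q)])
# ['3', '3000m', '3Ki']
```

So `3000m` is valid because it equals 3, while `1500m` (1.5) and `0.5` are rejected.
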
**Example:**

The Pod below requests 2 CPUs and 1 "example.com/foo" (an extended resource).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 2
        example.com/foo: 1
      limits:
        example.com/foo: 1
```

{{% /capture %}}

{{% capture whatsnext %}}

* Get hands-on experience [assigning Memory resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/).

* Get hands-on experience [assigning CPU resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/).

* [Container API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)

* [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)

{{% /capture %}}
@@ -1,7 +1,4 @@
---
reviewers:
- bgrant0607
- mikedanese
title: Що таке Kubernetes?
content_template: templates/concept
weight: 10
@@ -1,109 +0,0 @@
---
reviewers:
- lachie83
- khenidak
- aramase
title: IPv4/IPv6 dual-stack
feature:
  title: Подвійний стек IPv4/IPv6
  description: >
    Призначення IPv4- та IPv6-адрес подам і сервісам.

content_template: templates/concept
weight: 70
---

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.16" state="alpha" >}}

IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to {{< glossary_tooltip text="Pods" term_id="pod" >}} and {{< glossary_tooltip text="Services" term_id="service" >}}.

If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses.

{{% /capture %}}

{{% capture body %}}

## Supported Features

Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:

* Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod)
* IPv4 and IPv6 enabled Services (each Service must be for a single address family)
* Pod off-cluster egress routing (e.g. the Internet) via both IPv4 and IPv6 interfaces

## Prerequisites

The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:

* Kubernetes 1.16 or later
* Provider support for dual-stack networking (the cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
* A network plugin that supports dual-stack (such as Kubenet or Calico)
* Kube-proxy running in IPVS mode

## Enable IPv4/IPv6 dual-stack

To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments:

* kube-controller-manager:
  * `--feature-gates="IPv6DualStack=true"`
  * `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>` e.g. `--cluster-cidr=10.244.0.0/16,fc00::/24`
  * `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
  * `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6
* kubelet:
  * `--feature-gates="IPv6DualStack=true"`
* kube-proxy:
  * `--proxy-mode=ipvs`
  * `--cluster-cidrs=<IPv4 CIDR>,<IPv6 CIDR>`
  * `--feature-gates="IPv6DualStack=true"`

{{< caution >}}
If you specify an IPv6 address block larger than a /24 via `--cluster-cidr` on the command line, that assignment will fail.
{{< /caution >}}

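The dual-stack flags above pair exactly one CIDR per address family in a comma-separated value. A small sketch of how such a value splits into its IPv4 and IPv6 halves (a hypothetical helper, not the actual kube-controller-manager parsing code):

```python
import ipaddress

# Sketch: split a dual-stack --cluster-cidr value into its IPv4 and
# IPv6 components, keyed by IP version.
def split_dual_stack_cidrs(flag_value: str):
    cidrs = [ipaddress.ip_network(c) for c in flag_value.split(",")]
    by_family = {net.version: net for net in cidrs}
    return by_family[4], by_family[6]

v4, v6 = split_dual_stack_cidrs("10.244.0.0/16,fc00::/24")
print(v4, v6)  # 10.244.0.0/16 fc00::/24
```

The real components also validate ordering and count; this sketch only shows the one-CIDR-per-family pairing.
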
## Services

If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting the `.spec.ipFamily` field on that Service.
You can only set this field when creating a new Service. Setting the `.spec.ipFamily` field is optional and should only be used if you plan to enable IPv4 and IPv6 {{< glossary_tooltip text="Services" term_id="service" >}} and {{< glossary_tooltip text="Ingresses" term_id="ingress" >}} on your cluster. The configuration of this field is not a requirement for [egress](#egress-traffic) traffic.

{{< note >}}
The default address family for your cluster is the address family of the first service cluster IP range configured via the `--service-cluster-ip-range` flag to the kube-controller-manager.
{{< /note >}}

You can set `.spec.ipFamily` to either:

* `IPv4`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv4`
* `IPv6`: The API server will assign an IP from a `service-cluster-ip-range` that is `ipv6`

The following Service specification does not include the `ipFamily` field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured `service-cluster-ip-range` to this Service.

{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}

The following Service specification includes the `ipFamily` field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured `service-cluster-ip-range` to this Service.

{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}}

For comparison, the following Service specification will be assigned an IPv4 address (also known as a "cluster IP") from the configured `service-cluster-ip-range`.

{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}}

### Type LoadBalancer

On cloud providers which support IPv6 enabled external load balancers, setting the `type` field to `LoadBalancer` in addition to setting the `ipFamily` field to `IPv6` provisions a cloud load balancer for your Service.

## Egress Traffic

The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying {{< glossary_tooltip text="CNI" term_id="cni" >}} provider is able to implement the transport. If you have a Pod that uses non-publicly routable IPv6 and want that Pod to reach off-cluster destinations (e.g. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The [ip-masq-agent](https://github.com/kubernetes-incubator/ip-masq-agent) is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters.

## Known Issues

* Kubenet forces IPv4,IPv6 positional reporting of IPs (--cluster-cidr)

{{% /capture %}}

{{% capture whatsnext %}}

* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking

{{% /capture %}}
@@ -1,188 +0,0 @@
---
reviewers:
- freehan
title: EndpointSlices
feature:
  title: EndpointSlices
  description: >
    Динамічне відстеження мережевих вузлів у кластері Kubernetes.

content_template: templates/concept
weight: 10
---

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.17" state="beta" >}}

_EndpointSlices_ provide a simple way to track network endpoints within a
Kubernetes cluster. They offer a more scalable and extensible alternative to
Endpoints.

{{% /capture %}}

{{% capture body %}}

## EndpointSlice resources {#endpointslice-resource}

In Kubernetes, an EndpointSlice contains references to a set of network
endpoints. The EndpointSlice controller automatically creates EndpointSlices
for a Kubernetes Service when a {{< glossary_tooltip text="selector"
term_id="selector" >}} is specified. These EndpointSlices will include
references to any Pods that match the Service selector. EndpointSlices group
network endpoints together by unique Service and Port combinations.

As an example, here's a sample EndpointSlice resource for the `example`
Kubernetes Service.

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    hostname: pod-1
    topology:
      kubernetes.io/hostname: node-1
      topology.kubernetes.io/zone: us-west2-a
```

By default, EndpointSlices managed by the EndpointSlice controller will have no
more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1
with Endpoints and Services and have similar performance.

EndpointSlices can act as the source of truth for kube-proxy when it comes to
how to route internal traffic. When enabled, they should provide a performance
improvement for services with large numbers of endpoints.

### Address Types

EndpointSlices support three address types:

* IPv4
* IPv6
* FQDN (Fully Qualified Domain Name)

### Topology

Each endpoint within an EndpointSlice can contain relevant topology information.
This is used to indicate where an endpoint is, containing information about the
corresponding Node, zone, and region. When the values are available, the
following topology labels will be set by the EndpointSlice controller:

* `kubernetes.io/hostname` - The name of the Node this endpoint is on.
* `topology.kubernetes.io/zone` - The zone this endpoint is in.
* `topology.kubernetes.io/region` - The region this endpoint is in.

The values of these labels are derived from resources associated with each
endpoint in a slice. The hostname label represents the value of the NodeName
field on the corresponding Pod. The zone and region labels represent the value
of the labels with the same names on the corresponding Node.

### Management

By default, EndpointSlices are created and managed by the EndpointSlice
controller. There are a variety of other use cases for EndpointSlices, such as
service mesh implementations, that could result in other entities or controllers
managing additional sets of EndpointSlices. To ensure that multiple entities can
manage EndpointSlices without interfering with each other, an
`endpointslice.kubernetes.io/managed-by` label is used to indicate the entity
managing an EndpointSlice. The EndpointSlice controller sets
`endpointslice-controller.k8s.io` as the value for this label on all
EndpointSlices it manages. Other entities managing EndpointSlices should also
set a unique value for this label.

### Ownership

In most use cases, EndpointSlices will be owned by the Service that they track
endpoints for. This is indicated by an owner reference on each EndpointSlice as
well as a `kubernetes.io/service-name` label that enables simple lookups of all
EndpointSlices belonging to a Service.

## EndpointSlice Controller

The EndpointSlice controller watches Services and Pods to ensure corresponding
EndpointSlices are up to date. The controller will manage EndpointSlices for
every Service with a selector specified. These will represent the IPs of Pods
matching the Service selector.

### Size of EndpointSlices

By default, EndpointSlices are limited to a size of 100 endpoints each. You can
configure this with the `--max-endpoints-per-slice` {{< glossary_tooltip
text="kube-controller-manager" term_id="kube-controller-manager" >}} flag, up to
a maximum of 1000.

### Distribution of EndpointSlices

Each EndpointSlice has a set of ports that applies to all endpoints within the
resource. When named ports are used for a Service, Pods may end up with
different target port numbers for the same named port, requiring different
EndpointSlices. This is similar to the logic behind how subsets are grouped
with Endpoints.

The controller tries to fill EndpointSlices as full as possible, but does not
actively rebalance them. The logic of the controller is fairly straightforward:

1. Iterate through existing EndpointSlices, remove endpoints that are no longer
   desired and update matching endpoints that have changed.
2. Iterate through EndpointSlices that have been modified in the first step and
   fill them up with any new endpoints needed.
3. If there are still new endpoints left to add, try to fit them into a previously
   unchanged slice and/or create new ones.

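The three steps above can be sketched as follows. This is a deliberately simplified illustration of the reconciliation order, not the actual controller code; slices are plain lists of endpoint names and `MAX` stands in for the per-slice limit (100 by default):

```python
MAX = 100  # per-slice endpoint limit (--max-endpoints-per-slice)

def reconcile(slices, desired):
    desired = set(desired)
    changed = []
    # Step 1: drop endpoints that are no longer desired.
    for s in slices:
        kept = [e for e in s if e in desired]
        if len(kept) != len(s):
            s[:] = kept
            changed.append(s)
        desired -= set(kept)
    new = sorted(desired)
    # Step 2: fill the slices that were already modified in step 1.
    for s in changed:
        take, new = new[:MAX - len(s)], new[MAX - len(s):]
        s.extend(take)
    # Step 3: put any remainder into new slices rather than touching
    # previously unchanged ones (fewer updates beat perfect packing).
    while new:
        slices.append(new[:MAX])
        new = new[MAX:]
    return slices

print(reconcile([["a", "b"], ["c"]], ["a", "c", "d"]))  # [['a', 'd'], ['c']]
```

Note how slice `["c"]` is never rewritten: only slices that already needed an update in step 1 absorb new endpoints in step 2.
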
Importantly, the third step prioritizes limiting EndpointSlice updates over a
perfectly full distribution of EndpointSlices. As an example, if there are 10
new endpoints to add and 2 EndpointSlices with room for 5 more endpoints each,
this approach will create a new EndpointSlice instead of filling up the 2
existing EndpointSlices. In other words, a single EndpointSlice creation is
preferable to multiple EndpointSlice updates.

With kube-proxy running on each Node and watching EndpointSlices, every change
to an EndpointSlice becomes relatively expensive since it will be transmitted to
every Node in the cluster. This approach is intended to limit the number of
changes that need to be sent to every Node, even if it may result in multiple
EndpointSlices that are not full.

In practice, this less-than-ideal distribution should be rare. Most changes
processed by the EndpointSlice controller will be small enough to fit in an
existing EndpointSlice, and if not, a new EndpointSlice is likely going to be
necessary soon anyway. Rolling updates of Deployments also provide a natural
repacking of EndpointSlices, with all Pods and their corresponding endpoints
getting replaced.

## Motivation

The Endpoints API has provided a simple and straightforward way of
tracking network endpoints in Kubernetes. Unfortunately, as Kubernetes clusters
and Services have gotten larger, limitations of that API became more visible.
Most notably, those included challenges with scaling to larger numbers of
network endpoints.

Since all network endpoints for a Service were stored in a single Endpoints
resource, those resources could get quite large. That affected the performance
of Kubernetes components (notably the master control plane) and resulted in
significant amounts of network traffic and processing when Endpoints changed.
EndpointSlices help you mitigate those issues as well as provide an extensible
platform for additional features such as topological routing.

{{% /capture %}}

{{% capture whatsnext %}}

* [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)

{{% /capture %}}
@@ -1,127 +0,0 @@
---
reviewers:
- johnbelamaric
- imroc
title: Service Topology
feature:
  title: Топологія Сервісів
  description: >
    Маршрутизація трафіка Сервісом відповідно до топології кластера.

content_template: templates/concept
weight: 10
---

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.17" state="alpha" >}}

_Service Topology_ enables a service to route traffic based upon the Node
topology of the cluster. For example, a service can specify that traffic be
preferentially routed to endpoints that are on the same Node as the client, or
in the same availability zone.

{{% /capture %}}

{{% capture body %}}

## Introduction

By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to
any backend address for the Service. Since Kubernetes 1.7 it has been possible
to route "external" traffic to the Pods running on the Node that received the
traffic, but this is not supported for `ClusterIP` Services, and more complex
topologies — such as routing zonally — have not been possible. The
_Service Topology_ feature resolves this by allowing the Service creator to
define a policy for routing traffic based upon the Node labels for the
originating and destination Nodes.

By using Node label matching between the source and destination, the operator
may designate groups of Nodes that are "closer" and "farther" from one another,
using whatever metric makes sense for that operator's requirements. For many
operators in public clouds, for example, there is a preference to keep service
traffic within the same zone, because interzonal traffic has a cost associated
with it, while intrazonal traffic does not. Other common needs include being able
to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to
Nodes connected to the same top-of-rack switch for the lowest latency.

## Prerequisites

The following prerequisites are needed in order to enable topology aware service
routing:

* Kubernetes 1.17 or later
* Kube-proxy running in iptables mode or IPVS mode
* Enable [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/)

## Enable Service Topology

To enable service topology, enable the `ServiceTopology` feature gate for
kube-apiserver and kube-proxy:

```
--feature-gates="ServiceTopology=true"
```

## Using Service Topology

If your cluster has Service Topology enabled, you can control Service traffic
routing by specifying the `topologyKeys` field on the Service spec. This field
is a preference-order list of Node labels which will be used to sort endpoints
when accessing this Service. Traffic will be directed to a Node whose value for
the first label matches the originating Node's value for that label. If there is
no backend for the Service on a matching Node, then the second label will be
considered, and so forth, until no labels remain.

If no match is found, the traffic will be rejected, just as if there were no
backends for the Service at all. That is, endpoints are chosen based on the first
topology key with available backends. If this field is specified and all entries
have no backends that match the topology of the client, the service has no
backends for that client and connections should fail. The special value `"*"` may
be used to mean "any topology". This catch-all value, if used, only makes sense
as the last value in the list.

If `topologyKeys` is not specified or empty, no topology constraints will be applied.

Consider a cluster with Nodes that are labeled with their hostname, zone name,
and region name. Then you can set the `topologyKeys` values of a service to direct
traffic as follows.

* Only to endpoints on the same node, failing if no endpoint exists on the node:
  `["kubernetes.io/hostname"]`.
* Preferentially to endpoints on the same node, falling back to endpoints in the
  same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname",
  "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`.
  This may be useful, for example, in cases where data locality is critical.
* Preferentially to the same zone, but falling back to any available endpoint if
  none are available within this zone:
  `["topology.kubernetes.io/zone", "*"]`.

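The selection rule described above can be sketched as a short function (an illustration only, not kube-proxy's actual implementation): walk the keys in order and use the first key for which some endpoint's Node label matches the client Node's label, with `"*"` matching everything.

```python
# endpoints: list of (name, node_labels) pairs for the Service backends.
def pick_endpoints(topology_keys, client_labels, endpoints):
    for key in topology_keys:
        if key == "*":  # catch-all: any topology
            return [name for name, _ in endpoints]
        matched = [name for name, labels in endpoints
                   if key in client_labels
                   and labels.get(key) == client_labels[key]]
        if matched:
            return matched
    return []  # no match on any key: traffic is rejected

eps = [("pod-a", {"topology.kubernetes.io/zone": "us-west2-a"}),
       ("pod-b", {"topology.kubernetes.io/zone": "us-west2-b"})]
client = {"topology.kubernetes.io/zone": "us-west2-b"}
print(pick_endpoints(["topology.kubernetes.io/zone", "*"], client, eps))
# ['pod-b']
```

With a client in a third zone, the same key list would fall through to `"*"` and return both endpoints; without the catch-all it would return an empty list and the connection would fail.
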
## Constraints

* Service topology is not compatible with `externalTrafficPolicy=Local`, and
  therefore a Service cannot use both of these features. It is possible to use
  both features in the same cluster on different Services, just not on the same
  Service.

* Valid topology keys are currently limited to `kubernetes.io/hostname`,
  `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will
  be generalized to other node labels in the future.

* Topology keys must be valid label keys and at most 16 keys may be specified.

* The catch-all value, `"*"`, must be the last value in the topology keys, if
  it is used.

{{% /capture %}}

{{% capture whatsnext %}}

* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)

{{% /capture %}}
@@ -1,736 +0,0 @@
---
reviewers:
- jsafrane
- saad-ali
- thockin
- msau42
title: Persistent Volumes
feature:
  title: Оркестрація сховищем
  description: >
    Автоматично монтує систему збереження даних на ваш вибір: з локального носія даних, із хмарного сховища від провайдера публічних хмарних сервісів, як-от <a href="https://cloud.google.com/storage/">GCP</a> чи <a href="https://aws.amazon.com/products/storage/">AWS</a>, або з мережевого сховища, такого як: NFS, iSCSI, Gluster, Ceph, Cinder чи Flocker.

content_template: templates/concept
weight: 20
---

{{% capture overview %}}

This document describes the current state of `PersistentVolumes` in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.

{{% /capture %}}

{{% capture body %}}

## Introduction

Managing storage is a distinct problem from managing compute instances. The `PersistentVolume` subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: `PersistentVolume` and `PersistentVolumeClaim`.

A `PersistentVolume` (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).

While `PersistentVolumeClaims` allow a user to consume abstract storage
resources, it is common that users need `PersistentVolumes` with varying
properties, such as performance, for different problems. Cluster administrators
need to be able to offer a variety of `PersistentVolumes` that differ in more
ways than just size and access modes, without exposing users to the details of
how those volumes are implemented. For these needs, there is the `StorageClass`
resource.

See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).

## Lifecycle of a volume and claim

PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:

### Provisioning

There are two ways PVs may be provisioned: statically or dynamically.

#### Static
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.

#### Dynamic
When none of the static PVs the administrator created match a user's `PersistentVolumeClaim`,
the cluster may try to dynamically provision a volume specially for the PVC.
This provisioning is based on `StorageClasses`: the PVC must request a
[storage class](/docs/concepts/storage/storage-classes/) and
the administrator must have created and configured that class for dynamic
provisioning to occur. Claims that request the class `""` effectively disable
dynamic provisioning for themselves.

To enable dynamic storage provisioning based on storage class, the cluster administrator
needs to enable the `DefaultStorageClass` [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
on the API server. This can be done, for example, by ensuring that `DefaultStorageClass` is
among the comma-delimited, ordered list of values for the `--enable-admission-plugins` flag of
the API server component. For more information on API server command-line flags,
check the [kube-apiserver](/docs/admin/kube-apiserver/) documentation.

### Binding

A user creates, or in the case of dynamic provisioning, has already created, a `PersistentVolumeClaim` with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, `PersistentVolumeClaim` binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping.

Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.

### Using
|
||||
|
||||
Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod.
|
||||
|
||||
Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their claimed PVs by including a `persistentVolumeClaim` in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
|
||||
|
||||
### Storage Object in Use Protection

The purpose of the Storage Object in Use Protection feature is to ensure that PersistentVolumeClaims (PVCs) in active use by a Pod and PersistentVolumes (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss.

{{< note >}}
A PVC is in active use by a Pod when a Pod object exists that is using the PVC.
{{< /note >}}

If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.

You can see that a PVC is protected when the PVC's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pvc-protection`:

```shell
kubectl describe pvc hostpath
Name:          hostpath
Namespace:     default
StorageClass:  example-hostpath
Status:        Terminating
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class=example-hostpath
               volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath
Finalizers:    [kubernetes.io/pvc-protection]
...
```

You can see that a PV is protected when the PV's status is `Terminating` and the `Finalizers` list includes `kubernetes.io/pv-protection` too:

```shell
kubectl describe pv task-pv-volume
Name:            task-pv-volume
Labels:          type=local
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Terminating
Claim:
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1Gi
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/data
    HostPathType:
Events:          <none>
```

### Reclaiming

When a user is done with their volume, they can delete the PVC objects from the API, which allows reclamation of the resource. The reclaim policy for a `PersistentVolume` tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.

#### Retain

The `Retain` reclaim policy allows for manual reclamation of the resource. When the `PersistentVolumeClaim` is deleted, the `PersistentVolume` still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.

1. Delete the `PersistentVolume`. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
1. Manually clean up the data on the associated storage asset accordingly.
1. Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new `PersistentVolume` with the storage asset definition.

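For example, a statically provisioned PV that should survive claim deletion could be declared as follows. This is a minimal sketch; the PV name, NFS server address, and export path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retained
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # Keep the volume and its data after the bound PVC is deleted,
  # so an administrator can reclaim it manually.
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /exports/data
    server: 192.0.2.10
```
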
#### Delete

For volume plugins that support the `Delete` reclaim policy, deletion removes both the `PersistentVolume` object from Kubernetes and the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the [reclaim policy of their `StorageClass`](#reclaim-policy), which defaults to `Delete`. The administrator should configure the `StorageClass` according to users' expectations; otherwise, the PV must be edited or patched after it is created. See [Change the Reclaim Policy of a PersistentVolume](/docs/tasks/administer-cluster/change-pv-reclaim-policy/).

#### Recycle

{{< warning >}}
The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning.
{{< /warning >}}

If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.

However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command-line arguments, as described in the [kube-controller-manager](/docs/admin/kube-controller-manager/) documentation. The custom recycler Pod template must contain a `volumes` specification, as shown in the example below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: "k8s.gcr.io/busybox"
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
```

The particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled.

### Expanding Persistent Volumes Claims

{{< feature-state for_k8s_version="v1.11" state="beta" >}}

Support for expanding PersistentVolumeClaims (PVCs) is now enabled by default. You can expand
the following types of volumes:

* gcePersistentDisk
* awsElasticBlockStore
* Cinder
* glusterfs
* rbd
* Azure File
* Azure Disk
* Portworx
* FlexVolumes
* CSI

You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to `true`.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-vol-default
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.10.100:8080"
  restuser: ""
  secretNamespace: ""
  secretName: ""
allowVolumeExpansion: true
```

To request a larger volume for a PVC, edit the PVC object and specify a larger
size. This triggers expansion of the volume that backs the underlying `PersistentVolume`. A
new `PersistentVolume` is never created to satisfy the claim. Instead, an existing volume is resized.

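One way to edit the claim in place is with `kubectl patch`; the claim name `myclaim` and the new size below are placeholders:

```shell
# Request 10Gi for the (hypothetical) claim named "myclaim".
# The expansion only proceeds if the storage class allows it.
kubectl patch pvc myclaim -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```
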
#### CSI Volume expansion

{{< feature-state for_k8s_version="v1.16" state="beta" >}}

Support for expanding CSI volumes is enabled by default, but it also requires the specific CSI driver to support volume expansion. Refer to the documentation of the specific CSI driver for more information.

#### Resizing a volume containing a file system

You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4.

When a volume contains a file system, the file system is only resized when a new Pod is using
the `PersistentVolumeClaim` in ReadWrite mode. File system expansion is either done when a Pod is starting up
or when a Pod is running and the underlying file system supports online expansion.

FlexVolumes allow resize if the driver's `RequiresFSResize` capability is set to `true`.
The FlexVolume can be resized on Pod restart.

#### Resizing an in-use PersistentVolumeClaim

{{< feature-state for_k8s_version="v1.15" state="beta" >}}

{{< note >}}
Expanding in-use PVCs is available as beta since Kubernetes 1.15, and as alpha since 1.11. The `ExpandInUsePersistentVolumes` feature must be enabled, which is the case automatically for many clusters for beta features. Refer to the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) documentation for more information.
{{< /note >}}

In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC.
Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded.
This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that
uses the PVC before the expansion can complete.

Similar to other volume types, FlexVolume volumes can also be expanded when in-use by a Pod.

{{< note >}}
FlexVolume resize is possible only when the underlying driver supports resize.
{{< /note >}}

{{< note >}}
Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume quota of one modification every 6 hours.
{{< /note >}}

## Types of Persistent Volumes

`PersistentVolume` types are implemented as plugins. Kubernetes currently supports the following plugins:

* GCEPersistentDisk
* AWSElasticBlockStore
* AzureFile
* AzureDisk
* CSI
* FC (Fibre Channel)
* FlexVolume
* Flocker
* NFS
* iSCSI
* RBD (Ceph Block Device)
* CephFS
* Cinder (OpenStack block storage)
* Glusterfs
* VsphereVolume
* Quobyte Volumes
* HostPath (Single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
* Portworx Volumes
* ScaleIO Volumes
* StorageOS

## Persistent Volumes

Each PV contains a spec and status, which is the specification and status of the volume.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
```

### Capacity

Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`.

Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.

### Volume Mode

{{< feature-state for_k8s_version="v1.13" state="beta" >}}

Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume.
Now, you can set the value of `volumeMode` to `Block` to use a raw block device, or `Filesystem`
to use a filesystem. `Filesystem` is the default if the value is omitted. This is an optional API
parameter.

### Access Modes

A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.

The access modes are:

* ReadWriteOnce -- the volume can be mounted as read-write by a single node
* ReadOnlyMany -- the volume can be mounted read-only by many nodes
* ReadWriteMany -- the volume can be mounted as read-write by many nodes

In the CLI, the access modes are abbreviated to:

* RWO - ReadWriteOnce
* ROX - ReadOnlyMany
* RWX - ReadWriteMany

> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.

| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany |
| :--- | :---: | :---: | :---: |
| AWSElasticBlockStore | ✓ | - | - |
| AzureFile | ✓ | ✓ | ✓ |
| AzureDisk | ✓ | - | - |
| CephFS | ✓ | ✓ | ✓ |
| Cinder | ✓ | - | - |
| CSI | depends on the driver | depends on the driver | depends on the driver |
| FC | ✓ | ✓ | - |
| FlexVolume | ✓ | ✓ | depends on the driver |
| Flocker | ✓ | - | - |
| GCEPersistentDisk | ✓ | ✓ | - |
| Glusterfs | ✓ | ✓ | ✓ |
| HostPath | ✓ | - | - |
| iSCSI | ✓ | ✓ | - |
| Quobyte | ✓ | ✓ | ✓ |
| NFS | ✓ | ✓ | ✓ |
| RBD | ✓ | ✓ | - |
| VsphereVolume | ✓ | - | - (works when Pods are collocated) |
| PortworxVolume | ✓ | - | ✓ |
| ScaleIO | ✓ | ✓ | - |
| StorageOS | ✓ | - | - |

### Class

A PV can have a class, which is specified by setting the
`storageClassName` attribute to the name of a
[StorageClass](/docs/concepts/storage/storage-classes/).
A PV of a particular class can only be bound to PVCs requesting
that class. A PV with no `storageClassName` has no class and can only be bound
to PVCs that request no particular class.

In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
of the `storageClassName` attribute. This annotation still works; however,
it will become fully deprecated in a future Kubernetes release.

### Reclaim Policy

Current reclaim policies are:

* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted

Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.

### Mount Options

A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node.

{{< note >}}
Not all Persistent Volume types support mount options.
{{< /note >}}

The following volume types support mount options:

* AWSElasticBlockStore
* AzureDisk
* AzureFile
* CephFS
* Cinder (OpenStack block storage)
* GCEPersistentDisk
* Glusterfs
* NFS
* Quobyte Volumes
* RBD (Ceph Block Device)
* StorageOS
* VsphereVolume
* iSCSI

Mount options are not validated, so the mount will simply fail if one is invalid.

In the past, the annotation `volume.beta.kubernetes.io/mount-options` was used instead
of the `mountOptions` attribute. This annotation still works; however,
it will become fully deprecated in a future Kubernetes release.

### Node Affinity

{{< note >}}
For most volume types, you do not need to set this field. It is automatically populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore), [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and [Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
{{< /note >}}

A PV can specify [node affinity](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volumenodeaffinity-v1-core) to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that are selected by the node affinity.

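For a local volume, for example, the PV must pin the volume to the node that holds the disk. This is a sketch; the PV name, storage class, disk path, and node name `example-node` are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  # Restrict this PV (and any Pod using it) to the node with the disk.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
```
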
### Phase

A volume will be in one of the following phases:

* Available -- a free resource that is not yet bound to a claim
* Bound -- the volume is bound to a claim
* Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
* Failed -- the volume has failed its automatic reclamation

The CLI will show the name of the PVC bound to the PV.

## PersistentVolumeClaims

Each PVC contains a spec and status, which is the specification and status of the claim.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
```

### Access Modes

Claims use the same conventions as volumes when requesting storage with specific access modes.

### Volume Modes

Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device.

### Resources

Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) applies to both volumes and claims.

### Selector

Claims can specify a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors) to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:

* `matchLabels` - the volume must have a label with this value
* `matchExpressions` - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.

All of the requirements, from both `matchLabels` and `matchExpressions`, are ANDed together; they must all be satisfied in order to match.

### Class

A claim can request a particular class by specifying the name of a
[StorageClass](/docs/concepts/storage/storage-classes/)
using the attribute `storageClassName`.
Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can
be bound to the PVC.

PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set
equal to `""` is always interpreted to be requesting a PV with no class, so it
can only be bound to PVs with no class (no annotation or one set equal to
`""`). A PVC with no `storageClassName` is not quite the same and is treated differently
by the cluster, depending on whether the
[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
is turned on.

* If the admission plugin is turned on, the administrator may specify a
  default `StorageClass`. All PVCs that have no `storageClassName` can be bound only to
  PVs of that default. Specifying a default `StorageClass` is done by setting the
  annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in
  a `StorageClass` object. If the administrator does not specify a default, the
  cluster responds to PVC creation as if the admission plugin were turned off. If
  more than one default is specified, the admission plugin forbids the creation of
  all PVCs.
* If the admission plugin is turned off, there is no notion of a default
  `StorageClass`. All PVCs that have no `storageClassName` can be bound only to PVs that
  have no class. In this case, the PVCs that have no `storageClassName` are treated the
  same way as PVCs that have their `storageClassName` set to `""`.

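A default class can be declared by adding that annotation to a `StorageClass` object. This is a sketch; the class name and provisioner are examples, not requirements:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Marks this class as the cluster default for PVCs with no storageClassName.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```
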
Depending on the installation method, a default StorageClass may be deployed
to a Kubernetes cluster by an addon manager during installation.

When a PVC specifies a `selector` in addition to requesting a `StorageClass`,
the requirements are ANDed together: only a PV of the requested class and with
the requested labels may be bound to the PVC.

{{< note >}}
Currently, a PVC with a non-empty `selector` can't have a PV dynamically provisioned for it.
{{< /note >}}

In the past, the annotation `volume.beta.kubernetes.io/storage-class` was used instead
of the `storageClassName` attribute. This annotation still works; however,
it won't be supported in a future Kubernetes release.

## Claims As Volumes

Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the `PersistentVolume` backing the claim. The volume is then mounted to the host and into the Pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```

### A Note on Namespaces

`PersistentVolume` binds are exclusive, and since `PersistentVolumeClaims` are namespaced objects, mounting claims with "Many" modes (`ROX`, `RWX`) is only possible within one namespace.

## Raw Block Volume Support

{{< feature-state for_k8s_version="v1.13" state="beta" >}}

The following volume plugins support raw block volumes, including dynamic provisioning where
applicable:

* AWSElasticBlockStore
* AzureDisk
* FC (Fibre Channel)
* GCEPersistentDisk
* iSCSI
* Local volume
* RBD (Ceph Block Device)
* VsphereVolume (alpha)

{{< note >}}
Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9.
Support for the additional plugins was added in 1.10.
{{< /note >}}

### Persistent Volumes using a Raw Block Volume

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false
```

### Persistent Volume Claim requesting a Raw Block Volume

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
```

### Pod specification adding Raw Block Device path in container

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc
```

{{< note >}}
When adding a raw block device for a Pod, you specify the device path in the container instead of a mount path.
{{< /note >}}

### Binding Block Volumes

If a user requests a raw block volume by indicating this using the `volumeMode` field in the `PersistentVolumeClaim` spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec.
The table below shows the possible combinations that a user and an admin might specify for requesting a raw block device, and whether the volume will be bound or not given each combination.

Volume binding matrix for statically provisioned volumes:

| PV volumeMode | PVC volumeMode  | Result           |
| ------------- | :-------------: | ---------------: |
| unspecified   | unspecified     | BIND             |
| unspecified   | Block           | NO BIND          |
| unspecified   | Filesystem      | BIND             |
| Block         | unspecified     | NO BIND          |
| Block         | Block           | BIND             |
| Block         | Filesystem      | NO BIND          |
| Filesystem    | Filesystem      | BIND             |
| Filesystem    | Block           | NO BIND          |
| Filesystem    | unspecified     | BIND             |

{{< note >}}
Only statically provisioned volumes are supported in the alpha release. Administrators should take care to consider these values when working with raw block devices.
{{< /note >}}

## Volume Snapshot and Restore Volume from Snapshot Support

{{< feature-state for_k8s_version="v1.12" state="alpha" >}}

The volume snapshot feature was added to support CSI volume plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/).

To enable support for restoring a volume from a volume snapshot data source, enable the
`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager.

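On a cluster you administer directly, the gate is switched on via the component command lines. This is a hypothetical invocation fragment; the flag must be added to both the existing kube-apiserver and kube-controller-manager invocations:

```shell
kube-apiserver --feature-gates=VolumeSnapshotDataSource=true ...
kube-controller-manager --feature-gates=VolumeSnapshotDataSource=true ...
```
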
### Create Persistent Volume Claim from Volume Snapshot

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

## Volume Cloning

{{< feature-state for_k8s_version="v1.16" state="beta" >}}

The volume clone feature was added to support CSI volume plugins only. For details, see [volume cloning](/docs/concepts/storage/volume-pvc-datasource/).

To enable support for cloning a volume from a PVC data source, enable the
`VolumePVCDataSource` feature gate on the apiserver and controller-manager.

### Create Persistent Volume Claim from an existing PVC

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    name: existing-src-pvc-name
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

## Writing Portable Configuration

If you're writing configuration templates or examples that run on a wide range of clusters
and need persistent storage, it is recommended that you use the following pattern:

- Include PersistentVolumeClaim objects in your bundle of config (alongside
  Deployments, ConfigMaps, etc).
- Do not include PersistentVolume objects in the config, since the user instantiating
  the config may not have permission to create PersistentVolumes.
- Give the user the option of providing a storage class name when instantiating
  the template.
- If the user provides a storage class name, put that value into the
  `persistentVolumeClaim.storageClassName` field.
  This will cause the PVC to match the right storage
  class if the cluster has StorageClasses enabled by the admin.
- If the user does not provide a storage class name, leave the
  `persistentVolumeClaim.storageClassName` field as nil. This will cause a
  PV to be automatically provisioned for the user with the default StorageClass
  in the cluster. Many cluster environments have a default StorageClass installed,
  or administrators can create their own default StorageClass.
- In your tooling, watch for PVCs that are not getting bound after some time
  and surface this to the user, as this may indicate that the cluster has no
  dynamic storage support (in which case the user should create a matching PV)
  or the cluster has no storage system (in which case the user cannot deploy
  config requiring PVCs).

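A portable claim following this pattern might look like the sketch below; the claim name and size are placeholders, and `storageClassName` is intentionally left unset so the cluster's default class applies:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  # storageClassName deliberately omitted: the DefaultStorageClass
  # admission plugin (if enabled) fills in the cluster default.
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
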
{{% /capture %}}
---
reviewers:
- erictune
- soltysh
title: Jobs - Run to Completion
content_template: templates/concept
feature:
  title: Batch execution
  description: >
    In addition to Services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
weight: 70
---

{{% capture overview %}}

A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful completions. When a specified number
of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up
the Pods it created.

A simple case is to create one Job object in order to reliably run one Pod to completion.
The Job object will start a new Pod if the first Pod fails or is deleted (for example
due to a node hardware failure or a node reboot).

You can also use a Job to run multiple Pods in parallel.

{{% /capture %}}

{{% capture body %}}

## Running an example Job

Here is an example Job config. It computes π to 2000 places and prints it out.
It takes around 10s to complete.

{{< codenew file="controllers/job.yaml" >}}

You can run the example with this command:

```shell
kubectl apply -f https://k8s.io/examples/controllers/job.yaml
```
```
job.batch/pi created
```

Check on the status of the Job with `kubectl`:

```shell
kubectl describe jobs/pi
```
```
Name:           pi
Namespace:      default
Selector:       controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
Labels:         controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
                job-name=pi
Annotations:    kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":...
Parallelism:    1
Completions:    1
Start Time:     Mon, 02 Dec 2019 15:20:11 +0200
Completed At:   Mon, 02 Dec 2019 15:21:16 +0200
Duration:       65s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
           job-name=pi
  Containers:
   pi:
    Image:        perl
    Port:         <none>
    Host Port:    <none>
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  14m   job-controller  Created pod: pi-5rwd7
```

To view completed Pods of a Job, use `kubectl get pods`.
|
||||
|
||||
To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:
|
||||
|
||||
```shell
|
||||
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
|
||||
echo $pods
|
||||
```
|
||||
```
|
||||
pi-5rwd7
|
||||
```
|
||||
|
||||
Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
|
||||
that just gets the name from each Pod in the returned list.
|
||||
|
||||
View the standard output of one of the pods:
|
||||
|
||||
```shell
|
||||
kubectl logs $pods
|
||||
```
|
||||
The output is similar to this:
|
||||
```shell
|
||||
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1
|
||||
```
|
||||
|
||||
### Pod Template

The `.spec.template` is the only required field of the `.spec`.

The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a pod template in a Job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.

Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed.

### Pod Selector

The `.spec.selector` field is optional. In almost all cases you should not specify it.
See section [specifying your own pod selector](#specifying-your-own-pod-selector).

### Parallel Jobs

There are three main types of task suitable to run as a Job:

1. Non-parallel Jobs
   - normally, only one Pod is started, unless the Pod fails.
   - the Job is complete as soon as its Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
   - specify a non-zero positive value for `.spec.completions`.
   - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
   - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
1. Parallel Jobs with a *work queue*:
   - do not specify `.spec.completions`; it defaults to `.spec.parallelism`.
   - the Pods must coordinate amongst themselves or with an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
   - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
   - when _any_ Pod from the Job terminates with success, no new Pods are created.
   - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
   - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.

For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
unset, both are defaulted to 1.

For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed.
You can set `.spec.parallelism`, or leave it unset and it will default to 1.

For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
a non-negative integer.

For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.


#### Controlling Parallelism

The requested parallelism (`.spec.parallelism`) can be set to any non-negative value.
If it is unspecified, it defaults to 1.
If it is specified as 0, then the Job is effectively paused until it is increased.

Actual parallelism (number of pods running at any instant) may be more or less than requested
parallelism, for a variety of reasons:

- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of
  remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however.
- If the Job {{< glossary_tooltip term_id="controller" >}} has not had time to react.
- If the Job controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.),
  then there may be fewer pods than requested.
- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
- When a Pod is gracefully shut down, it takes time to stop.

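For instance, a fixed completion count Job that needs five successful completions while running at most two Pods at a time could be sketched like this (the `process-items` name, `worker` container, and command are illustrative, not from the example above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: process-items        # illustrative name
spec:
  completions: 5             # the Job is complete after 5 Pods succeed
  parallelism: 2             # run at most 2 Pods at any one time
  template:
    spec:
      containers:
      - name: worker
        image: perl          # any batch workload image would do
        command: ["perl", "-e", "print 'one item processed'"]
      restartPolicy: Never
```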
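A work queue Job, by contrast, sets only `.spec.parallelism`. A minimal sketch, assuming a hypothetical `example/queue-consumer` image whose processes pull items from a shared queue and exit when it is drained:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queue-consumer            # illustrative name
spec:
  parallelism: 3                  # three workers drain the queue together
  # .spec.completions is deliberately unset for the work queue pattern
  template:
    spec:
      containers:
      - name: consumer
        image: example/queue-consumer:latest   # hypothetical image; coordination happens inside it
      restartPolicy: OnFailure
```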
## Handling Pod and Container Failures

A container in a Pod may fail for a number of reasons, such as because the process in it exited with
a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this
happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
on the node, but the container is re-run. Therefore, your program needs to handle the case when it is
restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`.

An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
starts a new Pod. This means that your application needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
caused by previous runs.

Note that even if you specify `.spec.parallelism = 1` and `.spec.completions = 1` and
`.spec.template.spec.restartPolicy = "Never"`, the same program may
sometimes be started twice.

If you specify both `.spec.parallelism` and `.spec.completions` greater than 1, then there may be
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.

### Pod backoff failure policy

There are situations where you want to fail a Job after some number of retries
due to a logical error in configuration, etc.
To do so, set `.spec.backoffLimit` to specify the number of retries before
considering a Job as failed. The back-off limit is set by default to 6. Failed
Pods associated with the Job are recreated by the Job controller with an
exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The
back-off count is reset if no new failed Pods appear before the Job's next
status check.

{{< note >}}
Issue [#54870](https://github.com/kubernetes/kubernetes/issues/54870) still exists for versions of Kubernetes prior to version 1.12.
{{< /note >}}

{{< note >}}
If your job has `restartPolicy = "OnFailure"`, keep in mind that your container running the Job
will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting
`restartPolicy = "Never"` when debugging the Job or using a logging system to ensure output
from failed Jobs is not lost inadvertently.
{{< /note >}}

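Sketching the field in a manifest (the `flaky-task` name and the always-failing command are illustrative, chosen only to exercise the back-off behavior):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-task           # illustrative name
spec:
  backoffLimit: 3            # mark the Job failed after 3 retries (default is 6)
  template:
    spec:
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always fails, so the back-off kicks in
      restartPolicy: Never
```

Watching this Job with `kubectl describe` should show the retried Pods being created with increasing delays before the Job is marked failed.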
## Job Termination and Cleanup

When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around
allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output.
The job object also remains after it is completed so that you can view its status. It is up to the user to delete
old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too.

By default, a Job will run uninterrupted unless a Pod fails (`restartPolicy=Never`) or a Container exits in error (`restartPolicy=OnFailure`), at which point the Job defers to the
`.spec.backoffLimit` described above. Once `.spec.backoffLimit` has been reached the Job will be marked as failed and any running Pods will be terminated.

Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.
The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
Once a Job reaches `activeDeadlineSeconds`, all of its running Pods are terminated and the Job status will become `type: Failed` with `reason: DeadlineExceeded`.

Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.

Example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```

Note that both the Job spec and the [Pod template spec](/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.

Keep in mind that the `restartPolicy` applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is `type: Failed`.
That is, the Job termination mechanisms activated with `.spec.activeDeadlineSeconds` and `.spec.backoffLimit` result in a permanent Job failure that requires manual intervention to resolve.

## Clean Up Finished Jobs Automatically

Finished Jobs are usually no longer needed in the system. Keeping them around in
the system will put pressure on the API server. If the Jobs are managed directly
by a higher level controller, such as
[CronJobs](/docs/concepts/workloads/controllers/cron-jobs/), the Jobs can be
cleaned up by CronJobs based on the specified capacity-based cleanup policy.

### TTL Mechanism for Finished Jobs

{{< feature-state for_k8s_version="v1.12" state="alpha" >}}

Another way to clean up finished Jobs (either `Complete` or `Failed`)
automatically is to use a TTL mechanism provided by a
[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
finished resources, by specifying the `.spec.ttlSecondsAfterFinished` field of
the Job.

When the TTL controller cleans up the Job, it will delete the Job cascadingly,
i.e. delete its dependent objects, such as Pods, together with the Job. Note
that when the Job is deleted, its lifecycle guarantees, such as finalizers, will
be honored.

For example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```

The Job `pi-with-ttl` will be eligible to be automatically deleted, `100`
seconds after it finishes.

If the field is set to `0`, the Job will be eligible to be automatically deleted
immediately after it finishes. If the field is unset, this Job won't be cleaned
up by the TTL controller after it finishes.

Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For
more information, see the documentation for
[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
finished resources.

## Job Patterns

The Job object can be used to support reliable parallel execution of Pods. The Job object is not
designed to support closely-communicating parallel processes, as commonly found in scientific
computing. It does support parallel processing of a set of independent but related *work items*.
These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a
NoSQL database to scan, and so on.

In a complex system, there may be multiple different sets of work items. Here we are just
considering one set of work items that the user wants to manage together — a *batch job*.

There are several different patterns for parallel computation, each with strengths and weaknesses.
The tradeoffs are:

- One Job object for each work item, vs. a single Job object for all work items. The latter is
  better for large numbers of work items. The former creates some overhead for the user and for the
  system to manage large numbers of Job objects.
- Number of pods created equals number of work items, vs. each Pod can process multiple work items.
  The former typically requires less modification to existing code and containers. The latter
  is better for large numbers of work items, for similar reasons to the previous bullet.
- Several approaches use a work queue. This requires running a queue service,
  and modifications to the existing program or container to make it use the work queue.
  Other approaches are easier to adapt to an existing containerised application.


The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs.
The pattern names are also links to examples and more detailed description.

| Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1? |
| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ |
| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | sometimes | ✓ |
| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
| Single Job with Static Work Assignment | ✓ | | ✓ | |

When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status). This means that
all pods for a task will have the same command line and the same
image, the same volumes, and (almost) the same environment variables. These patterns
are different ways to arrange for pods to work on different things.

This table shows the required settings for `.spec.parallelism` and `.spec.completions` for each of the patterns.
Here, `W` is the number of work items.

| Pattern | `.spec.completions` | `.spec.parallelism` |
| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
| [Job Template Expansion](/docs/tasks/job/parallel-processing-expansion/) | 1 | should be 1 |
| [Queue with Pod Per Work Item](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
| [Queue with Variable Pod Count](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
| Single Job with Static Work Assignment | W | any |

## Advanced Usage

### Specifying your own pod selector

Normally, when you create a Job object, you do not specify `.spec.selector`.
The system defaulting logic adds this field when the Job is created.
It picks a selector value that will not overlap with any other jobs.

However, in some cases, you might need to override this automatically set selector.
To do this, you can specify the `.spec.selector` of the Job.

Be very careful when doing this. If you specify a label selector which is not
unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated
job may be deleted, or this Job may count other Pods as completing it, or one or both
Jobs may refuse to create Pods or run to completion. If a non-unique selector is
chosen, then other controllers (e.g. ReplicationController) and their Pods may behave
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
specifying `.spec.selector`.

Here is an example of a case when you might want to use this feature.

Say Job `old` is already running. You want existing Pods
to keep running, but you want the rest of the Pods it creates
to use a different pod template and for the Job to have a new name.
You cannot update the Job because these fields are not updatable.
Therefore, you delete Job `old` but _leave its pods
running_, using `kubectl delete jobs/old --cascade=false`.
Before deleting it, you make a note of what selector it uses:

```shell
kubectl get job old -o yaml
```
```yaml
kind: Job
metadata:
  name: old
  ...
spec:
  selector:
    matchLabels:
      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...
```

Then you create a new Job with name `new` and you explicitly specify the same selector.
Since the existing Pods have label `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
they are controlled by Job `new` as well.

You need to specify `manualSelector: true` in the new Job since you are not using
the selector that the system normally generates for you automatically.

```yaml
kind: Job
metadata:
  name: new
  ...
spec:
  manualSelector: true
  selector:
    matchLabels:
      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...
```

The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
`manualSelector: true` tells the system that you know what you are doing and to allow this
mismatch.

## Alternatives

### Bare Pods

When the node that a Pod is running on reboots or fails, the pod is terminated
and will not be restarted. However, a Job will create new Pods to replace terminated ones.
For this reason, we recommend that you use a Job rather than a bare Pod, even if your application
requires only a single Pod.

### Replication Controller

Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller).
A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job
manages Pods that are expected to terminate (e.g. batch tasks).

As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate
for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
(Note: If `RestartPolicy` is not set, the default value is `Always`.)

### Single Job starts Controller Pod

Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort
of custom controller for those Pods. This allows the most flexibility, but may be somewhat
complicated to get started with and offers less integration with Kubernetes.

One example of this pattern would be a Job which starts a Pod which runs a script that in turn
starts a Spark master controller (see [spark example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)), runs a spark
driver, and then cleans up.

An advantage of this approach is that the overall process gets the completion guarantee of a Job
object, while maintaining complete control over what Pods are created and how work is assigned to them.

## Cron Jobs {#cron-jobs}

You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`.

{{% /capture %}}
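As a rough sketch, a CronJob that reruns the π example at the top of every hour might look like this (the `pi-hourly` name is illustrative, and `batch/v1beta1` is assumed here as the CronJob API version; check `kubectl api-versions` on your cluster):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi-hourly              # illustrative name
spec:
  schedule: "0 * * * *"        # standard cron syntax: top of every hour
  jobTemplate:                 # the Job spec to create on each tick
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never
```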
---
reviewers:
- bprashanth
- janetkuo
title: ReplicationController
feature:
  title: Самозцілення
  anchor: How a ReplicationController Works
  description: >
    Перезапускає контейнери, що відмовили; заміняє і перерозподіляє контейнери у випадку непрацездатності вузла; зупиняє роботу контейнерів, що не відповідають на задану користувачем перевірку стану, і не повідомляє про них клієнтам, допоки ці контейнери не будуть у стані робочої готовності.

content_template: templates/concept
weight: 20
---

{{% capture overview %}}

{{< note >}}
A [`Deployment`](/docs/concepts/workloads/controllers/deployment/) that configures a [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is now the recommended way to set up replication.
{{< /note >}}

A _ReplicationController_ ensures that a specified number of pod replicas are running at any one
time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
always up and available.

{{% /capture %}}


{{% capture body %}}

## How a ReplicationController Works

If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
ReplicationController are automatically replaced if they fail, are deleted, or are terminated.
For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade.
For this reason, you should use a ReplicationController even if your application requires
only a single pod. A ReplicationController is similar to a process supervisor,
but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods
across multiple nodes.

ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in
kubectl commands.

A simple case is to create one ReplicationController object to reliably run one instance of
a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated
service, such as web servers.

## Running an example ReplicationController

This example ReplicationController config runs three copies of the nginx web server.

{{< codenew file="controllers/replication.yaml" >}}

Run the example by downloading the example file and then running this command:

```shell
kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
```
```
replicationcontroller/nginx created
```

Check on the status of the ReplicationController using this command:

```shell
kubectl describe replicationcontrollers/nginx
```
```
Name:        nginx
Namespace:   default
Selector:    app=nginx
Labels:      app=nginx
Annotations: <none>
Replicas:    3 current / 3 desired
Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:    app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen  LastSeen  Count  From                       SubobjectPath  Type    Reason            Message
  ---------  --------  -----  ----                       -------------  ----    ------            -------
  20s        20s       1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: nginx-qrm3m
  20s        20s       1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: nginx-3ntk0
  20s        20s       1      {replication-controller }                 Normal  SuccessfulCreate  Created pod: nginx-4ok8v
```

Here, three pods are created, but none is running yet, perhaps because the image is being pulled.
A little later, the same command may show:

```
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
```

To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:

```shell
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
```
```
nginx-3ntk0 nginx-4ok8v nginx-qrm3m
```

Here, the selector is the same as the selector for the ReplicationController (seen in the
`kubectl describe` output), and in a different form in `replication.yaml`. The `--output=jsonpath` option
specifies an expression that just gets the name from each pod in the returned list.

## Writing a ReplicationController Spec
|
||||
|
||||
As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
|
||||
For general information about working with config files, see [object management ](/docs/concepts/overview/working-with-objects/object-management/).
|
||||
|
||||
A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
|
||||
|
||||
### Pod Template
|
||||
|
||||
The `.spec.template` is the only required field of the `.spec`.

The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except it is nested and does not have an `apiVersion` or `kind`.

In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate
labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See [pod selector](#pod-selector).

Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.

For local container restarts, ReplicationControllers delegate to an agent on the node,
for example the [Kubelet](/docs/admin/kubelet/) or Docker.

### Labels on the ReplicationController

The ReplicationController can itself have labels (`.metadata.labels`). Typically, you
would set these the same as the `.spec.template.metadata.labels`; if `.metadata.labels` is not specified
then it defaults to `.spec.template.metadata.labels`. However, they are allowed to be
different, and the `.metadata.labels` do not affect the behavior of the ReplicationController.

### Pod Selector

The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors). A ReplicationController
manages all the pods with labels that match the selector. It does not distinguish
between pods that it created or deleted and pods that another person or process created or
deleted. This allows the ReplicationController to be replaced without affecting the running pods.

If specified, the `.spec.template.metadata.labels` must be equal to the `.spec.selector`, or it will
be rejected by the API. If `.spec.selector` is unspecified, it will be defaulted to
`.spec.template.metadata.labels`.
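
To make the relationship between `.spec.selector`, `.spec.template.metadata.labels`, and `.spec.replicas` concrete, here is a minimal manifest sketch; the `nginx` name and image are illustrative, not taken from the text above:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx          # must equal the template labels below
  template:
    metadata:
      labels:
        app: nginx      # matches .spec.selector, or the API rejects the object
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

If the `selector` block were omitted here, the API server would default it to the template labels (`app: nginx`).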

Also, you should not normally create any pods whose labels match this selector, either directly, with
another ReplicationController, or with another controller such as Job. If you do so, the
ReplicationController thinks that it created the other pods. Kubernetes does not stop you
from doing this.

If you do end up with multiple controllers that have overlapping selectors, you
will have to manage the deletion yourself (see [below](#working-with-replicationcontrollers)).

### Multiple Replicas

You can specify how many pods should run concurrently by setting `.spec.replicas` to the number
of pods you would like to have running concurrently. The number running at any time may be higher
or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully
shut down and a replacement starts early.

If you do not specify `.spec.replicas`, then it defaults to 1.

## Working with ReplicationControllers

### Deleting a ReplicationController and its Pods

To delete a ReplicationController and all its pods, use [`kubectl
delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). Kubectl will scale the ReplicationController to zero and wait
for it to delete each pod before deleting the ReplicationController itself. If this kubectl
command is interrupted, it can be restarted.
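
A sketch of the deletion, assuming a running cluster and a ReplicationController named `nginx` (the name is illustrative):

```shell
# Scales the ReplicationController to zero, waits for its pods to be
# deleted, then deletes the ReplicationController itself.
kubectl delete replicationcontroller nginx

# If the command is interrupted, re-running it resumes the deletion.
```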

When using the REST API or Go client library, you need to perform the steps explicitly (scale replicas to
0, wait for pod deletions, then delete the ReplicationController).

### Deleting just a ReplicationController

You can delete a ReplicationController without affecting any of its pods.

Using kubectl, specify the `--cascade=false` option to [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).

When using the REST API or Go client library, delete the ReplicationController object directly.

Once the original is deleted, you can create a new ReplicationController to replace it. As long
as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
However, it will not make any effort to make existing pods match a new, different pod template.
To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).

### Isolating pods from a ReplicationController

Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
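
A sketch of the isolation step, assuming the manifest style above; the pod name and label values are illustrative:

```shell
# Overwrite the pod's label so it no longer matches the controller's
# selector. The ReplicationController starts a replacement pod, while
# the original stays around for debugging or data recovery.
kubectl label pod nginx-abc12 app=debug --overwrite
```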

## Common usage patterns

### Rescheduling

As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).

### Scaling

The ReplicationController makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by updating the `replicas` field.
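
Manual scaling can be sketched as a one-liner, again assuming a cluster and an illustrative controller name:

```shell
# Set the desired replica count; the controller creates or deletes
# pods until the observed count matches.
kubectl scale replicationcontroller nginx --replicas=5
```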

### Rolling updates

The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.

As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.

Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.

The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.

Rolling update is implemented in the client tool
[`kubectl rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update). Visit [`kubectl rolling-update` task](/docs/tasks/run-application/rolling-update-replication-controller/) for more concrete examples.

### Multiple release tracks

In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another ReplicationController with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.
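
The canary setup described above can be sketched as two manifests that differ only in `replicas` and the `track` label; the names and images are illustrative:

```yaml
# Stable track: carries the bulk of the traffic.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable
spec:
  replicas: 9
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:stable   # illustrative image
---
# Canary track: a single replica of the new version.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:canary   # illustrative image
```

A Service that selects only `tier: frontend, environment: prod` (without `track`) spans both controllers, so the canary receives roughly one tenth of the traffic.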

### Using ReplicationControllers with Services

Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic
goes to the old version, and some goes to the new version.

A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.

## Writing programs for Replication

Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the [RabbitMQ work queues](https://www.rabbitmq.com/tutorials/tutorial-two-python.html), as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.

## Responsibilities of the ReplicationController

The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).

The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.

## API Object

ReplicationController is a top-level resource in the Kubernetes REST API. More details about the
API object can be found at:
[ReplicationController API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replicationcontroller-v1-core).

## Alternatives to ReplicationController

### ReplicaSet

[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/) is the next-generation ReplicationController that supports the new [set-based label selector](/docs/concepts/overview/working-with-objects/labels/#set-based-requirement).
It's mainly used by [`Deployment`](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod creation, deletion and updates.
Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.

### Deployment (Recommended)

[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying Replica Sets and their Pods
in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features.

### Bare Pods

Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node (for example, Kubelet or Docker).

### Job

Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own
(that is, batch jobs).

### DaemonSet

Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicationController for pods that provide a
machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

## For more information

Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).

{{% /capture %}}

@@ -22,17 +22,15 @@ anchors:

* У випадку, якщо у перекладі термін набуває неоднозначності і розуміння тексту ускладнюється, надайте у дужках англійський варіант, наприклад: кінцеві точки (endpoints). Якщо при перекладі термін втрачає своє значення, краще не перекладати його, наприклад: характеристики affinity.

* Співзвучні слова передаємо транслітерацією зі збереженням написання (Service -> Сервіс).
* Назви об'єктів Kubernetes залишаємо без перекладу і пишемо з великої літери: Service, Pod, Deployment, Volume, Namespace, за винятком терміна node (вузол). Назви об'єктів Kubernetes вважаємо за іменники ч.р. і відмінюємо за допомогою апострофа: Pod'ів, Deployment'ами.
Для слів, що закінчуються на приголосний, у родовому відмінку однини використовуємо закінчення -а: Pod'а, Deployment'а.
Слова, що закінчуються на голосний, не відмінюємо: доступ до Service, за допомогою Namespace. У множині використовуємо англійську форму: користуватися Services, спільні Volumes.

* Реалії Kubernetes пишемо з великої літери: Сервіс, Под, але вузол (node).

* Для слів з великих літер, які не мають трансліт-аналогу, використовуємо англійські слова (Deployment, Volume, Namespace).
* Частовживані і усталені за межами Kubernetes слова перекладаємо українською і пишемо з малої літери (label -> мітка). У випадку, якщо термін для означення об'єкта Kubernetes вживається у своєму загальному значенні поза контекстом Kubernetes (service як службова програма, deployment як розгортання), перекладаємо його і пишемо з малої літери, наприклад: service discovery -> виявлення сервісу, continuous deployment -> безперервне розгортання.

* Складені слова вважаємо за власні назви і не перекладаємо (LabelSelector, kube-apiserver).

* Частовживані і усталені за межами K8s слова перекладаємо українською і пишемо з маленької літери (label -> мітка).

* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Більшість необхідних нам термінів є словами іншомовного походження, які у родовому відмінку однини приймають закінчення -а: Пода, Deployment'а. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv).
* Для перевірки закінчень слів у родовому відмінку однини (-а/-я, -у/-ю) використовуйте [онлайн словник](https://slovnyk.ua/). Якщо слова немає у словнику, визначте його відміну і далі відмінюйте за правилами. Докладніше [дивіться тут](https://pidruchniki.com/1948041951499/dokumentoznavstvo/vidminyuvannya_imennikiv).

## Словник {#словник}
@@ -41,24 +39,24 @@ English | Українська |

addon | розширення |
application | застосунок |
backend | бекенд |
build | збирати (процес) |
build | збирання (результат) |
build | збирати (процес) |
cache | кеш |
CLI | інтерфейс командного рядка |
cloud | хмара; хмарний провайдер |
containerized | контейнеризований |
Continuous development | безперервна розробка |
Continuous integration | безперервна інтеграція |
Continuous deployment | безперервне розгортання |
continuous deployment | безперервне розгортання |
continuous development | безперервна розробка |
continuous integration | безперервна інтеграція |
contribute | робити внесок (до проекту), допомагати (проекту) |
contributor | контриб'ютор, учасник проекту |
control plane | площина управління |
controller | контролер |
CPU | ЦПУ |
CPU | ЦП |
dashboard | дашборд |
data plane | площина даних |
default settings | типові налаштування |
default (by) | за умовчанням |
default settings | типові налаштування |
Deployment | Deployment |
deprecated | застарілий |
desired state | бажаний стан |
@@ -81,8 +79,8 @@ label | мітка |

lifecycle | життєвий цикл |
logging | логування |
maintenance | обслуговування |
master | master |
map | спроектувати, зіставити, встановити відповідність |
master | master |
monitor | моніторити |
monitoring | моніторинг |
Namespace | Namespace |

@@ -91,7 +89,7 @@ node | вузол |

orchestrate | оркеструвати |
output | вивід |
patch | патч |
Pod | Под |
Pod | Pod |
production | прод |
pull request | pull request |
release | реліз |
@@ -101,13 +99,14 @@ rolling update | послідовне оновлення |

rollout (new updates) | викатка (оновлень) |
run | запускати |
scale | масштабувати |
schedule | розподіляти (Поди по вузлах) |
scheduler | scheduler |
secret | секрет |
selector | селектор |
schedule | розподіляти (Pod'и по вузлах) |
Scheduler | Scheduler |
Secret | Secret |
Selector | Селектор |
self-healing | самозцілення |
self-restoring | самовідновлення |
service | сервіс |
Service | Service (як об'єкт Kubernetes) |
service | сервіс (як службова програма) |
service discovery | виявлення сервісу |
source code | вихідний код |
stateful app | застосунок зі станом |

@@ -115,7 +114,7 @@ stateless app | застосунок без стану |

task | завдання |
terminated | зупинений |
traffic | трафік |
VM (virtual machine) | ВМ |
VM (virtual machine) | ВМ (віртуальна машина) |
Volume | Volume |
workload | робоче навантаження |
YAML | YAML |
@@ -1,12 +1,10 @@

---
approvers:
- chenopis
title: Документація Kubernetes
noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
linkTitle: "Home"
class: gridPage gridPageHome
linkTitle: "Головна"
main_menu: true
weight: 10
hide_feedback: true
@@ -7,8 +7,7 @@ full_link:

# short_description: >
# Activities such as upgrading the clusters, implementing security, storage, ingress, networking, logging and monitoring, and other operations involved in managing a Kubernetes cluster.
short_description: >
Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером.
Дії і операції, такі як оновлення кластерів, впровадження і використання засобів безпеки, сховища даних, Ingress'а, мережі, логування, моніторингу та інших операцій, пов'язаних з управлінням Kubernetes кластером.
aka:
tags:
- operations

@@ -19,4 +19,4 @@ tags:

<!--more-->
<!-- The worker node(s) host the pods that are the components of the application. The Control Plane manages the worker nodes and the pods in the cluster. In production environments, the Control Plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability. -->
На робочих вузлах розміщуються Поди, які є складовими застосунку. Площина управління керує робочими вузлами і Подами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності.
На робочих вузлах розміщуються Pod'и, які є складовими застосунку. Площина управління керує робочими вузлами і Pod'ами кластера. У прод оточеннях площина управління зазвичай розповсюджується на багато комп'ютерів, а кластер складається з багатьох вузлів для забезпечення відмовостійкості і високої доступності.
@@ -7,7 +7,7 @@ full_link:

# short_description: >
# The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.
short_description: >
Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів.
Шар оркестрації контейнерів, який надає API та інтерфейси для визначення, розгортання і управління життєвим циклом контейнерів.
aka:
tags:

@@ -8,7 +8,7 @@ full_link: /docs/concepts/workloads/controllers/deployment/

short_description: >
Об'єкт API, що керує реплікованим застосунком.
aka:
aka:
tags:
- fundamental
- core-object

@@ -17,7 +17,7 @@ tags:

<!-- An API object that manages a replicated application. -->
Об'єкт API, що керує реплікованим застосунком.
<!--more-->
<!--more-->
<!-- Each replica is represented by a {{< glossary_tooltip term_id="pod" >}}, and the Pods are distributed among the nodes of a cluster. -->
Кожна репліка являє собою {{< glossary_tooltip term_id="Под" >}}; Поди розподіляються між вузлами кластера.
Кожна репліка являє собою {{< glossary_tooltip term_id="pod" text="Pod" >}}; Pod'и розподіляються між вузлами кластера.
@@ -1,7 +1,7 @@

---
approvers:
- chenopis
- abiogenesis-now
- maxymvlasov
- anastyakulyk
# title: Standardized Glossary
title: Глосарій
layout: glossary

@@ -17,7 +17,7 @@ tags:

network proxy that runs on each node in your cluster, implementing part of
the Kubernetes {{< glossary_tooltip term_id="service">}} concept.
-->
[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip term_id="сервісу">}}.
[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) є мережевим проксі, що запущене на кожному вузлі кластера і реалізує частину концепції Kubernetes {{< glossary_tooltip term_id="service" text="Service">}}.

<!--more-->

@@ -25,7 +25,7 @@ the Kubernetes {{< glossary_tooltip term_id="service">}} concept.

network communication to your Pods from network sessions inside or outside
of your cluster.
-->
kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Подів всередині чи поза межами кластера.
kube-proxy відповідає за мережеві правила на вузлах. Ці правила обумовлюють підключення по мережі до ваших Pod'ів всередині чи поза межами кластера.

<!--kube-proxy uses the operating system packet filtering layer if there is one
and it's available. Otherwise, kube-proxy forwards the traffic itself.
@@ -6,14 +6,14 @@ full_link: /docs/reference/generated/kube-scheduler/

# short_description: >
# Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
short_description: >
Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
Компонент площини управління, що відстежує створені Pod'и, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
aka:
tags:
- architecture
---
<!-- Control Plane component that watches for newly created pods with no assigned node, and selects a node for them to run on. -->
Компонент площини управління, що відстежує створені Поди, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.
Компонент площини управління, що відстежує створені Pod'и, які ще не розподілені по вузлах, і обирає вузол, на якому вони працюватимуть.

<!--more-->

@@ -6,7 +6,7 @@ full_link: /docs/reference/generated/kubelet

# short_description: >
# An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.
short_description: >
Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах.
Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Pod'ах.
aka:
tags:

@@ -14,7 +14,7 @@ tags:

- core-object
---
<!-- An agent that runs on each node in the cluster. It makes sure that containers are running in a pod. -->
Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Подах.
Агент, що запущений на кожному вузлі кластера. Забезпечує запуск і роботу контейнерів у Pod'ах.

<!--more-->
@@ -1,13 +1,13 @@

---
# title: Pod
title: Под
title: Pod
id: pod
date: 2018-04-12
full_link: /docs/concepts/workloads/pods/pod-overview/
# short_description: >
# The smallest and simplest Kubernetes object. A Pod represents a set of running containers on your cluster.
short_description: >
Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу контейнерів, що запущені у вашому кластері.
Найменший і найпростіший об'єкт Kubernetes. Pod являє собою групу контейнерів, що запущені у вашому кластері.
aka:
tags:

@@ -15,9 +15,9 @@ tags:

- fundamental
---
<!-- The smallest and simplest Kubernetes object. A Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}} on your cluster. -->
Найменший і найпростіший об'єкт Kubernetes. Под являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері.
Найменший і найпростіший об'єкт Kubernetes. Pod являє собою групу {{< glossary_tooltip text="контейнерів" term_id="container" >}}, що запущені у вашому кластері.

<!--more-->

<!-- A Pod is typically set up to run a single primary container. It can also run optional sidecar containers that add supplementary features like logging. Pods are commonly managed by a {{< glossary_tooltip term_id="deployment" >}}. -->
Як правило, в одному Поді запускається один контейнер. У Поді також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Подами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}.
Як правило, в одному Pod'і запускається один контейнер. У Pod'і також можуть бути запущені допоміжні контейнери, що забезпечують додаткову функціональність, наприклад, логування. Управління Pod'ами зазвичай здійснює {{< glossary_tooltip term_id="deployment" >}}.
@@ -1,11 +1,11 @@

---
title: Сервіс
title: Service
id: service
date: 2018-04-12
full_link: /docs/concepts/services-networking/service/
# A way to expose an application running on a set of Pods as a network service.
short_description: >
Спосіб відкрити доступ до застосунку, що запущений на декількох Подах у вигляді мережевої служби.
Спосіб відкрити доступ до застосунку, що запущений на декількох Pod'ах у вигляді мережевої служби.
aka:
tags:

@@ -15,10 +15,10 @@ tags:

<!--
An abstract way to expose an application running on a set of Pods as a network service.
-->
Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Подів" term_id="pod" >}} у вигляді мережевої служби.
Це абстрактний спосіб відкрити доступ до застосунку, що працює як один (або декілька) {{< glossary_tooltip text="Pod'ів" term_id="pod" >}} у вигляді мережевої служби.

<!--more-->

<!--The set of Pods targeted by a Service is (usually) determined by a {{< glossary_tooltip text="selector" term_id="selector" >}}. If more Pods are added or removed, the set of Pods matching the selector will change. The Service makes sure that network traffic can be directed to the current set of Pods for the workload.
-->
Переважно група Подів визначається як Сервіс за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Подів змінить групу Подів, визначених селектором. Сервіс забезпечує надходження мережевого трафіка до актуальної групи Подів для підтримки робочого навантаження.
Переважно група Pod'ів визначається як Service за допомогою {{< glossary_tooltip text="селектора" term_id="selector" >}}. Додання або вилучення Pod'ів змінить групу Pod'ів, визначених селектором. Service забезпечує надходження мережевого трафіка до актуальної групи Pod'ів для підтримки робочого навантаження.
@@ -0,0 +1,5 @@

---
#title: Best practices
title: Найкращі практики
weight: 40
---

@@ -0,0 +1,5 @@

---
# title: Learning environment
title: Навчальне оточення
weight: 20
---

@@ -0,0 +1,5 @@

---
#title: Production environment
title: Прод оточення
weight: 30
---

@@ -0,0 +1,5 @@

---
# title: On-Premises VMs
title: Менеджери віртуалізації
weight: 40
---

@@ -0,0 +1,5 @@

---
# title: Installing Kubernetes with deployment tools
title: Встановлення Kubernetes за допомогою інструментів розгортання
weight: 30
---

@@ -0,0 +1,5 @@

---
# title: "Bootstrapping clusters with kubeadm"
title: "Запуск кластерів з kubeadm"
weight: 10
---

@@ -0,0 +1,5 @@

---
# title: Turnkey Cloud Solutions
title: Хмарні рішення під ключ
weight: 30
---

@@ -0,0 +1,5 @@

---
# title: "Windows in Kubernetes"
title: "Windows в Kubernetes"
weight: 50
---

@@ -0,0 +1,5 @@

---
#title: "Release notes and version skew"
title: "Зміни в релізах нових версій"
weight: 10
---
@ -1,293 +0,0 @@
|
|||
---
|
||||
reviewers:
|
||||
- fgrzadkowski
|
||||
- jszczepkowski
|
||||
- directxman12
|
||||
title: Horizontal Pod Autoscaler
|
||||
feature:
|
||||
title: Горизонтальне масштабування
|
||||
description: >
|
||||
Масштабуйте ваш застосунок за допомогою простої команди, інтерфейсу користувача чи автоматично, виходячи із навантаження на ЦПУ.
|
||||
|
||||
content_template: templates/concept
|
||||
weight: 90
|
||||
---
|
||||
|
||||
{{% capture overview %}}

The Horizontal Pod Autoscaler automatically scales the number of pods
in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with
[custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)
support, on some other application-provided metrics). Note that Horizontal
Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.

The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller.
The resource determines the behavior of the controller.
The controller periodically adjusts the number of replicas in a replication controller or deployment
to match the observed average CPU utilization to the target specified by the user.

{{% /capture %}}

{{% capture body %}}

## How does the Horizontal Pod Autoscaler work?

![Horizontal Pod Autoscaler diagram](/images/docs/horizontal-pod-autoscaler.svg)

The Horizontal Pod Autoscaler is implemented as a control loop, with a period controlled
by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (with a default
value of 15 seconds).

During each period, the controller manager queries the resource utilization against the
metrics specified in each HorizontalPodAutoscaler definition. The controller manager
obtains the metrics from either the resource metrics API (for per-pod resource metrics),
or the custom metrics API (for all other metrics).

* For per-pod resource metrics (like CPU), the controller fetches the metrics
  from the resource metrics API for each pod targeted by the HorizontalPodAutoscaler.
  Then, if a target utilization value is set, the controller calculates the utilization
  value as a percentage of the equivalent resource request on the containers in
  each pod. If a target raw value is set, the raw metric values are used directly.
  The controller then takes the mean of the utilization or the raw value (depending on the type
  of target specified) across all targeted pods, and produces a ratio used to scale
  the number of desired replicas.

  Please note that if some of the pod's containers do not have the relevant resource request set,
  CPU utilization for the pod will not be defined and the autoscaler will
  not take any action for that metric. See the [algorithm
  details](#algorithm-details) section below for more information about
  how the autoscaling algorithm works.

* For per-pod custom metrics, the controller functions similarly to per-pod resource metrics,
  except that it works with raw values, not utilization values.

* For object metrics and external metrics, a single metric is fetched, which describes
  the object in question. This metric is compared to the target
  value, to produce a ratio as above. In the `autoscaling/v2beta2` API
  version, this value can optionally be divided by the number of pods before the
  comparison is made.

The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,
`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by
metrics-server, which needs to be launched separately. See
[metrics-server](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server)
for instructions. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster.

{{< note >}}
{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
Fetching metrics from Heapster is deprecated as of Kubernetes 1.11.
{{< /note >}}

See [Support for metrics APIs](#support-for-metrics-apis) for more details.

The autoscaler accesses corresponding scalable controllers (such as replication controllers, deployments, and replica sets)
by using the scale sub-resource. Scale is an interface that allows you to dynamically set the number of replicas and examine
each of their current states. More details on the scale sub-resource can be found
[here](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource).

### Algorithm Details

From the most basic perspective, the Horizontal Pod Autoscaler controller
operates on the ratio between desired metric value and current metric
value:

```
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
```

For example, if the current metric value is `200m`, and the desired value
is `100m`, the number of replicas will be doubled, since `200.0 / 100.0 == 2.0`.
If the current value is instead `50m`, we'll halve the number of
replicas, since `50.0 / 100.0 == 0.5`. We'll skip scaling if the ratio is
sufficiently close to 1.0 (within a globally-configurable tolerance, from
the `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).
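The base rule and the tolerance check can be sketched in a few lines of Python (an illustration of the formula above, not the controller's actual Go implementation; the function name is hypothetical):

```python
import math

def desired_replicas(current_replicas, current_value, desired_value, tolerance=0.1):
    """Sketch of the HPA base scaling rule with the tolerance check."""
    ratio = current_value / desired_value
    # Within the tolerance band around 1.0: skip scaling entirely.
    if abs(1.0 - ratio) <= tolerance:
        return current_replicas
    return math.ceil(current_replicas * ratio)

print(desired_replicas(4, 200.0, 100.0))  # ratio 2.0 -> doubled to 8
print(desired_replicas(4, 50.0, 100.0))   # ratio 0.5 -> halved to 2
print(desired_replicas(4, 105.0, 100.0))  # ratio 1.05 -> within tolerance, stays 4
```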

When a `targetAverageValue` or `targetAverageUtilization` is specified,
the `currentMetricValue` is computed by taking the average of the given
metric across all Pods in the HorizontalPodAutoscaler's scale target.
Before checking the tolerance and deciding on the final values, we take
pod readiness and missing metrics into consideration, however.

All Pods with a deletion timestamp set (i.e. Pods in the process of being
shut down) and all failed Pods are discarded.

If a particular Pod is missing metrics, it is set aside for later; Pods
with missing metrics will be used to adjust the final scaling amount.

When scaling on CPU, if any pod has yet to become ready (i.e. it's still
initializing) *or* the most recent metric point for the pod was before it
became ready, that pod is set aside as well.

Due to technical constraints, the HorizontalPodAutoscaler controller
cannot exactly determine the first time a pod becomes ready when
determining whether to set aside certain CPU metrics. Instead, it
considers a Pod "not yet ready" if it's unready and transitioned to
unready within a short, configurable window of time since it started.
This value is configured with the `--horizontal-pod-autoscaler-initial-readiness-delay` flag, and its default is 30
seconds. Once a pod has become ready, it considers any transition to
ready to be the first if it occurred within a longer, configurable time
since it started. This value is configured with the `--horizontal-pod-autoscaler-cpu-initialization-period` flag, and its
default is 5 minutes.

The `currentMetricValue / desiredMetricValue` base scale ratio is then
calculated using the remaining pods not set aside or discarded from above.

If there were any missing metrics, we recompute the average more
conservatively, assuming those pods were consuming 100% of the desired
value in case of a scale down, and 0% in case of a scale up. This dampens
the magnitude of any potential scale.

Furthermore, if any not-yet-ready pods were present, and we would have
scaled up without factoring in missing metrics or not-yet-ready pods, we
conservatively assume the not-yet-ready pods are consuming 0% of the
desired metric, further dampening the magnitude of a scale up.
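A rough sketch of this conservative recomputation, under simplifying assumptions (the real controller tracks per-pod state and operates on resource requests; all names here are made up for illustration):

```python
import math

def conservative_replicas(ready_values, num_missing, num_unready,
                          current_replicas, desired_value, tolerance=0.1):
    """Recompute the scale ratio conservatively: on a potential scale up,
    missing and not-yet-ready pods count as consuming 0% of the desired
    value; on a potential scale down, missing pods count as 100%."""
    ratio = (sum(ready_values) / len(ready_values)) / desired_value
    if ratio > 1.0:
        # Potential scale up: treat missing and not-yet-ready pods as 0.
        total = sum(ready_values)
        count = len(ready_values) + num_missing + num_unready
    else:
        # Potential scale down: treat missing pods as 100% of desired.
        total = sum(ready_values) + num_missing * desired_value
        count = len(ready_values) + num_missing
    new_ratio = (total / count) / desired_value
    # Skip scaling if the adjusted ratio reverses direction or lands
    # inside the tolerance band.
    if abs(1.0 - new_ratio) <= tolerance or (ratio > 1.0) != (new_ratio > 1.0):
        return current_replicas
    return math.ceil(current_replicas * new_ratio)

# Three ready pods at 200m against a 100m target, one pod still unready:
print(conservative_replicas([200.0, 200.0, 200.0], 0, 1, 4, 100.0))  # 6, not 8
```

The dampening is visible in the example: without the unready pod the raw ratio of 2.0 would suggest 8 replicas, but counting that pod as 0% lowers the ratio to 1.5.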

After factoring in the not-yet-ready pods and missing metrics, we
recalculate the usage ratio. If the new ratio reverses the scale
direction, or is within the tolerance, we skip scaling. Otherwise, we use
the new ratio to scale.

Note that the *original* value for the average utilization is reported
back via the HorizontalPodAutoscaler status, without factoring in the
not-yet-ready pods or missing metrics, even when the new usage ratio is
used.

If multiple metrics are specified in a HorizontalPodAutoscaler, this
calculation is done for each metric, and then the largest of the desired
replica counts is chosen. If any of these metrics cannot be converted
into a desired replica count (e.g. due to an error fetching the metrics
from the metrics APIs) and a scale down is suggested by the metrics which
can be fetched, scaling is skipped. This means that the HPA is still capable
of scaling up if one or more metrics give a `desiredReplicas` greater than
the current value.

Finally, just before HPA scales the target, the scale recommendation is recorded. The
controller considers all recommendations within a configurable window, choosing the
highest recommendation from within that window. This value can be configured using the `--horizontal-pod-autoscaler-downscale-stabilization` flag, which defaults to 5 minutes.
This means that scaledowns will occur gradually, smoothing out the impact of rapidly
fluctuating metric values.
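A minimal sketch of that stabilization window (timestamps and window expiry are omitted, and the names are hypothetical):

```python
def stabilized_scale(window, proposed):
    """Record the new recommendation and scale to the highest
    recommendation still inside the window, so scale-downs are applied
    gradually rather than on every dip in the metric."""
    window.append(proposed)
    return max(window)

# Recommendations recorded within the stabilization period:
window = []
print(stabilized_scale(window, 10))  # 10
print(stabilized_scale(window, 4))   # still 10: the drop is held back
print(stabilized_scale(window, 2))   # still 10
```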

## API Object

The Horizontal Pod Autoscaler is an API resource in the Kubernetes `autoscaling` API group.
The current stable version, which only includes support for CPU autoscaling,
can be found in the `autoscaling/v1` API version.

The beta version, which includes support for scaling on memory and custom metrics,
can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2`
are preserved as annotations when working with `autoscaling/v1`.

More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).

## Support for Horizontal Pod Autoscaler in kubectl

Horizontal Pod Autoscaler, like every API resource, is supported in a standard way by `kubectl`.
We can create a new autoscaler using the `kubectl create` command.
We can list autoscalers by `kubectl get hpa` and get detailed description by `kubectl describe hpa`.
Finally, we can delete an autoscaler using `kubectl delete hpa`.

In addition, there is a special `kubectl autoscale` command for easy creation of a Horizontal Pod Autoscaler.
For instance, executing `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`
will create an autoscaler for replica set *foo*, with target CPU utilization set to `80%`
and the number of replicas between 2 and 5.
The detailed documentation of `kubectl autoscale` can be found [here](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).

## Autoscaling during rolling update

Currently in Kubernetes, it is possible to perform a [rolling update](/docs/tasks/run-application/rolling-update-replication-controller/) by managing replication controllers directly,
or by using the deployment object, which manages the underlying replica sets for you.
Horizontal Pod Autoscaler only supports the latter approach: the Horizontal Pod Autoscaler is bound to the deployment object,
it sets the size for the deployment object, and the deployment is responsible for setting sizes of underlying replica sets.

Horizontal Pod Autoscaler does not work with rolling update using direct manipulation of replication controllers,
i.e. you cannot bind a Horizontal Pod Autoscaler to a replication controller and do rolling update (e.g. using `kubectl rolling-update`).
The reason this doesn't work is that when rolling update creates a new replication controller,
the Horizontal Pod Autoscaler will not be bound to the new replication controller.

## Support for cooldown/delay

When managing the scale of a group of replicas using the Horizontal Pod Autoscaler,
it is possible that the number of replicas keeps fluctuating frequently due to the
dynamic nature of the metrics evaluated. This is sometimes referred to as *thrashing*.

Starting from v1.6, a cluster operator can mitigate this problem by tuning
the global HPA settings exposed as flags for the `kube-controller-manager` component:

Starting from v1.12, a new algorithmic update removes the need for the
upscale delay.

- `--horizontal-pod-autoscaler-downscale-stabilization`: The value for this option is a
  duration that specifies how long the autoscaler has to wait before another
  downscale operation can be performed after the current one has completed.
  The default value is 5 minutes (`5m0s`).

{{< note >}}
When tuning these parameter values, a cluster operator should be aware of the possible
consequences. If the delay (cooldown) value is set too long, there could be complaints
that the Horizontal Pod Autoscaler is not responsive to workload changes. However, if
the delay value is set too short, the scale of the replicas set may keep thrashing as
usual.
{{< /note >}}

## Support for multiple metrics

Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API
version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod
Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the
proposed scales will be used as the new scale.

## Support for custom metrics

{{< note >}}
Kubernetes 1.2 added alpha support for scaling based on application-specific metrics using special annotations.
Support for these annotations was removed in Kubernetes 1.6 in favor of the new autoscaling API. While the old method for collecting
custom metrics is still available, these metrics will not be available for use by the Horizontal Pod Autoscaler, and the former
annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod Autoscaler controller.
{{< /note >}}

Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler.
You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API.
Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics.

See [Support for metrics APIs](#support-for-metrics-apis) for the requirements.

## Support for metrics APIs

By default, the HorizontalPodAutoscaler controller retrieves metrics from a series of APIs. In order for it to access these
APIs, cluster administrators must ensure that:

* The [API aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) is enabled.

* The corresponding APIs are registered:

  * For resource metrics, this is the `metrics.k8s.io` API, generally provided by [metrics-server](https://github.com/kubernetes-incubator/metrics-server).
    It can be launched as a cluster addon.

  * For custom metrics, this is the `custom.metrics.k8s.io` API. It's provided by "adapter" API servers provided by metrics solution vendors.
    Check with your metrics pipeline, or the [list of known solutions](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api).
    If you would like to write your own, check out the [boilerplate](https://github.com/kubernetes-incubator/custom-metrics-apiserver) to get started.

  * For external metrics, this is the `external.metrics.k8s.io` API. It may be provided by the custom metrics adapters provided above.

* The `--horizontal-pod-autoscaler-use-rest-clients` flag is `true` or unset. Setting this to false switches to Heapster-based autoscaling, which is deprecated.

For more information on these different metrics paths and how they differ please see the relevant design proposals for
[the HPA V2](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/hpa-v2.md),
[custom.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md)
and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md).

For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects).

{{% /capture %}}

{{% capture whatsnext %}}

* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
* kubectl autoscale command: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
* Usage example of [Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).

{{% /capture %}}

@@ -41,13 +41,13 @@ Before walking through each tutorial, you may want to bookmark the

 * [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)

-## Застосунки без стану (Stateless Applications)
+## Застосунки без стану (Stateless Applications) {#застосунки-без-стану}

 * [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)

 * [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)

-## Застосунки зі станом (Stateful Applications)
+## Застосунки зі станом (Stateful Applications) {#застосунки-зі-станом}

 * [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)

@@ -109,12 +109,12 @@ tutorial has only one Container. A Kubernetes
 Pod and restarts the Pod's Container if it terminates. Deployments are the
 recommended way to manage the creation and scaling of Pods.
 -->
-[*Под*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Под має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Пода і перезапускає контейнер Пода, якщо контейнер перестає працювати. Створювати і масштабувати Поди рекомендується за допомогою Deployment'ів.
+[*Pod*](/docs/concepts/workloads/pods/pod/) у Kubernetes -- це група з одного або декількох контейнерів, що об'єднані разом з метою адміністрування і роботи у мережі. У цьому навчальному матеріалі Pod має лише один контейнер. Kubernetes [*Deployment*](/docs/concepts/workloads/controllers/deployment/) перевіряє стан Pod'а і перезапускає контейнер Pod'а, якщо контейнер перестає працювати. Створювати і масштабувати Pod'и рекомендується за допомогою Deployment'ів.

 <!--1. Use the `kubectl create` command to create a Deployment that manages a Pod. The
 Pod runs a Container based on the provided Docker image.
 -->
-1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Подом. Под запускає контейнер на основі наданого Docker образу.
+1. За допомогою команди `kubectl create` створіть Deployment, який керуватиме Pod'ом. Pod запускає контейнер на основі наданого Docker образу.

 ```shell
 kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
@@ -139,7 +139,7 @@ Pod runs a Container based on the provided Docker image.

 <!--3. View the Pod:
 -->
-3. Перегляньте інформацію про запущені Поди:
+3. Перегляньте інформацію про запущені Pod'и:

 ```shell
 kubectl get pods
@@ -176,18 +176,18 @@

 <!--## Create a Service
 -->
-## Створення Сервісу
+## Створення Service

 <!--By default, the Pod is only accessible by its internal IP address within the
 Kubernetes cluster. To make the `hello-node` Container accessible from outside the
 Kubernetes virtual network, you have to expose the Pod as a
 Kubernetes [*Service*](/docs/concepts/services-networking/service/).
 -->
-За умовчанням, Под доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Под необхідно відкрити як Kubernetes [*Сервіс*](/docs/concepts/services-networking/service/).
+За умовчанням, Pod доступний лише за внутрішньою IP-адресою у межах Kubernetes кластера. Для того, щоб контейнер `hello-node` став доступний за межами віртуальної мережі Kubernetes, Pod необхідно відкрити як Kubernetes [*Service*](/docs/concepts/services-networking/service/).

 <!--1. Expose the Pod to the public internet using the `kubectl expose` command:
 -->
-1. Відкрийте Под для публічного доступу з інтернету за допомогою команди `kubectl expose`:
+1. Відкрийте Pod для публічного доступу з інтернету за допомогою команди `kubectl expose`:

 ```shell
 kubectl expose deployment hello-node --type=LoadBalancer --port=8080
@@ -196,11 +196,11 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/).
 <!--The `--type=LoadBalancer` flag indicates that you want to expose your Service
 outside of the cluster.
 -->
-Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Сервісу за межами кластера.
+Прапорець `--type=LoadBalancer` вказує, що ви хочете відкрити доступ до Service за межами кластера.

 <!--2. View the Service you just created:
 -->
-2. Перегляньте інформацію за Сервісом, який ви щойно створили:
+2. Перегляньте інформацію про Service, який ви щойно створили:

 ```shell
 kubectl get services
@@ -221,7 +221,7 @@
 the `LoadBalancer` type makes the Service accessible through the `minikube service`
 command.
 -->
-Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Сервісу надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Сервіс доступним ззовні за допомогою команди `minikube service`.
+Для хмарних провайдерів, що підтримують балансування навантаження, доступ до Service надається через зовнішню IP-адресу. Для Minikube, тип `LoadBalancer` робить Service доступним ззовні за допомогою команди `minikube service`.

 <!--3. Run the following command:
 -->
@@ -301,7 +301,7 @@ Minikube має ряд вбудованих {{< glossary_tooltip text="розш

 <!--3. View the Pod and Service you just created:
 -->
-3. Перегляньте інформацію про Под і Сервіс, які ви щойно створили:
+3. Перегляньте інформацію про Pod і Service, які ви щойно створили:

 ```shell
 kubectl get pod,svc -n kube-system
@@ -389,6 +389,6 @@ minikube delete
 * Дізнайтеся більше про [розгортання застосунків](/docs/user-guide/deploying-applications/).
 <!--* Learn more about [Service objects](/docs/concepts/services-networking/service/).
 -->
-* Дізнайтеся більше про [об'єкти сервісу](/docs/concepts/services-networking/service/).
+* Дізнайтеся більше про [об'єкти Service](/docs/concepts/services-networking/service/).

 {{% /capture %}}

@@ -43,16 +43,16 @@ weight: 10
 </p>
 -->
 <p>
-Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити <b>Deployment</b> конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Поди для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Поди по окремих вузлах кластера.
+Після того, як ви запустили Kubernetes кластер, ви можете розгортати в ньому контейнеризовані застосунки. Для цього вам необхідно створити <b>Deployment</b> конфігурацію. Вона інформує Kubernetes, як створювати і оновлювати Pod'и для вашого застосунку. Після того, як ви створили Deployment, Kubernetes master розподіляє ці Pod'и по окремих вузлах кластера.
 </p>

 <!--<p>Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. <b>This provides a self-healing mechanism to address machine failure or maintenance.</b></p>
 -->
-<p>Після створення Поди застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Под, зупинив роботу або був видалений, Deployment контролер переміщає цей Под на інший вузол кластера. <b>Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.</b></p>
+<p>Після створення Pod'и застосунку безперервно моніторяться контролером Kubernetes Deployment. Якщо вузол, на якому розміщено Pod, зупинив роботу або був видалений, Deployment контролер переміщає цей Pod на інший вузол кластера. <b>Так працює механізм самозцілення, що підтримує робочий стан кластера у разі апаратного збою чи технічних робіт.</b></p>

 <!--<p>In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure. By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management. </p>
 -->
-<p>До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Подів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками. </p>
+<p>До появи оркестрації застосунки часто запускали за допомогою скриптів установлення. Однак скрипти не давали можливості відновити працездатний стан застосунку після апаратного збою. Завдяки створенню Pod'ів та їхньому запуску на вузлах кластера, Kubernetes Deployment надає цілковито інший підхід до управління застосунками. </p>

 </div>

@@ -74,7 +74,7 @@ weight: 10
 </i></p>
 -->
 <p><i>
-Deployment відповідає за створення і оновлення Подів для вашого застосунку
+Deployment відповідає за створення і оновлення Pod'ів для вашого застосунку
 </i></p>
 </div>
 </div>

@@ -1,5 +1,5 @@
 ---
-title: Ознайомлення з Подами і вузлами (nodes)
+title: Ознайомлення з Pod'ами і вузлами (nodes)
 weight: 10
 ---

@@ -25,7 +25,7 @@ weight: 10
 <ul>
 <!--<li>Learn about Kubernetes Pods.</li>
 -->
-<li>Дізнатися, що таке Поди Kubernetes.</li>
+<li>Дізнатися, що таке Pod'и Kubernetes.</li>
 <!--<li>Learn about Kubernetes Nodes.</li>
 -->
 <li>Дізнатися, що таке вузли Kubernetes.</li>
@@ -38,35 +38,35 @@ weight: 10
 <div class="col-md-8">
 <!--<h2>Kubernetes Pods</h2>
 -->
-<h2>Поди Kubernetes</h2>
+<h2>Pod'и Kubernetes</h2>
 <!--<p>When you created a Deployment in Module <a href="/docs/tutorials/kubernetes-basics/deploy-intro/">2</a>, Kubernetes created a <b>Pod</b> to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:</p>
 -->
-<p>Коли ви створили Deployment у модулі <a href="/docs/tutorials/kubernetes-basics/deploy-intro/">2</a>, Kubernetes створив <b>Под</b>, щоб розмістити ваш застосунок. Под - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:</p>
+<p>Коли ви створили Deployment у модулі <a href="/docs/tutorials/kubernetes-basics/deploy-intro/">2</a>, Kubernetes створив <b>Pod</b>, щоб розмістити ваш застосунок. Pod - це абстракція в Kubernetes, що являє собою групу з одного або декількох контейнерів застосунку (як Docker або rkt) і ресурси, спільні для цих контейнерів. До цих ресурсів належать:</p>
 <ul>
 <!--<li>Shared storage, as Volumes</li>
 -->
 <li>Спільні сховища даних, або Volumes</li>
 <!--<li>Networking, as a unique cluster IP address</li>
 -->
-<li>Мережа, адже кожен Под у кластері має унікальну IP-адресу</li>
+<li>Мережа, адже кожен Pod у кластері має унікальну IP-адресу</li>
 <!--<li>Information about how to run each container, such as the container image version or specific ports to use</li>
 -->
 <li>Інформація з запуску кожного контейнера, така як версія образу контейнера або використання певних портів</li>
 </ul>
 <!--<p>A Pod models an application-specific "logical host" and can contain different application containers which are relatively tightly coupled. For example, a Pod might include both the container with your Node.js app as well as a different container that feeds the data to be published by the Node.js webserver. The containers in a Pod share an IP Address and port space, are always co-located and co-scheduled, and run in a shared context on the same Node.</p>
 -->
-<p>Под моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Поді може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Пода мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.</p>
+<p>Pod моделює специфічний для даного застосунку "логічний хост" і може містити різні, але доволі щільно зв'язані контейнери. Наприклад, в одному Pod'і може бути контейнер з вашим Node.js застосунком та інший контейнер, що передає дані для публікації Node.js вебсерверу. Контейнери в межах Pod'а мають спільну IP-адресу і порти, завжди є сполученими, плануються для запуску разом і запускаються у спільному контексті на одному вузлі.</p>

 <!--<p>Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.</p>
 -->
-<p>Под є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Поди вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Под прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Поди розподіляються по інших доступних вузлах кластера.</p>
+<p>Pod є неподільною одиницею платформи Kubernetes. Коли ви створюєте Deployment у Kubernetes, цей Deployment створює Pod'и вже з контейнерами всередині, на відміну від створення контейнерів окремо. Кожен Pod прив'язаний до вузла, до якого його було розподілено, і лишається на ньому до припинення роботи (згідно з політикою перезапуску) або видалення. У разі відмови вузла ідентичні Pod'и розподіляються по інших доступних вузлах кластера.</p>

 </div>
 <div class="col-md-4">
 <div class="content__box content__box_lined">
 <h3>Зміст:</h3>
 <ul>
-<li>Поди</li>
+<li>Pod'и</li>
 <li>Вузли</li>
 <li>Основні команди kubectl</li>
 </ul>
@ -77,7 +77,7 @@ weight: 10
|
|||
</i></p>
|
||||
-->
|
||||
<p><i>
|
||||
Под - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити.
|
||||
Pod - це група з одного або декількох контейнерів (таких як Docker або rkt), що має спільне сховище даних (volumes), унікальну IP-адресу і містить інформацію як їх запустити.
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
|
@@ -86,7 +86,7 @@ weight: 10
<div class="row">
<div class="col-md-8">
-<h2 style="color: #3771e3;">Узагальнена схема Подів</h2>
+<h2 style="color: #3771e3;">Узагальнена схема Pod'ів</h2>
</div>
</div>
@@ -104,7 +104,7 @@ weight: 10
<h2>Вузли</h2>
<!--<p>A Pod always runs on a <b>Node</b>. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.</p>
-->
-<p>Под завжди запускається на <b>вузлі</b>. Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Подів. Kubernetes master автоматично розподіляє Поди по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.</p>
+<p>Pod завжди запускається на <b>вузлі</b>. Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. Функціонування кожного вузла контролюється master'ом. Вузол може мати декілька Pod'ів. Kubernetes master автоматично розподіляє Pod'и по вузлах кластера з урахуванням ресурсів, наявних на кожному вузлі.</p>
<!--<p>Every Kubernetes Node runs at least:</p>
-->
@@ -112,7 +112,7 @@ weight: 10
<ul>
<!--<li>Kubelet, a process responsible for communication between the Kubernetes Master and the Node; it manages the Pods and the containers running on a machine.</li>
-->
-<li>kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Поди і контейнери, запущені на машині.</li>
+<li>kubelet - процес, що забезпечує обмін даними між Kubernetes master і робочим вузлом; kubelet контролює Pod'и і контейнери, запущені на машині.</li>
<!--<li>A container runtime (like Docker, rkt) responsible for pulling the container image from a registry, unpacking the container, and running the application.</li>
-->
<li>оточення для контейнерів (таке як Docker, rkt), що забезпечує завантаження образу контейнера з реєстру, розпакування контейнера і запуск застосунку.</li>
@@ -124,7 +124,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i> Containers should only be scheduled together in a single Pod if they are tightly coupled and need to share resources such as disk. </i></p>
-->
-<p><i> Контейнери повинні бути разом в одному Поді, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск. </i></p>
+<p><i> Контейнери повинні бути разом в одному Pod'і, лише якщо вони щільно зв'язані і мають спільні ресурси, такі як диск. </i></p>
</div>
</div>
</div>
@@ -161,10 +161,10 @@ weight: 10
<li><b>kubectl describe</b> - показати детальну інформацію про ресурс</li>
<!--<li><b>kubectl logs</b> - print the logs from a container in a pod</li>
-->
-<li><b>kubectl logs</b> - вивести логи контейнера, розміщеного в Поді</li>
+<li><b>kubectl logs</b> - вивести логи контейнера, розміщеного в Pod'і</li>
<!--<li><b>kubectl exec</b> - execute a command on a container in a pod</li>
-->
-<li><b>kubectl exec</b> - виконати команду в контейнері, розміщеному в Поді</li>
+<li><b>kubectl exec</b> - виконати команду в контейнері, розміщеному в Pod'і</li>
</ul>
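As an editor's aside: the kubectl commands listed in this hunk can be exercised against any running cluster. A minimal sketch, assuming a Deployment from the bootcamp tutorial already exists (the object names here are illustrative, not taken from this diff):

```shell
# List running Pods and capture the name of the first one
kubectl get pods
POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')

# Show detailed information about that Pod
kubectl describe pod "$POD_NAME"

# Print the logs of its (single) container
kubectl logs "$POD_NAME"

# Execute a command inside the container
kubectl exec "$POD_NAME" -- env
```

These commands require a reachable cluster (e.g. minikube), so outputs will vary by environment.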
<!--<p>You can use these commands to see when applications were deployed, what their current statuses are, where they are running and what their configurations are.</p>
@@ -180,7 +180,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i> A node is a worker machine in Kubernetes and may be a VM or physical machine, depending on the cluster. Multiple Pods can run on one Node. </i></p>
-->
-<p><i> Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Подів. </i></p>
+<p><i> Вузол - це робоча машина в Kubernetes, віртуальна або фізична, в залежності від кластера. На одному вузлі можуть бути запущені декілька Pod'ів. </i></p>
</div>
</div>
</div>
@@ -41,35 +41,35 @@ weight: 10
<!--<p>Kubernetes <a href="/docs/concepts/workloads/pods/pod-overview/">Pods</a> are mortal. Pods in fact have a <a href="/docs/concepts/workloads/pods/pod-lifecycle/">lifecycle</a>. When a worker node dies, the Pods running on the Node are also lost. A <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> might then dynamically drive the cluster back to desired state via creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are exchangeable; the front-end system should not care about backend replicas or even if a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.</p>
-->
-<p><a href="/docs/concepts/workloads/pods/pod-overview/">Поди</a> Kubernetes "смертні" і мають власний <a href="/docs/concepts/workloads/pods/pod-lifecycle/">життєвий цикл</a>. Коли робочий вузол припиняє роботу, ми також втрачаємо всі Поди, запущені на ньому. <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> здатна динамічно повернути кластер до бажаного стану шляхом створення нових Подів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Поду. Водночас, кожний Под у Kubernetes кластері має унікальну IP-адресу, навіть Поди на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Подами для того, щоб ваші застосунки продовжували працювати.</p>
+<p><a href="/docs/concepts/workloads/pods/pod-overview/">Pod'и</a> Kubernetes "смертні" і мають власний <a href="/docs/concepts/workloads/pods/pod-lifecycle/">життєвий цикл</a>. Коли робочий вузол припиняє роботу, ми також втрачаємо всі Pod'и, запущені на ньому. <a href="/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> здатна динамічно повернути кластер до бажаного стану шляхом створення нових Pod'ів, забезпечуючи безперебійність роботи вашого застосунку. Як інший приклад, візьмемо бекенд застосунку для обробки зображень із трьома репліками. Ці репліки взаємозамінні; система фронтенду не повинна зважати на репліки бекенду чи на втрату та перестворення Pod'а. Водночас, кожний Pod у Kubernetes кластері має унікальну IP-адресу, навіть Pod'и на одному вузлі. Відповідно, має бути спосіб автоматично синхронізувати зміни між Pod'ами для того, щоб ваші застосунки продовжували працювати.</p>
<!--<p>A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML <a href="/docs/concepts/configuration/overview/#general-configuration-tips">(preferred)</a> or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a <i>LabelSelector</i> (see below for why you might want a Service without including <code>selector</code> in the spec).</p>
-->
-<p>Сервіс у Kubernetes - це абстракція, що визначає логічний набір Подів і політику доступу до них. Сервіси уможливлюють слабку зв'язаність між залежними Подами. Для визначення Сервісу використовують YAML-файл <a href="/docs/concepts/configuration/overview/#general-configuration-tips">(рекомендовано)</a> або JSON, як для решти об'єктів Kubernetes. Набір Подів, призначених для Сервісу, зазвичай визначається через <i>LabelSelector</i> (нижче пояснюється, чому параметр <code>selector</code> іноді не включають у специфікацію сервісу).</p>
+<p>Service у Kubernetes - це абстракція, що визначає логічний набір Pod'ів і політику доступу до них. Services уможливлюють слабку зв'язаність між залежними Pod'ами. Для визначення Service використовують YAML-файл <a href="/docs/concepts/configuration/overview/#general-configuration-tips">(рекомендовано)</a> або JSON, як для решти об'єктів Kubernetes. Набір Pod'ів, призначених для Service, зазвичай визначається через <i>LabelSelector</i> (нижче пояснюється, чому параметр <code>selector</code> іноді не включають у специфікацію Service).</p>
<!--<p>Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a <code>type</code> in the ServiceSpec:</p>
-->
-<p>Попри те, що кожен Под має унікальний IP, ці IP-адреси не видні за межами кластера без Сервісу. Сервіси уможливлюють надходження трафіка до ваших застосунків. Відкрити Сервіс можна по-різному, вказавши потрібний <code>type</code> у ServiceSpec:</p>
+<p>Попри те, що кожен Pod має унікальний IP, ці IP-адреси не видні за межами кластера без Service. Services уможливлюють надходження трафіка до ваших застосунків. Відкрити Service можна по-різному, вказавши потрібний <code>type</code> у ServiceSpec:</p>
<ul>
<!--<li><i>ClusterIP</i> (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.</li>
-->
-<li><i>ClusterIP</i> (типове налаштування) - відкриває доступ до Сервісу у кластері за внутрішнім IP. Цей тип робить Сервіс доступним лише у межах кластера.</li>
+<li><i>ClusterIP</i> (типове налаштування) - відкриває доступ до Service у кластері за внутрішнім IP. Цей тип робить Service доступним лише у межах кластера.</li>
<!--<li><i>NodePort</i> - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Superset of ClusterIP.</li>
-->
-<li><i>NodePort</i> - відкриває доступ до Сервісу на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Сервіс доступним поза межами кластера, використовуючи <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Є надмножиною відносно ClusterIP.</li>
+<li><i>NodePort</i> - відкриває доступ до Service на однаковому порту кожного обраного вузла в кластері, використовуючи NAT. Робить Service доступним поза межами кластера, використовуючи <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>. Є надмножиною відносно ClusterIP.</li>
<!--<li><i>LoadBalancer</i> - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.</li>
-->
-<li><i>LoadBalancer</i> - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Сервісу статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.</li>
+<li><i>LoadBalancer</i> - створює зовнішній балансувальник навантаження у хмарі (за умови хмарної інфраструктури) і призначає Service статичну зовнішню IP-адресу. Є надмножиною відносно NodePort.</li>
<!--<li><i>ExternalName</i> - Exposes the Service using an arbitrary name (specified by <code>externalName</code> in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of <code>kube-dns</code>.</li>
-->
-<li><i>ExternalName</i> - відкриває доступ до Сервісу, використовуючи довільне ім'я (визначається параметром <code>externalName</code> у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії <code>kube-dns</code> 1.7 і вище.</li>
+<li><i>ExternalName</i> - відкриває доступ до Service, використовуючи довільне ім'я (визначається параметром <code>externalName</code> у специфікації), повертає запис CNAME. Проксі не використовується. Цей тип потребує версії <code>kube-dns</code> 1.7 і вище.</li>
</ul>
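Editor's sketch of how these `type` values are typically set when exposing a Deployment with kubectl (the Deployment and Service names are illustrative assumptions, not part of this diff):

```shell
# Expose a Deployment inside the cluster only (ClusterIP is the default type)
kubectl expose deployment/kubernetes-bootcamp --port=80 --target-port=8080

# Expose the same Deployment on every node's IP at a cluster-allocated port
kubectl expose deployment/kubernetes-bootcamp --type=NodePort \
    --port=8080 --name=bootcamp-nodeport

# Inspect which NodePort was assigned
kubectl get service bootcamp-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
```

`LoadBalancer` and `ExternalName` are requested the same way via `--type`, but take effect only where the cloud provider or DNS setup supports them.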
<!--<p>More information about the different types of Services can be found in the <a href="/docs/tutorials/services/source-ip/">Using Source IP</a> tutorial. Also see <a href="/docs/concepts/services-networking/connect-applications-service">Connecting Applications with Services</a>.</p>
-->
-<p>Більше інформації про різні типи Сервісів ви знайдете у навчальному матеріалі <a href="/docs/tutorials/services/source-ip/">Використання вихідної IP-адреси</a>. Дивіться також <a href="/docs/concepts/services-networking/connect-applications-service">Поєднання застосунків з Сервісами</a>.</p>
+<p>Більше інформації про різні типи Services ви знайдете у навчальному матеріалі <a href="/docs/tutorials/services/source-ip/">Використання вихідної IP-адреси</a>. Дивіться також <a href="/docs/concepts/services-networking/connect-applications-service">Поєднання застосунків з Services</a>.</p>
<!--<p>Additionally, note that there are some use cases with Services that involve not defining <code>selector</code> in the spec. A Service created without <code>selector</code> will also not create the corresponding Endpoints object. This allows users to manually map a Service to specific endpoints. Another possibility why there may be no selector is you are strictly using <code>type: ExternalName</code>.</p>
-->
-<p>Також зауважте, що для деяких сценаріїв використання Сервісів параметр <code>selector</code> не задається у специфікації Сервісу. Сервіс, створений без визначення параметра <code>selector</code>, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Сервіс на конкретні кінцеві точки (endpoints). Інший випадок, коли селектор може бути не потрібний - використання строго заданого параметра <code>type: ExternalName</code>.</p>
+<p>Також зауважте, що для деяких сценаріїв використання Services параметр <code>selector</code> не задається у специфікації Service. Service, створений без визначення параметра <code>selector</code>, також не створюватиме відповідного Endpoint об'єкта. Це дозволяє користувачам вручну спроектувати Service на конкретні кінцеві точки (endpoints). Інший випадок, коли Селектор може бути не потрібний - використання строго заданого параметра <code>type: ExternalName</code>.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
@@ -79,10 +79,10 @@ weight: 10
<ul>
<!--<li>Exposing Pods to external traffic</li>
-->
-<li>Відкриття Подів для зовнішнього трафіка</li>
+<li>Відкриття Pod'ів для зовнішнього трафіка</li>
<!--<li>Load balancing traffic across multiple Pods</li>
-->
-<li>Балансування навантаження трафіка між Подами</li>
+<li>Балансування навантаження трафіка між Pod'ами</li>
<!--<li>Using labels</li>
-->
<li>Використання міток</li>
@@ -91,7 +91,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i>A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.</i></p>
-->
-<p><i>Сервіс Kubernetes - це шар абстракції, який визначає логічний набір Подів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Подів.</i></p>
+<p><i>Service Kubernetes - це шар абстракції, який визначає логічний набір Pod'ів і відкриває їх для зовнішнього трафіка, балансує навантаження і здійснює виявлення цих Pod'ів.</i></p>
</div>
</div>
</div>
@@ -101,7 +101,7 @@ weight: 10
<div class="col-md-8">
<!--<h3>Services and Labels</h3>
-->
-<h3>Сервіси і мітки</h3>
+<h3>Services і мітки</h3>
</div>
</div>
@@ -115,10 +115,10 @@ weight: 10
<div class="col-md-8">
<!--<p>A Service routes traffic across a set of Pods. Services are the abstraction that allow pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) is handled by Kubernetes Services.</p>
-->
-<p>Сервіс маршрутизує трафік між Подами, що входять до його складу. Сервіс - це абстракція, завдяки якій Поди в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Сервіси в Kubernetes здійснюють виявлення і маршрутизацію між залежними Подами (як наприклад, фронтенд- і бекенд-компоненти застосунку).</p>
+<p>Service маршрутизує трафік між Pod'ами, що входять до його складу. Service - це абстракція, завдяки якій Pod'и в Kubernetes "вмирають" і відтворюються, не впливаючи на роботу вашого застосунку. Services в Kubernetes здійснюють виявлення і маршрутизацію між залежними Pod'ами (як наприклад, фронтенд- і бекенд-компоненти застосунку).</p>
<!--<p>Services match a set of Pods using <a href="/docs/concepts/overview/working-with-objects/labels">labels and selectors</a>, a grouping primitive that allows logical operation on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in any number of ways:</p>
-->
-<p>Сервіси співвідносяться з набором Подів за допомогою <a href="/docs/concepts/overview/working-with-objects/labels">міток і селекторів</a> -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:</p>
+<p>Services співвідносяться з набором Pod'ів за допомогою <a href="/docs/concepts/overview/working-with-objects/labels">міток і Селекторів</a> -- примітивів групування, що роблять можливими логічні операції з об'єктами у Kubernetes. Мітки являють собою пари ключ/значення, що прикріплені до об'єктів і можуть використовуватися для різних цілей:</p>
<ul>
<!--<li>Designate objects for development, test, and production</li>
-->
@@ -136,7 +136,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i>You can create a Service at the same time you create a Deployment by using<br><code>--expose</code> in kubectl.</i></p>
-->
-<p><i>Ви можете створити Сервіс одночасно із Deployment, виконавши команду <br><code>--expose</code> в kubectl.</i></p>
+<p><i>Ви можете створити Service одночасно із Deployment, виконавши команду <br><code>--expose</code> в kubectl.</i></p>
</div>
</div>
</div>
@@ -153,7 +153,7 @@ weight: 10
<div class="col-md-8">
<!--<p>Labels can be attached to objects at creation time or later on. They can be modified at any time. Let's expose our application now using a Service and apply some labels.</p>
-->
-<p>Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Сервісу і прикріпимо мітки.</p>
+<p>Мітки можна прикріпити до об'єктів під час створення або пізніше. Їх можна змінити у будь-який час. А зараз давайте відкриємо наш застосунок за допомогою Service і прикріпимо мітки.</p>
</div>
</div>
<br>
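Editor's sketch of the label workflow this hunk describes: attaching a label after creation and then selecting objects by it (the label key/value and object names are illustrative assumptions):

```shell
# Pick a Pod to label (assumes at least one Pod is running)
POD_NAME=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')

# Attach a new label to the Pod; labels can be added or changed at any time
kubectl label pod "$POD_NAME" application-version=v1

# Confirm the label is now on the object
kubectl describe pod "$POD_NAME" | grep application-version

# Query Pods and Services by label selector (-l)
kubectl get pods -l application-version=v1
kubectl get services -l app=kubernetes-bootcamp
```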
@@ -1,5 +1,5 @@
---
-title: Запуск вашого застосунку на декількох Подах
+title: Запуск вашого застосунку на декількох Pod'ах
weight: 10
---
@@ -35,7 +35,7 @@ weight: 10
<!--<p>In the previous modules we created a <a href="/docs/concepts/workloads/controllers/deployment/">Deployment</a>, and then exposed it publicly via a <a href="/docs/concepts/services-networking/service/">Service</a>. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.</p>
-->
-<p>У попередніх модулях ми створили <a href="/docs/concepts/workloads/controllers/deployment/">Deployment</a> і відкрили його для зовнішнього трафіка за допомогою <a href="/docs/concepts/services-networking/service/">Сервісу</a>. Deployment створив лише один Под для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.</p>
+<p>У попередніх модулях ми створили <a href="/docs/concepts/workloads/controllers/deployment/">Deployment</a> і відкрили його для зовнішнього трафіка за допомогою <a href="/docs/concepts/services-networking/service/">Service</a>. Deployment створив лише один Pod для запуску нашого застосунку. Коли трафік збільшиться, нам доведеться масштабувати застосунок, аби задовольнити вимоги користувачів.</p>
<!--<p><b>Scaling</b> is accomplished by changing the number of replicas in a Deployment</p>
-->
@@ -56,7 +56,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i> You can create from the start a Deployment with multiple instances using the --replicas parameter for the kubectl run command </i></p>
-->
-<p><i> Кількість Подів можна вказати одразу при створенні Deployment'а за допомогою параметра --replicas, під час запуску команди kubectl run </i></p>
+<p><i> Кількість Pod'ів можна вказати одразу при створенні Deployment'а за допомогою параметра --replicas, під час запуску команди kubectl run </i></p>
</div>
</div>
</div>
@@ -104,11 +104,11 @@ weight: 10
<!--<p>Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports <a href="/docs/user-guide/horizontal-pod-autoscaling/">autoscaling</a> of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.</p>
-->
-<p>Масштабування Deployment'а забезпечує створення нових Подів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Подів відповідно до нового бажаного стану. Kubernetes також підтримує <a href="/docs/user-guide/horizontal-pod-autoscaling/">автоматичне масштабування</a>, однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Подів у визначеному Deployment'і.</p>
+<p>Масштабування Deployment'а забезпечує створення нових Pod'ів і їх розподілення по вузлах з доступними ресурсами. Масштабування збільшить кількість Pod'ів відповідно до нового бажаного стану. Kubernetes також підтримує <a href="/docs/user-guide/horizontal-pod-autoscaling/">автоматичне масштабування</a>, однак це виходить за межі даного матеріалу. Масштабування до нуля також можливе - це призведе до видалення всіх Pod'ів у визначеному Deployment'і.</p>
<!--<p>Running multiple instances of an application will require a way to distribute the traffic to all of them. Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment. Services will monitor continuously the running Pods using endpoints, to ensure the traffic is sent only to available Pods.</p>
-->
-<p>Запустивши застосунок на декількох Подах, необхідно розподілити між ними трафік. Сервіси мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Подами відкритого Deployment'а. Сервіси безперервно моніторять запущені Поди за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Поди.</p>
+<p>Запустивши застосунок на декількох Pod'ах, необхідно розподілити між ними трафік. Services мають інтегрований балансувальник навантаження, що розподіляє мережевий трафік між усіма Pod'ами відкритого Deployment'а. Services безперервно моніторять запущені Pod'и за допомогою кінцевих точок, для того щоб забезпечити надходження трафіка лише на доступні Pod'и.</p>
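Editor's sketch of the scaling flow described here, including scaling to zero (the Deployment name is an assumption from the bootcamp tutorial):

```shell
# Scale the Deployment from 1 to 4 replicas
kubectl scale deployment/kubernetes-bootcamp --replicas=4

# Watch the new Pods being created and scheduled across Nodes
kubectl get pods -o wide

# The Service's endpoints now list one address per available Pod
kubectl describe service/kubernetes-bootcamp | grep -i endpoints

# Scaling to zero terminates all Pods of the Deployment
kubectl scale deployment/kubernetes-bootcamp --replicas=0
```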
</div>
<div class="col-md-4">
@@ -36,12 +36,12 @@ weight: 10
<!--<p>Users expect applications to be available all the time and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. <b>Rolling updates</b> allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.</p>
-->
-<p>Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. <b>Послідовні оновлення</b> дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Подів іншими. Нові Поди розподіляються по вузлах з доступними ресурсами.</p>
+<p>Користувачі очікують від застосунків високої доступності у будь-який час, а розробники - оновлення цих застосунків декілька разів на день. У Kubernetes це стає можливим завдяки послідовному оновленню. <b>Послідовні оновлення</b> дозволяють оновити Deployment без простою, шляхом послідовної заміни одних Pod'ів іншими. Нові Pod'и розподіляються по вузлах з доступними ресурсами.</p>
<!--<p>In the previous module we scaled our application to run multiple instances. This is a requirement for performing updates without affecting application availability. By default, the maximum number of Pods that can be unavailable during the update and the maximum number of new Pods that can be created, is one. Both options can be configured to either numbers or percentages (of Pods).
In Kubernetes, updates are versioned and any Deployment update can be reverted to a previous (stable) version.</p>
-->
-<p>У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Подах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Подів, недоступних під час оновлення, і максимальна кількість нових Подів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Подів) еквіваленті.
+<p>У попередньому модулі ми масштабували наш застосунок, запустивши його на декількох Pod'ах. Масштабування - необхідна умова для проведення оновлень без шкоди для доступності застосунку. За типовими налаштуваннями, максимальна кількість Pod'ів, недоступних під час оновлення, і максимальна кількість нових Pod'ів, які можуть бути створені, дорівнює одиниці. Обидві опції можна налаштувати в числовому або відсотковому (від кількості Pod'ів) еквіваленті.
У Kubernetes оновлення версіонуються, тому кожне оновлення Deployment'а можна відкотити до попередньої (стабільної) версії.</p>
</div>
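Editor's sketch of triggering and reverting a rolling update as described in this hunk (the Deployment name and image tags are illustrative assumptions from the bootcamp tutorial):

```shell
# Start a rolling update by setting a new container image
kubectl set image deployment/kubernetes-bootcamp \
    kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

# Follow the rollout until old Pods are incrementally replaced by new ones
kubectl rollout status deployment/kubernetes-bootcamp

# Revert to the previous (stable) revision if the new version misbehaves
kubectl rollout undo deployment/kubernetes-bootcamp
```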
@@ -59,7 +59,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i>Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. </i></p>
-->
-<p><i>Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Подів іншими. </i></p>
+<p><i>Послідовне оновлення дозволяє оновити Deployment без простою шляхом послідовної заміни одних Pod'ів іншими. </i></p>
</div>
</div>
</div>
@@ -115,7 +115,7 @@ weight: 10
<!--<p>Similar to application Scaling, if a Deployment is exposed publicly, the Service will load-balance the traffic only to available Pods during the update. An available Pod is an instance that is available to the users of the application.</p>
-->
-<p>Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Сервіс розподілятиме трафік лише на доступні Поди. Під доступним мається на увазі Под, готовий до експлуатації користувачами застосунку.</p>
+<p>Як і у випадку з масштабуванням, якщо Deployment "відкритий у світ", то під час оновлення Service розподілятиме трафік лише на доступні Pod'и. Під доступним мається на увазі Pod, готовий до експлуатації користувачами застосунку.</p>
<!--<p>Rolling updates allow the following actions:</p>
-->
@@ -138,7 +138,7 @@ weight: 10
<div class="content__box content__box_fill">
<!--<p><i>If a Deployment is exposed publicly, the Service will load-balance the traffic only to available Pods during the update. </i></p>
-->
-<p><i>Якщо Deployment "відкритий у світ", то під час оновлення сервіс розподілятиме трафік лише на доступні Поди. </i></p>
+<p><i>Якщо Deployment "відкритий у світ", то під час оновлення Service розподілятиме трафік лише на доступні Pod'и. </i></p>
</div>
</div>
</div>
@@ -16,6 +16,7 @@
/pl/docs/ /pl/docs/home/ 301!
/pt/docs/ /pt/docs/home/ 301!
/ru/docs/ /ru/docs/home/ 301!
+/uk/docs/ /uk/docs/home/ 301!
/vi/docs/ /vi/docs/home/ 301!
/zh/docs/ /zh/docs/home/ 301!
/blog/2018/03/kubernetes-1.10-stabilizing-storage-security-networking/ /blog/2018/03/26/kubernetes-1.10-stabilizing-storage-security-networking/ 301!
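Once this rule is deployed, the new `/uk/docs/` redirect can be sanity-checked with a plain HTTP request; a sketch, assuming the change is live on kubernetes.io:

```shell
# Expect an HTTP 301 with a Location header pointing at /uk/docs/home/
curl -sI https://kubernetes.io/uk/docs/ | head -n 5
```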