Merge branch 'master' of github.com:kubernetes/website
commit
4a19082f93
@@ -20,21 +20,14 @@ For example, if we want to require scheduling on a node that is in the us-centra

```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["us-central1-a"]
```

@@ -44,21 +37,14 @@ Preferred rules mean that if nodes match the rules, they will be chosen first, a

```
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["us-central1-a"]
```

@@ -67,21 +53,14 @@ Node anti-affinity can be achieved by using negative operators. So for instance

```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: NotIn
          values: ["us-central1-a"]
```

@@ -99,7 +78,7 @@ The kubectl command allows you to set taints on nodes, for example:

```
kubectl taint nodes node1 key=value:NoSchedule
```

creates a taint that marks the node as unschedulable by any pods that do not have a toleration for a taint with key `key`, value `value`, and effect `NoSchedule`. (The other taint effects are `PreferNoSchedule`, which is the preferred version of `NoSchedule`, and `NoExecute`, which means any pods that are running on the node when the taint is applied will be evicted unless they tolerate the taint.) The toleration you would add to a PodSpec to have the corresponding pod tolerate this taint would look like this:

@@ -107,15 +86,11 @@ creates a taint that marks the node as unschedulable by any pods that do not hav

```
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
```

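
For context, a minimal sketch (not part of the original post) of a complete Pod manifest carrying this toleration; the pod name and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.10          # any image works; nginx matches other examples in this post
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```
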
@@ -138,21 +113,13 @@ Let’s look at an example. Say you have front-ends in service S1, and they comm

```
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: service
          operator: In
          values: ["S1"]
      topologyKey: failure-domain.beta.kubernetes.io/zone
```

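
The same structure can express anti-affinity between pods. As a hedged illustration (not part of the original diff), spreading replicas of an app across zones might look like the sketch below; the `app=web` label is an assumption:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app            # assumes the pods are labeled app=web
          operator: In
          values: ["web"]
      topologyKey: failure-domain.beta.kubernetes.io/zone
```
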
@@ -172,25 +139,15 @@ Here we have a Pod where we specify the schedulerName field:

```
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - name: nginx
    image: nginx:1.10
```

@@ -59,7 +59,7 @@ kube-apiserver \
```

Alternatively, you can enable the v1alpha1 version of the API group
with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true`.

The command-line flag `--enable-priority-and-fairness=false` will disable the
API Priority and Fairness feature, even if other flags have enabled it.

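
Once the API group is served, FlowSchema and PriorityLevelConfiguration objects can be created through it. Below is a hedged sketch of a FlowSchema, modelled on the suggested `health-for-strangers` schema; the exact values are illustrative, and the `apiVersion` must match whichever group version you enabled:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1   # or v1alpha1, matching the enabled version
kind: FlowSchema
metadata:
  name: health-for-strangers
spec:
  matchingPrecedence: 1000
  priorityLevelConfiguration:
    name: exempt
  rules:
  - subjects:
    - kind: Group
      group:
        name: system:unauthenticated
    nonResourceRules:
    - nonResourceURLs: ["/healthz", "/livez", "/readyz"]
      verbs: ["*"]
```
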
@@ -124,6 +124,6 @@ that can act as a [client for the Kubernetes API](/docs/reference/using-api/clie
you implement yourself
* using the [Operator Framework](https://operatorframework.io)
* [Publish](https://operatorhub.io/) your operator for other people to use
* Read [CoreOS' original article](https://web.archive.org/web/20170129131616/https://coreos.com/blog/introducing-operators.html) that introduced the Operator pattern (this is an archived version of the original article).
* Read an [article](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps) from Google Cloud about best practices for building Operators

@@ -58,7 +58,7 @@ Neither contention nor changes to quota will affect already created resources.
## Enabling Resource Quota

Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} `--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
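
As a hedged illustration (not part of this change), a minimal ResourceQuota object might look like the following; the name, namespace, and limits are assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # illustrative name
  namespace: demo            # illustrative namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```
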
A resource quota is enforced in a particular namespace when there is a

@@ -32,7 +32,7 @@ should range from highly restricted to highly flexible:

- **_Privileged_** - Unrestricted policy, providing the widest possible level of permissions. This
  policy allows for known privilege escalations.
- **_Baseline_** - Minimally restrictive policy while preventing known privilege
  escalations. Allows the default (minimally specified) Pod configuration.
- **_Restricted_** - Heavily restricted policy, following current Pod hardening best practices.

@@ -48,9 +48,9 @@ mechanisms (such as gatekeeper), the privileged profile may be an absence of app
rather than an instantiated policy. In contrast, for a deny-by-default mechanism (such as Pod
Security Policy) the privileged policy should enable all controls (disable all restrictions).

### Baseline

The Baseline policy is aimed at ease of adoption for common containerized workloads while
preventing known privilege escalations. This policy is targeted at application operators and
developers of non-critical applications. The following listed controls should be
enforced/disallowed:

@@ -115,7 +115,9 @@ enforced/disallowed:
<tr>
<td>AppArmor <em>(optional)</em></td>
<td>
On supported hosts, the 'runtime/default' AppArmor profile is applied by default.
The baseline policy should prevent overriding or disabling the default AppArmor
profile, or restrict overrides to an allowed set of profiles.<br>
<br><b>Restricted Fields:</b><br>
metadata.annotations['container.apparmor.security.beta.kubernetes.io/*']<br>
<br><b>Allowed Values:</b> 'runtime/default', undefined<br>

@@ -175,7 +177,7 @@ well as lower-trust users.The following listed controls should be enforced/disal
<td><strong>Policy</strong></td>
</tr>
<tr>
<td colspan="2"><em>Everything from the baseline profile.</em></td>
</tr>
<tr>
<td>Volume Types</td>

@@ -275,7 +277,7 @@ of individual policies are not defined here.

## FAQ

### Why isn't there a profile between privileged and baseline?

The three profiles defined here have a clear linear progression from most secure (restricted) to least
secure (privileged), and cover a broad set of workloads. Privileges required above the baseline

@@ -49,6 +49,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress, designed as a library to build your custom proxy.
* The [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) is an
  ingress controller for the [Traefik](https://traefik.io/traefik/) proxy.
* [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) extends Ingress with Custom Resources to bring API Management capabilities to Ingress. Tyk Operator works with the Open Source Tyk Gateway & Tyk Cloud control plane.
* [Voyager](https://appscode.com/products/voyager) is an ingress controller for
  [HAProxy](https://www.haproxy.org/#desc).

@@ -38,8 +38,7 @@ If a {{< glossary_tooltip term_id="node" >}} dies, the Pods scheduled to that no
are [scheduled for deletion](#pod-garbage-collection) after a timeout period.

Pods do not, by themselves, self-heal. If a Pod is scheduled to a
{{< glossary_tooltip text="node" term_id="node" >}} that then fails, the Pod is deleted; likewise, a Pod won't
survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a
higher-level abstraction, called a
{{< glossary_tooltip term_id="controller" text="controller" >}}, that handles the work of

@@ -267,7 +267,7 @@ Teams must merge localized content into the same release branch from which the c

An approver must maintain a development branch by keeping it current with its source branch and resolving merge conflicts. The longer a development branch stays open, the more maintenance it typically requires. Consider periodically merging development branches and opening new ones, rather than maintaining one extremely long-running development branch.

At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous development branch and the current development branch. There are two scripts for comparing upstream changes: [`upstream_changes.py`](https://github.com/kubernetes/website/tree/master/scripts#upstream_changespy) is useful for checking the changes made to a specific file, and [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/master/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.

While only approvers can open a new development branch and merge pull requests, anyone can open a pull request for a new development branch. No special permissions are required.

@@ -576,6 +576,10 @@ Avoid making promises or giving hints about the future. If you need to talk abou
an alpha feature, put the text under a heading that identifies it as alpha
information.

An exception to this rule is documentation about announced deprecations
targeting removal in future versions. One example of documentation like this
is the [Deprecated API migration guide](/docs/reference/using-api/deprecation-guide/).

### Avoid statements that will soon be out of date

Avoid words like "currently" and "new." A feature that is new today might not be

@@ -351,7 +351,7 @@ different Kubernetes components.
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 |
| `VolumeScheduling` | `true` | GA | 1.13 | - |
| `VolumeSubpath` | `true` | GA | 1.10 | - |
| `VolumeSubpathEnvExpansion` | `false` | Alpha | 1.14 | 1.14 |
| `VolumeSubpathEnvExpansion` | `true` | Beta | 1.15 | 1.16 |
| `VolumeSubpathEnvExpansion` | `true` | GA | 1.17 | - |

@@ -17,6 +17,6 @@ tags:
Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.<br>

{{< note >}}
Cluster operators are different from the [Operator pattern](https://www.openshift.com/learn/topics/operators) that extends the Kubernetes API.
{{< /note >}}

@@ -114,3 +114,143 @@ The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure tha
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.

## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}

Example: `node.kubernetes.io/windows-build=10.0.17763`

Used on: Node

When the kubelet is running on Microsoft Windows, it automatically labels its node to record the version of Windows Server in use.

The label's value is in the format "MajorVersion.MinorVersion.BuildNumber".
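
As a hedged illustration (not part of the upstream change), a workload could target nodes with a specific Windows build through a nodeSelector; the build value simply reuses the example above and the image is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-pod            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # illustrative Windows image
```
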

## service.kubernetes.io/headless {#servicekubernetesioheadless}

Example: `service.kubernetes.io/headless=""`

Used on: Service

The control plane adds this label to an Endpoints object when the owning Service is headless.

## kubernetes.io/service-name {#kubernetesioservice-name}

Example: `kubernetes.io/service-name="nginx"`

Used on: Service

Kubernetes uses this label to differentiate multiple Services. Currently used for `ELB` (Elastic Load Balancer) only.

## endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}

Example: `endpointslice.kubernetes.io/managed-by="controller"`

Used on: EndpointSlices

The label is used to indicate the controller or entity that manages an EndpointSlice. This label aims to enable different EndpointSlice objects to be managed by different controllers or entities within the same cluster.

## endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror}

Example: `endpointslice.kubernetes.io/skip-mirror="true"`

Used on: Endpoints

The label can be set to `"true"` on an Endpoints resource to indicate that the EndpointSliceMirroring controller should not mirror this resource with EndpointSlices.

## service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name}

Example: `service.kubernetes.io/service-proxy-name="foo-bar"`

Used on: Service

Setting this label on a Service delegates its handling from kube-proxy to a custom proxy with the matching name.

## experimental.windows.kubernetes.io/isolation-type

Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"`

Used on: Pod

The annotation is used to run Windows containers with Hyper-V isolation. To use the Hyper-V isolation feature and create a Hyper-V isolated container, the kubelet should be started with the feature gate `HyperVContainer=true` and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`.

{{< note >}}
You can only set this annotation on Pods that have a single container.
{{< /note >}}

## ingressclass.kubernetes.io/is-default-class

Example: `ingressclass.kubernetes.io/is-default-class: "true"`

Used on: IngressClass

When a single IngressClass resource has this annotation set to `"true"`, new Ingress resources without a class specified will be assigned this default class.
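
For example, a default IngressClass might be declared as in the hedged sketch below; the name and controller string are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: default-class                               # illustrative name
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-controller        # illustrative controller name
```
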

## kubernetes.io/ingress.class (deprecated)

{{< note >}} Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassName`. {{< /note >}}

## alpha.kubernetes.io/provided-node-ip

Example: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"`

Used on: Node

The kubelet can set this annotation on a Node to denote its configured IPv4 address.

When the kubelet is started with the "external" cloud provider, it sets this annotation on the Node to denote an IP address set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager.

**The taints listed below are always used on Nodes**

## node.kubernetes.io/not-ready

Example: `node.kubernetes.io/not-ready:NoExecute`

The node controller detects whether a node is ready by monitoring its health and adds or removes this taint accordingly.
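
Pods can control how long they remain bound to a node carrying this taint by tolerating it for a limited time. A hedged sketch of such a toleration (the 300-second value is illustrative):

```yaml
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300   # evict the pod five minutes after the taint appears
```
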

## node.kubernetes.io/unreachable

Example: `node.kubernetes.io/unreachable:NoExecute`

The node controller adds the taint to a node corresponding to the [NodeCondition](/docs/concepts/architecture/nodes/#condition) `Ready` being `Unknown`.

## node.kubernetes.io/unschedulable

Example: `node.kubernetes.io/unschedulable:NoSchedule`

The taint will be added to a node when initializing the node to avoid a race condition.

## node.kubernetes.io/memory-pressure

Example: `node.kubernetes.io/memory-pressure:NoSchedule`

The kubelet detects memory pressure based on `memory.available` and `allocatableMemory.available` observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine whether the Node condition and taint should be added or removed.

## node.kubernetes.io/disk-pressure

Example: `node.kubernetes.io/disk-pressure:NoSchedule`

The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`, `nodefs.available` and `nodefs.inodesFree` (Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine whether the Node condition and taint should be added or removed.

## node.kubernetes.io/network-unavailable

Example: `node.kubernetes.io/network-unavailable:NoSchedule`

This is initially set by the kubelet when the cloud provider used indicates a requirement for additional network configuration. Only when the route on the cloud is configured properly will the taint be removed by the cloud provider.

## node.kubernetes.io/pid-pressure

Example: `node.kubernetes.io/pid-pressure:NoSchedule`

The kubelet compares the size of `/proc/sys/kernel/pid_max` with the number of PIDs consumed by Kubernetes on a node to determine how many PIDs remain available, exposed as the `pid.available` metric. The metric is then compared to the corresponding threshold that can be set on the kubelet to determine whether the node condition and taint should be added or removed.

## node.cloudprovider.kubernetes.io/uninitialized

Example: `node.cloudprovider.kubernetes.io/uninitialized:NoSchedule`

When the kubelet is started with the "external" cloud provider, this taint is set on a node to mark it as unusable until a controller from the cloud-controller-manager initializes the node; the taint is then removed.

## node.cloudprovider.kubernetes.io/shutdown

Example: `node.cloudprovider.kubernetes.io/shutdown:NoSchedule`

If a Node is in a cloud provider specified shutdown state, the Node gets tainted accordingly with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`.

@@ -0,0 +1,270 @@
---
reviewers:
- liggitt
- lavalamp
- thockin
- smarterclayton
title: "Deprecated API Migration Guide"
weight: 45
content_type: reference
---

<!-- overview -->

As the Kubernetes API evolves, APIs are periodically reorganized or upgraded.
When APIs evolve, the old API is deprecated and eventually removed.
This page contains information you need to know when migrating from
deprecated API versions to newer and more stable API versions.

<!-- body -->

## Removed APIs by release

### v1.25

The **v1.25** release will stop serving the following deprecated API versions:

#### Event {#event-v125}

The **events.k8s.io/v1beta1** API version of Event will no longer be served in v1.25.

* Migrate manifests and API clients to use the **events.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* Notable changes in **events.k8s.io/v1**:
    * `type` is limited to `Normal` and `Warning`
    * `involvedObject` is renamed to `regarding`
    * `action`, `reason`, `reportingComponent`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events
    * use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
    * use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
    * use `series.count` instead of the deprecated `count` field (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
    * use `reportingComponent` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
    * use `reportingInstance` instead of the deprecated `source.host` field (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)

#### RuntimeClass {#runtimeclass-v125}

RuntimeClass in the **node.k8s.io/v1beta1** API version will no longer be served in v1.25.

* Migrate manifests and API clients to use the **node.k8s.io/v1** API version, available since v1.20.
* All existing persisted objects are accessible via the new API
* No notable changes

### v1.22

The **v1.22** release will stop serving the following deprecated API versions:

#### Webhook resources {#webhook-resources-v122}

The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration and ValidatingWebhookConfiguration will no longer be served in v1.22.

* Migrate manifests and API clients to use the **admissionregistration.k8s.io/v1** API version, available since v1.16.
* All existing persisted objects are accessible via the new APIs
* Notable changes (see the sketch after this list):
    * `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
    * `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
    * `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
    * `webhooks[*].sideEffects` default value is removed, and the field made required, and only `None` and `NoneOnDryRun` are permitted for v1
    * `webhooks[*].admissionReviewVersions` default value is removed and the field made required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
    * `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
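
A hedged sketch of a v1 webhook configuration with the now-required fields set explicitly; the names, namespace, and rules are illustrative assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-webhook              # illustrative name
webhooks:
- name: validate.example.com         # must be unique within the list
  admissionReviewVersions: ["v1"]    # now required
  sideEffects: None                  # now required; None or NoneOnDryRun
  failurePolicy: Fail                # the v1 default
  clientConfig:
    service:
      namespace: default             # illustrative service reference
      name: example-webhook-svc
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```
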

#### CustomResourceDefinition {#customresourcedefinition-v122}

The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition will no longer be served in v1.22.

* Migrate manifests and API clients to use the **apiextensions.k8s.io/v1** API version, available since v1.16.
* All existing persisted objects are accessible via the new API
* Notable changes (see the sketch after this list):
    * `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
    * `spec.version` is removed in v1; use `spec.versions` instead
    * `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
    * `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
    * `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
    * `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
    * `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
    * `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects, and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
    * `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
    * In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1 (fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
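
A hedged sketch of a minimal v1 CustomResourceDefinition reflecting these changes; the group and kind are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # illustrative group/plural
spec:
  group: example.com
  scope: Namespaced                  # must now be specified explicitly
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:               # structural schema is now required
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```
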

#### APIService {#apiservice-v122}

The **apiregistration.k8s.io/v1beta1** API version of APIService will no longer be served in v1.22.

* Migrate manifests and API clients to use the **apiregistration.k8s.io/v1** API version, available since v1.10.
* All existing persisted objects are accessible via the new API
* No notable changes

#### TokenReview {#tokenreview-v122}

The **authentication.k8s.io/v1beta1** API version of TokenReview will no longer be served in v1.22.

* Migrate manifests and API clients to use the **authentication.k8s.io/v1** API version, available since v1.6.
* No notable changes

#### SubjectAccessReview resources {#subjectaccessreview-resources-v122}

The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview, SelfSubjectAccessReview, and SubjectAccessReview will no longer be served in v1.22.

* Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6.
* Notable changes:
    * `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709))

#### CertificateSigningRequest {#certificatesigningrequest-v122}

The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest will no longer be served in v1.22.

* Migrate manifests and API clients to use the **certificates.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* Notable changes in `certificates.k8s.io/v1`:
    * For API clients requesting certificates:
        * `spec.signerName` is now required (see [known Kubernetes signers](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
        * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
    * For API clients approving or signing certificates:
        * `status.conditions` may not contain duplicate types
        * `status.conditions[*].status` is now required
        * `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks

#### Lease {#lease-v122}

The **coordination.k8s.io/v1beta1** API version of Lease will no longer be served in v1.22.

* Migrate manifests and API clients to use the **coordination.k8s.io/v1** API version, available since v1.14.
* All existing persisted objects are accessible via the new API
* No notable changes

#### Ingress {#ingress-v122}

The **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions of Ingress will no longer be served in v1.22.

* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* Notable changes (see the sketch after this list):
    * `spec.backend` is renamed to `spec.defaultBackend`
    * The backend `serviceName` field is renamed to `service.name`
    * Numeric backend `servicePort` fields are renamed to `service.port.number`
    * String backend `servicePort` fields are renamed to `service.port.name`
    * `pathType` is now required for each specified path. Options are `Prefix`, `Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.
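
A hedged sketch of a converted `networking.k8s.io/v1` Ingress illustrating the renamed fields; the resource names, path, and ports are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress             # illustrative name
spec:
  defaultBackend:                   # was spec.backend in v1beta1
    service:
      name: fallback                # was serviceName
      port:
        number: 80                  # was a numeric servicePort
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix            # pathType is now required
        backend:
          service:
            name: app
            port:
              number: 8080
```
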

#### IngressClass {#ingressclass-v122}

The **networking.k8s.io/v1beta1** API version of IngressClass will no longer be served in v1.22.

* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
* All existing persisted objects are accessible via the new API
* No notable changes

#### RBAC resources {#rbac-resources-v122}

The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding, Role, and RoleBinding will no longer be served in v1.22.

* Migrate manifests and API clients to use the **rbac.authorization.k8s.io/v1** API version, available since v1.8.
* All existing persisted objects are accessible via the new APIs
* No notable changes

#### PriorityClass {#priorityclass-v122}

The **scheduling.k8s.io/v1beta1** API version of PriorityClass will no longer be served in v1.22.

* Migrate manifests and API clients to use the **scheduling.k8s.io/v1** API version, available since v1.14.
* All existing persisted objects are accessible via the new API
* No notable changes

#### Storage resources {#storage-resources-v122}

The **storage.k8s.io/v1beta1** API version of CSIDriver, CSINode, StorageClass, and VolumeAttachment will no longer be served in v1.22.

* Migrate manifests and API clients to use the **storage.k8s.io/v1** API version
    * CSIDriver is available in **storage.k8s.io/v1** since v1.19.
    * CSINode is available in **storage.k8s.io/v1** since v1.17.
    * StorageClass is available in **storage.k8s.io/v1** since v1.6.
    * VolumeAttachment is available in **storage.k8s.io/v1** since v1.13.
* All existing persisted objects are accessible via the new APIs
* No notable changes

### v1.16

The **v1.16** release stopped serving the following deprecated API versions:

#### NetworkPolicy {#networkpolicy-v116}

The **extensions/v1beta1** API version of NetworkPolicy is no longer served as of v1.16.

* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.8.
* All existing persisted objects are accessible via the new API

#### DaemonSet {#daemonset-v116}

The **extensions/v1beta1** and **apps/v1beta2** API versions of DaemonSet are no longer served as of v1.16.

* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
    * `spec.templateGeneration` is removed
    * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
    * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `extensions/v1beta1` was `OnDelete`)

#### Deployment {#deployment-v116}

The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of Deployment are no longer served as of v1.16.

* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes (see the sketch after this list):
    * `spec.rollbackTo` is removed
    * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
    * `spec.progressDeadlineSeconds` now defaults to `600` seconds (the default in `extensions/v1beta1` was no deadline)
    * `spec.revisionHistoryLimit` now defaults to `10` (the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
    * `maxSurge` and `maxUnavailable` now default to `25%` (the default in `extensions/v1beta1` was `1`)
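
A hedged sketch of a minimal `apps/v1` Deployment showing the now-required selector matching the template labels; the name, label, and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                     # illustrative name
spec:
  replicas: 3
  selector:                     # now required and immutable after creation
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web                # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.19       # illustrative image
```
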

#### StatefulSet {#statefulset-v116}

The **apps/v1beta1** and **apps/v1beta2** API versions of StatefulSet are no longer served as of v1.16.

* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
    * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
    * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `apps/v1beta1` was `OnDelete`)

#### ReplicaSet {#replicaset-v116}

The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions of ReplicaSet are no longer served as of v1.16.

* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
* All existing persisted objects are accessible via the new API
* Notable changes:
    * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades

## What to do

### Test with deprecated APIs disabled

You can test your clusters by starting an API server with specific API versions disabled
to simulate upcoming removals. Add the following flag to the API server startup arguments:

`--runtime-config=<group>/<version>=false`

For example:

`--runtime-config=admissionregistration.k8s.io/v1beta1=false,apiextensions.k8s.io/v1beta1,...`

### Locate use of deprecated APIs

Use [client warnings, metrics, and audit information available in 1.19+](https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings)
to locate use of deprecated APIs.

### Migrate to non-deprecated APIs

* Update custom integrations and controllers to call the non-deprecated APIs
* Change YAML files to reference the non-deprecated APIs

You can use the `kubectl-convert` command (`kubectl convert` prior to v1.20)
to automatically convert an existing object:

`kubectl-convert -f <file> --output-version <group>/<version>`.

For example, to convert an older Deployment to `apps/v1`, you can run:

`kubectl-convert -f ./my-deployment.yaml --output-version apps/v1`

Note that this may use non-ideal default values. To learn more about a specific
resource, check the Kubernetes [API reference](/docs/reference/kubernetes-api/).

@@ -78,7 +78,7 @@ kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
    config: /home/johndoe/schedconfig.yaml
    kubeconfig: /home/johndoe/kubeconfig.yaml
```

@@ -308,6 +308,7 @@ The following networking functionality is not supported on Windows nodes
* Host networking mode is not available for Windows pods
* Local NodePort access from the node itself fails (works for other nodes or external clients)
* Accessing service VIPs from nodes will be available with a future release of Windows Server
* A single service can only support up to 64 backend pods / unique destination IPs
* Overlay networking support in kube-proxy is an alpha release. In addition, it requires [KB4482887](https://support.microsoft.com/en-us/help/4482887/windows-10-update-kb4482887) to be installed on Windows Server 2019
* Local Traffic Policy and DSR mode
* Windows containers connected to l2bridge, l2tunnel, or overlay networks do not support communicating over the IPv6 stack. There is outstanding Windows platform work required to enable these network drivers to consume IPv6 addresses and subsequent Kubernetes work in kubelet, kube-proxy, and CNI plugins.

@@ -10,35 +10,40 @@ content_type: task

{{< glossary_definition term_id="etcd" length="all" prepend="etcd is a ">}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## Prerequisites

* Run etcd as a cluster of odd members.

* etcd is a leader-based distributed system. Ensure that the leader
  periodically sends heartbeats on time to all followers to keep the cluster
  stable.

* Ensure that no resource starvation occurs.

  Performance and stability of the cluster is sensitive to network and disk
  I/O. Any resource starvation can lead to heartbeat timeout, causing instability
  of the cluster. An unstable etcd indicates that no leader is elected. Under
  such circumstances, a cluster cannot make any changes to its current state,
  which implies no new pods can be scheduled.

* Keeping etcd clusters stable is critical to the stability of Kubernetes
  clusters. Therefore, run etcd clusters on dedicated machines or isolated
  environments for [guaranteed resource requirements](https://etcd.io/docs/current/op-guide/hardware/).

* The minimum recommended version of etcd to run in production is `3.2.10+`.

## Resource requirements

Operating etcd with limited resources is suitable only for testing purposes.
For deploying in production, advanced hardware configuration is required.
Before deploying etcd in production, see
[resource requirement reference](https://etcd.io/docs/current/op-guide/hardware/#example-hardware-configurations).

## Starting etcd clusters

@@ -50,33 +55,43 @@ Use a single-node etcd cluster only for testing purpose.

1. Run the following:

   ```sh
   etcd --listen-client-urls=http://$PRIVATE_IP:2379 \
     --advertise-client-urls=http://$PRIVATE_IP:2379
   ```

2. Start the Kubernetes API server with the flag
   `--etcd-servers=$PRIVATE_IP:2379`.

   Make sure `PRIVATE_IP` is set to your etcd client IP.

### Multi-node etcd cluster

For durability and high availability, run etcd as a multi-node cluster in
production and back it up periodically. A five-member cluster is recommended
in production. For more information, see
[FAQ documentation](https://etcd.io/docs/current/faq/#what-is-failure-tolerance).

Configure an etcd cluster either by static member information or by dynamic
discovery. For more information on clustering, see
[etcd clustering documentation](https://etcd.io/docs/current/op-guide/clustering/).

For an example, consider a five-member etcd cluster running with the following
client URLs: `http://$IP1:2379`, `http://$IP2:2379`, `http://$IP3:2379`,
`http://$IP4:2379`, and `http://$IP5:2379`. To start a Kubernetes API server:

1. Run the following:

   ```shell
   etcd --listen-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379 --advertise-client-urls=http://$IP1:2379,http://$IP2:2379,http://$IP3:2379,http://$IP4:2379,http://$IP5:2379
   ```

2. Start the Kubernetes API servers with the flag
   `--etcd-servers=$IP1:2379,$IP2:2379,$IP3:2379,$IP4:2379,$IP5:2379`.

   Make sure the `IP<n>` variables are set to your client IP addresses.

### Multi-node etcd cluster with load balancer

To run a load balancing etcd cluster:

@@ -87,92 +102,160 @@ To run a load balancing etcd cluster:

## Securing etcd clusters

Access to etcd is equivalent to root permission in the cluster so ideally only
the API server should have access to it. Considering the sensitivity of the
data, it is recommended to grant permission to only those nodes that require
access to etcd clusters.

To secure etcd, either set up firewall rules or use the security features
provided by etcd. etcd security features depend on x509 Public Key
Infrastructure (PKI). To begin, establish secure communication channels by
generating a key and certificate pair. For example, use key pairs `peer.key`
and `peer.cert` for securing communication between etcd members, and
`client.key` and `client.cert` for securing communication between etcd and its
clients. See the [example scripts](https://github.com/coreos/etcd/tree/master/hack/tls-setup)
provided by the etcd project to generate key pairs and CA files for client
authentication.

### Securing communication

To configure etcd with secure peer communication, specify flags
`--peer-key-file=peer.key` and `--peer-cert-file=peer.cert`, and use HTTPS as
the URL schema.

Similarly, to configure etcd with secure client communication, specify flags
`--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use HTTPS as
the URL schema.

### Limiting access of etcd clusters

After configuring secure communication, restrict the access of etcd cluster to
only the Kubernetes API servers. Use TLS authentication to do so.

For example, consider key pairs `k8sclient.key` and `k8sclient.cert` that are
trusted by the CA `etcd.ca`. When etcd is configured with `--client-cert-auth`
along with TLS, it verifies the certificates from clients by using system CAs
or the CA passed in by `--trusted-ca-file` flag. Specifying flags
`--client-cert-auth=true` and `--trusted-ca-file=etcd.ca` will restrict the
access to clients with the certificate `k8sclient.cert`.

Once etcd is configured correctly, only clients with valid certificates can
access it. To give Kubernetes API servers the access, configure them with the
flags `--etcd-certfile=k8sclient.cert`, `--etcd-keyfile=k8sclient.key` and
`--etcd-cafile=ca.cert`.
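
As a hedged sketch (not part of the original page), these flags typically appear in the kube-apiserver static Pod manifest; the image version, endpoints, and file paths below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.20.0                         # illustrative version
    command:
    - kube-apiserver
    - --etcd-servers=https://10.0.0.2:2379,https://10.0.0.3:2379     # illustrative endpoints
    - --etcd-certfile=/etc/kubernetes/pki/k8sclient.cert             # paths are illustrative
    - --etcd-keyfile=/etc/kubernetes/pki/k8sclient.key
    - --etcd-cafile=/etc/kubernetes/pki/ca.cert
    # ... remaining kube-apiserver flags omitted for brevity
```
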
|
||||
|
||||
{{< note >}}
|
||||
etcd authentication is not currently supported by Kubernetes. For more information, see the related issue [Support Basic Auth for Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398).
|
||||
etcd authentication is not currently supported by Kubernetes. For more
|
||||
information, see the related issue
|
||||
[Support Basic Auth for Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398).
|
||||
{{< /note >}}
|
||||
|
||||
## Replacing a failed etcd member
|
||||
|
||||
etcd cluster achieves high availability by tolerating minor member failures. However, to improve the overall health of the cluster, replace failed members immediately. When multiple members fail, replace them one by one. Replacing a failed member involves two steps: removing the failed member and adding a new member.
|
||||
etcd cluster achieves high availability by tolerating minor member failures.
|
||||
However, to improve the overall health of the cluster, replace failed members
|
||||
immediately. When multiple members fail, replace them one by one. Replacing a
|
||||
failed member involves two steps: removing the failed member and adding a new
|
||||
member.
|
||||
|
||||
Though etcd keeps unique member IDs internally, it is recommended to use a unique name for each member to avoid human errors. For example, consider a three-member etcd cluster. Let the URLs be, member1=http://10.0.0.1, member2=http://10.0.0.2, and member3=http://10.0.0.3. When member1 fails, replace it with member4=http://10.0.0.4.
|
||||
Though etcd keeps unique member IDs internally, it is recommended to use a
|
||||
unique name for each member to avoid human errors. For example, consider a
|
||||
three-member etcd cluster. Let the URLs be, `member1=http://10.0.0.1`,
|
||||
`member2=http://10.0.0.2`, and `member3=http://10.0.0.3`. When `member1` fails,
|
||||
replace it with `member4=http://10.0.0.4`.
|
||||
|
||||
1. Get the member ID of the failed member1:
|
||||
1. Get the member ID of the failed `member1`:
|
||||
|
||||
`etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list`
|
||||
```shell
|
||||
etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list
|
||||
```
|
||||
|
||||
The following message is displayed:
|
||||
The following message is displayed:
|
||||
|
||||
8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379
|
||||
91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
|
||||
fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379
|
||||
```console
|
||||
8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379
|
||||
91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
|
||||
fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379
|
||||
```
|
||||
|
||||
2. Remove the failed member:
|
||||
|
||||
`etcdctl member remove 8211f1d0f64f3269`
|
||||
```shell
|
||||
etcdctl member remove 8211f1d0f64f3269
|
||||
```
|
||||
|
||||
The following message is displayed:
|
||||
The following message is displayed:
|
||||
|
||||
Removed member 8211f1d0f64f3269 from cluster
|
||||
```console
|
||||
Removed member 8211f1d0f64f3269 from cluster
|
||||
```
|
||||
|
||||
3. Add the new member:
|
||||
|
||||
`./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380`
|
||||
```shell
|
||||
etcdctl member add member4 --peer-urls=http://10.0.0.4:2380
|
||||
```
|
||||
|
||||
The following message is displayed:
|
||||
The following message is displayed:
|
||||
|
||||
Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
|
||||
```console
|
||||
Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
|
||||
```
|
||||
|
||||
4. Start the newly added member on a machine with the IP `10.0.0.4`:
|
||||
|
||||
export ETCD_NAME="member4"
|
||||
export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
|
||||
export ETCD_INITIAL_CLUSTER_STATE=existing
|
||||
etcd [flags]
|
||||
```shell
|
||||
export ETCD_NAME="member4"
|
||||
export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
|
||||
export ETCD_INITIAL_CLUSTER_STATE=existing
|
||||
etcd [flags]
|
||||
```
|
||||
|
||||
5. Do either of the following:
|
||||
|
||||
1. Update its `--etcd-servers` flag to make Kubernetes aware of the configuration changes, then restart the Kubernetes API server.
|
||||
2. Update the load balancer configuration if a load balancer is used in the deployment.
|
||||
1. Update the `--etcd-servers` flag for the Kubernetes API servers to make
|
||||
Kubernetes aware of the configuration changes, then restart the
|
||||
Kubernetes API servers.
|
||||
2. Update the load balancer configuration if a load balancer is used in the
|
||||
deployment.
|
||||
|
||||
For more information on cluster reconfiguration, see [etcd Reconfiguration Documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member).
|
||||
For more information on cluster reconfiguration, see
|
||||
[etcd reconfiguration documentation](https://etcd.io/docs/current/op-guide/runtime-configuration/#remove-a-member).
|
||||
|
||||
## Backing up an etcd cluster
|
||||
|
||||
All Kubernetes objects are stored on etcd. Periodically backing up the etcd cluster data is important to recover Kubernetes clusters under disaster scenarios, such as losing all master nodes. The snapshot file contains all the Kubernetes states and critical information. In order to keep the sensitive Kubernetes data safe, encrypt the snapshot files.
|
||||
All Kubernetes objects are stored on etcd. Periodically backing up the etcd
|
||||
cluster data is important to recover Kubernetes clusters under disaster
|
||||
scenarios, such as losing all control plane nodes. The snapshot file contains
|
||||
all the Kubernetes states and critical information. In order to keep the
|
||||
sensitive Kubernetes data safe, encrypt the snapshot files.
|
||||
|
||||
Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapshot and volume snapshot.
|
||||
Backing up an etcd cluster can be accomplished in two ways: etcd built-in
|
||||
snapshot and volume snapshot.
|
||||
|
||||
### Built-in snapshot

etcd supports built-in snapshots. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir) that is not currently used by an etcd process. Taking the snapshot will not affect the performance of the member.

Below is an example of taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`:

```shell
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
```

Verify the snapshot:

```shell
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
```

```console
+----------+----------+------------+------------+
| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
```

@ -182,74 +265,63 @@ ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
### Volume snapshot

If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by taking a snapshot of the storage volume.

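For example, with the AWS CLI a point-in-time snapshot of the volume that backs the etcd data directory might be taken as follows; the volume ID and description are placeholders, not values from this page.

```shell
# Create a point-in-time snapshot of the EBS volume holding the etcd data directory.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "etcd data volume backup"
```
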
## Scaling up etcd clusters

Scaling up etcd clusters increases availability by trading off performance. Scaling does not increase cluster performance or capability. A general rule is not to scale etcd clusters up or down. Do not configure any auto scaling groups for etcd clusters. It is highly recommended to always run a static five-member etcd cluster for production Kubernetes clusters at any officially supported scale.

A reasonable scaling is to upgrade a three-member cluster to a five-member one, when more reliability is desired. See the [etcd reconfiguration documentation](https://etcd.io/docs/current/op-guide/runtime-configuration/#remove-a-member) for information on how to add members into an existing cluster.

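Before and after changing the member count, it can help to confirm the current membership and health; a quick check with `etcdctl` might look like this, where the endpoint addresses are assumptions carried over from the example cluster above.

```shell
# List the current members of the cluster.
ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.2:2379 member list

# Check the health of each member individually.
ETCDCTL_API=3 etcdctl \
  --endpoints=http://10.0.0.2:2379,http://10.0.0.3:2379,http://10.0.0.4:2379 \
  endpoint health
```
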
## Restoring an etcd cluster

etcd supports restoring from snapshots that are taken from an etcd process of the [major.minor](http://semver.org/) version. Restoring from a different patch version of etcd is also supported. A restore operation is employed to recover the data of a failed cluster.

Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir). For more information and examples on restoring a cluster from a snapshot file, see the [etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster).

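As an illustrative sketch only, restoring the `snapshotdb` file into a fresh data directory for one member could look like the following; the member name, data directory, and URLs are assumptions carried over from the example cluster above.

```shell
# Restore the snapshot into a new data directory for this member.
ETCDCTL_API=3 etcdctl snapshot restore snapshotdb \
  --name member4 \
  --data-dir /var/lib/etcd-restored \
  --initial-cluster "member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380" \
  --initial-advertise-peer-urls http://10.0.0.4:2380

# Start etcd for this member against the restored data directory.
etcd --name member4 --data-dir /var/lib/etcd-restored [flags]
```
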
If the access URLs of the restored cluster are changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart the Kubernetes API servers with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead.

If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure the Kubernetes API servers to fix the issue.

{{< note >}}
If any API servers are running in your cluster, you should not attempt to restore instances of etcd. Instead, follow these steps to restore etcd:

- stop *all* API server instances
- restore state in all etcd instances
- restart all API server instances

We also recommend restarting any components (for example, `kube-scheduler`, `kube-controller-manager`, `kubelet`) to ensure that they don't rely on stale data. Note that in practice, the restore takes a bit of time. During the restoration, critical components will lose their leader lock and restart themselves.
{{< /note >}}

## Upgrading and rolling back etcd clusters

As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for new or existing Kubernetes clusters. The timeline for Kubernetes support for etcd2 and etcd3 is as follows:

- Kubernetes v1.0: etcd2 only
- Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
- Kubernetes v1.6.0: new clusters created with `kube-up.sh` default to etcd3, and `kube-apiserver` defaults to etcd3
- Kubernetes v1.9.0: deprecation of etcd2 storage backend announced
- Kubernetes v1.13.0: etcd2 storage backend removed, `kube-apiserver` will refuse to start with `--storage-backend=etcd2`, with the message `etcd2 is no longer a supported storage backend`

Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to v1.13.x, etcd v2 data must be migrated to the v3 storage backend, and kube-apiserver invocations must be changed to use `--storage-backend=etcd3`.

The process for migrating from etcd2 to etcd3 is highly dependent on how the etcd cluster was deployed and configured, as well as how the Kubernetes cluster was deployed and configured. We recommend that you consult your cluster provider's documentation to see if there is a predefined solution.

If your cluster was created via `kube-up.sh` and is still using etcd2 as its storage backend, please consult the [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters).

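One building block that the etcd v3.x tooling of that era provided for this step is `etcdctl migrate`, which rewrites v2 keys into the v3 store while etcd is stopped. Treat the command and the data directory path below as an assumption to verify against the documentation for your etcd version.

```shell
# Offline migration of etcd v2 keys into the v3 storage backend (etcd must be stopped).
ETCDCTL_API=3 etcdctl migrate --data-dir /var/lib/etcd
```
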
## Known issue: etcd client balancer with secure endpoints

The etcd v3 client, released in etcd v3.3.13 or earlier, has a [critical bug](https://github.com/kubernetes/kubernetes/issues/72102) which affects the kube-apiserver and HA deployments. The etcd client balancer failover does not work properly against secure endpoints. As a result, etcd servers may fail or disconnect briefly from the kube-apiserver. This affects kube-apiserver HA deployments.

The fix was made in [etcd v3.4](https://github.com/etcd-io/etcd/pull/10911) (and backported to v3.3.14 or later): the new client now creates its own credential bundle to correctly set the authority target in the dial function.

Because the fix requires a gRPC dependency upgrade (to v1.23.0), downstream Kubernetes [did not backport etcd upgrades](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978), which means the [etcd fix in kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab) is only available from Kubernetes 1.16.

To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom kube-apiserver. You can make local changes to [`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135) with [etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab).

See ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available"](https://github.com/kubernetes/kubernetes/issues/72102).

@ -9,18 +9,13 @@ weight: 100

This page shows how to create a Pod that uses a Secret to pull an image from a private Docker registry or repository.

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

* To do this exercise, you need a [Docker ID](https://docs.docker.com/docker-id/) and password.

<!-- steps -->

## Log in to Docker

@ -106,7 +101,8 @@ kubectl create secret docker-registry regcred --docker-server=<your-registry-ser

where:

* `<your-registry-server>` is your Private Docker Registry FQDN. Use `https://index.docker.io/v1/` for DockerHub.
* `<your-name>` is your Docker username.
* `<your-pword>` is your Docker password.
* `<your-email>` is your Docker email.

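Putting the placeholders together, an invocation for Docker Hub might look like the following; the username, password, and email are made-up sample values.

```shell
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=janedoe \
  --docker-password='S3cr3t!' \
  --docker-email=janedoe@example.com
```
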
@ -192,7 +188,8 @@ your.private.registry.example.com/janedoe/jdoe-private:v1
```

To pull the image from the private registry, Kubernetes needs credentials.
The `imagePullSecrets` field in the configuration file specifies that Kubernetes should get the credentials from a Secret named `regcred`.

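For reference, a minimal sketch of what such a Pod manifest could contain is shown below. The image path matches the example above; the container name is an arbitrary choice for this sketch.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: your.private.registry.example.com/janedoe/jdoe-private:v1
  imagePullSecrets:
  - name: regcred
```
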
Create a Pod that uses your Secret, and verify that the Pod is running:

@ -201,11 +198,8 @@ kubectl apply -f my-private-reg-pod.yaml
kubectl get pod private-reg
```

## {{% heading "whatsnext" %}}

* Learn more about [Secrets](/docs/concepts/configuration/secret/).
* Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry).
* Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).

@ -213,5 +207,3 @@ kubectl get pod private-reg

* See [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core).
* See the `imagePullSecrets` field of [PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

@ -46,7 +46,7 @@ before the kubelet deletes the name from the apiserver.

Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the 'Terminating' or 'Unknown' state after a [timeout](/docs/concepts/architecture/nodes/#condition). Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node. The only ways in which a Pod in such a state can be removed from the apiserver are as follows:

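Force deletion by the user is typically among those options; as a hedged example, it can be issued with kubectl as shown below, where the Pod name and namespace are placeholders.

```shell
# Force delete a Pod without waiting for confirmation from the kubelet.
kubectl delete pod my-pod --namespace my-namespace --grace-period=0 --force
```
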
@ -33,7 +33,7 @@ weight: 10

</p>
<p>A Kubernetes cluster consists of two types of resources:
<ul>
<li>The <b>Control Plane</b> coordinates the cluster</li>
<li><b>Nodes</b> are the workers that run applications</li>
</ul>
</p>

@ -71,22 +71,22 @@ weight: 10

<div class="row">
<div class="col-md-8">
<p><b>The Control Plane is responsible for managing the cluster.</b> The Control Plane coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.</p>
<p><b>A node is a VM or a physical computer that serves as a worker machine in a Kubernetes cluster.</b> Each node has a Kubelet, which is an agent for managing the node and communicating with the Kubernetes control plane. The node should also have tools for handling container operations, such as containerd or Docker. A Kubernetes cluster that handles production traffic should have a minimum of three nodes.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_fill">
<p><i>Control Planes manage the cluster and the nodes that are used to host the running applications.</i></p>
</div>
</div>
</div>

<div class="row">
<div class="col-md-8">
<p>When you deploy applications on Kubernetes, you tell the control plane to start the application containers. The control plane schedules the containers to run on the cluster's nodes. <b>The nodes communicate with the control plane using the <a href="/docs/concepts/overview/kubernetes-api/">Kubernetes API</a></b>, which the control plane exposes. End users can also use the Kubernetes API directly to interact with the cluster.</p>

<p>A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete. For this tutorial, however, you'll use a provided online terminal with Minikube pre-installed.</p>

<p>Now that you know what Kubernetes is, let's go to the online tutorial and start our first cluster!</p>

@ -1,6 +1,32 @@

[SVG diff: the cluster-overview diagram used in the kubernetes-basics tutorial (text labels "Docker", "Kubelt", and a newly added "Kubernetes Cluster" caption) was re-serialized with RDF/Inkscape metadata and one-attribute-per-line formatting; the drawing itself is otherwise unchanged, so the raw path data is omitted here.]
class="st16" />
|
||||
<polygon
|
||||
id="svg_55"
|
||||
points="129.3,224.4 147.1,214.1 182.7,234.7 182.7,255.3 "
|
||||
class="st13" />
|
||||
<polygon
|
||||
id="svg_56"
|
||||
points="163.5,226.9 157.4,237.4 179.9,250.4 179.9,236.3 "
|
||||
class="st7" />
|
||||
<g
|
||||
id="svg_57">
|
||||
<path
|
||||
id="svg_58"
|
||||
d="m 164.2,232.3 c 0.5,0.3 0.7,0.6 0.8,1 0.1,0.4 0,0.9 -0.2,1.3 l -0.3,0.5 c -0.3,0.4 -0.6,0.7 -1,0.8 -0.4,0.1 -0.9,0 -1.3,-0.2 l -1.4,-0.8 0.2,-0.3 0.4,0.2 1.6,-2.7 -0.3,-0.3 0.2,-0.3 0.4,0.2 z m -0.8,0 -1.6,2.7 0.6,0.3 c 0.3,0.2 0.6,0.2 0.9,0.1 0.3,-0.1 0.6,-0.3 0.7,-0.6 l 0.3,-0.5 c 0.2,-0.3 0.3,-0.6 0.2,-1 -0.1,-0.3 -0.3,-0.6 -0.6,-0.8 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_59"
|
||||
d="m 164.9,235.6 c 0.2,-0.4 0.5,-0.6 0.8,-0.8 0.3,-0.1 0.7,-0.1 1,0.1 0.4,0.2 0.6,0.5 0.6,0.9 0.1,0.4 0,0.7 -0.2,1.1 v 0.1 c -0.2,0.4 -0.5,0.6 -0.8,0.8 -0.3,0.1 -0.7,0.1 -1,-0.1 -0.4,-0.2 -0.6,-0.5 -0.6,-0.9 -0.1,-0.4 0,-0.8 0.2,-1.2 z m 0.5,0.3 c -0.2,0.3 -0.2,0.5 -0.2,0.8 0,0.2 0.1,0.4 0.4,0.6 0.2,0.1 0.4,0.1 0.7,0 0.2,-0.1 0.4,-0.3 0.6,-0.6 v -0.1 c 0.2,-0.3 0.2,-0.5 0.2,-0.8 0,-0.2 -0.1,-0.4 -0.4,-0.6 -0.2,-0.1 -0.4,-0.1 -0.7,0 -0.2,0.2 -0.4,0.4 -0.6,0.7 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_60"
|
||||
d="m 168.2,238.8 c 0.2,0.1 0.3,0.1 0.5,0.1 0.2,0 0.3,-0.1 0.4,-0.2 l 0.4,0.2 v 0 c -0.1,0.2 -0.3,0.4 -0.6,0.4 -0.3,0.1 -0.6,0 -0.9,-0.2 -0.4,-0.2 -0.6,-0.5 -0.6,-0.9 -0.1,-0.4 0,-0.7 0.2,-1.1 l 0.1,-0.1 c 0.2,-0.4 0.5,-0.6 0.8,-0.7 0.3,-0.1 0.7,-0.1 1.1,0.1 0.2,0.1 0.4,0.3 0.5,0.4 0.1,0.1 0.2,0.3 0.2,0.5 l -0.3,0.7 -0.4,-0.2 0.1,-0.5 c 0,-0.1 -0.1,-0.2 -0.1,-0.3 -0.1,-0.1 -0.2,-0.2 -0.3,-0.2 -0.2,-0.1 -0.5,-0.2 -0.7,0 -0.2,0.1 -0.4,0.3 -0.5,0.6 l -0.1,0.1 c -0.2,0.3 -0.2,0.5 -0.2,0.7 0.1,0.3 0.2,0.5 0.4,0.6 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_61"
|
||||
d="m 171.4,236.5 0.2,-0.3 0.9,0.5 -1.2,2.1 0.3,0.2 0.9,-0.4 -0.2,-0.2 0.2,-0.3 1.1,0.6 -0.2,0.3 H 173 l -1.1,0.4 0.2,1.5 0.3,0.2 -0.2,0.3 -1.1,-0.6 0.2,-0.3 0.3,0.1 -0.2,-1.2 -0.3,-0.2 -0.5,0.8 0.3,0.3 -0.2,0.3 -1.2,-0.7 0.2,-0.3 0.4,0.2 1.7,-3 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_62"
|
||||
d="m 173.6,242.4 c -0.4,-0.2 -0.6,-0.5 -0.6,-0.8 -0.1,-0.4 0,-0.7 0.2,-1.1 l 0.1,-0.1 c 0.2,-0.4 0.5,-0.6 0.9,-0.7 0.4,-0.1 0.7,-0.1 1,0.1 0.3,0.2 0.5,0.5 0.6,0.8 0.1,0.3 0,0.6 -0.2,1 l -0.2,0.3 -1.7,-1 v 0 c -0.1,0.2 -0.2,0.5 -0.2,0.7 0,0.2 0.2,0.4 0.4,0.5 0.2,0.1 0.3,0.1 0.5,0.2 0.2,0.1 0.3,0 0.4,0 v 0.4 c -0.1,0 -0.3,0 -0.5,0 -0.3,-0.1 -0.5,-0.2 -0.7,-0.3 z m 1.2,-2.4 c -0.2,-0.1 -0.3,-0.1 -0.5,0 -0.2,0.1 -0.4,0.2 -0.5,0.4 v 0 l 1.2,0.7 V 241 c 0.1,-0.2 0.1,-0.4 0.1,-0.5 0,-0.2 -0.1,-0.4 -0.3,-0.5 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_63"
|
||||
d="m 176.3,240.9 0.2,-0.3 0.8,0.5 -0.2,0.4 c 0.1,-0.1 0.3,-0.1 0.4,-0.2 0.2,0 0.3,0 0.4,0.1 0,0 0.1,0 0.1,0.1 0,0 0.1,0.1 0.1,0.1 l -0.3,0.4 -0.3,-0.2 c -0.1,-0.1 -0.2,-0.1 -0.4,-0.1 -0.1,0 -0.2,0.1 -0.3,0.1 l -0.8,1.5 0.3,0.3 -0.2,0.3 -1.2,-0.7 0.2,-0.3 0.4,0.2 1.1,-1.8 z"
|
||||
class="st8" />
|
||||
</g>
|
||||
<g
|
||||
id="svg_64">
|
||||
<path
|
||||
id="svg_65"
|
||||
d="m 142.7,220 0.2,-0.3 0.9,0.5 -1.2,2.1 0.3,0.2 0.9,-0.4 -0.2,-0.2 0.2,-0.3 1.1,0.6 -0.2,0.3 -0.3,-0.1 -1.1,0.4 0.2,1.5 0.3,0.2 -0.2,0.3 -1.1,-0.6 0.2,-0.3 0.3,0.1 -0.2,-1.2 -0.3,-0.2 -0.5,0.8 0.3,0.3 -0.2,0.3 -1.2,-0.7 0.2,-0.3 0.4,0.2 1.7,-3 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_66"
|
||||
d="m 145.8,225.8 c -0.2,0.1 -0.3,0.1 -0.5,0.1 -0.2,0 -0.3,0 -0.5,-0.1 -0.3,-0.2 -0.4,-0.4 -0.5,-0.6 -0.1,-0.2 0,-0.6 0.2,-1 l 0.7,-1.1 -0.3,-0.2 0.2,-0.3 0.3,0.2 0.5,0.3 -0.9,1.5 c -0.2,0.3 -0.2,0.5 -0.2,0.6 0,0.1 0.1,0.3 0.3,0.4 0.2,0.1 0.3,0.1 0.5,0.1 0.1,0 0.3,-0.1 0.4,-0.1 l 0.9,-1.5 -0.3,-0.3 0.2,-0.3 0.3,0.2 0.5,0.3 -1.3,2.2 0.3,0.2 -0.2,0.3 -0.7,-0.4 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_67"
|
||||
d="m 149.8,227 c -0.2,0.4 -0.5,0.6 -0.8,0.7 -0.3,0.1 -0.6,0.1 -0.9,-0.1 -0.2,-0.1 -0.3,-0.2 -0.4,-0.3 -0.1,-0.1 -0.1,-0.3 -0.1,-0.5 l -0.2,0.3 -0.4,-0.2 1.9,-3.3 -0.3,-0.3 0.2,-0.3 0.9,0.5 -0.8,1.4 c 0.1,-0.1 0.3,-0.1 0.5,-0.1 0.2,0 0.3,0.1 0.5,0.2 0.3,0.2 0.5,0.5 0.5,0.8 -0.3,0.3 -0.4,0.7 -0.6,1.2 z m -0.5,-0.4 c 0.2,-0.3 0.3,-0.6 0.3,-0.8 0,-0.2 -0.1,-0.4 -0.3,-0.6 -0.1,-0.1 -0.3,-0.1 -0.4,-0.1 -0.1,0 -0.3,0.1 -0.4,0.1 l -0.7,1.1 c 0,0.2 0,0.3 0.1,0.4 0.1,0.1 0.2,0.2 0.3,0.3 0.2,0.1 0.4,0.1 0.6,0 0.2,0.1 0.3,0 0.5,-0.4 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_68"
|
||||
d="m 150.7,229.1 c -0.4,-0.2 -0.6,-0.5 -0.6,-0.8 -0.1,-0.4 0,-0.7 0.2,-1.1 l 0.1,-0.1 c 0.2,-0.4 0.5,-0.6 0.9,-0.7 0.4,-0.1 0.7,-0.1 1,0.1 0.3,0.2 0.5,0.5 0.6,0.8 0.1,0.3 0,0.6 -0.2,1 l -0.2,0.3 -1.7,-1 v 0 c -0.1,0.2 -0.2,0.5 -0.2,0.7 0,0.2 0.2,0.4 0.4,0.5 0.2,0.1 0.3,0.1 0.5,0.2 0.1,0 0.3,0 0.4,0 v 0.4 c -0.1,0 -0.3,0 -0.5,0 -0.3,-0.1 -0.5,-0.1 -0.7,-0.3 z m 1.2,-2.3 c -0.2,-0.1 -0.3,-0.1 -0.5,0 -0.2,0.1 -0.4,0.2 -0.5,0.4 v 0 l 1.2,0.7 v -0.1 c 0.1,-0.2 0.1,-0.4 0.1,-0.5 0,-0.2 -0.1,-0.4 -0.3,-0.5 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_69"
|
||||
d="m 154,226.5 0.2,-0.3 0.9,0.5 -1.9,3.3 0.3,0.3 -0.2,0.3 -1.2,-0.7 0.2,-0.3 0.4,0.2 1.7,-3 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_70"
|
||||
d="m 154.8,231.5 c -0.4,-0.2 -0.6,-0.5 -0.6,-0.8 -0.1,-0.4 0,-0.7 0.2,-1.1 l 0.1,-0.1 c 0.2,-0.4 0.5,-0.6 0.9,-0.7 0.4,-0.1 0.7,-0.1 1,0.1 0.3,0.2 0.5,0.5 0.6,0.8 0.1,0.3 0,0.6 -0.2,1 l -0.2,0.3 -1.7,-1 v 0 c -0.1,0.2 -0.2,0.5 -0.2,0.7 0,0.2 0.2,0.4 0.4,0.5 0.2,0.1 0.3,0.1 0.5,0.2 0.1,0 0.3,0 0.4,0 v 0.4 c -0.1,0 -0.3,0 -0.5,0 -0.3,-0.1 -0.5,-0.1 -0.7,-0.3 z m 1.3,-2.3 c -0.2,-0.1 -0.3,-0.1 -0.5,0 -0.2,0.1 -0.4,0.2 -0.5,0.4 v 0 l 1.2,0.7 v -0.1 c 0.1,-0.2 0.1,-0.4 0.1,-0.5 0,-0.2 -0.1,-0.4 -0.3,-0.5 z"
|
||||
class="st8" />
|
||||
<path
|
||||
id="svg_71"
|
||||
d="m 158.9,229.6 -0.4,0.6 0.5,0.3 -0.2,0.3 -0.5,-0.3 -0.9,1.6 c -0.1,0.1 -0.1,0.2 -0.1,0.3 0,0.1 0.1,0.1 0.2,0.2 0,0 0.1,0 0.1,0.1 0.1,0 0.1,0 0.1,0.1 l -0.1,0.4 c -0.1,0 -0.1,0 -0.2,0 -0.1,0 -0.2,-0.1 -0.3,-0.1 -0.2,-0.1 -0.3,-0.3 -0.4,-0.4 0,-0.2 0,-0.4 0.1,-0.6 l 0.9,-1.6 -0.4,-0.2 0.2,-0.3 0.4,0.2 0.4,-0.6 z"
|
||||
class="st8" />
|
||||
</g>
|
||||
</g>
|
||||
<g
|
||||
id="service" />
|
||||
<g
|
||||
id="pods" />
|
||||
<g
|
||||
id="IP" />
|
||||
<g
|
||||
id="deployments" />
|
||||
<g
|
||||
id="containers_x2F_volumes" />
|
||||
<g
|
||||
id="labels_x2F_selectors" />
|
||||
<g
|
||||
id="Layer_14" />
|
||||
</g>
|
||||
<g id="Node">
|
||||
<g id="Node_x5F_level3_x5F_1">
|
||||
<g id="Isolation_Mode"/>
|
||||
</g>
|
||||
<polygon id="svg_18" points="182.7,139.9 147.1,160.4 111.4,139.9 111.4,98.7 147.1,78.2 182.7,98.7 " class="st16"/>
|
||||
<polygon id="svg_19" points="129.3,150.2 147.1,160.4 182.7,139.9 182.7,119.3 " class="st13"/>
|
||||
<polygon id="svg_20" points="163.5,147.7 157.4,137.2 179.9,124.2 179.9,138.3 " class="st7"/>
|
||||
<g id="svg_21">
|
||||
<path id="svg_22" d="m162.5,139.3c0.4,-0.3 0.9,-0.3 1.3,-0.2c0.4,0.1 0.8,0.4 1,0.8l0.3,0.5c0.3,0.4 0.3,0.9 0.2,1.3c-0.1,0.4 -0.4,0.8 -0.8,1l-1.4,0.8l-0.2,-0.3l0.3,-0.3l-1.6,-2.7l-0.4,0.2l-0.2,-0.3l0.4,-0.2l1.1,-0.6zm-0.4,0.7l1.6,2.7l0.6,-0.3c0.3,-0.2 0.5,-0.4 0.6,-0.8c0.1,-0.3 0,-0.6 -0.2,-1l-0.3,-0.5c-0.2,-0.3 -0.4,-0.5 -0.7,-0.6c-0.3,-0.1 -0.6,-0.1 -0.9,0.1l-0.7,0.4z" class="st8"/>
|
||||
<path id="svg_23" d="m165.7,140.3c-0.2,-0.4 -0.3,-0.8 -0.2,-1.1c0.1,-0.4 0.3,-0.6 0.6,-0.9c0.4,-0.2 0.7,-0.2 1.1,-0.1c0.3,0.1 0.6,0.4 0.9,0.8l0,0.1c0.2,0.4 0.3,0.8 0.2,1.1c-0.1,0.4 -0.3,0.6 -0.6,0.8c-0.4,0.2 -0.7,0.2 -1.1,0.1c-0.4,-0.1 -0.7,-0.3 -0.9,-0.8l0,0zm0.5,-0.2c0.2,0.3 0.3,0.5 0.6,0.6c0.2,0.1 0.4,0.1 0.7,0c0.2,-0.1 0.3,-0.3 0.4,-0.6c0,-0.2 -0.1,-0.5 -0.2,-0.8l0,-0.1c-0.2,-0.3 -0.3,-0.5 -0.6,-0.6c-0.2,-0.1 -0.4,-0.1 -0.7,0c-0.2,0.1 -0.3,0.3 -0.4,0.6c-0.1,0.3 0,0.6 0.2,0.9l0,0z" class="st8"/>
|
||||
<path id="svg_24" d="m170.1,139.1c0.2,-0.1 0.3,-0.2 0.3,-0.4c0.1,-0.2 0.1,-0.3 0,-0.5l0.4,-0.2l0,0c0.1,0.2 0.2,0.5 0,0.8c-0.1,0.3 -0.3,0.5 -0.6,0.7c-0.4,0.2 -0.7,0.3 -1.1,0.1c-0.3,-0.1 -0.6,-0.4 -0.8,-0.7l-0.1,-0.1c-0.2,-0.4 -0.3,-0.7 -0.2,-1.1s0.3,-0.6 0.6,-0.9c0.2,-0.1 0.4,-0.2 0.6,-0.2c0.2,0 0.4,0 0.6,0l0.3,0.6l-0.4,0.2l-0.3,-0.3c-0.1,0 -0.2,0 -0.3,0c-0.1,0 -0.2,0.1 -0.3,0.1c-0.2,0.1 -0.4,0.3 -0.4,0.6c0,0.2 0.1,0.5 0.2,0.7l0.1,0.1c0.2,0.3 0.3,0.5 0.5,0.6c0.5,0 0.7,0 0.9,-0.1z" class="st8"/>
|
||||
<path id="svg_25" d="m169.8,135.2l-0.2,-0.3l0.9,-0.5l1.2,2.1l0.3,-0.2l0.2,-1l-0.3,0.1l-0.2,-0.3l1.1,-0.6l0.2,0.3l-0.3,0.2l-0.2,1.2l1.4,0.6l0.3,-0.1l0.2,0.3l-1.1,0.6l-0.2,-0.3l0.2,-0.2l-1.2,-0.5l-0.3,0.2l0.5,0.8l0.4,-0.2l0.2,0.3l-1.2,0.7l-0.2,-0.3l0.3,-0.3l-1.7,-3l-0.3,0.4z" class="st8"/>
|
||||
<path id="svg_26" d="m175.9,136.3c-0.4,0.2 -0.7,0.3 -1.1,0.1s-0.6,-0.4 -0.9,-0.7l-0.1,-0.1c-0.2,-0.4 -0.3,-0.7 -0.2,-1.1c0.1,-0.4 0.3,-0.6 0.6,-0.8c0.3,-0.2 0.7,-0.2 1,-0.1s0.5,0.3 0.7,0.7l0.2,0.3l-1.7,1l0,0c0.1,0.2 0.3,0.4 0.5,0.5c0.2,0.1 0.4,0.1 0.6,-0.1c0.2,-0.1 0.3,-0.2 0.4,-0.3c0.1,-0.1 0.2,-0.2 0.2,-0.4l0.4,0.2c0,0.1 -0.1,0.3 -0.2,0.4c0,0.1 -0.2,0.2 -0.4,0.4zm-1.4,-2.3c-0.2,0.1 -0.3,0.2 -0.3,0.4s0,0.4 0.1,0.6l0,0l1.2,-0.7l0,-0.1c-0.1,-0.2 -0.2,-0.3 -0.4,-0.4c-0.2,0.1 -0.4,0.1 -0.6,0.2z" class="st8"/>
|
||||
<path id="svg_27" d="m176,133.1l-0.2,-0.3l0.8,-0.5l0.3,0.3c0,-0.2 0,-0.3 0.1,-0.5c0.1,-0.1 0.2,-0.2 0.3,-0.3c0,0 0.1,0 0.1,-0.1c0,0 0.1,0 0.1,0l0.2,0.5l-0.3,0.1c-0.1,0.1 -0.2,0.2 -0.3,0.3c-0.1,0.1 -0.1,0.2 -0.1,0.4l0.8,1.5l0.4,-0.2l0.2,0.3l-1.2,0.7l-0.2,-0.3l0.3,-0.3l-1.1,-1.8l-0.2,0.2z" class="st8"/>
|
||||
</g>
|
||||
<g id="svg_28">
|
||||
<path id="svg_29" d="m141.1,151.8l-0.2,-0.3l0.9,-0.5l1.2,2.1l0.3,-0.2l0.2,-1l-0.3,0.1l-0.2,-0.3l1.1,-0.6l0.2,0.3l-0.3,0.2l-0.2,1.2l1.4,0.6l0.3,-0.1l0.2,0.3l-1.1,0.6l-0.2,-0.3l0.2,-0.2l-1.2,-0.5l-0.3,0.2l0.5,0.8l0.4,-0.2l0.2,0.3l-1.2,0.7l-0.2,-0.3l0.3,-0.3l-1.7,-3l-0.3,0.4z" class="st8"/>
|
||||
<path id="svg_30" d="m147.6,152c0,0.2 0,0.3 -0.1,0.5c-0.1,0.1 -0.2,0.3 -0.4,0.4c-0.3,0.2 -0.5,0.2 -0.8,0.1c-0.3,-0.1 -0.5,-0.3 -0.7,-0.7l-0.7,-1.1l-0.3,0.1l-0.2,-0.3l0.3,-0.2l0.5,-0.3l0.9,1.5c0.2,0.3 0.3,0.4 0.4,0.5s0.3,0 0.5,-0.1c0.2,-0.1 0.3,-0.2 0.4,-0.3c0.1,-0.1 0.1,-0.3 0.1,-0.4l-0.9,-1.5l-0.4,0.1l-0.2,-0.3l0.3,-0.2l0.5,-0.3l1.3,2.2l0.3,-0.1l0.2,0.3l-0.7,0.4l-0.3,-0.3z" class="st8"/>
|
||||
<path id="svg_31" d="m150.6,149.2c0.2,0.4 0.3,0.7 0.2,1c0,0.3 -0.2,0.6 -0.5,0.8c-0.2,0.1 -0.3,0.1 -0.5,0.2c-0.2,0 -0.3,0 -0.5,-0.1l0.1,0.4l-0.4,0.2l-1.9,-3.3l-0.4,0.2l-0.2,-0.3l0.9,-0.5l0.8,1.4c0,-0.2 0.1,-0.3 0.1,-0.4c0.1,-0.1 0.2,-0.2 0.4,-0.3c0.3,-0.2 0.6,-0.2 1,0s0.7,0.2 0.9,0.7l0,0zm-0.5,0.2c-0.2,-0.3 -0.4,-0.5 -0.6,-0.6c-0.2,-0.1 -0.4,-0.1 -0.7,0c-0.1,0.1 -0.2,0.2 -0.3,0.3c-0.1,0.1 -0.1,0.3 -0.1,0.4l0.7,1.1c0.1,0.1 0.3,0.1 0.4,0.1c0.1,0 0.3,0 0.4,-0.1c0.2,-0.1 0.3,-0.3 0.4,-0.5c0.1,-0.2 0,-0.4 -0.2,-0.7l0,0z" class="st8"/>
|
||||
<path id="svg_32" d="m153,149.5c-0.4,0.2 -0.7,0.3 -1.1,0.1s-0.6,-0.4 -0.9,-0.7l-0.1,-0.1c-0.2,-0.4 -0.3,-0.7 -0.2,-1.1c0.1,-0.4 0.3,-0.6 0.6,-0.8c0.3,-0.2 0.7,-0.2 1,-0.1s0.5,0.3 0.7,0.7l0.2,0.3l-1.7,1l0,0c0.1,0.2 0.3,0.4 0.5,0.5c0.2,0.1 0.4,0.1 0.6,-0.1c0.2,-0.1 0.3,-0.2 0.4,-0.3c0.1,-0.1 0.2,-0.2 0.2,-0.4l0.4,0.2c0,0.1 -0.1,0.3 -0.2,0.4s-0.2,0.2 -0.4,0.4zm-1.4,-2.3c-0.2,0.1 -0.3,0.2 -0.3,0.4c0,0.2 0,0.4 0.1,0.6l0,0l1.2,-0.7l0,-0.1c-0.1,-0.2 -0.2,-0.3 -0.4,-0.4s-0.4,0.1 -0.6,0.2z" class="st8"/>
|
||||
<path id="svg_33" d="m152.4,145.2l-0.2,-0.3l0.9,-0.5l1.9,3.3l0.4,-0.2l0.2,0.3l-1.2,0.7l-0.2,-0.3l0.3,-0.3l-1.7,-3l-0.4,0.3z" class="st8"/>
|
||||
<path id="svg_34" d="m157.1,147.1c-0.4,0.2 -0.7,0.3 -1.1,0.1s-0.6,-0.4 -0.9,-0.7l-0.1,-0.1c-0.2,-0.4 -0.3,-0.7 -0.2,-1.1c0.1,-0.4 0.3,-0.6 0.6,-0.8c0.3,-0.2 0.7,-0.2 1,-0.1s0.5,0.3 0.7,0.7l0.2,0.3l-1.7,1l0,0c0.1,0.2 0.3,0.4 0.5,0.5c0.2,0.1 0.4,0.1 0.6,-0.1c0.2,-0.1 0.3,-0.2 0.4,-0.3c0.1,-0.1 0.2,-0.2 0.2,-0.4l0.4,0.2c0,0.1 -0.1,0.3 -0.2,0.4s-0.2,0.3 -0.4,0.4zm-1.4,-2.3c-0.2,0.1 -0.3,0.2 -0.3,0.4s0,0.4 0.1,0.6l0,0l1.2,-0.7l0,-0.1c-0.1,-0.2 -0.2,-0.3 -0.4,-0.4c-0.2,0.1 -0.4,0.1 -0.6,0.2z" class="st8"/>
|
||||
<path id="svg_35" d="m157.5,142.6l0.4,0.6l0.5,-0.3l0.2,0.3l-0.5,0.3l0.9,1.6c0.1,0.1 0.1,0.2 0.2,0.2c0.1,0 0.2,0 0.2,0c0,0 0.1,-0.1 0.1,-0.1c0,0 0.1,-0.1 0.1,-0.1l0.2,0.3c0,0.1 -0.1,0.1 -0.2,0.2c-0.1,0.1 -0.2,0.1 -0.2,0.2c-0.2,0.1 -0.4,0.1 -0.6,0.1c-0.2,0 -0.3,-0.2 -0.5,-0.4l-0.9,-1.6l-0.4,0.2l-0.2,-0.3l0.4,-0.2l-0.4,-0.6l0.7,-0.4z" class="st8"/>
|
||||
</g>
|
||||
<polygon id="svg_36" points="225.1,160.4 189.4,139.9 189.4,98.7 225.1,78.2 260.7,98.7 260.7,139.9 " class="st16"/>
|
||||
<polygon id="svg_37" points="189.4,119.3 189.4,139.9 225.1,160.4 242.9,150.2 " class="st13"/>
|
||||
<polygon id="svg_38" points="208.7,147.7 214.8,137.2 237.3,150.2 225.1,157.2 " class="st7"/>
|
||||
<g id="svg_39">
|
||||
<path id="svg_40" d="m215.5,142.6c0.5,0.3 0.7,0.6 0.8,1c0.1,0.4 0,0.9 -0.2,1.3l-0.3,0.5c-0.3,0.4 -0.6,0.7 -1,0.8c-0.4,0.1 -0.9,0 -1.3,-0.2l-1.4,-0.8l0.2,-0.3l0.4,0.2l1.6,-2.7l-0.3,-0.3l0.2,-0.3l0.4,0.2l0.9,0.6zm-0.8,0l-1.6,2.7l0.6,0.3c0.3,0.2 0.6,0.2 0.9,0.1c0.3,-0.1 0.6,-0.3 0.7,-0.6l0.3,-0.5c0.2,-0.3 0.2,-0.6 0.2,-1c-0.1,-0.3 -0.3,-0.6 -0.6,-0.8l-0.5,-0.2z" class="st8"/>
|
||||
<path id="svg_41" d="m216.2,145.9c0.2,-0.4 0.5,-0.6 0.8,-0.8c0.3,-0.1 0.7,-0.1 1,0.1c0.4,0.2 0.6,0.5 0.6,0.9c0.1,0.4 0,0.7 -0.2,1.1l0,0.1c-0.2,0.4 -0.5,0.6 -0.8,0.8c-0.3,0.1 -0.7,0.1 -1,-0.1c-0.4,-0.2 -0.6,-0.5 -0.6,-0.9c-0.1,-0.4 0,-0.8 0.2,-1.2l0,0zm0.5,0.3c-0.2,0.3 -0.2,0.5 -0.2,0.8c0,0.2 0.1,0.4 0.4,0.6c0.2,0.1 0.4,0.1 0.7,0c0.2,-0.1 0.4,-0.3 0.6,-0.6l0,-0.1c0.2,-0.3 0.2,-0.5 0.2,-0.8c0,-0.2 -0.1,-0.4 -0.4,-0.6c-0.2,-0.1 -0.4,-0.1 -0.7,0c-0.3,0.2 -0.5,0.4 -0.6,0.7l0,0z" class="st8"/>
|
||||
<path id="svg_42" d="m219.5,149.1c0.2,0.1 0.3,0.1 0.5,0.1c0.2,0 0.3,-0.1 0.4,-0.2l0.4,0.2l0,0c-0.1,0.2 -0.3,0.4 -0.6,0.4c-0.3,0.1 -0.6,0 -0.9,-0.2c-0.4,-0.2 -0.6,-0.5 -0.6,-0.9c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.8,-0.7s0.7,-0.1 1.1,0.1c0.2,0.1 0.4,0.3 0.5,0.4s0.2,0.3 0.2,0.5l-0.3,0.6l-0.4,-0.2l0.1,-0.5c0,-0.1 -0.1,-0.2 -0.1,-0.3c-0.1,-0.1 -0.2,-0.2 -0.3,-0.2c-0.2,-0.1 -0.5,-0.2 -0.7,0c-0.2,0.1 -0.4,0.3 -0.5,0.6l-0.1,0.1c-0.2,0.3 -0.2,0.5 -0.2,0.7c0.1,0.4 0.2,0.6 0.4,0.7z" class="st8"/>
|
||||
<path id="svg_43" d="m222.7,146.8l0.2,-0.3l0.9,0.5l-1.2,2.1l0.3,0.2l0.9,-0.4l-0.2,-0.2l0.2,-0.3l1.1,0.6l-0.2,0.3l-0.3,-0.1l-1.1,0.4l0.2,1.5l0.3,0.2l-0.2,0.3l-1.1,-0.6l0.2,-0.3l0.3,0.1l-0.2,-1.2l-0.3,-0.2l-0.5,0.8l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.7,-3l-0.5,-0.2z" class="st8"/>
|
||||
<path id="svg_44" d="m224.8,152.7c-0.4,-0.2 -0.6,-0.5 -0.6,-0.8s0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.9,-0.7s0.7,-0.1 1,0.1c0.3,0.2 0.5,0.5 0.6,0.8c0.1,0.3 0,0.6 -0.2,1l-0.2,0.3l-1.7,-1l0,0c-0.1,0.2 -0.2,0.5 -0.2,0.7c0,0.2 0.2,0.4 0.4,0.5c0.2,0.1 0.3,0.1 0.5,0.2c0.1,0 0.3,0 0.4,0l0,0.4c-0.1,0 -0.3,0 -0.5,0s-0.4,-0.2 -0.7,-0.3zm1.3,-2.4c-0.2,-0.1 -0.3,-0.1 -0.5,0c-0.2,0.1 -0.4,0.2 -0.5,0.4l0,0l1.2,0.7l0,-0.1c0.1,-0.2 0.1,-0.4 0.1,-0.5c0,-0.2 -0.1,-0.4 -0.3,-0.5z" class="st8"/>
|
||||
<path id="svg_45" d="m227.6,151.2l0.2,-0.3l0.8,0.5l-0.2,0.4c0.1,-0.1 0.3,-0.1 0.4,-0.2c0.1,0 0.3,0 0.4,0.1c0,0 0.1,0 0.1,0.1c0,0 0.1,0.1 0.1,0.1l-0.3,0.4l-0.3,-0.2c-0.1,-0.1 -0.2,-0.1 -0.4,-0.1c-0.1,0 -0.2,0.1 -0.3,0.1l-0.8,1.5l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.1,-1.8l-0.3,-0.4z" class="st8"/>
|
||||
</g>
|
||||
<g id="svg_46">
|
||||
<path id="svg_47" d="m194,130.3l0.2,-0.3l0.9,0.5l-1.2,2.1l0.3,0.2l0.9,-0.4l-0.2,-0.2l0.2,-0.3l1.1,0.6l-0.2,0.3l-0.3,-0.1l-1.1,0.4l0.2,1.5l0.3,0.2l-0.2,0.3l-1.1,-0.6l0.2,-0.3l0.3,0.1l-0.2,-1.2l-0.3,-0.2l-0.5,0.8l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.7,-3l-0.5,-0.2z" class="st8"/>
|
||||
<path id="svg_48" d="m197.1,136.1c-0.2,0.1 -0.3,0.1 -0.5,0.1c-0.2,0 -0.3,0 -0.5,-0.1c-0.3,-0.2 -0.4,-0.4 -0.5,-0.6c-0.1,-0.3 0,-0.6 0.2,-1l0.7,-1.1l-0.3,-0.2l0.2,-0.3l0.3,0.2l0.5,0.3l-0.9,1.5c-0.2,0.3 -0.2,0.5 -0.2,0.6c0,0.1 0.1,0.3 0.3,0.4c0.2,0.1 0.3,0.1 0.5,0.1c0.1,0 0.3,-0.1 0.4,-0.1l0.9,-1.5l-0.3,-0.3l0.2,-0.3l0.3,0.2l0.5,0.3l-1.3,2.2l0.3,0.2l-0.2,0.3l-0.7,-0.4l0.1,-0.5z" class="st8"/>
|
||||
<path id="svg_49" d="m201,137.3c-0.2,0.4 -0.5,0.6 -0.8,0.7s-0.6,0.1 -0.9,-0.1c-0.2,-0.1 -0.3,-0.2 -0.4,-0.3c-0.1,-0.1 -0.1,-0.3 -0.1,-0.5l-0.2,0.3l-0.4,-0.2l1.9,-3.3l-0.3,-0.3l0.2,-0.3l0.9,0.5l-0.8,1.4c0.1,-0.1 0.3,-0.1 0.5,-0.1s0.3,0.1 0.5,0.2c0.3,0.2 0.5,0.5 0.5,0.8s-0.3,0.7 -0.6,1.2l0,0zm-0.4,-0.4c0.2,-0.3 0.3,-0.6 0.3,-0.8c0,-0.2 -0.1,-0.4 -0.3,-0.6c-0.1,-0.1 -0.3,-0.1 -0.4,-0.1c-0.1,0 -0.3,0.1 -0.4,0.1l-0.7,1.1c0,0.2 0,0.3 0.1,0.4c0.1,0.1 0.2,0.2 0.3,0.3c0.2,0.1 0.4,0.1 0.6,0c0.1,0.1 0.3,-0.1 0.5,-0.4l0,0z" class="st8"/>
|
||||
<path id="svg_50" d="m202,139.4c-0.4,-0.2 -0.6,-0.5 -0.6,-0.8c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.9,-0.7c0.4,-0.1 0.7,-0.1 1,0.1c0.3,0.2 0.5,0.5 0.6,0.8c0.1,0.3 0,0.6 -0.2,1l-0.2,0.3l-1.7,-1l0,0c-0.1,0.2 -0.2,0.5 -0.2,0.7c0,0.2 0.2,0.4 0.4,0.5c0.2,0.1 0.3,0.1 0.5,0.2c0.1,0 0.3,0 0.4,0l0,0.4c-0.1,0 -0.3,0 -0.5,0c-0.3,-0.1 -0.5,-0.1 -0.7,-0.3zm1.2,-2.3c-0.2,-0.1 -0.3,-0.1 -0.5,0c-0.2,0.1 -0.3,0.2 -0.5,0.4l0,0l1.2,0.7l0,-0.1c0.1,-0.2 0.1,-0.4 0.1,-0.5c0,-0.2 -0.1,-0.4 -0.3,-0.5z" class="st8"/>
|
||||
<path id="svg_51" d="m205.3,136.8l0.2,-0.3l0.9,0.5l-1.9,3.3l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.7,-3l-0.4,-0.3z" class="st8"/>
|
||||
<path id="svg_52" d="m206.1,141.8c-0.4,-0.2 -0.6,-0.5 -0.6,-0.8c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.9,-0.7c0.4,-0.1 0.7,-0.1 1,0.1c0.3,0.2 0.5,0.5 0.6,0.8c0.1,0.3 0,0.6 -0.2,1l-0.2,0.3l-1.7,-1l0,0c-0.1,0.2 -0.2,0.5 -0.2,0.7c0,0.2 0.2,0.4 0.4,0.5c0.2,0.1 0.3,0.1 0.5,0.2c0.1,0 0.3,0 0.4,0l0,0.4c-0.1,0 -0.3,0 -0.5,0c-0.3,-0.1 -0.5,-0.2 -0.7,-0.3zm1.2,-2.3c-0.2,-0.1 -0.3,-0.1 -0.5,0c-0.2,0.1 -0.3,0.2 -0.5,0.4l0,0l1.2,0.7l0,-0.1c0.1,-0.2 0.1,-0.4 0.1,-0.5c0,-0.3 -0.1,-0.4 -0.3,-0.5z" class="st8"/>
|
||||
<path id="svg_53" d="m210.2,139.9l-0.4,0.6l0.5,0.3l-0.2,0.3l-0.5,-0.3l-0.9,1.6c-0.1,0.1 -0.1,0.2 -0.1,0.3c0,0.1 0.1,0.1 0.2,0.2c0,0 0.1,0 0.1,0.1c0.1,0 0.1,0 0.1,0.1l-0.1,0.4c-0.1,0 -0.1,0 -0.2,0c-0.1,0 -0.2,-0.1 -0.3,-0.1c-0.2,-0.1 -0.3,-0.3 -0.4,-0.4c0,-0.2 0,-0.4 0.1,-0.6l0.9,-1.6l-0.4,-0.2l0.2,-0.3l0.4,0.2l0.4,-0.6l0.6,0z" class="st8"/>
|
||||
</g>
|
||||
<polygon id="svg_54" points="147.1,296.4 111.4,275.9 111.4,234.7 147.1,214.1 182.7,234.7 182.7,275.9 " class="st16"/>
|
||||
<polygon id="svg_55" points="129.3,224.4 147.1,214.1 182.7,234.7 182.7,255.3 " class="st13"/>
|
||||
<polygon id="svg_56" points="163.5,226.9 157.4,237.4 179.9,250.4 179.9,236.3 " class="st7"/>
|
||||
<g id="svg_57">
|
||||
<path id="svg_58" d="m164.2,232.3c0.5,0.3 0.7,0.6 0.8,1c0.1,0.4 0,0.9 -0.2,1.3l-0.3,0.5c-0.3,0.4 -0.6,0.7 -1,0.8c-0.4,0.1 -0.9,0 -1.3,-0.2l-1.4,-0.8l0.2,-0.3l0.4,0.2l1.6,-2.7l-0.3,-0.3l0.2,-0.3l0.4,0.2l0.9,0.6zm-0.8,0l-1.6,2.7l0.6,0.3c0.3,0.2 0.6,0.2 0.9,0.1c0.3,-0.1 0.6,-0.3 0.7,-0.6l0.3,-0.5c0.2,-0.3 0.3,-0.6 0.2,-1c-0.1,-0.3 -0.3,-0.6 -0.6,-0.8l-0.5,-0.2z" class="st8"/>
|
||||
<path id="svg_59" d="m164.9,235.6c0.2,-0.4 0.5,-0.6 0.8,-0.8c0.3,-0.1 0.7,-0.1 1,0.1c0.4,0.2 0.6,0.5 0.6,0.9c0.1,0.4 0,0.7 -0.2,1.1l0,0.1c-0.2,0.4 -0.5,0.6 -0.8,0.8c-0.3,0.1 -0.7,0.1 -1,-0.1c-0.4,-0.2 -0.6,-0.5 -0.6,-0.9c-0.1,-0.4 0,-0.8 0.2,-1.2l0,0zm0.5,0.3c-0.2,0.3 -0.2,0.5 -0.2,0.8c0,0.2 0.1,0.4 0.4,0.6c0.2,0.1 0.4,0.1 0.7,0c0.2,-0.1 0.4,-0.3 0.6,-0.6l0,-0.1c0.2,-0.3 0.2,-0.5 0.2,-0.8c0,-0.2 -0.1,-0.4 -0.4,-0.6c-0.2,-0.1 -0.4,-0.1 -0.7,0c-0.2,0.2 -0.4,0.4 -0.6,0.7l0,0z" class="st8"/>
|
||||
<path id="svg_60" d="m168.2,238.8c0.2,0.1 0.3,0.1 0.5,0.1c0.2,0 0.3,-0.1 0.4,-0.2l0.4,0.2l0,0c-0.1,0.2 -0.3,0.4 -0.6,0.4c-0.3,0.1 -0.6,0 -0.9,-0.2c-0.4,-0.2 -0.6,-0.5 -0.6,-0.9c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.8,-0.7c0.3,-0.1 0.7,-0.1 1.1,0.1c0.2,0.1 0.4,0.3 0.5,0.4s0.2,0.3 0.2,0.5l-0.3,0.7l-0.4,-0.2l0.1,-0.5c0,-0.1 -0.1,-0.2 -0.1,-0.3c-0.1,-0.1 -0.2,-0.2 -0.3,-0.2c-0.2,-0.1 -0.5,-0.2 -0.7,0c-0.2,0.1 -0.4,0.3 -0.5,0.6l-0.1,0.1c-0.2,0.3 -0.2,0.5 -0.2,0.7c0.1,0.3 0.2,0.5 0.4,0.6z" class="st8"/>
|
||||
<path id="svg_61" d="m171.4,236.5l0.2,-0.3l0.9,0.5l-1.2,2.1l0.3,0.2l0.9,-0.4l-0.2,-0.2l0.2,-0.3l1.1,0.6l-0.2,0.3l-0.4,0l-1.1,0.4l0.2,1.5l0.3,0.2l-0.2,0.3l-1.1,-0.6l0.2,-0.3l0.3,0.1l-0.2,-1.2l-0.3,-0.2l-0.5,0.8l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.7,-3l-0.4,-0.3z" class="st8"/>
|
||||
<path id="svg_62" d="m173.6,242.4c-0.4,-0.2 -0.6,-0.5 -0.6,-0.8c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.9,-0.7c0.4,-0.1 0.7,-0.1 1,0.1c0.3,0.2 0.5,0.5 0.6,0.8c0.1,0.3 0,0.6 -0.2,1l-0.2,0.3l-1.7,-1l0,0c-0.1,0.2 -0.2,0.5 -0.2,0.7c0,0.2 0.2,0.4 0.4,0.5c0.2,0.1 0.3,0.1 0.5,0.2s0.3,0 0.4,0l0,0.4c-0.1,0 -0.3,0 -0.5,0c-0.3,-0.1 -0.5,-0.2 -0.7,-0.3zm1.2,-2.4c-0.2,-0.1 -0.3,-0.1 -0.5,0c-0.2,0.1 -0.4,0.2 -0.5,0.4l0,0l1.2,0.7l0,-0.1c0.1,-0.2 0.1,-0.4 0.1,-0.5c0,-0.2 -0.1,-0.4 -0.3,-0.5z" class="st8"/>
|
||||
<path id="svg_63" d="m176.3,240.9l0.2,-0.3l0.8,0.5l-0.2,0.4c0.1,-0.1 0.3,-0.1 0.4,-0.2c0.2,0 0.3,0 0.4,0.1c0,0 0.1,0 0.1,0.1c0,0 0.1,0.1 0.1,0.1l-0.3,0.4l-0.3,-0.2c-0.1,-0.1 -0.2,-0.1 -0.4,-0.1c-0.1,0 -0.2,0.1 -0.3,0.1l-0.8,1.5l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.1,-1.8l-0.3,-0.4z" class="st8"/>
|
||||
</g>
|
||||
<g id="svg_64">
|
||||
<path id="svg_65" d="m142.7,220l0.2,-0.3l0.9,0.5l-1.2,2.1l0.3,0.2l0.9,-0.4l-0.2,-0.2l0.2,-0.3l1.1,0.6l-0.2,0.3l-0.3,-0.1l-1.1,0.4l0.2,1.5l0.3,0.2l-0.2,0.3l-1.1,-0.6l0.2,-0.3l0.3,0.1l-0.2,-1.2l-0.3,-0.2l-0.5,0.8l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.7,-3l-0.5,-0.2z" class="st8"/>
|
||||
<path id="svg_66" d="m145.8,225.8c-0.2,0.1 -0.3,0.1 -0.5,0.1s-0.3,0 -0.5,-0.1c-0.3,-0.2 -0.4,-0.4 -0.5,-0.6s0,-0.6 0.2,-1l0.7,-1.1l-0.3,-0.2l0.2,-0.3l0.3,0.2l0.5,0.3l-0.9,1.5c-0.2,0.3 -0.2,0.5 -0.2,0.6s0.1,0.3 0.3,0.4c0.2,0.1 0.3,0.1 0.5,0.1c0.1,0 0.3,-0.1 0.4,-0.1l0.9,-1.5l-0.3,-0.3l0.2,-0.3l0.3,0.2l0.5,0.3l-1.3,2.2l0.3,0.2l-0.2,0.3l-0.7,-0.4l0.1,-0.5z" class="st8"/>
|
||||
<path id="svg_67" d="m149.8,227c-0.2,0.4 -0.5,0.6 -0.8,0.7c-0.3,0.1 -0.6,0.1 -0.9,-0.1c-0.2,-0.1 -0.3,-0.2 -0.4,-0.3c-0.1,-0.1 -0.1,-0.3 -0.1,-0.5l-0.2,0.3l-0.4,-0.2l1.9,-3.3l-0.3,-0.3l0.2,-0.3l0.9,0.5l-0.8,1.4c0.1,-0.1 0.3,-0.1 0.5,-0.1c0.2,0 0.3,0.1 0.5,0.2c0.3,0.2 0.5,0.5 0.5,0.8c-0.3,0.3 -0.4,0.7 -0.6,1.2l0,0zm-0.5,-0.4c0.2,-0.3 0.3,-0.6 0.3,-0.8c0,-0.2 -0.1,-0.4 -0.3,-0.6c-0.1,-0.1 -0.3,-0.1 -0.4,-0.1c-0.1,0 -0.3,0.1 -0.4,0.1l-0.7,1.1c0,0.2 0,0.3 0.1,0.4s0.2,0.2 0.3,0.3c0.2,0.1 0.4,0.1 0.6,0c0.2,0.1 0.3,0 0.5,-0.4l0,0z" class="st8"/>
|
||||
<path id="svg_68" d="m150.7,229.1c-0.4,-0.2 -0.6,-0.5 -0.6,-0.8c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.9,-0.7c0.4,-0.1 0.7,-0.1 1,0.1c0.3,0.2 0.5,0.5 0.6,0.8s0,0.6 -0.2,1l-0.2,0.3l-1.7,-1l0,0c-0.1,0.2 -0.2,0.5 -0.2,0.7s0.2,0.4 0.4,0.5c0.2,0.1 0.3,0.1 0.5,0.2c0.1,0 0.3,0 0.4,0l0,0.4c-0.1,0 -0.3,0 -0.5,0c-0.3,-0.1 -0.5,-0.1 -0.7,-0.3zm1.2,-2.3c-0.2,-0.1 -0.3,-0.1 -0.5,0c-0.2,0.1 -0.4,0.2 -0.5,0.4l0,0l1.2,0.7l0,-0.1c0.1,-0.2 0.1,-0.4 0.1,-0.5c0,-0.2 -0.1,-0.4 -0.3,-0.5z" class="st8"/>
|
||||
<path id="svg_69" d="m154,226.5l0.2,-0.3l0.9,0.5l-1.9,3.3l0.3,0.3l-0.2,0.3l-1.2,-0.7l0.2,-0.3l0.4,0.2l1.7,-3l-0.4,-0.3z" class="st8"/>
|
||||
<path id="svg_70" d="m154.8,231.5c-0.4,-0.2 -0.6,-0.5 -0.6,-0.8c-0.1,-0.4 0,-0.7 0.2,-1.1l0.1,-0.1c0.2,-0.4 0.5,-0.6 0.9,-0.7c0.4,-0.1 0.7,-0.1 1,0.1c0.3,0.2 0.5,0.5 0.6,0.8s0,0.6 -0.2,1l-0.2,0.3l-1.7,-1l0,0c-0.1,0.2 -0.2,0.5 -0.2,0.7s0.2,0.4 0.4,0.5c0.2,0.1 0.3,0.1 0.5,0.2c0.1,0 0.3,0 0.4,0l0,0.4c-0.1,0 -0.3,0 -0.5,0c-0.3,-0.1 -0.5,-0.1 -0.7,-0.3zm1.3,-2.3c-0.2,-0.1 -0.3,-0.1 -0.5,0c-0.2,0.1 -0.4,0.2 -0.5,0.4l0,0l1.2,0.7l0,-0.1c0.1,-0.2 0.1,-0.4 0.1,-0.5c0,-0.2 -0.1,-0.4 -0.3,-0.5z" class="st8"/>
|
||||
<path id="svg_71" d="m158.9,229.6l-0.4,0.6l0.5,0.3l-0.2,0.3l-0.5,-0.3l-0.9,1.6c-0.1,0.1 -0.1,0.2 -0.1,0.3c0,0.1 0.1,0.1 0.2,0.2c0,0 0.1,0 0.1,0.1c0.1,0 0.1,0 0.1,0.1l-0.1,0.4c-0.1,0 -0.1,0 -0.2,0c-0.1,0 -0.2,-0.1 -0.3,-0.1c-0.2,-0.1 -0.3,-0.3 -0.4,-0.4c0,-0.2 0,-0.4 0.1,-0.6l0.9,-1.6l-0.4,-0.2l0.2,-0.3l0.4,0.2l0.4,-0.6l0.6,0z" class="st8"/>
|
||||
</g>
|
||||
</g>
|
||||
<g id="service"/>
|
||||
<g id="pods"/>
|
||||
<g id="IP"/>
|
||||
<g id="deployments"/>
|
||||
<g id="containers_x2F_volumes"/>
|
||||
<g id="labels_x2F_selectors"/>
|
||||
<g id="Layer_14"/>
|
||||
</g>
|
||||
</svg>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-east-asian:normal"
|
||||
x="354.17346"
|
||||
y="198.82574"
|
||||
id="text171"><tspan
|
||||
id="tspan169"
|
||||
x="354.17346"
|
||||
y="198.82574">Control Plane</tspan></text>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
|
||||
x="364.43982"
|
||||
y="122.11347"
|
||||
id="text175"><tspan
|
||||
id="tspan173"
|
||||
x="364.43982"
|
||||
y="122.11347">Node</tspan></text>
|
||||
<text
|
||||
xml:space="preserve"
|
||||
style="font-size:13.3333px;line-height:1.25;font-family:Courier;-inkscape-font-specification:'Courier, Normal'"
|
||||
x="354.85016"
|
||||
y="232.48218"
|
||||
id="text179"><tspan
|
||||
id="tspan177"
|
||||
x="354.85016"
|
||||
y="232.48218">Node Processes</tspan></text>
|
||||
</svg>
|
||||
|
|
Before Width: | Height: | Size: 41 KiB After Width: | Height: | Size: 30 KiB |
|
@ -412,7 +412,7 @@ protocol between the loadbalancer and backend to communicate the true client IP
such as the HTTP [Forwarded](https://tools.ietf.org/html/rfc7239#section-5.2)
or [X-FORWARDED-FOR](https://en.wikipedia.org/wiki/X-Forwarded-For)
headers, or the
[proxy protocol](https://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
[proxy protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
Load balancers in the second category can leverage the feature described above
by creating an HTTP health check pointing at the port stored in
the `service.spec.healthCheckNodePort` field on the Service.

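For context, `service.spec.healthCheckNodePort` is only allocated for `LoadBalancer` Services that preserve the client source IP. A minimal sketch of such a Service; the name, selector, and ports are illustrative, not taken from the change above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service            # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # keeps the true client IP; Kubernetes then allocates healthCheckNodePort
  selector:
    app: example                   # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080
```

A load balancer in the second category can then probe the node port recorded in the created object's `spec.healthCheckNodePort`.
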
@ -134,7 +134,7 @@ kubectl apply -f ./content/en/examples/application/guestbook/frontend-deployment
1. Query the list of Pods to verify that the three frontend replicas are running:

```shell
kubectl get pods -l app=guestbook -l tier=frontend
kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend
```

The response should be similar to this:

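The updated selector assumes the frontend Deployment labels its Pods with the recommended `app.kubernetes.io/*` keys. A minimal sketch of such a Deployment; the Deployment name and image are assumptions, not taken from the guestbook manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: guestbook
      app.kubernetes.io/component: frontend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: guestbook        # matched by -l app.kubernetes.io/name=guestbook
        app.kubernetes.io/component: frontend    # matched by -l app.kubernetes.io/component=frontend
    spec:
      containers:
        - name: php-redis
          image: gcr.io/google_samples/gb-frontend:v5   # assumed image; use the tag from the guestbook example
          ports:
            - containerPort: 80
```
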
@ -10,16 +10,13 @@ feature:
weight: 50
---


{{% capture overview %}}
<!-- overview -->

Los objetos de tipo {{< glossary_tooltip text="Secret" term_id="secret" >}} en Kubernetes te permiten almacenar y administrar información confidencial, como
contraseñas, tokens OAuth y llaves ssh. Poniendo esta información en un Secret
es más seguro y más flexible que ponerlo en la definición de un {{< glossary_tooltip term_id="pod" >}} o en un {{< glossary_tooltip text="container image" term_id="image" >}}. Ver [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) para más información.

{{% /capture %}}

{{% capture body %}}
<!-- body -->

## Introducción a Secrets

@ -58,9 +55,11 @@ empaqueta esos archivos en un Secret y crea el objeto en el Apiserver.
```shell
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
```
```

```none
Secret "db-user-pass" created
```

{{< note >}}
Si la contraseña que está utilizando tiene caracteres especiales como por ejemplo `$`, `\`, `*`, o `!`, es posible que sean interpretados por tu intérprete de comandos y es necesario escapar cada carácter utilizando `\` o introduciéndolos entre comillas simples `'`.
Por ejemplo, si tu password actual es `S!B\*d$zDsb`, deberías ejecutar el comando de esta manera:
@ -76,14 +75,17 @@ Puedes comprobar que el Secret se haya creado, así:
|
|||
```shell
|
||||
kubectl get secrets
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
NAME TYPE DATA AGE
|
||||
db-user-pass Opaque 2 51s
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl describe secrets/db-user-pass
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
Name: db-user-pass
|
||||
Namespace: default
|
||||
Labels: <none>
|
||||
|
@ -137,7 +139,8 @@ Ahora escribe un Secret usando [`kubectl apply`](/docs/reference/generated/kubec
|
|||
```shell
|
||||
kubectl apply -f ./secret.yaml
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
secret "mysecret" created
|
||||
```
|
||||
|
||||
|
@ -242,6 +245,7 @@ desde 1.14. Con esta nueva característica,
|
|||
puedes tambien crear un Secret a partir de un generador y luego aplicarlo para crear el objeto en el Apiserver. Los generadores deben ser especificados en un `kustomization.yaml` dentro de un directorio.
|
||||
|
||||
Por ejemplo, para generar un Secret a partir de los archivos `./username.txt` y `./password.txt`
|
||||
|
||||
```shell
|
||||
# Crear un fichero llamado kustomization.yaml con SecretGenerator
|
||||
cat <<EOF >./kustomization.yaml
|
||||
|
@ -281,9 +285,10 @@ username.txt: 5 bytes

Por ejemplo, para generar un Secret a partir de literales `username=admin` y `password=secret`,
puedes especificar el generador del Secret en `kustomization.yaml` como:

```shell
# Crea un fichero kustomization.yaml con SecretGenerator
$ cat <<EOF >./kustomization.yaml
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
  literals:
@ -291,11 +296,14 @@ secretGenerator:
  - password=secret
EOF
```

Aplica el directorio kustomization para crear el objeto Secret.

```shell
$ kubectl apply -k .
kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
```

{{< note >}}
El nombre generado del Secret tiene un sufijo generado a partir del hash de sus contenidos. Esto asegura que se genera un nuevo Secret cada vez que el contenido es modificado.
{{< /note >}}

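Put together, the generator described above corresponds to a `kustomization.yaml` like the following sketch; the literal values are the same placeholders used in the text:

```yaml
# kustomization.yaml with a SecretGenerator
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
```

Running `kubectl apply -k .` in that directory then creates a Secret whose name carries a content-based hash suffix, as the note above explains.
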
@ -307,7 +315,8 @@ Los Secrets se pueden recuperar a través del comando `kubectl get secret` . Por
|
|||
```shell
|
||||
kubectl get secret mysecret -o yaml
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
|
@ -328,7 +337,8 @@ Decodifica el campo de contraseña:
|
|||
```shell
|
||||
echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
|
@ -480,7 +490,8 @@ Este es el resultado de comandos ejecutados dentro del contenedor del ejemplo an
|
|||
```shell
|
||||
ls /etc/foo/
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
username
|
||||
password
|
||||
```
|
||||
|
@ -488,15 +499,16 @@ password
|
|||
```shell
|
||||
cat /etc/foo/username
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
admin
|
||||
```
|
||||
|
||||
|
||||
```shell
|
||||
cat /etc/foo/password
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
|
@ -562,13 +574,16 @@ Este es el resultado de comandos ejecutados dentro del contenedor del ejemplo an
```shell
echo $SECRET_USERNAME
```
```

```none
admin
```

```shell
echo $SECRET_PASSWORD
```
```

```none
1f2d1e2e67df
```

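The variables echoed above come from a Pod that maps Secret keys into its environment. A minimal sketch, assuming a Secret named `mysecret` with `username` and `password` keys:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod             # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME    # the variable echoed above
          valueFrom:
            secretKeyRef:
              name: mysecret       # assumed Secret name
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
```
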
@ -641,7 +656,7 @@ Cree un fichero kustomization.yaml con SecretGenerator conteniendo algunas llave
|
|||
kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
secret "ssh-key-secret" created
|
||||
```
|
||||
|
||||
|
@ -649,7 +664,6 @@ secret "ssh-key-secret" created
|
|||
Piense detenidamente antes de enviar tus propias llaves ssh: otros usuarios del cluster pueden tener acceso al Secret. Utilice una cuenta de servicio a la que desee que estén accesibles todos los usuarios con los que comparte el cluster de Kubernetes, y pueda revocarlas si se ven comprometidas.
|
||||
{{< /caution >}}
|
||||
|
||||
|
||||
Ahora podemos crear un pod que haga referencia al Secret con la llave ssh key y lo consuma en un volumen:
|
||||
|
||||
```yaml
|
||||
|
@ -691,16 +705,19 @@ Crear un fichero kustomization.yaml con SecretGenerator
|
|||
```shell
|
||||
kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
secret "prod-db-secret" created
|
||||
```
|
||||
|
||||
```shell
|
||||
kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
secret "test-db-secret" created
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Caracteres especiales como `$`, `\*`, y `!` requieren ser escapados.
|
||||
Si el password que estas usando tiene caracteres especiales, necesitas escaparlos usando el caracter `\\` . Por ejemplo, si tu password actual es `S!B\*d$zDsb`, deberías ejecutar el comando de esta forma:
|
||||
|
@ -715,7 +732,7 @@ No necesitas escapar caracteres especiales en contraseñas de los archivos (`--f
|
|||
Ahora haz los pods:
|
||||
|
||||
```shell
|
||||
$ cat <<EOF > pod.yaml
|
||||
cat <<EOF > pod.yaml
|
||||
apiVersion: v1
|
||||
kind: List
|
||||
items:
|
||||
|
@ -759,8 +776,9 @@ EOF
|
|||
```
|
||||
|
||||
Añade los pods a el mismo fichero kustomization.yaml
|
||||
|
||||
```shell
|
||||
$ cat <<EOF >> kustomization.yaml
|
||||
cat <<EOF >> kustomization.yaml
|
||||
resources:
|
||||
- pod.yaml
|
||||
EOF
|
||||
|
@ -833,7 +851,6 @@ spec:
|
|||
mountPath: "/etc/secret-volume"
|
||||
```
|
||||
|
||||
|
||||
El `secret-volume` contendrá un solo archivo, llamado `.secret-file`, y
|
||||
el `dotfile-test-container` tendrá este fichero presente en el path
|
||||
`/etc/secret-volume/.secret-file`.
|
||||
|
@ -874,7 +891,6 @@ para que los clientes puedan `watch` recursos individuales, y probablemente esta

## Propiedades de seguridad


### Protecciones

Debido a que los objetos `Secret` se pueden crear independientemente de los `Pods` que los usan, hay menos riesgo de que el Secret sea expuesto durante el flujo de trabajo de la creación, visualización y edición de pods. El sistema también puede tomar precauciones con los objetos `Secret`, tal como evitar escribirlos en el disco siempre que sea posible.

@ -906,7 +922,4 @@ para datos secretos, para que los Secrets no se almacenen en claro en {{< glossa
- Un usuario que puede crear un pod que usa un Secret también puede ver el valor del Secret. Incluso si una política del apiserver no permite que ese usuario lea el objeto Secret, el usuario puede ejecutar el pod que expone el Secret.
- Actualmente, cualquier persona con root en cualquier nodo puede leer _cualquier_ secret del apiserver, haciéndose pasar por el kubelet. Es una característica planificada enviar los Secrets únicamente a los nodos que realmente los requieran, para restringir el impacto de un exploit de root en un único nodo.


{{% capture whatsnext %}}

{{% /capture %}}
## {{% heading "whatsnext" %}}

@ -83,3 +83,7 @@ para proporcionar un punto de partida.
- Proponer mejoras al sitio web de Kubernetes y otras herramientas


## {{% heading "whatsnext" %}}

También puedes leer la
[guía de localización para español](/es/docs/contribute/localization_es/).

@ -0,0 +1,43 @@
---
title: Contribuir a la documentación de Kubernetes en español
content_type: concept
---

<!-- overview -->

¡Bienvenido(a)!

En esta página encontrarás información sobre convenciones utilizadas en la documentación en castellano y un glosario de términos con sus traducciones.

<!-- body -->

## Glosario de terminología {#terminologia}

| English            | Español                | Género    | Comentarios              |
| ------------------ | ---------------------- | --------- | ------------------------ |
| availability zone  | zona de disponibilidad | femenino  |                          |
| bearer token       | bearer token           | masculino |                          |
| built-in           | incorporados           | masculino |                          |
| conditions         | condiciones            | masculino | para node conditions     |
| container          | contenedor             | masculino |                          |
| controller         | controlador            | masculino |                          |
| deploy             | desplegar              |           |                          |
| Deployment         | Deployment             | masculino | objeto Kubernetes        |
| Endpoints          | Endpoints              | masculino | objeto Kubernetes        |
| file               | archivo                | masculino |                          |
| frontend           | frontend               | masculino |                          |
| healthy            | operativo              |           |                          |
| high availability  | alta disponibilidad    |           |                          |
| hook               | hook                   | masculino |                          |
| instance           | instancia              | femenino  |                          |
| Lease              | Lease                  | masculino | objeto Kubernetes        |
| Pod                | Pod                    | masculino | objeto Kubernetes        |
| ratio              | ritmo                  |           |                          |
| runtime            | motor de ejecución     | masculino | Container Runtime        |
| scheduler          | planificador           | masculino |                          |
| Secret             | Secret                 | masculino | objeto Kubernetes        |
| secret             | secreto                | masculino | información confidencial |
| shell              | terminal               | femenino  |                          |
| stateless          | stateless              |           |                          |
| taint              | contaminación          |           |                          |
| worker node        | nodo de trabajo        | masculino |                          |

@ -140,9 +140,9 @@ Nodeアフィニティでは、`In`、`NotIn`、`Exists`、`DoesNotExist`、`Gt`

`nodeSelector`と`nodeAffinity`の両方を指定した場合、Podは**両方の**条件を満たすNodeにスケジュールされます。

`nodeAffinity`内で複数の`nodeSelectorTerms`を指定した場合、Podは**全ての**`nodeSelectorTerms`を満たしたNodeへスケジュールされます。
`nodeAffinity`内で複数の`nodeSelectorTerms`を指定した場合、Podは**いずれかの**`nodeSelectorTerms`を満たしたNodeへスケジュールされます。

`nodeSelectorTerms`内で複数の`matchExpressions`を指定した場合にはPodは**いずれかの**`matchExpressions`を満たしたNodeへスケジュールされます。
`nodeSelectorTerms`内で複数の`matchExpressions`を指定した場合にはPodは**全ての**`matchExpressions`を満たしたNodeへスケジュールされます。

PodがスケジュールされたNodeのラベルを削除したり変更しても、Podは削除されません。
言い換えると、アフィニティはPodをスケジュールする際にのみ考慮されます。

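The corrected semantics above (multiple `nodeSelectorTerms` are ORed, multiple `matchExpressions` within one term are ANDed) can be illustrated with a sketch like this; the `disktype` and `dedicated` labels are assumptions made for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity         # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          # A node matching either of the two terms below is eligible (terms are ORed).
          - matchExpressions:
              # Both expressions in this term must match (expressions are ANDed).
              - key: kubernetes.io/os
                operator: In
                values: ["linux"]
              - key: disktype
                operator: In
                values: ["ssd"]
          - matchExpressions:
              - key: dedicated
                operator: Exists
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
```
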
@ -5,7 +5,9 @@ date: 2020-12-02
|
|||
slug: dont-panic-kubernetes-and-docker
|
||||
---
|
||||
|
||||
**작성자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
|
||||
**저자:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
|
||||
|
||||
**번역:** 박재화(삼성SDS), 손석호(한국전자통신연구원)
|
||||
|
||||
쿠버네티스는 v1.20 이후 컨테이너 런타임으로서
|
||||
[도커를
|
||||
|
|
|
@ -1,9 +1,5 @@
|
|||
---
|
||||
title: 쿠버네티스 컨트롤 플레인에 대한 메트릭
|
||||
|
||||
|
||||
|
||||
|
||||
title: 쿠버네티스 시스템 컴포넌트에 대한 메트릭
|
||||
content_type: concept
|
||||
weight: 60
|
||||
---
|
||||
|
@ -12,7 +8,7 @@ weight: 60
|
|||
|
||||
시스템 컴포넌트 메트릭으로 내부에서 발생하는 상황을 더 잘 파악할 수 있다. 메트릭은 대시보드와 경고를 만드는 데 특히 유용하다.
|
||||
|
||||
쿠버네티스 컨트롤 플레인의 메트릭은 [프로메테우스 형식](https://prometheus.io/docs/instrumenting/exposition_formats/)으로 출력된다.
|
||||
쿠버네티스 컴포넌트의 메트릭은 [프로메테우스 형식](https://prometheus.io/docs/instrumenting/exposition_formats/)으로 출력된다.
|
||||
이 형식은 구조화된 평문으로 디자인되어 있으므로 사람과 기계 모두가 쉽게 읽을 수 있다.
|
||||
|
||||
<!-- body -->
|
||||
|
@ -36,7 +32,7 @@ weight: 60

클러스터가 {{< glossary_tooltip term_id="rbac" text="RBAC" >}}을 사용하는 경우, 메트릭을 읽으려면 `/metrics` 에 접근을 허용하는 클러스터롤(ClusterRole)을 가지는 사용자, 그룹 또는 서비스어카운트(ServiceAccount)를 통한 권한이 필요하다.
예를 들면, 다음과 같다.
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@ -156,5 +152,4 @@ kube-scheduler는 각 파드에 대해 구성된 리소스 [요청과 제한](/k
## {{% heading "whatsnext" %}}

* 메트릭에 대한 [프로메테우스 텍스트 형식](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)에 대해 읽어본다
* [안정적인 쿠버네티스 메트릭](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml) 목록을 참고한다
* [쿠버네티스 사용 중단 정책](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)에 대해 읽어본다

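The ClusterRole excerpt in the hunk two sections above is truncated; a complete sketch of RBAC that grants read access to `/metrics` might look like the following. The role name, ServiceAccount, and namespace are assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader             # illustrative name
rules:
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
  - kind: ServiceAccount
    name: metrics-scraper          # assumed ServiceAccount that scrapes metrics
    namespace: monitoring          # assumed namespace
```
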
@ -22,6 +22,16 @@ weight: 30
명세나 이미지에 포함될 수 있다. 사용자는 시크릿을 만들 수 있고 시스템도
일부 시크릿을 만들 수 있다.

{{< caution >}}
쿠버네티스 시크릿은 기본적으로 암호화되지 않은 base64 인코딩 문자열로 저장된다.
기본적으로 API 액세스 권한이 있는 모든 사용자 또는 쿠버네티스의 기본 데이터 저장소 etcd에
액세스할 수 있는 모든 사용자가 일반 텍스트로 검색 할 수 있다.
시크릿을 안전하게 사용하려면 (최소한) 다음과 같이 하는 것이 좋다.

1. 시크릿에 대한 [암호화 활성화](/docs/tasks/administer-cluster/encrypt-data/).
2. 시크릿 읽기 및 쓰기를 제한하는 [RBAC 규칙 활성화 또는 구성](/docs/reference/access-authn-authz/authorization/). 파드를 만들 권한이 있는 모든 사용자는 시크릿을 암묵적으로 얻을 수 있다.
{{< /caution >}}

<!-- body -->

## 시크릿 개요

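The first recommendation in the caution above (encryption at rest for Secret data) is configured through an `EncryptionConfiguration` file passed to the API server with `--encryption-provider-config`. A minimal sketch, with the key material left as a placeholder:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED 32-BYTE KEY>   # placeholder; generate and protect your own key
      - identity: {}                                  # fallback so data written before encryption stays readable
```
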
@ -269,6 +279,13 @@ SSH 인증 시크릿 타입은 사용자 편의만을 위해서 제공된다.
API 서버는 요구되는 키가 시크릿 구성에서 제공되고 있는지
검증도 한다.

{{< caution >}}
SSH 개인 키는 자체적으로 SSH 클라이언트와 호스트 서버간에 신뢰할 수 있는 통신을
설정하지 않는다. ConfigMap에 추가된 `known_hosts` 파일과 같은
"중간자(man in the middle)" 공격을 완화하려면 신뢰를 설정하는
2차 수단이 필요하다.
{{< /caution >}}

### TLS 시크릿

쿠버네티스는 보통 TLS를 위해 사용되는 인증서와 관련된 키를 저장하기 위해서

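For reference, a `kubernetes.io/ssh-auth` Secret carries its private key under the `ssh-privatekey` key, which is what the API server validates for this type. A sketch with placeholder data; the Secret name is illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth            # illustrative name
type: kubernetes.io/ssh-auth
stringData:
  # placeholder value; never commit real key material
  ssh-privatekey: |
    <PEM-ENCODED PRIVATE KEY>
```
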
@ -786,7 +803,6 @@ immutable: true
|
|||
|
||||
수동으로 생성된 시크릿(예: GitHub 계정에 접근하기 위한 토큰이 포함된 시크릿)은
|
||||
시크릿의 서비스 어카운트를 기반한 파드에 자동으로 연결될 수 있다.
|
||||
해당 프로세스에 대한 자세한 설명은 [파드프리셋(PodPreset)을 사용하여 파드에 정보 주입하기](/docs/tasks/inject-data-application/podpreset/)를 참고한다.
|
||||
|
||||
## 상세 내용
|
||||
|
||||
|
@ -1233,3 +1249,4 @@ API 서버에서 kubelet으로의 통신은 SSL/TLS로 보호된다.
|
|||
- [`kubectl` 을 사용한 시크릿 관리](/docs/tasks/configmap-secret/managing-secret-using-kubectl/)하는 방법 배우기
|
||||
- [구성 파일을 사용한 시크릿 관리](/docs/tasks/configmap-secret/managing-secret-using-config-file/)하는 방법 배우기
|
||||
- [kustomize를 사용한 시크릿 관리](/docs/tasks/configmap-secret/managing-secret-using-kustomize/)하는 방법 배우기
|
||||
|
||||
|
|
|
@ -54,7 +54,7 @@ Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로

컨테이너 라이프사이클 관리 훅이 호출되면,
쿠버네티스 관리 시스템은 훅 동작에 따라 핸들러를 실행하고,
`exec` 와 `tcpSocket` 은 컨테이너에서 실행되고, `httpGet` 은 kubelet 프로세스에 의해 실행된다.
`httpGet` 와 `tcpSocket` 은 kubelet 프로세스에 의해 실행되고, `exec` 은 컨테이너에서 실행된다.

훅 핸들러 호출은 해당 컨테이너를 포함하고 있는 파드의 컨텍스트와 동기적으로 동작한다.
이것은 `PostStart` 훅에 대해서,

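A sketch of the handler types discussed above, showing one `exec` handler (runs inside the container) and one `httpGet` handler (executed by the kubelet). The Pod name, image, path, and port are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo             # illustrative name
spec:
  containers:
    - name: lifecycle-demo-container
      image: nginx
      lifecycle:
        postStart:
          exec:                    # exec handlers run inside the container
            command: ["/bin/sh", "-c", "echo postStart > /usr/share/message"]
        preStop:
          httpGet:                 # httpGet handlers are executed by the kubelet process
            path: /shutdown        # assumed endpoint exposed by the application
            port: 8080
```
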
@ -1,7 +1,4 @@
|
|||
---
|
||||
|
||||
|
||||
|
||||
title: 런타임클래스(RuntimeClass)
|
||||
content_type: concept
|
||||
weight: 20
|
||||
|
@ -35,10 +32,6 @@ weight: 20
|
|||
|
||||
## 셋업
|
||||
|
||||
런타임클래스 기능 게이트가 활성화(기본값)된 것을 확인한다.
|
||||
기능 게이트 활성화에 대한 설명은 [기능 게이트](/ko/docs/reference/command-line-tools-reference/feature-gates/)를
|
||||
참고한다. `RuntimeClass` 기능 게이트는 API 서버 _및_ kubelets에서 활성화되어야 한다.
|
||||
|
||||
1. CRI 구현(implementation)을 노드에 설정(런타임에 따라서).
|
||||
2. 상응하는 런타임클래스 리소스 생성.
|
||||
|
||||
|
@ -144,11 +137,9 @@ https://github.com/containerd/cri/blob/master/docs/config.md

{{< feature-state for_k8s_version="v1.16" state="beta" >}}

쿠버네티스 v1.16 부터, 런타임 클래스는 `scheduling` 필드를 통해 이종의 클러스터
지원을 포함한다. 이 필드를 사용하면, 이 런타임 클래스를 갖는 파드가 이를 지원하는
노드로 스케줄된다는 것을 보장할 수 있다. 이 스케줄링 기능을 사용하려면,
[런타임 클래스 어드미션(admission) 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#runtimeclass)를
활성화(1.16 부터 기본값)해야 한다.
RuntimeClass에 `scheduling` 필드를 지정하면, 이 RuntimeClass로 실행되는 파드가
이를 지원하는 노드로 예약되도록 제약 조건을 설정할 수 있다.
`scheduling`이 설정되지 않은 경우 이 RuntimeClass는 모든 노드에서 지원되는 것으로 간주된다.

파드가 지정된 런타임클래스를 지원하는 노드에 안착한다는 것을 보장하려면,
해당 노드들은 `runtimeClass.scheduling.nodeSelector` 필드에서 선택되는 공통 레이블을 가져야한다.

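A sketch of a RuntimeClass that uses the `scheduling` field described above; the handler name and node label are assumptions about how the nodes were configured, and the `v1beta1` API matches the beta feature state noted in the hunk:

```yaml
apiVersion: node.k8s.io/v1beta1    # beta API for this feature state
kind: RuntimeClass
metadata:
  name: sandboxed                  # illustrative name referenced by pods via runtimeClassName
handler: runsc                     # assumed CRI handler configured on the matching nodes
scheduling:
  nodeSelector:
    sandbox.example.com/runsc: "true"   # assumed label carried by nodes that support this runtime
```
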
@ -69,7 +69,7 @@ weight: 10
|
|||
웹훅 모델에서 쿠버네티스는 원격 서비스에 네트워크 요청을 한다.
|
||||
*바이너리 플러그인* 모델에서 쿠버네티스는 바이너리(프로그램)를 실행한다.
|
||||
바이너리 플러그인은 kubelet(예:
|
||||
[Flex Volume 플러그인](/ko/docs/concepts/storage/volumes/#flexvolume)과
|
||||
[Flex 볼륨 플러그인](/ko/docs/concepts/storage/volumes/#flexvolume)과
|
||||
[네트워크 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))과
|
||||
kubectl에서
|
||||
사용한다.
|
||||
|
@ -157,7 +157,7 @@ API를 추가해도 기존 API(예: 파드)의 동작에 직접 영향을 미치
|
|||
|
||||
### 스토리지 플러그인
|
||||
|
||||
[Flex Volumes](/ko/docs/concepts/storage/volumes/#flexvolume)을 사용하면
|
||||
[Flex 볼륨](/ko/docs/concepts/storage/volumes/#flexvolume)을 사용하면
|
||||
Kubelet이 바이너리 플러그인을 호출하여 볼륨을 마운트하도록 함으로써
|
||||
빌트인 지원 없이 볼륨 유형을 마운트 할 수 있다.
|
||||
|
||||
|
|
|
@ -9,11 +9,11 @@ weight: 40
|
|||
|
||||
인그레스 리소스가 작동하려면, 클러스터는 실행 중인 인그레스 컨트롤러가 반드시 필요하다.
|
||||
|
||||
kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의 다른 타입과 달리 인그레스 컨트롤러는
|
||||
`kube-controller-manager` 바이너리의 일부로 실행되는 컨트롤러의 다른 타입과 달리 인그레스 컨트롤러는
|
||||
클러스터와 함께 자동으로 실행되지 않는다.
|
||||
클러스터에 가장 적합한 인그레스 컨트롤러 구현을 선택하는데 이 페이지를 사용한다.
|
||||
|
||||
프로젝트로써 쿠버네티스는 [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme)와
|
||||
프로젝트로서 쿠버네티스는 [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme)와
|
||||
[nginx](https://git.k8s.io/ingress-nginx/README.md#readme) 인그레스 컨트롤러를 지원하고 유지한다.
|
||||
|
||||
|
||||
|
@ -26,6 +26,7 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의
|
|||
* [AKS 애플리케이션 게이트웨이 인그레스 컨트롤러] (https://azure.github.io/application-gateway-kubernetes-ingress/)는 [Azure 애플리케이션 게이트웨이](https://docs.microsoft.com)를 구성하는 인그레스 컨트롤러다.
|
||||
* [Ambassador](https://www.getambassador.io/) API 게이트웨이는 [Envoy](https://www.envoyproxy.io) 기반 인그레스
|
||||
컨트롤러다.
|
||||
* [Apache APISIX 인그레스 컨트롤러](https://github.com/apache/apisix-ingress-controller)는 [Apache APISIX](https://github.com/apache/apisix) 기반의 인그레스 컨트롤러이다.
|
||||
* [Avi 쿠버네티스 오퍼레이터](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes)는 [VMware NSX Advanced Load Balancer](https://avinetworks.com/)을 사용하는 L4-L7 로드 밸런싱을 제공한다.
|
||||
* [Citrix 인그레스 컨트롤러](https://github.com/citrix/citrix-k8s-ingress-controller#readme)는
|
||||
Citrix 애플리케이션 딜리버리 컨트롤러에서 작동한다.
|
||||
|
@ -42,7 +43,7 @@ kube-controller-manager 바이너리의 일부로 실행되는 컨트롤러의
|
|||
기반 인그레스 컨트롤러다.
|
||||
* [쿠버네티스 용 Kong 인그레스 컨트롤러](https://github.com/Kong/kubernetes-ingress-controller#readme)는 [Kong 게이트웨이](https://konghq.com/kong/)를
|
||||
구동하는 인그레스 컨트롤러다.
|
||||
* [쿠버네티스 용 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx/kubernetes-ingress-controller)는 [NGINX](https://www.nginx.com/resources/glossary)
|
||||
* [쿠버네티스 용 NGINX 인그레스 컨트롤러](https://www.nginx.com/products/nginx-ingress-controller/)는 [NGINX](https://www.nginx.com/resources/glossary/nginx/)
|
||||
웹서버(프록시로 사용)와 함께 작동한다.
|
||||
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/)는 사용자의 커스텀 프록시를 구축하기 위한 라이브러리로 설계된 쿠버네티스 인그레스와 같은 유스케이스를 포함한 서비스 구성을 위한 HTTP 라우터 및 역방향 프록시다.
|
||||
* [Traefik 쿠버네티스 인그레스 제공자](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)는
|
||||
|
|
|
@ -376,7 +376,7 @@ graph LR;
트래픽을 일치 시킬 수 있다.

예를 들어, 다음 인그레스는 `first.bar.com`에 요청된 트래픽을
`service1`로, `second.foo.com`는 `service2`로, 호스트 이름이 정의되지
`service1`로, `second.bar.com`는 `service2`로, 호스트 이름이 정의되지
않은(즉, 요청 헤더가 표시 되지 않는) IP 주소로의 모든
트래픽은 `service3`로 라우팅 한다.

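A sketch of the name-based virtual hosting rules described above, written against the `networking.k8s.io/v1` Ingress API; the Ingress name and the service ports are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress  # illustrative name
spec:
  rules:
    - host: first.bar.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: second.bar.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80
    - http:                        # no host: catches requests that match no other rule, e.g. plain IP access
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service3
                port:
                  number: 80
```
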
@ -134,7 +134,7 @@ spec:
* 한 서비스에서 다른
{{< glossary_tooltip term_id="namespace" text="네임스페이스">}} 또는 다른 클러스터의 서비스를 지정하려고 한다.
* 워크로드를 쿠버네티스로 마이그레이션하고 있다. 해당 방식을 평가하는 동안,
쿠버네티스에서는 일정 비율의 백엔드만 실행한다.
쿠버네티스에서는 백엔드의 일부만 실행한다.

이러한 시나리오 중에서 파드 셀렉터 _없이_ 서비스를 정의 할 수 있다.
예를 들면

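A sketch of a selector-less Service together with the Endpoints object you then manage by hand for it; the name, ports, and address are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # illustrative name
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service                 # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.42             # example address standing in for the external backend
    ports:
      - port: 9376
```
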
@ -13,7 +13,7 @@ weight: 50
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료되도록 한다.
|
||||
잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료될 때까지 계속해서 파드의 실행을 재시도한다.
|
||||
파드가 성공적으로 완료되면, 성공적으로 완료된 잡을 추적한다. 지정된 수의
|
||||
성공 완료에 도달하면, 작업(즉, 잡)이 완료된다. 잡을 삭제하면 잡이 생성한
|
||||
파드가 정리된다.
|
||||
|
|
|
@ -76,4 +76,4 @@ TTL 컨트롤러는 쿠버네티스 리소스에
|
|||
|
||||
* [자동으로 잡 정리](/ko/docs/concepts/workloads/controllers/job/#완료된-잡을-자동으로-정리)
|
||||
|
||||
* [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
|
||||
* [디자인 문서](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/README.md)
|
||||
|
|
|
@ -18,7 +18,8 @@ content_type: concept
|
|||
|
||||
## API 레퍼런스
|
||||
|
||||
* [쿠버네티스 API 레퍼런스 {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
* [쿠버네티스 API 레퍼런스](/docs/reference/kubernetes-api/)
|
||||
* [쿠버네티스 {{< param "version" >}}용 원페이지(One-page) API 레퍼런스](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
* [쿠버네티스 API 사용](/ko/docs/reference/using-api/) - 쿠버네티스 API에 대한 개요
|
||||
|
||||
## API 클라이언트 라이브러리
|
||||
|
|
|
@ -48,13 +48,15 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
|
||||
| 기능 | 디폴트 | 단계 | 도입 | 종료 |
|
||||
|---------|---------|-------|-------|-------|
|
||||
| `AnyVolumeDataSource` | `false` | 알파 | 1.18 | |
|
||||
| `APIListChunking` | `false` | 알파 | 1.8 | 1.8 |
|
||||
| `APIListChunking` | `true` | 베타 | 1.9 | |
|
||||
| `APIPriorityAndFairness` | `false` | 알파 | 1.17 | 1.19 |
|
||||
| `APIPriorityAndFairness` | `true` | 베타 | 1.20 | |
|
||||
| `APIResponseCompression` | `false` | 알파 | 1.7 | |
|
||||
| `APIResponseCompression` | `false` | 알파 | 1.7 | 1.15 |
|
||||
| `APIResponseCompression` | `false` | 베타 | 1.16 | |
|
||||
| `APIServerIdentity` | `false` | 알파 | 1.20 | |
|
||||
| `AllowInsecureBackendProxy` | `true` | 베타 | 1.17 | |
|
||||
| `AnyVolumeDataSource` | `false` | 알파 | 1.18 | |
|
||||
| `AppArmor` | `true` | 베타 | 1.4 | |
|
||||
| `BalanceAttachedNodeVolumes` | `false` | 알파 | 1.11 | |
|
||||
| `BoundServiceAccountTokenVolume` | `false` | 알파 | 1.13 | |
|
||||
|
@ -77,7 +79,8 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `CSIMigrationGCE` | `false` | 알파 | 1.14 | 1.16 |
|
||||
| `CSIMigrationGCE` | `false` | 베타 | 1.17 | |
|
||||
| `CSIMigrationGCEComplete` | `false` | 알파 | 1.17 | |
|
||||
| `CSIMigrationOpenStack` | `false` | 알파 | 1.14 | |
|
||||
| `CSIMigrationOpenStack` | `false` | 알파 | 1.14 | 1.17 |
|
||||
| `CSIMigrationOpenStack` | `true` | 베타 | 1.18 | |
|
||||
| `CSIMigrationOpenStackComplete` | `false` | 알파 | 1.17 | |
|
||||
| `CSIMigrationvSphere` | `false` | 베타 | 1.19 | |
|
||||
| `CSIMigrationvSphereComplete` | `false` | 베타 | 1.19 | |
|
||||
|
@ -89,26 +92,23 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `ConfigurableFSGroupPolicy` | `true` | 베타 | 1.20 | |
|
||||
| `CronJobControllerV2` | `false` | 알파 | 1.20 | |
|
||||
| `CustomCPUCFSQuotaPeriod` | `false` | 알파 | 1.12 | |
|
||||
| `CustomResourceDefaulting` | `false` | 알파| 1.15 | 1.15 |
|
||||
| `CustomResourceDefaulting` | `true` | 베타 | 1.16 | |
|
||||
| `DefaultPodTopologySpread` | `false` | 알파 | 1.19 | 1.19 |
|
||||
| `DefaultPodTopologySpread` | `true` | 베타 | 1.20 | |
|
||||
| `DevicePlugins` | `false` | 알파 | 1.8 | 1.9 |
|
||||
| `DevicePlugins` | `true` | 베타 | 1.10 | |
|
||||
| `DisableAcceleratorUsageMetrics` | `false` | 알파 | 1.19 | 1.19 |
|
||||
| `DisableAcceleratorUsageMetrics` | `true` | 베타 | 1.20 | 1.22 |
|
||||
| `DisableAcceleratorUsageMetrics` | `true` | 베타 | 1.20 | |
|
||||
| `DownwardAPIHugePages` | `false` | 알파 | 1.20 | |
|
||||
| `DryRun` | `false` | 알파 | 1.12 | 1.12 |
|
||||
| `DryRun` | `true` | 베타 | 1.13 | |
|
||||
| `DynamicKubeletConfig` | `false` | 알파 | 1.4 | 1.10 |
|
||||
| `DynamicKubeletConfig` | `true` | 베타 | 1.11 | |
|
||||
| `EfficientWatchResumption` | `false` | 알파 | 1.20 | |
|
||||
| `EndpointSlice` | `false` | 알파 | 1.16 | 1.16 |
|
||||
| `EndpointSlice` | `false` | 베타 | 1.17 | |
|
||||
| `EndpointSlice` | `true` | 베타 | 1.18 | |
|
||||
| `EndpointSliceNodeName` | `false` | 알파 | 1.20 | |
|
||||
| `EndpointSliceProxying` | `false` | 알파 | 1.18 | 1.18 |
|
||||
| `EndpointSliceProxying` | `true` | 베타 | 1.19 | |
|
||||
| `EndpointSliceTerminating` | `false` | 알파 | 1.20 | |
|
||||
| `EndpointSliceTerminatingCondition` | `false` | 알파 | 1.20 | |
|
||||
| `EphemeralContainers` | `false` | 알파 | 1.16 | |
|
||||
| `ExpandCSIVolumes` | `false` | 알파 | 1.14 | 1.15 |
|
||||
| `ExpandCSIVolumes` | `true` | 베타 | 1.16 | |
|
||||
|
@ -119,19 +119,22 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `ExperimentalHostUserNamespaceDefaulting` | `false` | 베타 | 1.5 | |
|
||||
| `GenericEphemeralVolume` | `false` | 알파 | 1.19 | |
|
||||
| `GracefulNodeShutdown` | `false` | 알파 | 1.20 | |
|
||||
| `HPAContainerMetrics` | `false` | 알파 | 1.20 | |
|
||||
| `HPAScaleToZero` | `false` | 알파 | 1.16 | |
|
||||
| `HugePageStorageMediumSize` | `false` | 알파 | 1.18 | 1.18 |
|
||||
| `HugePageStorageMediumSize` | `true` | 베타 | 1.19 | |
|
||||
| `HyperVContainer` | `false` | 알파 | 1.10 | |
|
||||
| `IPv6DualStack` | `false` | 알파 | 1.15 | |
|
||||
| `ImmutableEphemeralVolumes` | `false` | 알파 | 1.18 | 1.18 |
|
||||
| `ImmutableEphemeralVolumes` | `true` | 베타 | 1.19 | |
|
||||
| `IPv6DualStack` | `false` | 알파 | 1.16 | |
|
||||
| `LegacyNodeRoleBehavior` | `true` | 알파 | 1.16 | |
|
||||
| `KubeletCredentialProviders` | `false` | 알파 | 1.20 | |
|
||||
| `KubeletPodResources` | `true` | 알파 | 1.13 | 1.14 |
|
||||
| `KubeletPodResources` | `true` | 베타 | 1.15 | |
|
||||
| `LegacyNodeRoleBehavior` | `false` | 알파 | 1.16 | 1.18 |
|
||||
| `LegacyNodeRoleBehavior` | `true` | True | 1.19 | |
|
||||
| `LocalStorageCapacityIsolation` | `false` | 알파 | 1.7 | 1.9 |
|
||||
| `LocalStorageCapacityIsolation` | `true` | 베타 | 1.10 | |
|
||||
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | 알파 | 1.15 | |
|
||||
| `MixedProtocolLBService` | `false` | 알파 | 1.20 | |
|
||||
| `MountContainers` | `false` | 알파 | 1.9 | |
|
||||
| `NodeDisruptionExclusion` | `false` | 알파 | 1.16 | 1.18 |
|
||||
| `NodeDisruptionExclusion` | `true` | 베타 | 1.19 | |
|
||||
| `NonPreemptingPriority` | `false` | 알파 | 1.15 | 1.18 |
|
||||
|
@ -143,25 +146,27 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `ProcMountType` | `false` | 알파 | 1.12 | |
|
||||
| `QOSReserved` | `false` | 알파 | 1.11 | |
|
||||
| `RemainingItemCount` | `false` | 알파 | 1.15 | |
|
||||
| `RemoveSelfLink` | `false` | 알파 | 1.16 | 1.19 |
|
||||
| `RemoveSelfLink` | `true` | 베타 | 1.20 | |
|
||||
| `RootCAConfigMap` | `false` | 알파 | 1.13 | 1.19 |
|
||||
| `RootCAConfigMap` | `true` | 베타 | 1.20 | |
|
||||
| `RotateKubeletServerCertificate` | `false` | 알파 | 1.7 | 1.11 |
|
||||
| `RotateKubeletServerCertificate` | `true` | 베타 | 1.12 | |
|
||||
| `RunAsGroup` | `true` | 베타 | 1.14 | |
|
||||
| `RuntimeClass` | `false` | 알파 | 1.12 | 1.13 |
|
||||
| `RuntimeClass` | `true` | 베타 | 1.14 | |
|
||||
| `SCTPSupport` | `false` | 알파 | 1.12 | 1.18 |
|
||||
| `SCTPSupport` | `true` | 베타 | 1.19 | |
|
||||
| `ServerSideApply` | `false` | 알파 | 1.14 | 1.15 |
|
||||
| `ServerSideApply` | `true` | 베타 | 1.16 | |
|
||||
| `ServiceAccountIssuerDiscovery` | `false` | 알파 | 1.18 | |
|
||||
| `ServiceLBNodePortControl` | `false` | 알파 | 1.20 | 1.20 |
|
||||
| `ServiceAccountIssuerDiscovery` | `false` | 알파 | 1.18 | 1.19 |
|
||||
| `ServiceAccountIssuerDiscovery` | `true` | 베타 | 1.20 | |
|
||||
| `ServiceLBNodePortControl` | `false` | 알파 | 1.20 | |
|
||||
| `ServiceNodeExclusion` | `false` | 알파 | 1.8 | 1.18 |
|
||||
| `ServiceNodeExclusion` | `true` | 베타 | 1.19 | |
|
||||
| `ServiceTopology` | `false` | 알파 | 1.17 | |
|
||||
| `SizeMemoryBackedVolumes` | `false` | 알파 | 1.20 | |
|
||||
| `SetHostnameAsFQDN` | `false` | 알파 | 1.19 | 1.19 |
|
||||
| `SetHostnameAsFQDN` | `true` | 베타 | 1.20 | |
|
||||
| `SizeMemoryBackedVolumes` | `false` | 알파 | 1.20 | |
|
||||
| `StorageVersionAPI` | `false` | 알파 | 1.20 | |
|
||||
| `StorageVersionHash` | `false` | 알파 | 1.14 | 1.14 |
|
||||
| `StorageVersionHash` | `true` | 베타 | 1.15 | |
|
||||
| `Sysctls` | `true` | 베타 | 1.11 | |
|
||||
|
@ -170,11 +175,11 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `TopologyManager` | `true` | 베타 | 1.18 | |
|
||||
| `ValidateProxyRedirects` | `false` | 알파 | 1.12 | 1.13 |
|
||||
| `ValidateProxyRedirects` | `true` | 베타 | 1.14 | |
|
||||
| `WindowsEndpointSliceProxying` | `false` | 알파 | 1.19 | |
|
||||
| `WindowsGMSA` | `false` | 알파 | 1.14 | |
|
||||
| `WindowsGMSA` | `true` | 베타 | 1.16 | |
|
||||
| `WarningHeaders` | `true` | 베타 | 1.19 | |
|
||||
| `WinDSR` | `false` | 알파 | 1.14 | |
|
||||
| `WinOverlay` | `false` | 알파 | 1.14 | |
|
||||
| `WinOverlay` | `false` | 알파 | 1.14 | 1.19 |
|
||||
| `WinOverlay` | `true` | 베타 | 1.20 | |
|
||||
| `WindowsEndpointSliceProxying` | `false` | 알파 | 1.19 | |
|
||||
{{< /table >}}
|
||||
|
||||
### GA 또는 사용 중단된 기능을 위한 기능 게이트
|
||||
|
@ -228,6 +233,9 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `CustomResourceWebhookConversion` | `false` | 알파 | 1.13 | 1.14 |
|
||||
| `CustomResourceWebhookConversion` | `true` | 베타 | 1.15 | 1.15 |
|
||||
| `CustomResourceWebhookConversion` | `true` | GA | 1.16 | - |
|
||||
| `DryRun` | `false` | 알파 | 1.12 | 1.12 |
|
||||
| `DryRun` | `true` | 베타 | 1.13 | 1.18 |
|
||||
| `DryRun` | `true` | GA | 1.19 | - |
|
||||
| `DynamicAuditing` | `false` | 알파 | 1.13 | 1.18 |
|
||||
| `DynamicAuditing` | - | 사용중단 | 1.19 | - |
|
||||
| `DynamicProvisioningScheduling` | `false` | 알파 | 1.11 | 1.11 |
|
||||
|
@ -247,23 +255,28 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `HugePages` | `false` | 알파 | 1.8 | 1.9 |
|
||||
| `HugePages` | `true` | 베타 | 1.10 | 1.13 |
|
||||
| `HugePages` | `true` | GA | 1.14 | - |
|
||||
| `HyperVContainer` | `false` | 알파 | 1.10 | 1.19 |
|
||||
| `HyperVContainer` | `false` | 사용중단 | 1.20 | - |
|
||||
| `Initializers` | `false` | 알파 | 1.7 | 1.13 |
|
||||
| `Initializers` | - | 사용중단 | 1.14 | - |
|
||||
| `KubeletConfigFile` | `false` | 알파 | 1.8 | 1.9 |
|
||||
| `KubeletConfigFile` | - | 사용중단 | 1.10 | - |
|
||||
| `KubeletCredentialProviders` | `false` | 알파 | 1.20 | 1.20 |
|
||||
| `KubeletPluginsWatcher` | `false` | 알파 | 1.11 | 1.11 |
|
||||
| `KubeletPluginsWatcher` | `true` | 베타 | 1.12 | 1.12 |
|
||||
| `KubeletPluginsWatcher` | `true` | GA | 1.13 | - |
|
||||
| `KubeletPodResources` | `false` | 알파 | 1.13 | 1.14 |
|
||||
| `KubeletPodResources` | `true` | 베타 | 1.15 | |
|
||||
| `KubeletPodResources` | `true` | GA | 1.20 | |
|
||||
| `MountContainers` | `false` | 알파 | 1.9 | 1.16 |
|
||||
| `MountContainers` | `false` | 사용중단 | 1.17 | - |
|
||||
| `MountPropagation` | `false` | 알파 | 1.8 | 1.9 |
|
||||
| `MountPropagation` | `true` | 베타 | 1.10 | 1.11 |
|
||||
| `MountPropagation` | `true` | GA | 1.12 | - |
|
||||
| `NodeLease` | `false` | 알파 | 1.12 | 1.13 |
|
||||
| `NodeLease` | `true` | 베타 | 1.14 | 1.16 |
|
||||
| `NodeLease` | `true` | GA | 1.17 | - |
|
||||
| `PVCProtection` | `false` | 알파 | 1.9 | 1.9 |
|
||||
| `PVCProtection` | - | 사용중단 | 1.10 | - |
|
||||
| `PersistentLocalVolumes` | `false` | 알파 | 1.7 | 1.9 |
|
||||
| `PersistentLocalVolumes` | `true` | 베타 | 1.10 | 1.13 |
|
||||
| `PersistentLocalVolumes` | `true` | GA | 1.14 | - |
|
||||
|
@ -276,8 +289,6 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
| `PodShareProcessNamespace` | `false` | 알파 | 1.10 | 1.11 |
|
||||
| `PodShareProcessNamespace` | `true` | 베타 | 1.12 | 1.16 |
|
||||
| `PodShareProcessNamespace` | `true` | GA | 1.17 | - |
|
||||
| `PVCProtection` | `false` | 알파 | 1.9 | 1.9 |
|
||||
| `PVCProtection` | - | 사용중단 | 1.10 | - |
|
||||
| `RequestManagement` | `false` | 알파 | 1.15 | 1.16 |
|
||||
| `ResourceLimitsPriorityFunction` | `false` | 알파 | 1.9 | 1.18 |
|
||||
| `ResourceLimitsPriorityFunction` | - | 사용중단 | 1.19 | - |
|
||||
|
@ -398,62 +409,131 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
|
||||
각 기능 게이트는 특정 기능을 활성화/비활성화하도록 설계되었다.
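예를 들어, kubelet에서는 [kubelet 설정 파일](/docs/tasks/administer-cluster/kubelet-config-file/)의 `featureGates` 필드로도 기능 게이트를 켜고 끌 수 있다. 아래는 이 문서에 나오는 게이트 일부를 설정하는 최소 스케치이며, 어떤 게이트를 어떤 값으로 둘지는 예시일 뿐이다.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # 예시로 고른 게이트이며, 실제 값은 클러스터 요구 사항에 따라 달라진다
  GracefulNodeShutdown: true
  SizeMemoryBackedVolumes: false
```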
|
||||
|
||||
- `APIListChunking`: API 클라이언트가 API 서버에서 (`LIST` 또는 `GET`)
|
||||
리소스를 청크(chunks)로 검색할 수 있도록 한다.
|
||||
- `APIPriorityAndFairness`: 각 서버의 우선 순위와 공정성을 통해 동시 요청을
|
||||
관리할 수 있다. (`RequestManagement` 에서 이름이 변경됨)
|
||||
- `APIResponseCompression`: `LIST` 또는 `GET` 요청에 대한 API 응답을 압축한다.
|
||||
- `APIServerIdentity`: 클러스터의 각 API 서버에 ID를 할당한다.
|
||||
- `Accelerators`: 도커 사용 시 Nvidia GPU 지원을 활성화한다.
|
||||
- `AdvancedAuditing`: [고급 감사](/docs/tasks/debug-application-cluster/audit/#advanced-audit) 기능을 활성화한다.
|
||||
- `AffinityInAnnotations`(*사용 중단됨*): [파드 어피니티 또는 안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity) 설정을 활성화한다.
|
||||
- `AffinityInAnnotations`(*사용 중단됨*): [파드 어피니티 또는 안티-어피니티](/ko/docs/concepts/scheduling-eviction/assign-pod-node/#어피니티-affinity-와-안티-어피니티-anti-affinity)
|
||||
설정을 활성화한다.
|
||||
- `AllowExtTrafficLocalEndpoints`: 서비스가 외부 요청을 노드의 로컬 엔드포인트로 라우팅할 수 있도록 한다.
|
||||
- `AllowInsecureBackendProxy`: 사용자가 파드 로그 요청에서 kubelet의
|
||||
TLS 확인을 건너뛸 수 있도록 한다.
|
||||
- `AnyVolumeDataSource`: {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}의
|
||||
`DataSource` 로 모든 사용자 정의 리소스 사용을 활성화한다.
|
||||
- `APIListChunking`: API 클라이언트가 API 서버에서 (`LIST` 또는 `GET`) 리소스를 청크(chunks)로 검색할 수 있도록 한다.
|
||||
- `APIPriorityAndFairness`: 각 서버의 우선 순위와 공정성을 통해 동시 요청을 관리할 수 있다. (`RequestManagement` 에서 이름이 변경됨)
|
||||
- `APIResponseCompression`: `LIST` 또는 `GET` 요청에 대한 API 응답을 압축한다.
|
||||
- `APIServerIdentity`: 클러스터의 각 kube-apiserver에 ID를 할당한다.
|
||||
- `AppArmor`: 도커를 사용할 때 리눅스 노드에서 AppArmor 기반의 필수 접근 제어를 활성화한다.
|
||||
자세한 내용은 [AppArmor 튜토리얼](/ko/docs/tutorials/clusters/apparmor/)을 참고한다.
|
||||
자세한 내용은 [AppArmor 튜토리얼](/ko/docs/tutorials/clusters/apparmor/)을 참고한다.
|
||||
- `AttachVolumeLimit`: 볼륨 플러그인이 노드에 연결될 수 있는 볼륨 수에
|
||||
대한 제한을 보고하도록 한다.
|
||||
자세한 내용은 [동적 볼륨 제한](/ko/docs/concepts/storage/storage-limits/#동적-볼륨-한도)을 참고한다.
|
||||
자세한 내용은 [동적 볼륨 제한](/ko/docs/concepts/storage/storage-limits/#동적-볼륨-한도)을 참고한다.
|
||||
- `BalanceAttachedNodeVolumes`: 스케줄링 시 균형 잡힌 리소스 할당을 위해 고려할 노드의 볼륨 수를
|
||||
포함한다. 스케줄러가 결정을 내리는 동안 CPU, 메모리 사용률 및 볼륨 수가
|
||||
더 가까운 노드가 선호된다.
|
||||
- `BlockVolume`: 파드에서 원시 블록 장치의 정의와 사용을 활성화한다.
|
||||
자세한 내용은 [원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을
|
||||
참고한다.
|
||||
자세한 내용은 [원시 블록 볼륨 지원](/ko/docs/concepts/storage/persistent-volumes/#원시-블록-볼륨-지원)을
|
||||
참고한다.
|
||||
- `BoundServiceAccountTokenVolume`: ServiceAccountTokenVolumeProjection으로 구성된 프로젝션 볼륨을 사용하도록 서비스어카운트 볼륨을
|
||||
마이그레이션한다. 클러스터 관리자는 `serviceaccount_stale_tokens_total` 메트릭을 사용하여
|
||||
확장 토큰에 의존하는 워크로드를 모니터링 할 수 있다. 이러한 워크로드가 없는 경우 `--service-account-extend-token-expiration=false` 플래그로
|
||||
`kube-apiserver`를 시작하여 확장 토큰 기능을 끈다.
|
||||
자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을
|
||||
확인한다.
|
||||
- `ConfigurableFSGroupPolicy`: 파드에 볼륨을 마운트할 때 fsGroups에 대한 볼륨 권한 변경 정책을 구성할 수 있다. 자세한 내용은 [파드에 대한 볼륨 권한 및 소유권 변경 정책 구성](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)을 참고한다.
|
||||
- `CronJobControllerV2`: {{< glossary_tooltip text="크론잡" term_id="cronjob" >}} 컨트롤러의 대체 구현을 사용한다. 그렇지 않으면 동일한 컨트롤러의 버전 1이 선택된다. 버전 2 컨트롤러는 실험적인 성능 향상을 제공한다.
|
||||
- `CPUManager`: 컨테이너 수준의 CPU 어피니티 지원을 활성화한다. [CPU 관리 정책](/docs/tasks/administer-cluster/cpu-management-policies/)을 참고한다.
|
||||
마이그레이션한다. 클러스터 관리자는 `serviceaccount_stale_tokens_total` 메트릭을 사용하여
|
||||
확장 토큰에 의존하는 워크로드를 모니터링 할 수 있다. 이러한 워크로드가 없는 경우 `--service-account-extend-token-expiration=false` 플래그로
|
||||
`kube-apiserver`를 시작하여 확장 토큰 기능을 끈다.
|
||||
자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을
|
||||
확인한다.
|
||||
- `CPUManager`: 컨테이너 수준의 CPU 어피니티 지원을 활성화한다.
|
||||
[CPU 관리 정책](/docs/tasks/administer-cluster/cpu-management-policies/)을 참고한다.
|
||||
- `CRIContainerLogRotation`: cri 컨테이너 런타임에 컨테이너 로그 로테이션을 활성화한다.
|
||||
- `CSIBlockVolume`: 외부 CSI 볼륨 드라이버가 블록 스토리지를 지원할 수 있게 한다. 자세한 내용은 [`csi` 원시 블록 볼륨 지원](/ko/docs/concepts/storage/volumes/#csi-원시-raw-블록-볼륨-지원) 문서를 참고한다.
|
||||
- `CSIDriverRegistry`: csi.storage.k8s.io에서 CSIDriver API 오브젝트와 관련된 모든 로직을 활성화한다.
|
||||
- `CSIBlockVolume`: 외부 CSI 볼륨 드라이버가 블록 스토리지를 지원할 수 있게 한다.
|
||||
자세한 내용은 [`csi` 원시 블록 볼륨 지원](/ko/docs/concepts/storage/volumes/#csi-원시-raw-블록-볼륨-지원)
|
||||
문서를 참고한다.
|
||||
- `CSIDriverRegistry`: csi.storage.k8s.io에서 CSIDriver API 오브젝트와 관련된
|
||||
모든 로직을 활성화한다.
|
||||
- `CSIInlineVolume`: 파드에 대한 CSI 인라인 볼륨 지원을 활성화한다.
|
||||
- `CSIMigration`: shim 및 변환 로직을 통해 볼륨 작업을 인-트리 플러그인에서 사전 설치된 해당 CSI 플러그인으로 라우팅할 수 있다.
|
||||
- `CSIMigrationAWS`: shim 및 변환 로직을 통해 볼륨 작업을 AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 노드에 EBS CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 EBS 플러그인으로 폴백(falling back)을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationAWSComplete`: kubelet 및 볼륨 컨트롤러에서 EBS 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAWS 기능 플래그가 활성화되고 EBS CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationAzureDisk`: shim 및 변환 로직을 통해 볼륨 작업을 Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. 노드에 AzureDisk CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 AzureDisk 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationAzureDiskComplete`: kubelet 및 볼륨 컨트롤러에서 Azure-Disk 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureDisk 기능 플래그가 활성화되고 AzureDisk CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationAzureFile`: shim 및 변환 로직을 통해 볼륨 작업을 Azure-File 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. 노드에 AzureFile CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 AzureFile 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationAzureFileComplete`: kubelet 및 볼륨 컨트롤러에서 Azure 파일 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 Azure 파일 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureFile 기능 플래그가 활성화되고 AzureFile CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationGCE`: shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. 노드에 PD CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 GCE 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationGCEComplete`: kubelet 및 볼륨 컨트롤러에서 GCE-PD 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. CSIMigration과 CSIMigrationGCE 기능 플래그가 필요하다.
|
||||
- `CSIMigrationOpenStack`: shim 및 변환 로직을 통해 볼륨 작업을 Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 라우팅할 수 있다. 노드에 Cinder CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 Cinder 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationOpenStackComplete`: kubelet 및 볼륨 컨트롤러에서 Cinder 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직이 Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationOpenStack 기능 플래그가 활성화되고 Cinder CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationvSphere`: vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 라우팅하는 shim 및 변환 로직을 사용한다. 노드에 vSphere CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 vSphere 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationvSphereComplete`: kubelet 및 볼륨 컨트롤러에서 vSphere 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 활성화하여 vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. CSIMigration 및 CSIMigrationvSphere 기능 플래그가 활성화되고 vSphere CSI 플러그인이 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigration`: shim 및 변환 로직을 통해 볼륨 작업을 인-트리 플러그인에서
|
||||
사전 설치된 해당 CSI 플러그인으로 라우팅할 수 있다.
|
||||
- `CSIMigrationAWS`: shim 및 변환 로직을 통해 볼륨 작업을
|
||||
AWS-EBS 인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다. 노드에
|
||||
EBS CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리 EBS 플러그인으로
|
||||
폴백(falling back)을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationAWSComplete`: kubelet 및 볼륨 컨트롤러에서 EBS 인-트리
|
||||
플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을 AWS-EBS
|
||||
인-트리 플러그인에서 EBS CSI 플러그인으로 라우팅할 수 있다.
|
||||
클러스터의 모든 노드에 CSIMigration과 CSIMigrationAWS 기능 플래그가 활성화되고
|
||||
EBS CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationAzureDisk`: shim 및 변환 로직을 통해 볼륨 작업을
|
||||
Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로 라우팅할 수 있다.
|
||||
노드에 AzureDisk CSI 플러그인이 설치와 구성이 되어 있지 않은 경우 인-트리
|
||||
AzureDisk 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가
|
||||
필요하다.
|
||||
- `CSIMigrationAzureDiskComplete`: kubelet 및 볼륨 컨트롤러에서 Azure-Disk 인-트리
|
||||
플러그인 등록을 중지하고 shim 및 변환 로직을 사용하여 볼륨 작업을
|
||||
Azure-Disk 인-트리 플러그인에서 AzureDisk CSI 플러그인으로
|
||||
라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureDisk 기능
|
||||
플래그가 활성화되고 AzureDisk CSI 플러그인이 설치 및 구성이 되어
|
||||
있어야 한다.
|
||||
- `CSIMigrationAzureFile`: shim 및 변환 로직을 통해 볼륨 작업을
|
||||
Azure-File 인-트리 플러그인에서 AzureFile CSI 플러그인으로 라우팅할 수 있다.
|
||||
노드에 AzureFile CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리
|
||||
AzureFile 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가
|
||||
필요하다.
|
||||
- `CSIMigrationAzureFileComplete`: kubelet 및 볼륨 컨트롤러에서 Azure 파일 인-트리
|
||||
플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을
|
||||
Azure 파일 인-트리 플러그인에서 AzureFile CSI 플러그인으로
|
||||
라우팅할 수 있다. 클러스터의 모든 노드에 CSIMigration과 CSIMigrationAzureFile 기능
|
||||
플래그가 활성화되고 AzureFile CSI 플러그인이 설치 및 구성이 되어
|
||||
있어야 한다.
|
||||
- `CSIMigrationGCE`: shim 및 변환 로직을 통해 볼륨 작업을
|
||||
GCE-PD 인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다. 노드에
|
||||
PD CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리 GCE 플러그인으로 폴백을
|
||||
지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationGCEComplete`: kubelet 및 볼륨 컨트롤러에서 GCE-PD
|
||||
인-트리 플러그인 등록을 중지하고 shim 및 변환 로직을 통해 볼륨 작업을 GCE-PD
|
||||
인-트리 플러그인에서 PD CSI 플러그인으로 라우팅할 수 있다.
|
||||
CSIMigration과 CSIMigrationGCE 기능 플래그가 활성화되고 PD CSI
|
||||
플러그인이 클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationOpenStack`: shim 및 변환 로직을 통해 볼륨 작업을
|
||||
Cinder 인-트리 플러그인에서 Cinder CSI 플러그인으로 라우팅할 수 있다. 노드에
|
||||
Cinder CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우 인-트리
|
||||
Cinder 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationOpenStackComplete`: kubelet 및 볼륨 컨트롤러에서
|
||||
Cinder 인-트리 플러그인 등록을 중지하고 shim 및 변환 로직이 Cinder 인-트리
|
||||
플러그인에서 Cinder CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다.
|
||||
클러스터의 모든 노드에 CSIMigration과 CSIMigrationOpenStack 기능 플래그가 활성화되고
|
||||
Cinder CSI 플러그인이 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSIMigrationvSphere`: vSphere 인-트리 플러그인에서 vSphere CSI 플러그인으로 볼륨 작업을
|
||||
라우팅하는 shim 및 변환 로직을 사용한다.
|
||||
노드에 vSphere CSI 플러그인이 설치 및 구성이 되어 있지 않은 경우
|
||||
인-트리 vSphere 플러그인으로 폴백을 지원한다. CSIMigration 기능 플래그가 필요하다.
|
||||
- `CSIMigrationvSphereComplete`: kubelet 및 볼륨 컨트롤러에서 vSphere 인-트리
|
||||
플러그인 등록을 중지하고 shim 및 변환 로직을 활성화하여 vSphere 인-트리 플러그인에서
|
||||
vSphere CSI 플러그인으로 볼륨 작업을 라우팅할 수 있도록 한다. CSIMigration 및
|
||||
CSIMigrationvSphere 기능 플래그가 활성화되고 vSphere CSI 플러그인이
|
||||
클러스터의 모든 노드에 설치 및 구성이 되어 있어야 한다.
|
||||
- `CSINodeInfo`: csi.storage.k8s.io에서 CSINodeInfo API 오브젝트와 관련된 모든 로직을 활성화한다.
|
||||
- `CSIPersistentVolume`: [CSI (Container Storage Interface)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
|
||||
호환 볼륨 플러그인을 통해 프로비저닝된 볼륨을 감지하고
|
||||
마운트할 수 있다.
|
||||
- `CSIServiceAccountToken` : 볼륨을 마운트하는 파드의 서비스 계정 토큰을 받을 수 있도록 CSI 드라이버를 활성화한다. [토큰 요청](https://kubernetes-csi.github.io/docs/token-requests.html)을 참조한다.
|
||||
- `CSIStorageCapacity`: CSI 드라이버가 스토리지 용량 정보를 게시하고 쿠버네티스 스케줄러가 파드를 스케줄할 때 해당 정보를 사용하도록 한다. [스토리지 용량](/docs/concepts/storage/storage-capacity/)을 참고한다.
|
||||
- `CSIServiceAccountToken` : 볼륨을 마운트하는 파드의 서비스 계정 토큰을 받을 수 있도록
|
||||
CSI 드라이버를 활성화한다.
|
||||
[토큰 요청](https://kubernetes-csi.github.io/docs/token-requests.html)을 참조한다.
|
||||
- `CSIStorageCapacity`: CSI 드라이버가 스토리지 용량 정보를 게시하고
|
||||
쿠버네티스 스케줄러가 파드를 스케줄할 때 해당 정보를 사용하도록 한다.
|
||||
[스토리지 용량](/docs/concepts/storage/storage-capacity/)을 참고한다.
|
||||
자세한 내용은 [`csi` 볼륨 유형](/ko/docs/concepts/storage/volumes/#csi) 문서를 확인한다.
|
||||
- `CSIVolumeFSGroupPolicy`: CSI드라이버가 `fsGroupPolicy` 필드를 사용하도록 허용한다. 이 필드는 CSI드라이버에서 생성된 볼륨이 마운트될 때 볼륨 소유권과 권한 수정을 지원하는지 여부를 제어한다.
|
||||
- `CustomCPUCFSQuotaPeriod`: 노드가 CPUCFSQuotaPeriod를 변경하도록 한다.
|
||||
- `CSIVolumeFSGroupPolicy`: CSI드라이버가 `fsGroupPolicy` 필드를 사용하도록 허용한다.
|
||||
이 필드는 CSI드라이버에서 생성된 볼륨이 마운트될 때 볼륨 소유권과
|
||||
권한 수정을 지원하는지 여부를 제어한다.
|
||||
- `ConfigurableFSGroupPolicy`: 사용자가 파드에 볼륨을 마운트할 때 fsGroups에 대한
|
||||
볼륨 권한 변경 정책을 구성할 수 있다. 자세한 내용은
|
||||
[파드의 볼륨 권한 및 소유권 변경 정책 구성](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods)을
|
||||
참고한다.
|
||||
- `CronJobControllerV2`: {{< glossary_tooltip text="크론잡(CronJob)" term_id="cronjob" >}}
|
||||
컨트롤러의 대체 구현을 사용한다. 그렇지 않으면,
|
||||
동일한 컨트롤러의 버전 1이 선택된다.
|
||||
버전 2 컨트롤러는 실험적인 성능 향상을 제공한다.
|
||||
- `CustomCPUCFSQuotaPeriod`: [kubelet config](/docs/tasks/administer-cluster/kubelet-config-file/)에서
|
||||
`cpuCFSQuotaPeriod` 를 노드가 변경할 수 있도록 한다.
|
||||
- `CustomPodDNS`: `dnsConfig` 속성을 사용하여 파드의 DNS 설정을 사용자 정의할 수 있다.
|
||||
자세한 내용은 [파드의 DNS 설정](/ko/docs/concepts/services-networking/dns-pod-service/#pod-dns-config)을
|
||||
확인한다.
|
||||
|
@ -466,147 +546,248 @@ kubelet과 같은 컴포넌트의 기능 게이트를 설정하려면, 기능
|
|||
- `CustomResourceWebhookConversion`: [커스텀리소스데피니션](/ko/docs/concepts/extend-kubernetes/api-extension/custom-resources/)에서
|
||||
생성된 리소스에 대해 웹 훅 기반의 변환을 활성화한다.
|
||||
실행 중인 파드 문제를 해결한다.
|
||||
- `DisableAcceleratorUsageMetrics`: [kubelet이 수집한 액셀러레이터 지표 비활성화](/ko/docs/concepts/cluster-administration/system-metrics/#액셀러레이터-메트릭-비활성화).
|
||||
- `DevicePlugins`: 노드에서 [장치 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
|
||||
기반 리소스 프로비저닝을 활성화한다.
|
||||
- `DefaultPodTopologySpread`: `PodTopologySpread` 스케줄링 플러그인을 사용하여
|
||||
[기본 분배](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/#내부-기본-제약)를 수행한다.
|
||||
- `DownwardAPIHugePages`: 다운워드 API에서 hugepages 사용을 활성화한다.
|
||||
- `DevicePlugins`: 노드에서 [장치 플러그인](/ko/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
|
||||
기반 리소스 프로비저닝을 활성화한다.
|
||||
- `DisableAcceleratorUsageMetrics`:
|
||||
[kubelet이 수집한 액셀러레이터 지표 비활성화](/ko/docs/concepts/cluster-administration/system-metrics/#액셀러레이터-메트릭-비활성화).
|
||||
- `DownwardAPIHugePages`: [다운워드 API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information)에서
|
||||
hugepages 사용을 활성화한다.
|
||||
- `DryRun`: 서버 측의 [dry run](/docs/reference/using-api/api-concepts/#dry-run) 요청을
|
||||
활성화하여 커밋하지 않고 유효성 검사, 병합 및 변화를 테스트할 수 있다.
|
||||
- `DynamicAuditing`(*사용 중단됨*): v1.19 이전의 버전에서 동적 감사를 활성화하는 데 사용된다.
|
||||
- `DynamicKubeletConfig`: kubelet의 동적 구성을 활성화한다. [kubelet 재구성](/docs/tasks/administer-cluster/reconfigure-kubelet/)을 참고한다.
|
||||
- `DynamicProvisioningScheduling`: 볼륨 스케줄을 인식하고 PV 프로비저닝을 처리하도록 기본 스케줄러를 확장한다.
|
||||
- `DynamicKubeletConfig`: kubelet의 동적 구성을 활성화한다.
|
||||
[kubelet 재구성](/docs/tasks/administer-cluster/reconfigure-kubelet/)을 참고한다.
|
||||
- `DynamicProvisioningScheduling`: 볼륨 토폴로지를 인식하고 PV 프로비저닝을 처리하도록
|
||||
기본 스케줄러를 확장한다.
|
||||
이 기능은 v1.12의 `VolumeScheduling` 기능으로 대체되었다.
|
||||
- `DynamicVolumeProvisioning`(*사용 중단됨*): 파드에 퍼시스턴트 볼륨의 [동적 프로비저닝](/ko/docs/concepts/storage/dynamic-provisioning/)을 활성화한다.
|
||||
- `EnableAggregatedDiscoveryTimeout` (*사용 중단됨*): 수집된 검색 호출에서 5초 시간 초과를 활성화한다.
|
||||
- `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의 동등성을 캐시할 수 있게 한다.
|
||||
- `EphemeralContainers`: 파드를 실행하기 위한 {{< glossary_tooltip text="임시 컨테이너"
|
||||
term_id="ephemeral-container" >}}를 추가할 수 있다.
|
||||
- `EvenPodsSpread`: 토폴로지 도메인 간에 파드를 균등하게 스케줄링할 수 있다. [파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)을 참고한다.
|
||||
- `ExecProbeTimeout`: kubelet이 exec 프로브 시간 초과를 준수하는지 확인한다. 이 기능 게이트는 기존 워크로드가 쿠버네티스가 exec 프로브 제한 시간을 무시한 현재 수정된 결함에 의존하는 경우 존재한다. [준비성 프로브](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)를 참조한다.
|
||||
- `ExpandInUsePersistentVolumes`: 사용 중인 PVC를 확장할 수 있다. [사용 중인 퍼시스턴트볼륨클레임 크기 조정](/ko/docs/concepts/storage/persistent-volumes/#사용-중인-퍼시스턴트볼륨클레임-크기-조정)을 참고한다.
|
||||
- `ExpandPersistentVolumes`: 퍼시스턴트 볼륨 확장을 활성화한다. [퍼시스턴트 볼륨 클레임 확장](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트-볼륨-클레임-확장)을 참고한다.
|
||||
- `ExperimentalCriticalPodAnnotation`: 특정 파드에 *critical* 로 어노테이션을 달아서 [스케줄링이 보장되도록](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) 한다.
|
||||
- `DynamicVolumeProvisioning`(*사용 중단됨*): 파드에 퍼시스턴트 볼륨의
|
||||
[동적 프로비저닝](/ko/docs/concepts/storage/dynamic-provisioning/)을 활성화한다.
|
||||
- `EfficientWatchResumption`: 스토리지에서 생성된 북마크(진행
|
||||
알림) 이벤트를 사용자에게 전달할 수 있다. 이것은 감시 작업에만
|
||||
적용된다.
|
||||
- `EnableAggregatedDiscoveryTimeout` (*사용 중단됨*): 수집된 검색 호출에서 5초
|
||||
시간 초과를 활성화한다.
|
||||
- `EnableEquivalenceClassCache`: 스케줄러가 파드를 스케줄링할 때 노드의
|
||||
동등성을 캐시할 수 있게 한다.
|
||||
- `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한
|
||||
엔드포인트슬라이스(EndpointSlices)를 활성화한다. [엔드포인트슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
|
||||
- `EndpointSliceNodeName` : 엔드포인트슬라이스 `nodeName` 필드를 활성화한다.
|
||||
- `EndpointSliceProxying`: 활성화되면, 리눅스에서 실행되는
|
||||
kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를
|
||||
기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다.
|
||||
[엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
|
||||
- `EndpointSliceTerminatingCondition`: 엔드포인트슬라이스 `terminating` 및 `serving`
|
||||
조건 필드를 활성화한다.
|
||||
- `EphemeralContainers`: 파드를 실행하기 위한
|
||||
{{< glossary_tooltip text="임시 컨테이너" term_id="ephemeral-container" >}}를
|
||||
추가할 수 있다.
|
||||
- `EvenPodsSpread`: 토폴로지 도메인 간에 파드를 균등하게 스케줄링할 수 있다.
|
||||
[파드 토폴로지 분배 제약 조건](/ko/docs/concepts/workloads/pods/pod-topology-spread-constraints/)을 참고한다.
|
||||
- `ExecProbeTimeout` : kubelet이 exec 프로브 시간 초과를 준수하는지 확인한다.
|
||||
이 기능 게이트는 기존 워크로드가 쿠버네티스가 exec 프로브 제한 시간을 무시한
|
||||
현재 수정된 결함에 의존하는 경우 존재한다.
|
||||
[준비성 프로브](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)를 참조한다.
|
||||
- `ExpandCSIVolumes`: CSI 볼륨 확장을 활성화한다.
|
||||
- `ExpandInUsePersistentVolumes`: 사용 중인 PVC를 확장할 수 있다.
|
||||
[사용 중인 퍼시스턴트볼륨클레임 크기 조정](/ko/docs/concepts/storage/persistent-volumes/#사용-중인-퍼시스턴트볼륨클레임-크기-조정)을 참고한다.
|
||||
- `ExpandPersistentVolumes`: 퍼시스턴트 볼륨 확장을 활성화한다.
|
||||
[퍼시스턴트 볼륨 클레임 확장](/ko/docs/concepts/storage/persistent-volumes/#퍼시스턴트-볼륨-클레임-확장)을 참고한다.
|
||||
- `ExperimentalCriticalPodAnnotation`: 특정 파드에 *critical* 로
|
||||
어노테이션을 달아서 [스케줄링이 보장되도록](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) 한다.
|
||||
이 기능은 v1.13부터 파드 우선 순위 및 선점으로 인해 사용 중단되었다.
|
||||
- `ExperimentalHostUserNamespaceDefaultingGate`: 사용자 네임스페이스를 호스트로
|
||||
기본 활성화한다. 이것은 다른 호스트 네임스페이스, 호스트 마운트,
|
||||
권한이 있는 컨테이너 또는 특정 비-네임스페이스(non-namespaced) 기능(예: `MKNODE`, `SYS_MODULE` 등)을
|
||||
사용하는 컨테이너를 위한 것이다. 도커 데몬에서 사용자 네임스페이스
|
||||
재 매핑이 활성화된 경우에만 활성화해야 한다.
|
||||
- `EndpointSlice`: 보다 스케일링 가능하고 확장 가능한 네트워크 엔드포인트에 대한
|
||||
엔드포인트 슬라이스를 활성화한다. [엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
|
||||
- `EndpointSliceNodeName`: 엔드포인트슬라이스 `nodeName` 필드를 활성화한다.
|
||||
- `EndpointSliceTerminating`: 엔드포인트슬라이스 `terminating` 및 `serving` 조건 필드를
|
||||
활성화한다.
|
||||
- `EndpointSliceProxying`: 이 기능 게이트가 활성화되면, 리눅스에서 실행되는
|
||||
kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를
|
||||
기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다.
|
||||
[엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
|
||||
- `WindowsEndpointSliceProxying`: 이 기능 게이트가 활성화되면, 윈도우에서 실행되는
|
||||
kube-proxy는 엔드포인트 대신 엔드포인트슬라이스를
|
||||
기본 데이터 소스로 사용하여 확장성과 성능을 향상시킨다.
|
||||
[엔드포인트 슬라이스 활성화](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
|
||||
- `GCERegionalPersistentDisk`: GCE에서 지역 PD 기능을 활성화한다.
|
||||
- `GenericEphemeralVolume`: 일반 볼륨의 모든 기능을 지원하는 임시, 인라인 볼륨을 활성화한다(타사 스토리지 공급 업체, 스토리지 용량 추적, 스냅샷으로부터 복원 등에서 제공할 수 있음). [임시 볼륨](/docs/concepts/storage/ephemeral-volumes/)을 참고한다.
|
||||
- `GracefulNodeShutdown`: kubelet에서 정상 종료를 지원한다. 시스템 종료 중에 kubelet은 종료 이벤트를 감지하고 노드에서 실행 중인 파드를 정상적으로 종료하려고 시도한다. 자세한 내용은 [Graceful Node Shutdown](/ko/docs/concepts/architecture/nodes/#그레이스풀-graceful-노드-셧다운)을 참조한다.
|
||||
- `HugePages`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 할당 및 사용을 활성화한다.
|
||||
- `HugePageStorageMediumSize`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의 여러 크기를 지원한다.
|
||||
- `HyperVContainer`: 윈도우 컨테이너를 위한 [Hyper-V 격리](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/manage-containers/hyperv-container) 기능을 활성화한다.
|
||||
- `HPAScaleToZero`: 사용자 정의 또는 외부 메트릭을 사용할 때 `HorizontalPodAutoscaler` 리소스에 대해 `minReplicas` 를 0으로 설정한다.
|
||||
- `ImmutableEphemeralVolumes`: 안정성과 성능 향상을 위해 개별 시크릿(Secret)과 컨피그맵(ConfigMap)을 변경할 수 없는(immutable) 것으로 표시할 수 있다.
|
||||
- `KubeletConfigFile`: 구성 파일을 사용하여 지정된 파일에서 kubelet 구성을 로드할 수 있다.
|
||||
자세한 내용은 [구성 파일을 통해 kubelet 파라미터 설정](/docs/tasks/administer-cluster/kubelet-config-file/)을 참고한다.
|
||||
- `GenericEphemeralVolume`: 일반 볼륨의 모든 기능을 지원하는 임시, 인라인
|
||||
볼륨을 활성화한다(타사 스토리지 공급 업체, 스토리지 용량 추적, 스냅샷으로부터 복원
|
||||
등에서 제공할 수 있음).
|
||||
[임시 볼륨](/docs/concepts/storage/ephemeral-volumes/)을 참고한다.
|
||||
- `GracefulNodeShutdown` : kubelet에서 정상 종료를 지원한다.
|
||||
시스템 종료 중에 kubelet은 종료 이벤트를 감지하고 노드에서 실행 중인
|
||||
파드를 정상적으로 종료하려고 시도한다. 자세한 내용은
|
||||
[Graceful Node Shutdown](/ko/docs/concepts/architecture/nodes/#그레이스풀-graceful-노드-셧다운)을
|
||||
참조한다.
|
||||
- `HPAContainerMetrics`: `HorizontalPodAutoscaler`를 활성화하여 대상 파드의
|
||||
개별 컨테이너 메트릭을 기반으로 확장한다.
|
||||
- `HPAScaleToZero`: 사용자 정의 또는 외부 메트릭을 사용할 때 `HorizontalPodAutoscaler` 리소스에 대해
|
||||
`minReplicas` 를 0으로 설정한다.
|
||||
- `HugePages`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의
|
||||
할당 및 사용을 활성화한다.
|
||||
- `HugePageStorageMediumSize`: 사전 할당된 [huge page](/ko/docs/tasks/manage-hugepages/scheduling-hugepages/)의
|
||||
여러 크기를 지원한다.
|
||||
- `HyperVContainer`: 윈도우 컨테이너를 위한
|
||||
[Hyper-V 격리](https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/manage-containers/hyperv-container)
|
||||
기능을 활성화한다.
|
||||
- `IPv6DualStack`: IPv6에 대한 [듀얼 스택](/ko/docs/concepts/services-networking/dual-stack/)
|
||||
지원을 활성화한다.
|
||||
- `ImmutableEphemeralVolumes`: 안정성과 성능 향상을 위해 개별 시크릿(Secret)과 컨피그맵(ConfigMap)을
|
||||
변경할 수 없는(immutable) 것으로 표시할 수 있다.
|
||||
- `KubeletConfigFile`: 구성 파일을 사용하여 지정된 파일에서
|
||||
kubelet 구성을 로드할 수 있다.
|
||||
자세한 내용은 [구성 파일을 통해 kubelet 파라미터 설정](/docs/tasks/administer-cluster/kubelet-config-file/)을
|
||||
참고한다.
|
||||
- `KubeletCredentialProviders`: 이미지 풀 자격 증명에 대해 kubelet exec 자격 증명 공급자를 활성화한다.
|
||||
- `KubeletPluginsWatcher`: kubelet이 [CSI 볼륨 드라이버](/ko/docs/concepts/storage/volumes/#csi)와 같은
|
||||
플러그인을 검색할 수 있도록 프로브 기반 플러그인 감시자(watcher) 유틸리티를 사용한다.
|
||||
- `KubeletPodResources`: kubelet의 파드 리소스 grpc 엔드포인트를 활성화한다.
|
||||
자세한 내용은 [장치 모니터링 지원](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)을 참고한다.
|
||||
- `LegacyNodeRoleBehavior`: 비활성화되면, 서비스 로드 밸런서 및 노드 중단의 레거시 동작은 `NodeDisruptionExclusion` 과 `ServiceNodeExclusion` 에 의해 제공된 기능별 레이블을 대신하여 `node-role.kubernetes.io/master` 레이블을 무시한다.
|
||||
- `LocalStorageCapacityIsolation`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)와 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 `sizeLimit` 속성을 사용할 수 있게 한다.
|
||||
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)에 `LocalStorageCapacityIsolation` 이 활성화되고 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의 백업 파일시스템이 프로젝트 쿼터를 지원하고 활성화된 경우, 파일시스템 사용보다는 프로젝트 쿼터를 사용하여 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir) 스토리지 사용을 모니터링하여 성능과 정확성을 향상시킨다.
|
||||
- `MixedProtocolLBService`: 동일한 로드밸런서 유형 서비스 인스턴스에서 다른 프로토콜 사용을 활성화한다.
|
||||
- `MountContainers`: 호스트의 유틸리티 컨테이너를 볼륨 마운터로 사용할 수 있다.
|
||||
- `KubeletPodResources`: kubelet의 파드 리소스 gRPC 엔드포인트를 활성화한다. 자세한 내용은
|
||||
[장치 모니터링 지원](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md)을
|
||||
참고한다.
|
||||
- `LegacyNodeRoleBehavior`: 비활성화되면, 서비스 로드 밸런서 및 노드 중단의 레거시 동작은
|
||||
`NodeDisruptionExclusion` 과 `ServiceNodeExclusion` 에 의해 제공된 기능별 레이블을 대신하여
|
||||
`node-role.kubernetes.io/master` 레이블을 무시한다.
|
||||
- `LocalStorageCapacityIsolation`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)와
|
||||
[emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의
|
||||
`sizeLimit` 속성을 사용할 수 있게 한다.
|
||||
- `LocalStorageCapacityIsolationFSQuotaMonitoring`: [로컬 임시 스토리지](/ko/docs/concepts/configuration/manage-resources-containers/)에
|
||||
`LocalStorageCapacityIsolation` 이 활성화되고
|
||||
[emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)의
|
||||
백업 파일시스템이 프로젝트 쿼터를 지원하고 활성화된 경우, 파일시스템 사용보다는
|
||||
프로젝트 쿼터를 사용하여 [emptyDir 볼륨](/ko/docs/concepts/storage/volumes/#emptydir)
|
||||
스토리지 사용을 모니터링하여 성능과 정확성을
|
||||
향상시킨다.
|
||||
- `MixedProtocolLBService`: 동일한 로드밸런서 유형 서비스 인스턴스에서 다른 프로토콜
|
||||
사용을 활성화한다.
|
||||
- `MountContainers` (*사용 중단됨*): 호스트의 유틸리티 컨테이너를 볼륨 마운터로
|
||||
사용할 수 있다.
|
||||
- `MountPropagation`: 한 컨테이너에서 다른 컨테이너 또는 파드로 마운트된 볼륨을 공유할 수 있다.
|
||||
자세한 내용은 [마운트 전파(propagation)](/ko/docs/concepts/storage/volumes/#마운트-전파-propagation)을 참고한다.
|
||||
- `NodeDisruptionExclusion`: 영역(zone) 장애 시 노드가 제외되지 않도록 노드 레이블 `node.kubernetes.io/exclude-disruption` 사용을 활성화한다.
|
||||
- `NodeDisruptionExclusion`: 영역(zone) 장애 시 노드가 제외되지 않도록 노드 레이블 `node.kubernetes.io/exclude-disruption`
|
||||
사용을 활성화한다.
|
||||
- `NodeLease`: 새로운 리스(Lease) API가 노드 상태 신호로 사용될 수 있는 노드 하트비트(heartbeats)를 보고할 수 있게 한다.
|
||||
- `NonPreemptingPriority`: 프라이어리티클래스(PriorityClass)와 파드에 NonPreempting 옵션을 활성화한다.
|
||||
- `NonPreemptingPriority`: 프라이어리티클래스(PriorityClass)와 파드에 `preemptionPolicy` 필드를 활성화한다.
|
||||
- `PVCProtection`: 파드에서 사용 중일 때 퍼시스턴트볼륨클레임(PVC)이
|
||||
삭제되지 않도록 한다.
|
||||
- `PersistentLocalVolumes`: 파드에서 `local` 볼륨 유형의 사용을 활성화한다.
|
||||
`local` 볼륨을 요청하는 경우 파드 어피니티를 지정해야 한다.
|
||||
- `PodDisruptionBudget`: [PodDisruptionBudget](/docs/tasks/run-application/configure-pdb/) 기능을 활성화한다.
|
||||
- `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/) 기능을 활성화한다.
|
||||
- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를 기반으로 파드의 스케줄링 취소와 선점을 활성화한다.
|
||||
- `PodOverhead`: 파드 오버헤드를 판단하기 위해 [파드오버헤드(PodOverhead)](/ko/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||
기능을 활성화한다.
|
||||
- `PodPriority`: [우선 순위](/ko/docs/concepts/configuration/pod-priority-preemption/)를
|
||||
기반으로 파드의 스케줄링 취소와 선점을 활성화한다.
|
||||
- `PodReadinessGates`: 파드 준비성 평가를 확장하기 위해
|
||||
`PodReadinessGate` 필드 설정을 활성화한다. 자세한 내용은 [파드의 준비성 게이트](/ko/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate)를
|
||||
참고한다.
|
||||
- `PodShareProcessNamespace`: 파드에서 실행되는 컨테이너 간에 단일 프로세스 네임스페이스를
|
||||
공유하기 위해 파드에서 `shareProcessNamespace` 설정을 활성화한다. 자세한 내용은
|
||||
[파드의 컨테이너 간 프로세스 네임스페이스 공유](/docs/tasks/configure-pod-container/share-process-namespace/)에서 확인할 수 있다.
|
||||
- `ProcMountType`: 컨테이너의 ProcMountType 제어를 활성화한다.
|
||||
- `PVCProtection`: 파드에서 사용 중일 때 퍼시스턴트볼륨클레임(PVC)이
|
||||
삭제되지 않도록 한다.
|
||||
- `QOSReserved`: QoS 수준에서 리소스 예약을 허용하여 낮은 QoS 수준의 파드가 더 높은 QoS 수준에서
|
||||
요청된 리소스로 파열되는 것을 방지한다(현재 메모리만 해당).
|
||||
- `ProcMountType`: SecurityContext의 `procMount` 필드를 설정하여
|
||||
컨테이너의 proc 타입의 마운트를 제어할 수 있다.
|
||||
- `QOSReserved`: QoS 수준에서 리소스 예약을 허용하여 낮은 QoS 수준의 파드가
|
||||
더 높은 QoS 수준에서 요청된 리소스로 파열되는 것을 방지한다
|
||||
(현재 메모리만 해당).
|
||||
- `RemainingItemCount`: API 서버가
|
||||
[청크(chunking) 목록 요청](/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks)에 대한
|
||||
응답에서 남은 항목 수를 표시하도록 허용한다.
|
||||
- `RemoveSelfLink`: ObjectMeta 및 ListMeta에서 `selfLink` 를 사용하지 않고
|
||||
제거한다.
|
||||
- `ResourceLimitsPriorityFunction` (*사용 중단됨*): 입력 파드의 CPU 및 메모리 한도 중
|
||||
하나 이상을 만족하는 노드에 가능한 최저 점수 1을 할당하는
|
||||
스케줄러 우선 순위 기능을 활성화한다. 의도는 동일한 점수를 가진
|
||||
노드 사이의 관계를 끊는 것이다.
|
||||
- `ResourceQuotaScopeSelectors`: 리소스 쿼터 범위 셀렉터를 활성화한다.
|
||||
- `RootCAConfigMap`: 모든 네임 스페이스에 `kube-root-ca.crt`라는 {{< glossary_tooltip text="컨피그맵" term_id="configmap" >}}을 게시하도록 kube-controller-manager를 구성한다. 이 컨피그맵에는 kube-apiserver에 대한 연결을 확인하는 데 사용되는 CA 번들이 포함되어 있다.
|
||||
자세한 내용은 [바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을 참조한다.
|
||||
- `RootCAConfigMap`: 모든 네임스페이스에 `kube-root-ca.crt`라는
|
||||
{{< glossary_tooltip text="컨피그맵" term_id="configmap" >}}을 게시하도록
|
||||
`kube-controller-manager` 를 구성한다. 이 컨피그맵에는 kube-apiserver에 대한 연결을 확인하는 데
|
||||
사용되는 CA 번들이 포함되어 있다. 자세한 내용은
|
||||
[바운드 서비스 계정 토큰](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md)을
|
||||
참조한다.
|
||||
- `RotateKubeletClientCertificate`: kubelet에서 클라이언트 TLS 인증서의 로테이션을 활성화한다.
|
||||
자세한 내용은 [kubelet 구성](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)을 참고한다.
|
||||
- `RotateKubeletServerCertificate`: kubelet에서 서버 TLS 인증서의 로테이션을 활성화한다.
|
||||
자세한 내용은 [kubelet 구성](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration)을 참고한다.
|
||||
- `RunAsGroup`: 컨테이너의 init 프로세스에 설정된 기본 그룹 ID 제어를 활성화한다.
|
||||
- `RuntimeClass`: 컨테이너 런타임 구성을 선택하기 위해 [런타임클래스(RuntimeClass)](/ko/docs/concepts/containers/runtime-class/) 기능을 활성화한다.
|
||||
- `ScheduleDaemonSetPods`: 데몬셋(DaemonSet) 컨트롤러 대신 기본 스케줄러로 데몬셋 파드를 스케줄링할 수 있다.
|
||||
- `SCTPSupport`: 파드, 서비스, 엔드포인트, 엔드포인트슬라이스 및 네트워크폴리시 정의에서 _SCTP_ `protocol` 값을 활성화한다.
|
||||
- `ServerSideApply`: API 서버에서 [SSA(Server Side Apply)](/docs/reference/using-api/server-side-apply/) 경로를 활성화한다.
|
||||
- `ServiceAccountIssuerDiscovery`: API 서버에서 서비스 어카운트 발행자에 대해 OIDC 디스커버리 엔드포인트(발급자 및 JWKS URL)를 활성화한다. 자세한 내용은 [파드의 서비스 어카운트 구성](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)을 참고한다.
|
||||
- `RunAsGroup`: 컨테이너의 init 프로세스에 설정된 기본 그룹 ID 제어를
|
||||
활성화한다.
|
||||
- `RuntimeClass`: 컨테이너 런타임 구성을 선택하기 위해 [런타임클래스(RuntimeClass)](/ko/docs/concepts/containers/runtime-class/)
|
||||
기능을 활성화한다.
|
||||
- `ScheduleDaemonSetPods`: 데몬셋(DaemonSet) 컨트롤러 대신 기본 스케줄러로 데몬셋 파드를
|
||||
스케줄링할 수 있다.
|
||||
- `SCTPSupport`: 파드, 서비스, 엔드포인트, 엔드포인트슬라이스 및 네트워크폴리시 정의에서
|
||||
_SCTP_ `protocol` 값을 활성화한다.
|
||||
- `ServerSideApply`: API 서버에서 [SSA(Server Side Apply)](/docs/reference/using-api/server-side-apply/)
|
||||
경로를 활성화한다.
|
||||
- `ServiceAccountIssuerDiscovery`: API 서버에서 서비스 어카운트 발행자에 대해 OIDC 디스커버리 엔드포인트(발급자 및
|
||||
JWKS URL)를 활성화한다. 자세한 내용은
|
||||
[파드의 서비스 어카운트 구성](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery)을
|
||||
참고한다.
|
||||
- `ServiceAppProtocol`: 서비스와 엔드포인트에서 `AppProtocol` 필드를 활성화한다.
|
||||
- `ServiceLBNodePortControl`: 서비스에서`spec.allocateLoadBalancerNodePorts` 필드를 활성화한다.
|
||||
- `ServiceLBNodePortControl`: 서비스에서`spec.allocateLoadBalancerNodePorts` 필드를
|
||||
활성화한다.
|
||||
- `ServiceLoadBalancerFinalizer`: 서비스 로드 밸런서에 대한 Finalizer 보호를 활성화한다.
|
||||
- `ServiceNodeExclusion`: 클라우드 제공자가 생성한 로드 밸런서에서 노드를 제외할 수 있다.
|
||||
"`alpha.service-controller.kubernetes.io/exclude-balancer`" 키 또는 `node.kubernetes.io/exclude-from-external-load-balancers` 로 레이블이 지정된 경우 노드를 제외할 수 있다.
|
||||
- `ServiceTopology`: 서비스가 클러스터의 노드 토폴로지를 기반으로 트래픽을 라우팅할 수 있도록 한다. 자세한 내용은 [서비스토폴로지(ServiceTopology)](/ko/docs/concepts/services-networking/service-topology/)를 참고한다.
|
||||
- `SizeMemoryBackedVolumes`: kubelet 지원을 사용하여 메모리 백업 볼륨의 크기를 조정한다. 자세한 내용은 [volumes](/ko/docs/concepts/storage/volumes)를 참조한다.
|
||||
- `SetHostnameAsFQDN`: 전체 주소 도메인 이름(FQDN)을 파드의 호스트 이름으로 설정하는 기능을 활성화한다. [파드의 `setHostnameAsFQDN` 필드](/ko/docs/concepts/services-networking/dns-pod-service/#파드의-sethostnameasfqdn-필드)를 참고한다.
|
||||
- `StartupProbe`: kubelet에서 [스타트업](/ko/docs/concepts/workloads/pods/pod-lifecycle/#언제-스타트업-프로브를-사용해야-하는가) 프로브를 활성화한다.
|
||||
- `ServiceNodeExclusion`: 클라우드 제공자가 생성한 로드 밸런서에서 노드를
|
||||
제외할 수 있다. "`node.kubernetes.io/exclude-from-external-load-balancers`"로
|
||||
레이블이 지정된 경우 노드를 제외할 수 있다.
|
||||
- `ServiceTopology`: 서비스가 클러스터의 노드 토폴로지를 기반으로 트래픽을 라우팅할 수
|
||||
있도록 한다. 자세한 내용은
|
||||
[서비스토폴로지(ServiceTopology)](/ko/docs/concepts/services-networking/service-topology/)를
|
||||
참고한다.
|
||||
- `SizeMemoryBackedVolumes`: kubelet 지원을 사용하여 메모리 백업 볼륨의 크기를 조정한다.
|
||||
자세한 내용은 [volumes](/ko/docs/concepts/storage/volumes)를 참조한다.
|
||||
- `SetHostnameAsFQDN`: 전체 주소 도메인 이름(FQDN)을 파드의 호스트 이름으로
|
||||
설정하는 기능을 활성화한다.
|
||||
[파드의 `setHostnameAsFQDN` 필드](/ko/docs/concepts/services-networking/dns-pod-service/#파드의-sethostnameasfqdn-필드)를 참고한다.
|
||||
- `StartupProbe`: kubelet에서
|
||||
[스타트업](/ko/docs/concepts/workloads/pods/pod-lifecycle/#언제-스타트업-프로브를-사용해야-하는가)
|
||||
프로브를 활성화한다.
|
||||
- `StorageObjectInUseProtection`: 퍼시스턴트볼륨 또는 퍼시스턴트볼륨클레임 오브젝트가 여전히
|
||||
사용 중인 경우 삭제를 연기한다.
|
||||
- `StorageVersionHash`: API 서버가 디스커버리에서 스토리지 버전 해시를 노출하도록 허용한다.
|
||||
- `StorageVersionAPI`: [스토리지 버전 API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#storageversion-v1alpha1-internal-apiserver-k8s-io)를
|
||||
활성화한다.
|
||||
- `StorageVersionHash`: API 서버가 디스커버리에서 스토리지 버전 해시를 노출하도록
|
||||
허용한다.
|
||||
- `StreamingProxyRedirects`: 스트리밍 요청을 위해 백엔드(kubelet)에서 리디렉션을
|
||||
가로채서 따르도록 API 서버에 지시한다.
|
||||
스트리밍 요청의 예로는 `exec`, `attach` 및 `port-forward` 요청이 있다.
|
||||
- `SupportIPVSProxyMode`: IPVS를 사용하여 클러스터 내 서비스 로드 밸런싱을 제공한다.
|
||||
자세한 내용은 [서비스 프록시](/ko/docs/concepts/services-networking/service/#가상-ip와-서비스-프록시)를 참고한다.
|
||||
- `SupportPodPidsLimit`: 파드의 PID 제한을 지원한다.
|
||||
- `SupportNodePidsLimit`: 노드에서 PID 제한 지원을 활성화한다. `--system-reserved` 및 `--kube-reserved` 옵션의 `pid=<number>` 매개 변수를 지정하여 지정된 수의 프로세스 ID가 시스템 전체와 각각 쿠버네티스 시스템 데몬에 대해 예약되도록 할 수 있다.
|
||||
- `Sysctls`: 각 파드에 설정할 수 있는 네임스페이스 커널 파라미터(sysctl)를 지원한다.
|
||||
자세한 내용은 [sysctl](/docs/tasks/administer-cluster/sysctl-cluster/)을 참고한다.
|
||||
- `TaintBasedEvictions`: 노드의 테인트(taint) 및 파드의 톨러레이션(toleration)을 기반으로 노드에서 파드를 축출할 수 있다.
|
||||
자세한 내용은 [테인트와 톨러레이션](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/)을 참고한다.
|
||||
- `TaintNodesByCondition`: [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)을 기반으로 자동 테인트 노드를 활성화한다.
|
||||
- `SupportNodePidsLimit`: 노드에서 PID 제한 지원을 활성화한다.
|
||||
`--system-reserved` 및 `--kube-reserved` 옵션의 `pid=<number>`
|
||||
파라미터를 지정하여 지정된 수의 프로세스 ID가
|
||||
시스템 전체와 각각 쿠버네티스 시스템 데몬에 대해 예약되도록
|
||||
할 수 있다.
|
||||
- `Sysctls`: 각 파드에 설정할 수 있는 네임스페이스 커널
|
||||
파라미터(sysctl)를 지원한다. 자세한 내용은
|
||||
[sysctl](/docs/tasks/administer-cluster/sysctl-cluster/)을 참고한다.
|
||||
- `TTLAfterFinished`: [TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가
|
||||
실행이 끝난 후 리소스를 정리하도록
|
||||
허용한다.
|
||||
- `TaintBasedEvictions`: 노드의 테인트(taint) 및 파드의 톨러레이션(toleration)을 기반으로
|
||||
노드에서 파드를 축출할 수 있다.
|
||||
자세한 내용은 [테인트와 톨러레이션](/ko/docs/concepts/scheduling-eviction/taint-and-toleration/)을
|
||||
참고한다.
|
||||
- `TaintNodesByCondition`: [노드 컨디션](/ko/docs/concepts/architecture/nodes/#condition)을
|
||||
기반으로 자동 테인트 노드를 활성화한다.
|
||||
- `TokenRequest`: 서비스 어카운트 리소스에서 `TokenRequest` 엔드포인트를 활성화한다.
|
||||
- `TokenRequestProjection`: [`projected` 볼륨](/ko/docs/concepts/storage/volumes/#projected)을 통해 서비스 어카운트
|
||||
토큰을 파드에 주입할 수 있다.
|
||||
- `TopologyManager`: 쿠버네티스의 다른 컴포넌트에 대한 세분화된 하드웨어 리소스 할당을 조정하는 메커니즘을 활성화한다. [노드의 토폴로지 관리 정책 제어](/docs/tasks/administer-cluster/topology-manager/)를 참고한다.
|
||||
- `TTLAfterFinished`: [TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)가 실행이 끝난 후 리소스를 정리하도록 허용한다.
|
||||
- `TokenRequestProjection`: [`projected` 볼륨](/ko/docs/concepts/storage/volumes/#projected)을 통해
|
||||
서비스 어카운트 토큰을 파드에 주입할 수 있다.
|
||||
- `TopologyManager`: 쿠버네티스의 다른 컴포넌트에 대한 세분화된 하드웨어 리소스
|
||||
할당을 조정하는 메커니즘을 활성화한다.
|
||||
[노드의 토폴로지 관리 정책 제어](/docs/tasks/administer-cluster/topology-manager/)를 참고한다.
|
||||
- `VolumePVCDataSource`: 기존 PVC를 데이터 소스로 지정하는 기능을 지원한다.
|
||||
- `VolumeScheduling`: 볼륨 토폴로지 인식 스케줄링을 활성화하고
|
||||
퍼시스턴트볼륨클레임(PVC) 바인딩이 스케줄링 결정을 인식하도록 한다. 또한
|
||||
`PersistentLocalVolumes` 기능 게이트와 함께 사용될 때
|
||||
[`local`](/ko/docs/concepts/storage/volumes/#local) 볼륨 유형을 사용할 수 있다.
|
||||
- `VolumeSnapshotDataSource`: 볼륨 스냅샷 데이터 소스 지원을 활성화한다.
|
||||
- `VolumeSubpathEnvExpansion`: 환경 변수를 `subPath`로 확장하기 위해 `subPathExpr` 필드를 활성화한다.
|
||||
- `VolumeSubpathEnvExpansion`: 환경 변수를 `subPath`로 확장하기 위해
|
||||
`subPathExpr` 필드를 활성화한다.
|
||||
- `WarningHeaders`: API 응답에서 경고 헤더를 보낼 수 있다.
|
||||
- `WatchBookmark`: 감시자 북마크(watch bookmark) 이벤트 지원을 활성화한다.
|
||||
- `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다.
|
||||
- `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서 애플리케이션을 실행할 수 있도록 지원한다.
|
||||
자세한 내용은 [RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을 참고한다.
|
||||
- `WinDSR`: kube-proxy가 윈도우용 DSR 로드 밸런서를 생성할 수 있다.
|
||||
- `WinOverlay`: kube-proxy가 윈도우용 오버레이 모드에서 실행될 수 있도록 한다.
|
||||
- `WindowsGMSA`: 파드에서 컨테이너 런타임으로 GMSA 자격 증명 스펙을 전달할 수 있다.
|
||||
- `WindowsRunAsUserName` : 기본 사용자가 아닌(non-default) 사용자로 윈도우 컨테이너에서
|
||||
애플리케이션을 실행할 수 있도록 지원한다. 자세한 내용은
|
||||
[RunAsUserName 구성](/docs/tasks/configure-pod-container/configure-runasusername)을
|
||||
참고한다.
|
||||
- `WindowsEndpointSliceProxying`: 활성화되면, 윈도우에서 실행되는 kube-proxy는
|
||||
엔드포인트 대신 엔드포인트슬라이스를 기본 데이터 소스로 사용하여
|
||||
확장성과 성능을 향상시킨다.
|
||||
[엔드포인트 슬라이스 활성화하기](/docs/tasks/administer-cluster/enabling-endpointslices/)를 참고한다.
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
title: API 그룹(API Group)
|
||||
id: api-group
|
||||
date: 2019-09-02
|
||||
full_link: /ko/docs/concepts/overview/kubernetes-api/#api-groups
|
||||
full_link: /ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙
|
||||
short_description: >
|
||||
쿠버네티스 API의 연관된 경로들의 집합.
|
||||
|
||||
|
@ -11,9 +11,9 @@ tags:
|
|||
- fundamental
|
||||
- architecture
|
||||
---
|
||||
쿠버네티스 API의 연관된 경로들의 집합.
|
||||
쿠버네티스 API의 연관된 경로들의 집합.
|
||||
|
||||
<!--more-->
|
||||
API 서버의 구성을 변경하여 각 API 그룹을 활성화하거나 비활성화할 수 있다. 특정 리소스에 대한 경로를 비활성화하거나 활성화할 수도 있다. API 그룹을 사용하면 쿠버네티스 API를 더 쉽게 확장할 수 있다. API 그룹은 REST 경로 및 직렬화된 오브젝트의 `apiVersion` 필드에 지정된다.
|
||||
|
||||
* 자세한 내용은 [API 그룹](/ko/docs/concepts/overview/kubernetes-api/#api-groups)을 참조한다.
|
||||
* 자세한 내용은 [API 그룹](/ko/docs/concepts/overview/kubernetes-api/#api-그룹과-버전-규칙)을 참조한다.
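다음은 `apiVersion` 필드에 API 그룹이 나타나는 방식을 보여주는 최소 예시이다. 여기서 `apps`가 그룹, `v1`이 버전이며, 오브젝트 이름은 설명을 위한 가정이다.

```yaml
# "apps" API 그룹의 v1 버전에 속하는 오브젝트
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # 예시 이름
```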
|
||||
|
|
|
@ -5,7 +5,7 @@ date: 2018-04-12
|
|||
full_link: /ko/docs/concepts/architecture/cloud-controller/
|
||||
short_description: >
|
||||
쿠버네티스를 타사 클라우드 공급자와 통합하는 컨트롤 플레인 컴포넌트.
|
||||
aka:
|
||||
aka:
|
||||
tags:
|
||||
- core-object
|
||||
- architecture
|
||||
|
@ -13,7 +13,7 @@ tags:
|
|||
---
|
||||
클라우드별 컨트롤 로직을 포함하는 쿠버네티스
|
||||
{{< glossary_tooltip text="컨트롤 플레인" term_id="control-plane" >}} 컴포넌트이다.
|
||||
클라우트 컨트롤러 매니저를 통해 클러스터를 클라우드 공급자의 API에 연결하고,
|
||||
클라우드 컨트롤러 매니저를 통해 클러스터를 클라우드 공급자의 API에 연결하고,
|
||||
해당 클라우드 플랫폼과 상호 작용하는 컴포넌트와 클러스터와 상호 작용하는 컴포넌트를 분리할 수 있다.
|
||||
|
||||
<!--more-->
|
||||
|
|
|
@ -0,0 +1,33 @@
|
|||
---
|
||||
title: 수량(Quantity)
|
||||
id: quantity
|
||||
date: 2018-08-07
|
||||
full_link:
|
||||
short_description: >
|
||||
SI 접미사를 사용하는 작거나 큰 숫자의 정수(whole-number) 표현.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- core-object
|
||||
---
|
||||
SI 접미사를 사용하는 작거나 큰 숫자의 정수(whole-number) 표현.
|
||||
|
||||
<!--more-->
|
||||
|
||||
수량은 SI 접미사가 포함된 간결한 정수 표기법을 통해서 작거나 큰 숫자를 표현한 것이다.
|
||||
분수는 밀리(milli) 단위로 표시되는 반면,
|
||||
큰 숫자는 킬로(kilo), 메가(mega), 또는 기가(giga)
|
||||
단위로 표시할 수 있다.
|
||||
|
||||
|
||||
예를 들어, 숫자 `1.5`는 `1500m`으로, 숫자 `1000`은 `1k`로, `1000000`은
|
||||
`1M`으로 표시할 수 있다. 또한, 이진 표기법 접미사도 명시 가능하므로,
|
||||
숫자 2048은 `2Ki`로 표기될 수 있다.
|
||||
|
||||
허용되는 10진수(10의 거듭 제곱) 단위는 `m` (밀리), `k` (킬로, 의도적인 소문자),
|
||||
`M` (메가), `G` (기가), `T` (테라), `P` (페타),
|
||||
`E` (엑사)가 있다.
|
||||
|
||||
허용되는 2진수(2의 거듭 제곱) 단위는 `Ki` (키비), `Mi` (메비), `Gi` (기비),
|
||||
`Ti` (테비), `Pi` (페비), `Ei` (엑비)가 있다.
|
||||
|
|
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: 시크릿(Secret)
|
||||
id: secret
|
||||
date: 2018-04-12
|
||||
full_link: /ko/docs/concepts/configuration/secret/
|
||||
short_description: >
|
||||
비밀번호, OAuth 토큰 및 ssh 키와 같은 민감한 정보를 저장한다.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- core-object
|
||||
- security
|
||||
---
|
||||
비밀번호, OAuth 토큰 및 ssh 키와 같은 민감한 정보를 저장한다.
|
||||
|
||||
<!--more-->
|
||||
|
||||
민감한 정보를 사용하는 방식에 대해 더 세밀하게 제어할 수 있으며, 유휴 상태의 [암호화](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)를 포함하여 우발적인 노출 위험을 줄인다. {{< glossary_tooltip text="파드(Pod)" term_id="pod" >}}는 시크릿을 마운트된 볼륨의 파일로 참조하거나, 파드의 이미지를 풀링하는 kubelet이 시크릿을 참조한다. 시크릿은 기밀 데이터에 적합하고 [컨피그맵](/docs/tasks/configure-pod-container/configure-pod-configmap/)은 기밀이 아닌 데이터에 적합하다.
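다음은 시크릿을 볼륨으로 마운트하는 파드의 최소 예시 스케치이다. `my-secret` 을 비롯한 이름들은 설명을 위한 가정이다.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo            # 예시 이름
spec:
  containers:
  - name: app
    image: nginx               # 예시 이미지
    volumeMounts:
    - name: creds
      mountPath: "/etc/creds"
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: my-secret    # 미리 만들어 둔 시크릿 이름(가정)
```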
|
|
@ -0,0 +1,20 @@
|
|||
---
|
||||
title: 스토리지 클래스(Storage Class)
|
||||
id: storageclass
|
||||
date: 2018-04-12
|
||||
full_link: /ko/docs/concepts/storage/storage-classes
|
||||
short_description: >
|
||||
스토리지클래스는 관리자가 사용 가능한 다양한 스토리지 유형을 설명할 수 있는 방법을 제공한다.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- core-object
|
||||
- storage
|
||||
---
|
||||
스토리지클래스는 관리자가 사용 가능한 다양한 스토리지 유형을 설명할 수 있는 방법을 제공한다.
|
||||
|
||||
<!--more-->
|
||||
|
||||
스토리지 클래스는 서비스 품질 수준, 백업 정책 혹은 클러스터 관리자가 결정한 임의의 정책에 매핑할 수 있다. 각 스토리지클래스에는 클래스에 속한 {{< glossary_tooltip text="퍼시스턴트 볼륨(Persistent Volume)" term_id="persistent-volume" >}}을 동적으로 프로비저닝해야 할 때 사용되는 `provisioner`, `parameters` 및 `reclaimPolicy` 필드가 있다. 사용자는 스토리지클래스 객체의 이름을 사용하여 특정 클래스를 요청할 수 있다.
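아래는 본문에서 언급한 `provisioner`, `parameters`, `reclaimPolicy` 필드를 보여주는 스토리지클래스의 최소 예시이며, 프로비저너와 파라미터 값은 예시로 고른 것이다.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # 예시 이름
provisioner: kubernetes.io/gce-pd   # 예시 프로비저너
parameters:
  type: pd-ssd
reclaimPolicy: Delete
```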
|
||||
|
||||
|
|
@ -122,7 +122,7 @@ sudo apt-get update && sudo apt-get install -y containerd.io
|
|||
```shell
|
||||
# containerd 구성
|
||||
sudo mkdir -p /etc/containerd
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml
|
||||
containerd config default | sudo tee /etc/containerd/config.toml
|
||||
```
|
||||
|
||||
```shell
|
||||
|
@ -140,7 +140,7 @@ sudo apt-get update && sudo apt-get install -y containerd
|
|||
```shell
|
||||
# containerd 구성
|
||||
sudo mkdir -p /etc/containerd
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml
|
||||
containerd config default | sudo tee /etc/containerd/config.toml
|
||||
```
|
||||
|
||||
```shell
|
||||
|
@ -210,7 +210,7 @@ sudo yum update -y && sudo yum install -y containerd.io
|
|||
```shell
|
||||
## containerd 구성
|
||||
sudo mkdir -p /etc/containerd
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml
|
||||
containerd config default | sudo tee /etc/containerd/config.toml
|
||||
```
|
||||
|
||||
```shell
|
||||
|
|
|
@ -1,11 +1,18 @@
|
|||
---
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
title: 쿠버네티스 버전 및 버전 차이(skew) 지원 정책
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다.
|
||||
이 문서는 다양한 쿠버네티스 구성 요소 간에 지원되는 최대 버전 차이를 설명한다.
|
||||
특정 클러스터 배포 도구는 버전 차이에 대한 추가적인 제한을 설정할 수 있다.
|
||||
|
||||
|
||||
|
@ -19,14 +26,14 @@ weight: 30
|
|||
|
||||
쿠버네티스 프로젝트는 최근 세 개의 마이너 릴리스 ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}) 에 대한 릴리스 분기를 유지한다. 쿠버네티스 1.19 이상은 약 1년간의 패치 지원을 받는다. 쿠버네티스 1.18 이상은 약 9개월의 패치 지원을 받는다.
|
||||
|
||||
보안 수정사항을 포함한 해당 수정사항은 심각도와 타당성에 따라 세 개의 릴리스 브랜치로 백포트(backport) 될 수 있다.
|
||||
보안 수정사항을 포함한 해당 수정사항은 심각도와 타당성에 따라 세 개의 릴리스 브랜치로 백포트(backport) 될 수 있다.
|
||||
패치 릴리스는 각 브랜치별로 [정기적인 주기](https://git.k8s.io/sig-release/releases/patch-releases.md#cadence)로 제공하며, 필요한 경우 추가 긴급 릴리스도 추가한다.
|
||||
|
||||
[릴리스 관리자](https://git.k8s.io/sig-release/release-managers.md) 그룹이 이러한 결정 권한을 가진다.
|
||||
|
||||
자세한 내용은 쿠버네티스 [패치 릴리스](https://git.k8s.io/sig-release/releases/patch-releases.md) 페이지를 참조한다.
|
||||
|
||||
## 지원되는 버전 차이
|
||||
## 지원되는 버전 차이
|
||||
|
||||
### kube-apiserver
|
||||
|
||||
|
@ -133,6 +140,11 @@ HA 클러스터의 `kube-apiserver` 인스턴스 간에 버전 차이가 있으
|
|||
|
||||
필요에 따라서 `kubelet` 인스턴스를 **{{< skew latestVersion >}}** 으로 업그레이드할 수 있다(또는 **{{< skew prevMinorVersion >}}** 아니면 **{{< skew oldestMinorVersion >}}** 으로 유지할 수 있음).
|
||||
|
||||
{{< note >}}
|
||||
`kubelet` 마이너 버전 업그레이드를 수행하기 전에, 해당 노드의 파드를 [드레인(drain)](/docs/tasks/administer-cluster/safely-drain-node/)해야 한다.
|
||||
인플레이스(In-place) 마이너 버전 `kubelet` 업그레이드는 지원되지 않는다.
|
||||
{{</ note >}}
|
||||
|
||||
{{< warning >}}
|
||||
클러스터 안의 `kubelet` 인스턴스를 `kube-apiserver`의 버전보다 2단계 낮은 버전으로 실행하는 것을 권장하지 않는다:
|
||||
|
||||
|
|
|
@ -1,4 +1,6 @@
|
|||
---
|
||||
|
||||
|
||||
title: kubectl 설치 및 설정
|
||||
content_type: task
|
||||
weight: 10
|
||||
|
@ -30,33 +32,73 @@ kubectl을 사용하여 애플리케이션을 배포하고, 클러스터 리소
|
|||
|
||||
1. 다음 명령으로 최신 릴리스를 다운로드한다.
|
||||
|
||||
```
|
||||
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
|
||||
```
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
|
||||
```
|
||||
|
||||
특정 버전을 다운로드하려면, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
|
||||
{{< note >}}
|
||||
특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
|
||||
|
||||
예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다.
|
||||
```
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
|
||||
```
|
||||
예를 들어, 리눅스에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다.
|
||||
|
||||
2. kubectl 바이너리를 실행 가능하게 만든다.
|
||||
```bash
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
|
||||
```
|
||||
{{< /note >}}
|
||||
|
||||
```
|
||||
chmod +x ./kubectl
|
||||
```
|
||||
1. 바이너리를 검증한다. (선택 사항)
|
||||
|
||||
3. 바이너리를 PATH가 설정된 디렉터리로 옮긴다.
|
||||
kubectl 체크섬(checksum) 파일을 다운로드한다.
|
||||
|
||||
```
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
4. 설치한 버전이 최신 버전인지 확인한다.
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
|
||||
```
|
||||
|
||||
```
|
||||
kubectl version --client
|
||||
```
|
||||
kubectl 바이너리를 체크섬 파일을 통해 검증한다.
|
||||
|
||||
```bash
|
||||
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
|
||||
```
|
||||
|
||||
검증이 성공한다면, 출력은 다음과 같다.
|
||||
|
||||
```bash
|
||||
kubectl: OK
|
||||
```
|
||||
|
||||
검증이 실패한다면, `sha256` 가 0이 아닌 상태로 종료되며 다음과 유사한 결과를 출력한다.
|
||||
|
||||
```bash
|
||||
kubectl: FAILED
|
||||
sha256sum: WARNING: 1 computed checksum did NOT match
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
동일한 버전의 바이너리와 체크섬을 다운로드한다.
|
||||
{{< /note >}}
|
||||
|
||||
1. kubectl 설치
|
||||
|
||||
```bash
|
||||
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
대상 시스템에 root 접근 권한을 가지고 있지 않더라도, `~/.local/bin` 디렉터리에 kubectl을 설치할 수 있다.
|
||||
|
||||
```bash
|
||||
mkdir -p ~/.local/bin
|
||||
mv ./kubectl ~/.local/bin/kubectl
|
||||
# 그리고 ~/.local/bin을 $PATH에 추가
|
||||
```
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
1. 설치한 버전이 최신인지 확인한다.
|
||||
|
||||
```bash
|
||||
kubectl version --client
|
||||
```
|
||||
|
||||
### 기본 패키지 관리 도구를 사용하여 설치
|
||||
|
||||
|
@ -117,29 +159,65 @@ kubectl version --client
|
|||
1. 최신 릴리스를 다운로드한다.
|
||||
|
||||
```bash
|
||||
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
|
||||
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
|
||||
```
|
||||
|
||||
특정 버전을 다운로드하려면, `$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
|
||||
{{< note >}}
|
||||
특정 버전을 다운로드하려면, `$(curl -L -s https://dl.k8s.io/release/stable.txt)` 명령 부분을 특정 버전으로 바꾼다.
|
||||
|
||||
예를 들어, macOS에서 버전 {{< param "fullversion" >}}을 다운로드하려면, 다음을 입력한다.
|
||||
```bash
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
|
||||
```
|
||||
|
||||
kubectl 바이너리를 실행 가능하게 만든다.
|
||||
```bash
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
|
||||
```
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
1. 바이너리를 검증한다. (선택 사항)
|
||||
|
||||
kubectl 체크섬 파일을 다운로드한다.
|
||||
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256"
|
||||
```
|
||||
|
||||
kubectl 바이너리를 체크섬 파일을 통해 검증한다.
|
||||
|
||||
```bash
|
||||
echo "$(<kubectl.sha256) kubectl" | shasum -a 256 --check
|
||||
```
|
||||
|
||||
검증이 성공한다면, 출력은 다음과 같다.
|
||||
|
||||
```bash
|
||||
kubectl: OK
|
||||
```
|
||||
|
||||
검증이 실패한다면, `sha256` 가 0이 아닌 상태로 종료되며 다음과 유사한 결과를 출력한다.
|
||||
|
||||
```bash
|
||||
kubectl: FAILED
|
||||
shasum: WARNING: 1 computed checksum did NOT match
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
동일한 버전의 바이너리와 체크섬을 다운로드한다.
|
||||
{{< /note >}}
|
||||
|
||||
1. kubectl 바이너리를 실행 가능하게 한다.
|
||||
|
||||
```bash
|
||||
chmod +x ./kubectl
|
||||
```
|
||||
|
||||
3. 바이너리를 PATH가 설정된 디렉터리로 옮긴다.
|
||||
1. kubectl 바이너리를 시스템 `PATH` 의 파일 위치로 옮긴다.
|
||||
|
||||
```bash
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl
|
||||
sudo mv ./kubectl /usr/local/bin/kubectl && \
|
||||
sudo chown root: /usr/local/bin/kubectl
|
||||
```
|
||||
|
||||
4. 설치한 버전이 최신 버전인지 확인한다.
|
||||
1. 설치한 버전이 최신 버전인지 확인한다.
|
||||
|
||||
```bash
|
||||
kubectl version --client
|
||||
|
@ -161,7 +239,7 @@ macOS에서 [Homebrew](https://brew.sh/) 패키지 관리자를 사용하는 경
|
|||
brew install kubernetes-cli
|
||||
```
|
||||
|
||||
2. 설치한 버전이 최신 버전인지 확인한다.
|
||||
1. 설치한 버전이 최신 버전인지 확인한다.
|
||||
|
||||
```bash
|
||||
kubectl version --client
|
||||
|
@ -178,7 +256,7 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
|
|||
sudo port install kubectl
|
||||
```
|
||||
|
||||
2. 설치한 버전이 최신 버전인지 확인한다.
|
||||
1. 설치한 버전이 최신 버전인지 확인한다.
|
||||
|
||||
```bash
|
||||
kubectl version --client
|
||||
|
@ -188,30 +266,55 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
|
|||
|
||||
### 윈도우에서 curl을 사용하여 kubectl 바이너리 설치
|
||||
|
||||
1. [이 링크](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)에서 최신 릴리스 {{< param "fullversion" >}}을 다운로드한다.
|
||||
1. [최신 릴리스 {{< param "fullversion" >}}](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe)를 다운로드한다.
|
||||
|
||||
또는 `curl` 을 설치한 경우, 다음 명령을 사용한다.
|
||||
|
||||
```bash
|
||||
curl -LO https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
|
||||
```powershell
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://storage.googleapis.com/kubernetes-release/release/stable.txt](https://storage.googleapis.com/kubernetes-release/release/stable.txt)를 참고한다.
|
||||
{{< note >}}
|
||||
최신의 안정 버전(예: 스크립팅을 위한)을 찾으려면, [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt)를 참고한다.
|
||||
{{< /note >}}
|
||||
|
||||
2. 바이너리를 PATH가 설정된 디렉터리에 추가한다.
|
||||
1. 바이너리를 검증한다. (선택 사항)
|
||||
|
||||
3. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다.
|
||||
kubectl 체크섬 파일을 다운로드한다.
|
||||
|
||||
```bash
|
||||
```powershell
|
||||
curl -LO https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256
|
||||
```
|
||||
|
||||
kubectl 바이너리를 체크섬 파일을 통해 검증한다.
|
||||
|
||||
- 수동으로 `CertUtil` 의 출력과 다운로드한 체크섬 파일을 비교하기 위해서 커맨드 프롬프트를 사용한다.
|
||||
|
||||
```cmd
|
||||
CertUtil -hashfile kubectl.exe SHA256
|
||||
type kubectl.exe.sha256
|
||||
```
|
||||
|
||||
- `-eq` 연산자를 통해 `True` 또는 `False` 결과를 얻는 자동 검증을 위해서 PowerShell을 사용한다.
|
||||
|
||||
```powershell
|
||||
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
|
||||
```
|
||||
|
||||
1. 바이너리를 `PATH` 가 설정된 디렉터리에 추가한다.
|
||||
|
||||
1. `kubectl` 의 버전이 다운로드한 버전과 같은지 확인한다.
|
||||
|
||||
```cmd
|
||||
kubectl version --client
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 PATH에 추가한다.
|
||||
도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 PATH 항목 앞에 PATH 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다.
|
||||
[윈도우용 도커 데스크톱](https://docs.docker.com/docker-for-windows/#kubernetes)은 자체 버전의 `kubectl` 을 `PATH` 에 추가한다.
|
||||
도커 데스크톱을 이전에 설치한 경우, 도커 데스크톱 설치 프로그램에서 추가한 `PATH` 항목 앞에 `PATH` 항목을 배치하거나 도커 데스크톱의 `kubectl` 을 제거해야 할 수도 있다.
|
||||
{{< /note >}}
|
||||
|
||||
### PSGallery에서 Powershell로 설치
|
||||
### PSGallery에서 PowerShell로 설치
|
||||
|
||||
윈도우에서 [Powershell Gallery](https://www.powershellgallery.com/) 패키지 관리자를 사용하는 경우, Powershell로 kubectl을 설치하고 업데이트할 수 있다.
|
||||
|
||||
|
@ -223,12 +326,12 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
|
|||
```
|
||||
|
||||
{{< note >}}
|
||||
`DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 임시 디렉터리에 설치된다.
|
||||
`DownloadLocation` 을 지정하지 않으면, `kubectl` 은 사용자의 `temp` 디렉터리에 설치된다.
|
||||
{{< /note >}}
|
||||
|
||||
설치 프로그램은 `$HOME/.kube` 를 생성하고 구성 파일을 작성하도록 지시한다.
|
||||
|
||||
2. 설치한 버전이 최신 버전인지 확인한다.
|
||||
1. 설치한 버전이 최신 버전인지 확인한다.
|
||||
|
||||
```powershell
|
||||
kubectl version --client
|
||||
|
@ -256,32 +359,32 @@ macOS에서 [Macports](https://macports.org/) 패키지 관리자를 사용하
|
|||
{{< /tabs >}}
|
||||
|
||||
|
||||
2. 설치한 버전이 최신 버전인지 확인한다.
|
||||
1. 설치한 버전이 최신 버전인지 확인한다.
|
||||
|
||||
```powershell
|
||||
kubectl version --client
|
||||
```
|
||||
|
||||
3. 홈 디렉터리로 이동한다.
|
||||
1. 홈 디렉터리로 이동한다.
|
||||
|
||||
```powershell
|
||||
# cmd.exe를 사용한다면, 다음을 실행한다. cd %USERPROFILE%
|
||||
cd ~
|
||||
```
|
||||
|
||||
4. `.kube` 디렉터리를 생성한다.
|
||||
1. `.kube` 디렉터리를 생성한다.
|
||||
|
||||
```powershell
|
||||
mkdir .kube
|
||||
```
|
||||
|
||||
5. 금방 생성한 `.kube` 디렉터리로 이동한다.
|
||||
1. 금방 생성한 `.kube` 디렉터리로 이동한다.
|
||||
|
||||
```powershell
|
||||
cd .kube
|
||||
```
|
||||
|
||||
6. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다.
|
||||
1. 원격 쿠버네티스 클러스터를 사용하도록 kubectl을 구성한다.
|
||||
|
||||
```powershell
|
||||
New-Item config -type file
|
||||
|
@ -297,13 +400,13 @@ kubectl을 Google Cloud SDK의 일부로 설치할 수 있다.
|
|||
|
||||
1. [Google Cloud SDK](https://cloud.google.com/sdk/)를 설치한다.
|
||||
|
||||
2. `kubectl` 설치 명령을 실행한다.
|
||||
1. `kubectl` 설치 명령을 실행한다.
|
||||
|
||||
```shell
|
||||
gcloud components install kubectl
|
||||
```
|
||||
|
||||
3. 설치한 버전이 최신 버전인지 확인한다.
|
||||
1. 설치한 버전이 최신 버전인지 확인한다.
|
||||
|
||||
```shell
|
||||
kubectl version --client
|
||||
|
@ -381,11 +484,13 @@ source /usr/share/bash-completion/bash_completion
|
|||
```bash
|
||||
echo 'source <(kubectl completion bash)' >>~/.bashrc
|
||||
```
|
||||
|
||||
- 완성 스크립트를 `/etc/bash_completion.d` 디렉터리에 추가한다.
|
||||
|
||||
```bash
|
||||
kubectl completion bash >/etc/bash_completion.d/kubectl
|
||||
```
|
||||
|
||||
kubectl에 대한 앨리어스(alias)가 있는 경우, 해당 앨리어스로 작업하도록 셸 완성을 확장할 수 있다.
|
||||
|
||||
```bash
|
||||
|
@ -466,7 +571,6 @@ export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
|
|||
|
||||
```bash
|
||||
echo 'source <(kubectl completion bash)' >>~/.bash_profile
|
||||
|
||||
```
|
||||
|
||||
- 완성 스크립트를 `/usr/local/etc/bash_completion.d` 디렉터리에 추가한다.
|
||||
|
|
|
@ -4,7 +4,7 @@ content_type: concept
|
|||
weight: 30
|
||||
description: >
|
||||
API Kubernetesa służy do odpytywania i zmiany stanu obiektów Kubernetesa.
|
||||
Sercem warstwy sterowania Kubernetesa jest serwer API i udostępniane przez niego HTTP API. Przez ten serwer odbywa się komunikacja pomiędzy użytkownikami, różnymi częściami składowymi klastra oraz komponentami zewnętrznymi.
|
||||
Sercem warstwy sterowania Kubernetesa jest serwer API i udostępniane po HTTP API. Przez ten serwer odbywa się komunikacja pomiędzy użytkownikami, różnymi częściami składowymi klastra oraz komponentami zewnętrznymi.
|
||||
card:
|
||||
name: concepts
|
||||
weight: 30
|
||||
|
@ -14,13 +14,16 @@ card:
|
|||
|
||||
Sercem {{< glossary_tooltip text="warstwy sterowania" term_id="control-plane" >}} Kubernetes
|
||||
jest {{< glossary_tooltip text="serwer API" term_id="kube-apiserver" >}}. Serwer udostępnia
|
||||
API poprzez HTTP, umożliwiając wzajemną komunikację pomiędzy użytkownikami, częściami składowymi klastra i komponentami zewnętrznymi.
|
||||
API poprzez HTTP, umożliwiając wzajemną komunikację pomiędzy użytkownikami, częściami składowymi klastra
|
||||
i komponentami zewnętrznymi.
|
||||
|
||||
API Kubernetes pozwala na sprawdzanie i zmianę stanu obiektów (przykładowo: pody, _Namespaces_, _ConfigMaps_, _Events_).
|
||||
API Kubernetesa pozwala na sprawdzanie i zmianę stanu obiektów
|
||||
(przykładowo: pody, _Namespaces_, _ConfigMaps_, _Events_).
|
||||
|
||||
Większość operacji może zostać wykonana poprzez
|
||||
interfejs linii komend (CLI) [kubectl](/docs/reference/kubectl/overview/) lub inne
|
||||
programy, takie jak [kubeadm](/docs/reference/setup-tools/kubeadm/), które używają
|
||||
programy, takie jak
|
||||
[kubeadm](/docs/reference/setup-tools/kubeadm/), które używają
|
||||
API. Możesz też korzystać z API bezpośrednio przez wywołania typu REST.
|
||||
|
||||
Jeśli piszesz aplikację używającą API Kubernetesa,
|
||||
|
@ -66,54 +69,77 @@ Aby wybrać format odpowiedzi, użyj nagłówków żądania zgodnie z tabelą:
|
|||
</tbody>
|
||||
</table>
|
||||
|
||||
W Kubernetesie zaimplementowany jest alternatywny format serializacji na potrzeby API oparty o Protobuf,
|
||||
który jest przede wszystkim przeznaczony na potrzeby wewnętrznej komunikacji w klastrze
|
||||
i opisany w [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md).
|
||||
Pliki IDL dla każdego ze schematów można znaleźć w pakietach Go, które definiują obiekty API.
|
||||
W Kubernetesie zaimplementowany jest alternatywny format serializacji na potrzeby API oparty o
|
||||
Protobuf, który jest przede wszystkim przeznaczony na potrzeby wewnętrznej komunikacji w klastrze.
|
||||
Więcej szczegółów znajduje się w dokumencie [Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md)
|
||||
oraz w plikach *Interface Definition Language* (IDL) dla każdego ze schematów
|
||||
zamieszczonych w pakietach Go, które definiują obiekty API.
|
||||
|
||||
## Zmiany API
|
||||
## Przechowywanie stanu
|
||||
|
||||
Kubernetes przechowuje serializowany stan swoich obiektów w
|
||||
{{< glossary_tooltip term_id="etcd" >}}.
|
||||
|
||||
## Grupy i wersje API
|
||||
|
||||
Aby ułatwić usuwanie poszczególnych pól lub restrukturyzację reprezentacji zasobów, Kubernetes obsługuje
|
||||
równocześnie wiele wersji API, każde poprzez osobną ścieżkę API,
|
||||
na przykład: `/api/v1` lub `/apis/rbac.authorization.k8s.io/v1alpha1`.
|
||||
|
||||
Rozdział wersji wprowadzony jest na poziomie całego API, a nie na poziomach poszczególnych zasobów lub pól,
|
||||
aby być pewnym, że API odzwierciedla w sposób przejrzysty i spójny zasoby systemowe
|
||||
i ich zachowania oraz pozwala na kontrolowany dostęp do tych API, które są w fazie wycofywania
|
||||
lub fazie eksperymentalnej.
|
||||
|
||||
Aby ułatwić rozbudowę API Kubernetes, wprowadziliśmy
|
||||
[*grupy API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md), które mogą
|
||||
być [włączane i wyłączane](/docs/reference/using-api/#enabling-or-disabling).
|
||||
|
||||
Zasoby API są rozróżniane poprzez przynależność do grupy API, typ zasobu, przestrzeń nazw (_namespace_,
|
||||
o ile ma zastosowanie) oraz nazwę. Serwer API może przeprowadzać konwersję między
|
||||
różnymi wersjami API w sposób niewidoczny dla użytkownika: wszystkie te różne wersje
|
||||
reprezentują w rzeczywistości ten sam zasób. Serwer API może udostępniać te same dane
|
||||
poprzez kilka różnych wersji API.
|
||||
|
||||
Załóżmy przykładowo, że istnieją dwie wersje `v1` i `v1beta1` tego samego zasobu.
|
||||
Obiekt utworzony przez wersję `v1beta1` może być odczytany,
|
||||
zaktualizowany i skasowany zarówno przez wersję
|
||||
`v1beta1`, jak i `v1`.
|
||||
|
||||
## Trwałość API
|
||||
|
||||
Z naszego doświadczenia wynika, że każdy system, który odniósł sukces, musi się nieustająco rozwijać w miarę zmieniających się potrzeb.
|
||||
Dlatego Kubernetes został tak zaprojektowany, aby API mogło się zmieniać i rozrastać.
|
||||
Projekt Kubernetes dąży do tego, aby nie wprowadzać zmian niezgodnych z istniejącymi aplikacjami klienckimi
|
||||
i utrzymywać zgodność przez wystarczająco długi czas, aby inne projekty zdążyły się dostosować do zmian.
|
||||
|
||||
W ogólności, nowe zasoby i pola definiujące zasoby API są dodawane stosunkowo często. Usuwanie zasobów lub pól
|
||||
jest regulowane przez [API deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||
Definicja zmiany zgodnej (kompatybilnej) oraz metody wprowadzania zmian w API opisano w szczegółach
|
||||
w [API change document](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md).
|
||||
W ogólności, nowe zasoby i pola definiujące zasoby API są dodawane stosunkowo często.
|
||||
Usuwanie zasobów lub pól jest regulowane przez
|
||||
[API deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||
|
||||
## Grupy i wersje API
|
||||
Po osiągnięciu przez API statusu ogólnej dostępności (_general availability_ - GA),
|
||||
oznaczanej zazwyczaj jako wersja API `v1`, bardzo zależy nam na utrzymaniu jej zgodności w kolejnych wydaniach.
|
||||
Kubernetes utrzymuje także zgodność dla wersji _beta_ API tam, gdzie jest to możliwe:
|
||||
jeśli zdecydowałeś się używać API w wersji beta, możesz z niego korzystać także później,
|
||||
kiedy dana funkcjonalność osiągnie status stabilnej.
|
||||
|
||||
Aby ułatwić usuwanie poszczególnych pól lub restrukturyzację reprezentacji zasobów, Kubernetes obsługuje
|
||||
równocześnie wiele wersji API, każde poprzez osobną ścieżkę API, na przykład: `/api/v1` lub
|
||||
`/apis/rbac.authorization.k8s.io/v1alpha1`.
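
Dla ilustracji, prosty szkic polecenia, które wypisuje grupy i wersje API dostępne w klastrze, w formie odpowiadającej powyższym ścieżkom (wynik zależy od konfiguracji klastra):

```shell
# Wypisuje dostępne grupy i wersje API, np. v1 lub rbac.authorization.k8s.io/v1
kubectl api-versions
```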
|
||||
|
||||
Rozdział wersji wprowadzony jest na poziomie całego API, a nie na poziomach poszczególnych zasobów lub pól, aby być pewnym,
|
||||
że API odzwierciedla w sposób przejrzysty i spójny zasoby systemowe i ich zachowania i pozwala
|
||||
na kontrolowany dostęp do tych API, które są w fazie wycofywania lub fazie eksperymentalnej.
|
||||
|
||||
Aby ułatwić rozbudowę API Kubernetes, wprowadziliśmy [*grupy API*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md),
|
||||
które mogą być [włączane i wyłączane](/docs/reference/using-api/#enabling-or-disabling).
|
||||
|
||||
Zasoby API są rozróżniane poprzez przynależność do grupy API, typ zasobu, przestrzeń nazw (_namespace_,
|
||||
o ile ma zastosowanie) oraz nazwę. Serwer API może obsługiwać
|
||||
te same dane poprzez różne wersje API i przeprowadzać konwersję między
|
||||
różnymi wersjami API w sposób niewidoczny dla użytkownika. Wszystkie te różne wersje
|
||||
reprezentują w rzeczywistości ten sam zasób. Załóżmy przykładowo, że istnieją dwie
|
||||
wersje `v1` i `v1beta1` tego samego zasobu. Obiekt utworzony przez
|
||||
wersję `v1beta1` może być odczytany, zaktualizowany i skasowany zarówno przez wersję
|
||||
`v1beta1`, jak i `v1`.
|
||||
{{< note >}}
|
||||
Mimo że Kubernetes stara się także zachować zgodność dla API w wersji _alpha_, zdarzają się przypadki,
|
||||
kiedy nie jest to możliwe. Jeśli korzystasz z API w wersji alfa, przed aktualizacją klastra do nowej wersji
|
||||
zalecamy sprawdzenie w informacjach o wydaniu, czy nie nastąpiła jakaś zmiana w tej części API.
|
||||
{{< /note >}}
|
||||
|
||||
Zajrzyj do [API versions reference](/docs/reference/using-api/#api-versioning)
|
||||
po szczegółowe informacje, jak definiuje się poziomy wersji API.
|
||||
po szczegółowe definicje różnych poziomów wersji API.
|
||||
|
||||
|
||||
|
||||
## Rozbudowa API
|
||||
|
||||
API Kubernetesa można rozbudowywać (rozszerzać) na dwa sposoby:
|
||||
API Kubernetesa można rozszerzać na dwa sposoby:
|
||||
|
||||
1. [Definicje zasobów własnych](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
pozwalają deklaratywnie określać, jak serwer API powinien dostarczać wybrane zasoby API.
|
||||
1. [Definicje zasobów własnych (_custom resources_)](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
|
||||
pozwalają deklaratywnie określać, jak serwer API powinien dostarczać wybrane przez Ciebie zasoby API.
|
||||
1. Można także rozszerzać API Kubernetesa implementując
|
||||
[warstwę agregacji](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
|
||||
|
||||
|
@ -121,6 +147,9 @@ API Kubernetesa można rozbudowywać (rozszerzać) na dwa sposoby:
|
|||
|
||||
- Naucz się, jak rozbudowywać API Kubernetesa poprzez dodawanie własnych
|
||||
[CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
|
||||
- [Controlling API Access](/docs/reference/access-authn-authz/controlling-access/) opisuje
|
||||
- [Controlling Access To The Kubernetes API](/docs/concepts/security/controlling-access/) opisuje
|
||||
sposoby, jakimi klaster zarządza dostępem do API.
|
||||
- Punkty dostępowe API _(endpoints)_, typy zasobów i przykłady zamieszczono w [API Reference](/docs/reference/kubernetes-api/).
|
||||
- Punkty dostępowe API _(endpoints)_, typy zasobów i przykłady zamieszczono w
|
||||
[API Reference](/docs/reference/kubernetes-api/).
|
||||
- Aby dowiedzieć się, jaki rodzaj zmian można określić jako zgodne i jak zmieniać API, zajrzyj do
|
||||
[API changes](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#readme).
|
||||
|
|
|
@ -42,7 +42,7 @@ Kontenery działają w sposób zbliżony do maszyn wirtualnych, ale mają mniejs
|
|||
Kontenery zyskały popularność ze względu na swoje zalety, takie jak:
|
||||
|
||||
* Szybkość i elastyczność w tworzeniu i instalacji aplikacji: obraz kontenera buduje się łatwiej niż obraz VM.
|
||||
* Ułatwienie ciągłego rozwoju, integracji oraz wdrażania aplikacji (*Continuous development, integration, and deployment*): obrazy kontenerów mogą być budowane w sposób wiarygodny i częsty. Wycofanie zmian jest łatwe i szybkie (ponieważ obrazy są niezmienne).
|
||||
* Ułatwienie ciągłego rozwoju, integracji oraz wdrażania aplikacji (*Continuous development, integration, and deployment*): obrazy kontenerów mogą być budowane w sposób wiarygodny i częsty. Wycofywanie zmian jest skuteczne i szybkie (ponieważ obrazy są niezmienne).
|
||||
* Rozdzielenie zadań *Dev* i *Ops*: obrazy kontenerów powstają w fazie *build/release*, oddzielając w ten sposób aplikacje od infrastruktury.
|
||||
* Obserwowalność obejmuje nie tylko informacje i metryki z poziomu systemu operacyjnego, ale także poprawność działania samej aplikacji i inne sygnały.
|
||||
* Spójność środowiska na etapach rozwoju oprogramowania, testowania i działania w trybie produkcyjnym: działa w ten sam sposób na laptopie i w chmurze.
|
||||
|
|
|
@ -8,13 +8,14 @@ content_type: concept
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
Tutaj znajdziesz dokumentację źródłową Kubernetes.
|
||||
Tutaj znajdziesz dokumentację źródłową Kubernetesa.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Dokumentacja API
|
||||
|
||||
* [Dokumentacja źródłowa API Kubernetesa {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
|
||||
* [Kubernetes API Reference](/docs/reference/kubernetes-api/)
|
||||
* [One-page API Reference for Kubernetes {{< param "version" >}}](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
* [Using The Kubernetes API](/docs/reference/using-api/) - ogólne informacje na temat API Kubernetesa.
|
||||
|
||||
## Biblioteki klientów API
|
||||
|
|
|
@ -18,7 +18,7 @@ Kubernetes zawiera różne wbudowane narzędzia służące do pracy z systemem:
|
|||
|
||||
## Minikube
|
||||
|
||||
[`minikube`](https://minikube.sigs.k8s.io/docs/) to narzędzie do łatwego uruchamiania lokalnego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów.
|
||||
[`minikube`](https://minikube.sigs.k8s.io/docs/) to narzędzie do uruchamiania jednowęzłowego klastra Kubernetes na twojej stacji roboczej na potrzeby rozwoju oprogramowania lub prowadzenia testów.
|
||||
|
||||
## Pulpit *(Dashboard)*
|
||||
|
||||
|
|
|
@ -32,7 +32,7 @@ Przed zapoznaniem się z samouczkami warto stworzyć zakładkę do
|
|||
|
||||
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [Example: Deploying PHP Guestbook application with Redis](/docs/tutorials/stateless-application/guestbook/)
|
||||
* [Example: Deploying PHP Guestbook application with MongoDB](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## Aplikacje stanowe *(Stateful Applications)*
|
||||
|
||||
|
|
|
@ -41,7 +41,7 @@ card:
|
|||
<div class="row">
|
||||
<div class="col-md-9">
|
||||
<h2>Co Kubernetes może dla Ciebie zrobić?</h2>
|
||||
<p>Użytkownicy oczekują od współczesnych serwisów internetowych dostępności non-stop, a deweloperzy chcą móc instalować nowe wersje swoich serwisów kilka razy dziennie. Używając kontenerów można przygotowywać oprogramowanie w taki sposób, aby mogło być instalowane i aktualizowane łatwo i nie powodując żadnych przestojów. Kubernetes pomaga uruchamiać te aplikacje w kontenerach tam, gdzie chcesz i kiedy chcesz i znajdować niezbędne zasoby i narzędzia wymagane do ich pracy. Kubernetes może działać w środowiskach produkcyjnych, jest otwartym oprogramowaniem zaprojektowanym z wykorzystaniem nagromadzonego przez Google doświadczenia w zarządzaniu kontenerami, w połączeniu z najcenniejszymi ideami społeczności.</p>
|
||||
<p>Użytkownicy oczekują od współczesnych serwisów internetowych dostępności non-stop, a deweloperzy chcą móc instalować nowe wersje swoich serwisów kilka razy dziennie. Używając kontenerów można przygotowywać oprogramowanie w taki sposób, aby mogło być instalowane i aktualizowane nie powodując żadnych przestojów. Kubernetes pomaga uruchamiać te aplikacje w kontenerach tam, gdzie chcesz i kiedy chcesz i znajdować niezbędne zasoby i narzędzia wymagane do ich pracy. Kubernetes może działać w środowiskach produkcyjnych, jest otwartym oprogramowaniem zaprojektowanym z wykorzystaniem nagromadzonego przez Google doświadczenia w zarządzaniu kontenerami, w połączeniu z najcenniejszymi ideami społeczności.</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -91,9 +91,7 @@ weight: 10
|
|||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
Na potrzeby pierwszej instalacji użyjesz aplikacji na Node.js zapakowaną w kontener Docker-a. (Jeśli jeszcze nie próbowałeś stworzyć
|
||||
aplikacji na Node.js i uruchomić za pomocą kontenerów, możesz spróbować teraz, kierując się instrukcjami samouczka
|
||||
<a href="/pl/docs/tutorials/hello-minikube/">Hello Minikube</a>).
|
||||
Na potrzeby pierwszej instalacji użyjesz aplikacji hello-node zapakowaną w kontener Docker-a, która korzysta z NGINXa i powtarza wszystkie wysłane do niej zapytania. (Jeśli jeszcze nie próbowałeś stworzyć aplikacji hello-node i uruchomić za pomocą kontenerów, możesz spróbować teraz, kierując się instrukcjami samouczka <a href="/pl/docs/tutorials/hello-minikube/">Hello Minikube</a>).
|
||||
<p>
|
||||
|
||||
<p>Teraz, kiedy wiesz, czym są Deploymenty, przejdźmy do samouczka online, żeby zainstalować naszą pierwszą aplikację!</p>
|
||||
|
|
|
@ -64,12 +64,6 @@ weight: 10
|
|||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_04_services.svg" width="150%" height="150%"></p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>Serwis kieruje przychodzący ruch do grupy Podów. Serwisy są obiektami abstrakcyjnymi, dzięki którym pody mogą się psuć i być zastępowane przez Kubernetes nowymi bez ujemnego wpływu na działanie twoich aplikacji. Detekcją nowych podów i kierowaniem ruchu pomiędzy zależnymi podami (takimi, jak składowe front-end i back-end w aplikacji) zajmują się Serwisy Kubernetes.</p>
|
||||
|
|
|
@ -0,0 +1,47 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Escalando a rede do Kubernetes com EndpointSlices'
|
||||
date: 2020-09-02
|
||||
slug: scaling-kubernetes-networking-with-endpointslices
|
||||
---
|
||||
|
||||
**Autor:** Rob Scott (Google)
|
||||
|
||||
EndpointSlices é um novo tipo de API que provê uma alternativa escalável e extensível à API de Endpoints. EndpointSlices mantém o rastreio dos endereços IP, portas, informações de topologia e prontidão de Pods que compõem um serviço.
|
||||
|
||||
No Kubernetes 1.19, essa funcionalidade está habilitada por padrão, com o kube-proxy lendo os [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) ao invés de Endpoints. Apesar de isso ser uma mudança praticamente transparente, resulta numa melhoria notável de escalabilidade em grandes clusters. Também permite a adição de novas funcionalidades em releases futuras do Kubernetes, como o [Roteamento baseado em topologia](/docs/concepts/services-networking/service-topology/).
|
||||
|
||||
## Limitações de escalabilidade da API de Endpoints
|
||||
Na API de Endpoints, existia apenas um recurso de Endpoint por serviço (Service). Isso significa que
|
||||
era necessário armazenar os endereços IP e as portas de cada Pod que compunha o serviço correspondente. Isso resultava em recursos de API imensos. Para piorar, o kube-proxy rodava em cada um dos nós e observava qualquer alteração nos recursos de Endpoints. Mesmo uma simples mudança em um endpoint exigia que todo o objeto fosse enviado para cada uma das instâncias do kube-proxy.
|
||||
|
||||
Outra limitação da API de Endpoints era o número de endpoints de rede que podiam ser associados a um _Service_. O limite de tamanho padrão para um objeto armazenado no etcd é de 1,5 MB. Em alguns casos, isso poderia limitar um recurso de Endpoints a cerca de 5.000 IPs de Pod. Isso não chega a ser um problema para a maioria dos usuários, mas torna-se um problema significativo para serviços que se aproximem desse tamanho.
|
||||
|
||||
Para demonstrar o quão significativo esse problema se torna em grande escala, vale um exemplo simples: imagine um _Service_ com 5.000 Pods, cujo recurso de Endpoints chegue a 1,5 MB. Se apenas um endpoint dessa lista sofrer uma alteração, todo o objeto de Endpoints precisará ser redistribuído para cada um dos nós do cluster. Em um cluster com 3.000 nós, essa atualização causará o envio de 4,5 GB de dados (1,5 MB de Endpoints * 3.000 nós) para todo o cluster. Isso é quase o suficiente para encher um DVD, e acontecerá a cada mudança de endpoint. Agora imagine uma atualização gradual em um _Deployment_ que resulte na substituição dos 5.000 Pods: são mais de 22 TB (ou 5.000 DVDs) de dados transferidos.
|
||||
|
||||
## Dividindo os endpoints com a API de EndpointSlice
|
||||
A API de EndpointSlice foi desenhada para resolver esse problema com um modelo similar ao de _sharding_. Ao invés de rastrear todos os IPs dos Pods de um _Service_ em um único recurso de Endpoints, nós os dividimos em múltiplos EndpointSlices menores.
|
||||
|
||||
Usemos por exemplo um serviço com 15 pods. Nós teríamos um único recurso de Endpoints referente a todos eles. Se o EndpointSlices for configurado para armazenar 5 _endpoints_ cada, nós teríamos 3 EndpointSlices diferentes:
|
||||

|
||||
|
||||
Por padrão, cada EndpointSlice armazena um máximo de 100 _endpoints_, podendo isso ser configurado com a flag `--max-endpoints-per-slice` no kube-controller-manager.
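
A título de ilustração, um pequeno esboço (assumindo um Service hipotético chamado `meu-servico`) de como listar os EndpointSlices gerados para um Service, por meio do rótulo `kubernetes.io/service-name`:

```shell
# Lista os EndpointSlices associados ao Service "meu-servico" (nome hipotético)
kubectl get endpointslices -l kubernetes.io/service-name=meu-servico
```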
|
||||
|
||||
## EndpointSlices provê uma melhoria de escalabilidade em 10x
|
||||
Essa API melhora dramaticamente a escalabilidade da rede. Agora quando um Pod é adicionado ou removido, apenas 1 pequeno EndpointSlice necessita ser atualizado. Essa diferença começa a ser notada quando centenas ou milhares de Pods compõem um único _Service_.
|
||||
|
||||
Mais significativo: agora que todos os IPs de Pods de um _Service_ não precisam mais ser armazenados em um único recurso, não precisamos nos preocupar com o limite de tamanho para objetos armazenados no etcd. EndpointSlices já foram utilizados para escalar um serviço para além de 100.000 endpoints de rede.
|
||||
|
||||
Tudo isso é possível com uma melhoria significativa de performance feita no kube-proxy. Quando o EndpointSlices é usado em grande escala, muito menos dados serão transferidos para as atualizações de endpoints e o kube-proxy torna-se mais rápido para atualizar regras do iptables ou do ipvs. Além disso, os _Services_ podem escalar agora para pelo menos 10x mais além dos limites anteriores.
|
||||
|
||||
## EndpointSlices permitem novas funcionalidades
|
||||
Introduzidos como uma funcionalidade alfa no Kubernetes v1.16, os EndpointSlices foram construídos para permitir algumas novas funcionalidades arrebatadoras em futuras versões do Kubernetes. Isso inclui serviços dual-stack, roteamento baseado em topologia e subconjuntos de _endpoints_.
|
||||
|
||||
Serviços dual-stack são uma nova funcionalidade que foi desenvolvida juntamente com os EndpointSlices. Eles irão utilizar simultaneamente endereços IPv4 e IPv6 para serviços, e dependem do campo addressType dos EndpointSlices para conter esses novos tipos de endereço por família de IP.
|
||||
|
||||
O roteamento baseado em topologia irá atualizar o kube-proxy para dar preferência, no roteamento de requisições, à mesma região ou zona, utilizando-se de campos de topologia armazenados em cada endpoint dentro de um EndpointSlice. Como uma melhoria futura disso, estamos explorando o potencial de subconjuntos de endpoints. Isso permitirá que o kube-proxy observe apenas um subconjunto de EndpointSlices. Por exemplo, isso pode ser combinado com o roteamento baseado em topologia e, assim, o kube-proxy precisará observar apenas EndpointSlices contendo _endpoints_ na mesma zona. Isso permitirá mais uma melhoria significativa de escalabilidade.
|
||||
|
||||
## O que isso significa para a API de Endpoints?
|
||||
Apesar da API de EndpointSlice prover uma alternativa nova e escalável à API de Endpoints, a API de Endpoints continuará a ser considerada uma funcionalidade estável. A mudança mais significativa para a API de Endpoints envolve começar a truncar Endpoints que podem causar problemas de escalabilidade.
|
||||
|
||||
A API de Endpoints não será removida, mas muitas novas funcionalidades irão depender da nova API EndpointSlice. Para obter vantagem da funcionalidade e escalabilidade que os EndpointSlices proveem, aplicações que hoje consomem a API de Endpoints devem considerar suportar EndpointSlices no futuro.
|
|
@ -0,0 +1,131 @@
|
|||
---
|
||||
title: Organizando o acesso ao cluster usando arquivos kubeconfig
|
||||
content_type: concept
|
||||
weight: 60
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Utilize arquivos kubeconfig para organizar informações sobre clusters, usuários, namespaces e mecanismos de autenticação. A ferramenta de linha de comando `kubectl` faz uso dos arquivos kubeconfig para encontrar as informações necessárias para escolher e se comunicar com o serviço de API de um cluster.
|
||||
|
||||
|
||||
{{< note >}}
|
||||
Um arquivo que é utilizado para configurar o acesso aos clusters é chamado de *kubeconfig*. Esta é uma forma genérica de se referir a um arquivo de configuração desta natureza. Isso não significa que exista um arquivo com o nome `kubeconfig`.
|
||||
{{< /note >}}
|
||||
|
||||
Por padrão, o `kubectl` procura por um arquivo de nome `config` no diretório `$HOME/.kube`.
|
||||
|
||||
Você pode especificar outros arquivos kubeconfig através da variável de ambiente `KUBECONFIG` ou adicionando a opção [`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/).
|
||||
|
||||
Para maiores detalhes na criação e especificação de um kubeconfig, veja o passo a passo em [Configurar Acesso para Múltiplos Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters).
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Suportando múltiplos clusters, usuários e mecanismos de autenticação
|
||||
|
||||
Imagine que você possua inúmeros clusters, e seus usuários e componentes se autenticam de várias formas. Por exemplo:
|
||||
|
||||
- Um kubelet ativo pode se autenticar utilizando certificados
|
||||
- Um usuário pode se autenticar através de tokens
|
||||
- Administradores podem possuir conjuntos de certificados que fornecem aos usuários de forma individual.
|
||||
|
||||
Através de arquivos kubeconfig, você pode organizar os seus clusters, usuários, e namespaces. Você também pode definir contextos para uma fácil troca entre clusters e namespaces.
|
||||
|
||||
|
||||
## Contexto
|
||||
|
||||
Um elemento de *contexto* em um kubeconfig é utilizado para agrupar parâmetros de acesso em um nome conveniente. Cada contexto possui três parâmetros: cluster, namespace, e usuário.
|
||||
|
||||
Por padrão, a ferramenta de linha de comando `kubectl` utiliza os parâmetros do _contexto atual_ para se comunicar com o cluster.
|
||||
|
||||
Para escolher o contexto atual:
|
||||
|
||||
```shell
|
||||
kubectl config use-context
|
||||
```
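
A título de ilustração, um esboço (assumindo um contexto hipotético chamado `dev-frontend`) de comandos relacionados à escolha do contexto atual:

```shell
# Lista os contextos disponíveis no kubeconfig
kubectl config get-contexts
# Define o contexto atual (nome hipotético)
kubectl config use-context dev-frontend
# Mostra o contexto atual
kubectl config current-context
```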
|
||||
|
||||
## A variável de ambiente KUBECONFIG
|
||||
|
||||
A variável de ambiente `KUBECONFIG` possui uma lista dos arquivos kubeconfig. Para Linux e Mac, esta lista é delimitada por dois-pontos. No Windows, a lista é delimitada por ponto e vírgula. A variável de ambiente `KUBECONFIG` não é um requisito obrigatório - caso ela não exista, o `kubectl` utilizará o arquivo kubeconfig padrão localizado no caminho `$HOME/.kube/config`.
|
||||
|
||||
Se a variável de ambiente `KUBECONFIG` existir, o `kubectl` utilizará uma configuração que é o resultado da combinação dos arquivos listados na variável de ambiente `KUBECONFIG`.
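
A título de ilustração, um esboço (com nomes de arquivos hipotéticos) de como definir a variável `KUBECONFIG` no Linux/Mac, usando dois-pontos como separador, e inspecionar a configuração combinada:

```shell
# Combina dois arquivos kubeconfig hipotéticos (separador ":" no Linux/Mac)
export KUBECONFIG=$HOME/.kube/config-dev:$HOME/.kube/config-prod
# Exibe a configuração resultante da combinação
kubectl config view
```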
|
||||
|
||||
## Combinando arquivos kubeconfig
|
||||
|
||||
Para inspecionar a sua configuração atual, execute o seguinte comando:
|
||||
|
||||
```shell
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
Como descrito anteriormente, a saída poderá ser resultado de um único arquivo kubeconfig, ou poderá ser o resultado da junção de vários arquivos kubeconfig.
|
||||
|
||||
Aqui estão as regras que o `kubectl` utiliza quando realiza a combinação de arquivos kubeconfig:
|
||||
|
||||
1. Se o argumento `--kubeconfig` estiver definido, apenas o arquivo especificado será utilizado. Apenas uma instância desta flag é permitida.
|
||||
|
||||
Caso contrário, se a variável de ambiente `KUBECONFIG` estiver definida, esta deverá ser utilizada como uma lista de arquivos a serem combinados, seguindo o fluxo a seguir:
|
||||
|
||||
* Ignorar arquivos vazios.
|
||||
* Produzir erros para arquivos cujo conteúdo não for possível desserializar.
|
||||
* O primeiro arquivo que definir um valor ou mapear uma chave determinada, será o escolhido.
|
||||
* Nunca modificar um valor ou mapear uma chave.
|
||||
Exemplo: Preservar o contexto do primeiro arquivo que definir `current-context`.
|
||||
Exemplo: Se dois arquivos especificarem um `red-user`, use apenas os valores do primeiro `red-user`. Mesmo se um segundo arquivo possuir entradas não conflitantes sobre a mesma entrada `red-user`, estas deverão ser descartadas.
|
||||
|
||||
Para um exemplo de definição da variável de ambiente `KUBECONFIG`, veja [Definindo a variável de ambiente KUBECONFIG](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable).
|
||||
|
||||
Caso contrário, utilize o arquivo kubeconfig padrão encontrado no diretório `$HOME/.kube/config`, sem qualquer tipo de combinação.
|
||||
|
||||
1. Determine o contexto a ser utilizado baseado no primeiro padrão encontrado, nesta ordem:
|
||||
|
||||
1. Usar o conteúdo da flag `--context` caso ela existir.
|
||||
1. Usar o `current-context` a partir da combinação dos arquivos kubeconfig.
|
||||
|
||||
|
||||
Um contexto vazio é permitido neste momento.
|
||||
|
||||
|
||||
1. Determinar o cluster e o usuário. Neste ponto, poderá ou não existir um contexto.
|
||||
Determinar o cluster e o usuário no primeiro padrão encontrado, de acordo com a ordem a seguir. Este procedimento deverá ser executado duas vezes: uma para definir o usuário e outra para definir o cluster.
|
||||
|
||||
1. Utilizar a flag caso ela existir: `--user` ou `--cluster`.
|
||||
1. Se o contexto não estiver vazio, utilizar o cluster ou usuário deste contexto.
|
||||
|
||||
O usuário e o cluster poderão estar vazios neste ponto.
|
||||
|
||||
1. Determinar as informações do cluster atual a serem utilizadas. Neste ponto, poderá ou não existir informações de um cluster.
|
||||
|
||||
Construir cada peça de informação do cluster baseado nas opções a seguir; a primeira ocorrência encontrada será a opção vencedora:
|
||||
|
||||
1. Usar as flags de linha de comando caso existirem: `--server`, `--certificate-authority`, `--insecure-skip-tls-verify`.
|
||||
1. Se algum atributo do cluster existir a partir da combinação de kubeconfigs, estes deverão ser utilizados.
|
||||
1. Se não existir informação de localização do servidor, falhar.
|
||||
|
||||
1. Determinar a informação atual de usuário a ser utilizada. Construir a informação de usuário utilizando as mesmas regras utilizadas para o caso de informações de cluster, exceto para a regra de técnica de autenticação que deverá ser única por usuário:
|
||||
|
||||
1. Usar as flags, caso existirem: `--client-certificate`, `--client-key`, `--username`, `--password`, `--token`.
|
||||
1. Usar os campos `user` resultado da combinação de arquivos kubeconfig.
|
||||
1. Se existirem duas técnicas conflitantes, falhar.
|
||||
|
||||
1. Para qualquer informação que ainda estiver ausente, utilizar os valores padrão e potencialmente solicitar informações de autenticação a partir do prompt de comando.
|
||||
|
||||
|
||||
## Referências de arquivos
|
||||
|
||||
Arquivos e caminhos referenciados em um arquivo kubeconfig são relativos à localização do arquivo kubeconfig.
|
||||
|
||||
Referências de arquivos na linha de comando são relativas ao diretório de trabalho vigente.
|
||||
|
||||
No arquivo `$HOME/.kube/config`, caminhos relativos são armazenados de forma relativa, e caminhos absolutos são armazenados de forma absoluta.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* [Configurar Acesso para Múltiplos Clusters](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
|
||||
* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)
|
||||
|
||||
|
||||
|
||||
|
|
@ -21,7 +21,7 @@ Antes de iniciar um tutorial, é interessante que vocẽ salve a página de [Glo
|
|||
|
||||
* [Introdução ao Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) é um curso gratuito da edX que te guia no entendimento do Kubernetes, seus conceitos, bem como na execução de tarefas mais simples.
|
||||
|
||||
* [Hello Minikube](/docs/tutorials/hello-minikube/) é um "Hello World" que te permite testar rapidamente o Kubernetes em sua estação com o uso do Minikube
|
||||
* [Olá, Minikube!](/pt/docs/tutorials/hello-minikube/) é um "Hello World" que te permite testar rapidamente o Kubernetes em sua estação com o uso do Minikube
|
||||
|
||||
## Configuração
|
||||
|
||||
|
|
|
@ -54,25 +54,25 @@ card:
|
|||
<div class="row">
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
|
||||
<a href="/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Criar um cluster Kubernetes</h5></a>
|
||||
<a href="/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Criar um cluster Kubernetes</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
|
||||
<a href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Implantar um aplicativo</h5></a>
|
||||
<a href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Implantar um aplicativo</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="thumbnail">
|
||||
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
|
||||
<a href="/pt/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Explore seu aplicativo</h5></a>
|
||||
<a href="/pt/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Explore seu aplicativo</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
|
|
@ -25,7 +25,7 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">Continue para o Módulo 2<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">Continue para o Módulo 2<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -100,7 +100,7 @@ weight: 10
|
|||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" role="button">Iniciar tutorial interativo <span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" role="button">Iniciar tutorial interativo <span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -37,7 +37,7 @@ weight: 20
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">Continue para o Módulo 3<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">Continue para o Módulo 3<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -93,7 +93,7 @@ weight: 10
|
|||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker.(Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do <a href="/docs/tutorials/hello-minikube/">tutorial Hello Minikube</a>).
|
||||
Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker. (Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do <a href="/pt/docs/tutorials/hello-minikube/">tutorial Olá, Minikube!</a>).
|
||||
<p>
|
||||
|
||||
<p>Agora que você sabe o que são implantações (Deployment), vamos para o tutorial online e implantar nosso primeiro aplicativo!</p>
|
||||
|
@ -103,7 +103,7 @@ weight: 10
|
|||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" role="button">Iniciar tutorial interativo<span class="btn__next">›</span></a>
|
||||
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" role="button">Iniciar tutorial interativo<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: Explore seu aplicativo
|
||||
weight: 30
|
||||
---
|
|
@ -0,0 +1,41 @@
|
|||
---
|
||||
title: Tutorial Interativo - Explorando seu aplicativo
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<br>
|
||||
<div class="katacoda">
|
||||
|
||||
<div class="katacoda__alert">
|
||||
Para interagir com o Terminal, por favor, use a versão para desktop ou tablet.
|
||||
</div>
|
||||
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/expose/expose-intro/" role="button">Continue para o Módulo 4<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,143 @@
|
|||
---
|
||||
title: Visualizando Pods e Nós (Nodes)
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Objetivos</h3>
|
||||
<ul>
|
||||
<li>Aprenda sobre Pods do Kubernetes.</li>
|
||||
<li>Aprenda sobre Nós do Kubernetes.</li>
|
||||
<li>Solucionar problemas de aplicativos implantados no Kubernetes.</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h2>Kubernetes Pods</h2>
|
||||
<p>Quando você criou um Deployment no Módulo <a href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>, o Kubernetes criou um <b>Pod</b> para hospedar a instância do seu aplicativo. Um Pod é uma abstração do Kubernetes que representa um grupo de um ou mais contêineres de aplicativos (como Docker) e alguns recursos compartilhados para esses contêineres. Esses recursos incluem:</p>
|
||||
<ul>
|
||||
<li>Armazenamento compartilhado, como Volumes</li>
|
||||
<li>Rede, como um endereço IP único no cluster</li>
|
||||
<li>Informações sobre como executar cada contêiner, como a versão da imagem do contêiner ou portas específicas a serem usadas</li>
|
||||
</ul>
|
||||
<p>Um Pod define um "host lógico" específico para o aplicativo e pode conter diferentes contêineres que, na maioria dos casos, são fortemente acoplados. Por exemplo, um Pod pode incluir o contêiner com seu aplicativo Node.js, bem como um outro contêiner que alimenta os dados a serem publicados pelo servidor web Node.js. Os contêineres de um Pod compartilham um endereço IP e intervalo de portas; são sempre localizados, programados e executam em um contexto compartilhado no mesmo Nó.</p>
|
||||
|
||||
<p>Pods são a unidade atômica na plataforma Kubernetes. Quando criamos um Deployment no Kubernetes, esse Deployment cria Pods com contêineres dentro dele (em vez de você criar contêineres diretamente). Cada Pod está vinculado ao nó onde está programado (scheduled) e lá permanece até o encerramento (de acordo com a política de reinicialização) ou exclusão. Em caso de falha do nó, Pods idênticos são programados em outros nós disponíveis no cluster.</p>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>Sumário:</h3>
|
||||
<ul>
|
||||
<li>Pods</li>
|
||||
<li>Nós (Nodes)</li>
|
||||
<li>Principais comandos do Kubectl</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
Um Pod é um grupo de um ou mais contêineres de aplicativos (como Docker) que inclui armazenamento compartilhado (volumes), endereço IP e informações sobre como executá-los.
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Visão geral sobre os Pods</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_pods.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2>Nós (Nodes)</h2>
|
||||
<p>Um Pod sempre será executado em um <b>Nó</b>. Um Nó é uma máquina de processamento em um cluster Kubernetes e pode ser uma máquina física ou virtual. Cada Nó é gerenciado pelo Control Plane. Um Nó pode possuir múltiplos Pods e o Control Plane do Kubernetes gerencia automaticamente o agendamento dos Pods nos nós do cluster. Para o agendamento automático dos Pods, o Control Plane leva em consideração os recursos disponíveis em cada Nó.</p>
|
||||
|
||||
<p>Cada Nó do Kubernetes executa pelo menos:</p>
|
||||
<ul>
|
||||
<li>Kubelet, o processo responsável pela comunicação entre o Control Plane e o Nó; gerencia os Pods e os contêineres rodando em uma máquina.</li>
|
||||
<li>Um runtime de contêiner (por exemplo o Docker) é responsável por baixar a imagem do contêiner de um registro de imagens (por exemplo o <a href="https://hub.docker.com/">Docker Hub</a>), extrair o contêiner e executar a aplicação.</li>
|
||||
</ul>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>Os contêineres só devem ser agendados juntos em um único Pod se estiverem fortemente acoplados e precisarem compartilhar recursos, como disco e IP.</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Visão Geral sobre os Nós</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2>Solucionar problemas usando o comando kubectl</h2>
|
||||
<p>No Módulo <a href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/">2</a>, você usou o comando Kubectl. Você pode continuar utilizando o Kubectl no Módulo 3 para obter informações sobre o Deployment realizado e seus recursos. As operações mais comuns podem ser realizadas com os comandos abaixo:</p>
|
||||
<ul>
|
||||
<li><b>kubectl get</b> - listar recursos</li>
|
||||
<li><b>kubectl describe</b> - mostrar informações detalhadas sobre um recurso</li>
|
||||
<li><b>kubectl logs</b> - mostrar os logs de um container em um Pod</li>
|
||||
<li><b>kubectl exec</b> - executar um comando em um contêiner em um Pod</li>
|
||||
</ul>
|
||||
|
||||
<p>Você pode usar esses comandos para verificar quando o Deployment foi realizado, qual o seu status atual, onde os Pods estão rodando e quais são as suas configurações.</p>
|
||||
|
||||
<p>Agora que sabemos mais sobre os componentes de um cluster Kubernetes e o comando kubectl, vamos explorar a nossa aplicação.</p>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>Um nó é uma máquina operária do Kubernetes e pode ser uma VM ou máquina física, dependendo do cluster. Vários Pods podem ser executados em um nó.</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/explore/explore-interactive/" role="button">Iniciar o Tutorial Interativo<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -186,6 +186,9 @@ kubectl get pods --show-labels
|
|||
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
|
||||
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
|
||||
|
||||
# Вывод декодированных секретов без внешних инструментов
|
||||
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
|
||||
|
||||
# Вывести все секреты, используемые сейчас в поде.
|
||||
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
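# Дополнительный набросок (имена my-secret и ключа password условные):
# альтернативный способ декодировать одно значение секрета с помощью base64
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode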
|
||||
|
||||
|
|
|
@ -0,0 +1,316 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "弃用 Dockershim 的常见问题"
|
||||
date: 2020-12-02
|
||||
slug: dockershim-faq
|
||||
aliases: [ '/dockershim' ]
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Dockershim Deprecation FAQ"
|
||||
date: 2020-12-02
|
||||
slug: dockershim-faq
|
||||
aliases: [ '/dockershim' ]
|
||||
-->
|
||||
|
||||
<!--
|
||||
This document goes over some frequently asked questions regarding the Dockershim
|
||||
deprecation announced as a part of the Kubernetes v1.20 release. For more detail
|
||||
on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
|
||||
what that means, check out the blog post
|
||||
[Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).
|
||||
-->
|
||||
本文回顾了自 Kubernetes v1.20 版宣布弃用 Dockershim 以来所引发的一些常见问题。
|
||||
关于在 Kubernetes 的 kubelet 中弃用 Docker 作为容器运行时的细节,以及这些细节背后的含义,请参考博文
|
||||
[别慌: Kubernetes 和 Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/)
|
||||
|
||||
<!--
|
||||
### Why is dockershim being deprecated?
|
||||
-->
|
||||
### 为什么弃用 dockershim {#why-is-dockershim-being-deprecated}
|
||||
|
||||
<!--
|
||||
Maintaining dockershim has become a heavy burden on the Kubernetes maintainers.
|
||||
The CRI standard was created to reduce this burden and allow smooth interoperability
|
||||
of different container runtimes. Docker itself doesn't currently implement CRI,
|
||||
thus the problem.
|
||||
-->
|
||||
维护 dockershim 已经成为 Kubernetes 维护者肩头一个沉重的负担。
|
||||
创建 CRI 标准就是为了减轻这个负担,同时也可以增加不同容器运行时之间平滑的互操作性。
|
||||
但反观 Docker 却至今也没有实现 CRI,所以麻烦就来了。
|
||||
|
||||
<!--
|
||||
Dockershim was always intended to be a temporary solution (hence the name: shim).
|
||||
You can read more about the community discussion and planning in the
|
||||
[Dockershim Removal Kubernetes Enhancement Proposal][drkep].
|
||||
-->
|
||||
Dockershim 向来都是一个临时解决方案(因此得名:shim)。
|
||||
你可以进一步阅读
|
||||
[移除 Kubernetes 增强方案 Dockershim](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1985-remove-dockershim)
|
||||
以了解相关的社区讨论和计划。
|
||||
|
||||
<!--
|
||||
Additionally, features that were largely incompatible with the dockershim, such
|
||||
as cgroups v2 and user namespaces are being implemented in these newer CRI
|
||||
runtimes. Removing support for the dockershim will allow further development in
|
||||
those areas.
|
||||
-->
|
||||
此外,与 dockershim 不兼容的一些特性,例如:控制组(cgroups)v2 和用户名字空间(user namespace),已经在新的 CRI 运行时中被实现。
|
||||
移除对 dockershim 的支持将加速这些领域的发展。
|
||||
|
||||
<!--
|
||||
### Can I still use Docker in Kubernetes 1.20?
|
||||
-->
|
||||
### 在 Kubernetes 1.20 版本中,我还可以用 Docker 吗? {#can-I-still-use-docker-in-kubernetes-1.20}
|
||||
|
||||
<!--
|
||||
Yes, the only thing changing in 1.20 is a single warning log printed at [kubelet]
|
||||
startup if using Docker as the runtime.
|
||||
-->
|
||||
当然可以,在 1.20 版本中仅有的改变就是:如果使用 Docker 运行时,启动
|
||||
[kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/)
|
||||
的过程中将打印一条警告日志。
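
作为参考,下面是一个补充性的小示例(非正式步骤):通过 `kubectl get nodes -o wide` 输出中的 CONTAINER-RUNTIME 列,可以确认各节点当前使用的容器运行时:

```shell
# 查看每个节点所用的容器运行时(CONTAINER-RUNTIME 列)
kubectl get nodes -o wide
```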
|
||||
|
||||
<!--
|
||||
### When will dockershim be removed?
|
||||
-->
|
||||
### 什么时候移除 dockershim {#when-will-dockershim-be-removed}
|
||||
|
||||
<!--
|
||||
Given the impact of this change, we are using an extended deprecation timeline.
|
||||
It will not be removed before Kubernetes 1.22, meaning the earliest release without
|
||||
dockershim would be 1.23 in late 2021. We will be working closely with vendors
|
||||
and other ecosystem groups to ensure a smooth transition and will evaluate things
|
||||
as the situation evolves.
|
||||
-->
|
||||
考虑到此改变带来的影响,我们使用了一个加长的废弃时间表。
|
||||
在 Kubernetes 1.22 版之前,它不会被彻底移除;换句话说,dockershim 被移除的最早版本会是 2021 年底发布 1.23 版。
|
||||
我们将与供应商以及其他生态团队紧密合作,确保顺利过渡,并将依据事态的发展评估后续事项。
|
||||
|
||||
<!--
|
||||
### Will my existing Docker images still work?
|
||||
-->
|
||||
### 我现有的 Docker 镜像还能正常工作吗? {#will-my-existing-docker-image-still-work}
|
||||
|
||||
<!--
|
||||
Yes, the images produced from `docker build` will work with all CRI implementations.
|
||||
All your existing images will still work exactly the same.
|
||||
-->
|
||||
当然可以,`docker build` 创建的镜像适用于任何 CRI 实现。
|
||||
所有你的现有镜像将和往常一样工作。
|
||||
|
||||
<!--
|
||||
### What about private images?
|
||||
-->
|
||||
### 私有镜像呢?{#what-about-private-images}
|
||||
|
||||
<!--
|
||||
Yes. All CRI runtimes support the same pull secrets configuration used in
|
||||
Kubernetes, either via the PodSpec or ServiceAccount.
|
||||
-->
|
||||
当然可以。所有 CRI 运行时均支持 Kubernetes 中相同的拉取(pull)Secret 配置,
|
||||
不管是通过 PodSpec 还是通过 ServiceAccount 均可。
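
作为参考,下面是一个示意性示例(其中 regcred、仓库地址和凭据均为占位符),这种创建拉取 Secret 的方式与所使用的容器运行时无关:

```shell
# 创建一个 docker-registry 类型的拉取 Secret(示例中的参数均为占位符)
kubectl create secret docker-registry regcred \
  --docker-server=<registry-server> \
  --docker-username=<user> \
  --docker-password=<password>
```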
|
||||
|
||||
<!--
|
||||
### Are Docker and containers the same thing?
|
||||
-->
|
||||
### Docker 和容器是一回事吗? {#are-docker-and-containers-the-same-thing}
|
||||
|
||||
<!--
|
||||
Docker popularized the Linux containers pattern and has been instrumental in
|
||||
developing the underlying technology, however containers in Linux have existed
|
||||
for a long time. The container ecosystem has grown to be much broader than just
|
||||
Docker. Standards like OCI and CRI have helped many tools grow and thrive in our
|
||||
ecosystem, some replacing aspects of Docker while others enhance existing
|
||||
functionality.
|
||||
-->
|
||||
虽然 Linux 的容器技术已经存在了很久,
|
||||
但 Docker 普及了 Linux 容器这种技术模式,并在开发底层技术方面发挥了重要作用。
|
||||
容器的生态相比于单纯的 Docker,已经进化到了一个更宽广的领域。
|
||||
像 OCI 和 CRI 这类标准帮助许多工具在我们的生态中成长和繁荣,
|
||||
其中一些工具替代了 Docker 的某些部分,另一些增强了现有功能。
|
||||
|
||||
<!--
|
||||
### Are there examples of folks using other runtimes in production today?
|
||||
-->
|
||||
### 现在是否有在生产系统中使用其他运行时的例子? {#are-there-example-of-folks-using-other-runtimes-in-production-today}
|
||||
|
||||
<!--
|
||||
All Kubernetes project produced artifacts (Kubernetes binaries) are validated
|
||||
with each release.
|
||||
-->
|
||||
Kubernetes 项目生成的所有工件(Kubernetes 二进制文件)在每个发布版本中都经过了验证。
|
||||
|
||||
<!--
|
||||
Additionally, the [kind] project has been using containerd for some time and has
|
||||
seen an improvement in stability for its use case. Kind and containerd are leveraged
|
||||
multiple times every day to validate any changes to the Kubernetes codebase. Other
|
||||
related projects follow a similar pattern as well, demonstrating the stability and
|
||||
usability of other container runtimes. As an example, OpenShift 4.x has been
|
||||
using the [CRI-O] runtime in production since June 2019.
|
||||
-->
|
||||
此外,[kind](https://kind.sigs.k8s.io/) 项目使用 containerd 已经有年头了,
|
||||
并且在这个场景中,稳定性还明显得到提升。
|
||||
Kind 和 containerd 每天都会被多次使用,以验证对 Kubernetes 代码库的所有更改。
|
||||
其他相关项目也遵循同样的模式,从而展示了其他容器运行时的稳定性和可用性。
|
||||
例如,OpenShift 4.x 从 2019 年 6 月以来,就一直在生产环境中使用 [CRI-O](https://cri-o.io/) 运行时。
|
||||
|
||||
<!--
|
||||
For other examples and references you can look at the adopters of containerd and
|
||||
CRI-O, two container runtimes under the Cloud Native Computing Foundation ([CNCF]).
|
||||
- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md)
|
||||
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md)
|
||||
-->
|
||||
至于其他示例和参考资料,你可以查看 containerd 和 CRI-O 的使用者列表,
|
||||
这两个容器运行时是云原生基金会(CNCF)下的项目。
|
||||
|
||||
- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md)
|
||||
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md)
|
||||
|
||||
<!--
|
||||
### People keep referencing OCI, what is that?
|
||||
-->
|
||||
### 人们总在谈论 OCI,那是什么? {#people-keep-referenceing-oci-what-is-that}
|
||||
|
||||
<!--
|
||||
OCI stands for the [Open Container Initiative], which standardized many of the
|
||||
interfaces between container tools and technologies. They maintain a standard
|
||||
specification for packaging container images (OCI image-spec) and running containers
|
||||
(OCI runtime-spec). They also maintain an actual implementation of the runtime-spec
|
||||
in the form of [runc], which is the underlying default runtime for both
|
||||
[containerd] and [CRI-O]. The CRI builds on these low-level specifications to
|
||||
provide an end-to-end standard for managing containers.
|
||||
-->
|
||||
OCI 代表[开放容器标准](https://opencontainers.org/about/overview/),
|
||||
它标准化了容器工具和底层实现(technologies)之间的大量接口。
|
||||
他们维护了打包容器镜像(OCI image-spec)和运行容器(OCI runtime-spec)的标准规范。
|
||||
他们还以 [runc](https://github.com/opencontainers/runc)
|
||||
的形式维护了一个 runtime-spec 的真实实现,
|
||||
这也是 [containerd](https://containerd.io/) 和 [CRI-O](https://cri-o.io/) 依赖的默认运行时。
|
||||
CRI 建立在这些底层规范之上,为管理容器提供端到端的标准。
|
||||
|
||||
<!--
|
||||
### Which CRI implementation should I use?
|
||||
-->
|
||||
### 我应该用哪个 CRI 实现? {#which-cri-implementation-should-I-use}
|
||||
|
||||
<!--
|
||||
That’s a complex question and it depends on a lot of factors. If Docker is
|
||||
working for you, moving to containerd should be a relatively easy swap and
|
||||
will have strictly better performance and less overhead. However, we encourage you
|
||||
to explore all the options from the [CNCF landscape] in case another would be an
|
||||
even better fit for your environment.
|
||||
-->
|
||||
这是一个复杂的问题,依赖于许多因素。
|
||||
在 Docker 工作良好的情况下,迁移到 containerd 是一个相对容易的转换,并将获得更好的性能和更少的开销。
|
||||
然而,我们建议你先探索 [CNCF 全景图](https://landscape.cncf.io/category=container-runtime&format=card-mode&grouping=category)
|
||||
提供的所有选项,以做出更适合你的环境的选择。
|
||||
|
||||
<!--
|
||||
### What should I look out for when changing CRI implementations?
|
||||
-->
|
||||
### 当切换 CRI 底层实现时,我应该注意什么? {#what-should-I-look-out-for-when-changing-CRI-implementation}
|
||||
|
||||
<!--
|
||||
While the underlying containerization code is the same between Docker and most
|
||||
CRIs (including containerd), there are a few differences around the edges. Some
|
||||
common things to consider when migrating are:
|
||||
-->
|
||||
Docker 和大多数 CRI(包括 containerd)的底层容器化代码是相同的,但其周边部分却存在一些不同。
|
||||
迁移时一些常见的关注点是:
|
||||
|
||||
<!--
|
||||
- Logging configuration
|
||||
- Runtime resource limitations
|
||||
- Node provisioning scripts that call docker or use docker via its control socket
|
||||
- Kubectl plugins that require docker CLI or the control socket
|
||||
- Kubernetes tools that require direct access to Docker (e.g. kube-imagepuller)
|
||||
- Configuration of functionality like `registry-mirrors` and insecure registries
|
||||
- Other support scripts or daemons that expect Docker to be available and are run
|
||||
outside of Kubernetes (e.g. monitoring or security agents)
|
||||
- GPUs or special hardware and how they integrate with your runtime and Kubernetes
|
||||
-->
|
||||
|
||||
- 日志配置
|
||||
- 运行时的资源限制
|
||||
- 直接访问 docker 命令或通过控制套接字调用 Docker 的节点供应脚本
|
||||
- 需要访问 docker 命令或控制套接字的 kubectl 插件
|
||||
- 需要直接访问 Docker 的 Kubernetes 工具(例如:kube-imagepuller)
|
||||
- 像 `registry-mirrors` 和不安全的注册表这类功能的配置
|
||||
- 需要 Docker 保持可用、且运行在 Kubernetes 之外的,其他支持脚本或守护进程(例如:监视或安全代理)
|
||||
- GPU 或特殊硬件,以及它们如何与你的运行时和 Kubernetes 集成
|
||||
|
||||
<!--
|
||||
If you use Kubernetes resource requests/limits or file-based log collection
|
||||
DaemonSets then they will continue to work the same, but if you’ve customized
|
||||
your dockerd configuration, you’ll need to adapt that for your new container
|
||||
runtime where possible.
|
||||
-->
|
||||
如果你只是用了 Kubernetes 资源请求/限制或基于文件的日志收集 DaemonSet,它们将继续稳定工作,
|
||||
但是如果你自定义了 dockerd 配置,则可能需要为新的容器运行时做一些适配工作。
|
||||
|
||||
<!--
|
||||
Another thing to look out for is anything expecting to run for system maintenance
|
||||
or nested inside a container when building images will no longer work. For the
|
||||
former, you can use the [`crictl`][cr] tool as a drop-in replacement (see [mapping from docker cli to crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) and for the
|
||||
latter you can use newer container build options like [img], [buildah],
|
||||
[kaniko], or [buildkit-cli-for-kubectl] that don’t require Docker.
|
||||
-->
|
||||
另外还有一个需要关注的点:那些预期以系统维护方式运行、或在构建镜像时嵌套在容器内运行的任务,将无法再工作。
|
||||
对于前者,可以用 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) 工具作为临时替代方案
|
||||
(参见 [从 docker 命令映射到 crictl](https://kubernetes.io/zh/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl));
|
||||
对于后者,可以用新的容器创建选项,比如
|
||||
[img](https://github.com/genuinetools/img)、
|
||||
[buildah](https://github.com/containers/buildah)、
|
||||
[kaniko](https://github.com/GoogleContainerTools/kaniko)、或
|
||||
[buildkit-cli-for-kubectl](https://github.com/vmware-tanzu/buildkit-cli-for-kubectl),
它们均不需要 Docker。
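下面给出几条常见的 docker 命令与 crictl 命令的简单对照(仅作示意,假设节点上已安装并配置好 crictl;`<容器 ID>` 为占位符):

```shell
# docker ps  -> 列出运行中的容器
crictl ps

# docker images  -> 列出本地镜像
crictl images

# docker logs <容器 ID>  -> 查看容器日志
crictl logs <容器 ID>

# docker exec -it <容器 ID> sh  -> 在容器内执行命令
crictl exec -it <容器 ID> sh
```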
|
||||
|
||||
<!--
|
||||
For containerd, you can start with their [documentation] to see what configuration
|
||||
options are available as you migrate things over.
|
||||
-->
|
||||
对于 containerd,你可以从它们的
|
||||
[文档](https://github.com/containerd/cri/blob/master/docs/registry.md)
|
||||
开始,看看在迁移过程中有哪些配置选项可用。
|
||||
|
||||
<!--
|
||||
For instructions on how to use containerd and CRI-O with Kubernetes, see the
|
||||
Kubernetes documentation on [Container Runtimes]
|
||||
-->
|
||||
对于如何协同 Kubernetes 使用 containerd 和 CRI-O 的说明,参见 Kubernetes 文档中这部分:
|
||||
[容器运行时](/zh/docs/setup/production-environment/container-runtimes)。
|
||||
|
||||
<!--
|
||||
### What if I have more questions?
|
||||
-->
|
||||
### 我还有问题怎么办?{#what-if-I-have-more-question}
|
||||
|
||||
<!--
|
||||
If you use a vendor-supported Kubernetes distribution, you can ask them about
|
||||
upgrade plans for their products. For end-user questions, please post them
|
||||
to our end user community forum: https://discuss.kubernetes.io/.
|
||||
-->
|
||||
如果你使用了一个有供应商支持的 Kubernetes 发行版,你可以咨询供应商他们产品的升级计划。
|
||||
对于最终用户的问题,请把问题发到我们的最终用户社区的论坛:https://discuss.kubernetes.io/。
|
||||
|
||||
<!--
|
||||
You can also check out the excellent blog post
|
||||
[Wait, Docker is deprecated in Kubernetes now?][dep] a more in-depth technical
|
||||
discussion of the changes.
|
||||
-->
|
||||
你也可以看看这篇优秀的博文:
|
||||
[等等,Docker 刚刚被 Kubernetes 废掉了?](https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m)
|
||||
这篇文章对此次变化做了更深入的技术讨论。
|
||||
|
||||
<!--
|
||||
### Can I have a hug?
|
||||
-->
|
||||
### 可以来个拥抱吗?{#can-I-have-a-hug}
|
||||
|
||||
<!--
|
||||
Always and whenever you want! 🤗🤗
|
||||
-->
|
||||
只要你愿意,随时随地都可以!🤗🤗
|
||||
|
|
@ -0,0 +1,210 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "别慌: Kubernetes 和 Docker"
|
||||
date: 2020-12-02
|
||||
slug: dont-panic-kubernetes-and-docker
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Don't Panic: Kubernetes and Docker"
|
||||
date: 2020-12-02
|
||||
slug: dont-panic-kubernetes-and-docker
|
||||
-->
|
||||
|
||||
**作者:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas
|
||||
|
||||
<!--
|
||||
Kubernetes is [deprecating
|
||||
Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)
|
||||
as a container runtime after v1.20.
|
||||
-->
|
||||
Kubernetes 从版本 v1.20 之后,[弃用 Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation)
|
||||
这个容器运行时。
|
||||
|
||||
<!--
|
||||
**You do not need to panic. It’s not as dramatic as it sounds.**
|
||||
-->
|
||||
**不必慌张,这件事并没有听起来那么吓人。**
|
||||
|
||||
<!--
|
||||
TL;DR Docker as an underlying runtime is being deprecated in favor of runtimes
|
||||
that use the [Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
|
||||
created for Kubernetes. Docker-produced images will continue to work in your
|
||||
cluster with all runtimes, as they always have.
|
||||
-->
|
||||
简而言之(TL;DR):弃用 Docker 这个底层运行时,转而支持符合为 Kubernetes 创建的
|
||||
[Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
|
||||
的运行时。
|
||||
Docker 构建的镜像,将在你的集群的所有运行时中继续工作,一如既往。
|
||||
|
||||
<!--
|
||||
If you’re an end-user of Kubernetes, not a whole lot will be changing for you.
|
||||
This doesn’t mean the death of Docker, and it doesn’t mean you can’t, or
|
||||
shouldn’t, use Docker as a development tool anymore. Docker is still a useful
|
||||
tool for building containers, and the images that result from running `docker
|
||||
build` can still run in your Kubernetes cluster.
|
||||
-->
|
||||
如果你是 Kubernetes 的终端用户,这对你不会有太大影响。
|
||||
这并不意味着 Docker 已死,也不意味着你不能或不该继续把 Docker 用作开发工具。
|
||||
Docker 仍然是构建容器的利器,使用命令 `docker build` 构建的镜像在 Kubernetes 集群中仍然可以运行。
|
||||
|
||||
<!--
|
||||
If you’re using a managed Kubernetes service like GKE, EKS, or AKS (which [defaults to containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16)) you will need to
|
||||
make sure your worker nodes are using a supported container runtime before
|
||||
Docker support is removed in a future version of Kubernetes. If you have node
|
||||
customizations you may need to update them based on your environment and runtime
|
||||
requirements. Please work with your service provider to ensure proper upgrade
|
||||
testing and planning.
|
||||
-->
|
||||
如果你正在使用 GKE、EKS、或 AKS
|
||||
([默认使用 containerd](https://github.com/Azure/AKS/releases/tag/2020-11-16))
|
||||
这类托管 Kubernetes 服务,你需要在 Kubernetes 后续版本移除对 Docker 支持之前,
|
||||
确认工作节点使用了被支持的容器运行时。
|
||||
如果你的节点被定制过,你可能需要根据你自己的环境和运行时需求更新它们。
|
||||
请与你的服务供应商协作,确保做出适当的升级测试和计划。
|
||||
|
||||
<!--
|
||||
If you’re rolling your own clusters, you will also need to make changes to avoid
|
||||
your clusters breaking. At v1.20, you will get a deprecation warning for Docker.
|
||||
When Docker runtime support is removed in a future release (currently planned
|
||||
for the 1.22 release in late 2021) of Kubernetes it will no longer be supported
|
||||
and you will need to switch to one of the other compliant container runtimes,
|
||||
like containerd or CRI-O. Just make sure that the runtime you choose supports
|
||||
the docker daemon configurations you currently use (e.g. logging).
|
||||
-->
|
||||
如果你正在运营你自己的集群,那还应该做些工作,以避免集群中断。
|
||||
在 v1.20 版中,你仅会得到一个 Docker 的弃用警告。
|
||||
当对 Docker 运行时的支持在 Kubernetes 某个后续发行版(目前的计划是 2021 年晚些时候的 1.22 版)中被移除时,
|
||||
你需要切换到 containerd 或 CRI-O 等兼容的容器运行时。
|
||||
只要确保你选择的运行时支持你当前使用的 Docker 守护进程配置(例如 logging)。
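作为一个简单的检查手段(仅作示意),你可以用下面的命令查看每个节点当前报告的容器运行时及其版本:

```shell
kubectl get nodes -o wide
# 输出中的 CONTAINER-RUNTIME 列会显示类似 docker://19.3.x 或 containerd://1.4.x 的信息
```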
|
||||
|
||||
<!--
|
||||
## So why the confusion and what is everyone freaking out about?
|
||||
-->
|
||||
## 那为什么会有这样的困惑,为什么每个人要害怕呢?{#so-why-the-confusion-and-what-is-everyone-freaking-out-about}
|
||||
|
||||
<!--
|
||||
We’re talking about two different environments here, and that’s creating
|
||||
confusion. Inside of your Kubernetes cluster, there’s a thing called a container
|
||||
runtime that’s responsible for pulling and running your container images. Docker
|
||||
is a popular choice for that runtime (other common options include containerd
|
||||
and CRI-O), but Docker was not designed to be embedded inside Kubernetes, and
|
||||
that causes a problem.
|
||||
-->
|
||||
我们在这里讨论的是两套不同的环境,这就是造成困惑的根源。
|
||||
在你的 Kubernetes 集群中,有一个叫做容器运行时的东西,它负责拉取并运行容器镜像。
|
||||
Docker 对于运行时来说是一个流行的选择(其他常见的选择包括 containerd 和 CRI-O),
|
||||
但 Docker 并非设计用来嵌入到 Kubernetes,这就是问题所在。
|
||||
|
||||
<!--
|
||||
You see, the thing we call “Docker” isn’t actually one thing—it’s an entire
|
||||
tech stack, and one part of it is a thing called “containerd,” which is a
|
||||
high-level container runtime by itself. Docker is cool and useful because it has
|
||||
a lot of UX enhancements that make it really easy for humans to interact with
|
||||
while we’re doing development work, but those UX enhancements aren’t necessary
|
||||
for Kubernetes, because it isn’t a human.
|
||||
-->
|
||||
你看,我们称之为 “Docker” 的物件实际上并不是一个物件——它是一个完整的技术堆栈,
|
||||
它其中一个叫做 “containerd” 的部件本身,才是一个高级容器运行时。
|
||||
Docker 既酷炫又实用,因为它提供了很多用户体验增强功能,而这简化了我们做开发工作时的操作,
|
||||
但 Kubernetes 用不到这些增强的用户体验,毕竟它并非人类。
|
||||
|
||||
<!--
|
||||
As a result of this human-friendly abstraction layer, your Kubernetes cluster
|
||||
has to use another tool called Dockershim to get at what it really needs, which
|
||||
is containerd. That’s not great, because it gives us another thing that has to
|
||||
be maintained and can possibly break. What’s actually happening here is that
|
||||
Dockershim is being removed from Kubelet as early as v1.23 release, which
|
||||
removes support for Docker as a container runtime as a result. You might be
|
||||
thinking to yourself, but if containerd is included in the Docker stack, why
|
||||
does Kubernetes need the Dockershim?
|
||||
-->
|
||||
因为这个用户友好的抽象层,Kubernetes 集群不得不引入一个叫做 Dockershim 的工具来访问它真正需要的 containerd。
|
||||
这不是一件好事,因为这引入了额外的运维工作量,而且还可能出错。
|
||||
实际上正在发生的事情就是:Dockershim 将在不早于 v1.23 版中从 kubelet 中被移除,也就取消对 Docker 容器运行时的支持。
|
||||
你心里可能会想:如果 containerd 已经包含在 Docker 技术栈中,为什么 Kubernetes 还需要 Dockershim 呢?
|
||||
|
||||
<!--
|
||||
Docker isn’t compliant with CRI, the [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/).
|
||||
If it were, we wouldn’t need the shim, and this wouldn’t be a thing. But it’s
|
||||
not the end of the world, and you don’t need to panic—you just need to change
|
||||
your container runtime from Docker to another supported container runtime.
|
||||
-->
|
||||
Docker 不兼容 CRI,即
[容器运行时接口](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)。
如果它兼容,我们就不需要这个 shim,也就不会有这些问题了。
|
||||
但这也不是世界末日,你也不需要恐慌——你唯一要做的就是把你的容器运行时从 Docker 切换到其他受支持的容器运行时。
|
||||
|
||||
<!--
|
||||
One thing to note: If you are relying on the underlying docker socket
|
||||
(`/var/run/docker.sock`) as part of a workflow within your cluster today, moving
|
||||
to a different runtime will break your ability to use it. This pattern is often
|
||||
called Docker in Docker. There are lots of options out there for this specific
|
||||
use case including things like
|
||||
[kaniko](https://github.com/GoogleContainerTools/kaniko),
|
||||
[img](https://github.com/genuinetools/img), and
|
||||
[buildah](https://github.com/containers/buildah).
|
||||
-->
|
||||
要注意一点:如果你依赖底层的 Docker 套接字(`/var/run/docker.sock`),作为你集群中工作流的一部分,
|
||||
切换到不同的运行时会导致你无法使用它。
|
||||
这种模式经常被称之为嵌套 Docker(Docker in Docker)。
|
||||
对于这种特殊的场景,有很多选项,比如:
|
||||
[kaniko](https://github.com/GoogleContainerTools/kaniko)、
|
||||
[img](https://github.com/genuinetools/img)、和
|
||||
[buildah](https://github.com/containers/buildah)。
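如果想粗略排查集群中是否有工作负载依赖 Docker 套接字,可以参考下面这种简单做法(仅作示意,并不全面):

```shell
# 在所有命名空间中查找引用了 /var/run/docker.sock 的 Pod 定义(粗略匹配)
kubectl get pods --all-namespaces -o yaml | grep -n "docker.sock"
```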
|
||||
|
||||
<!--
|
||||
## What does this change mean for developers, though? Do we still write Dockerfiles? Do we still build things with Docker?
|
||||
-->
|
||||
## 那么,这一改变对开发人员意味着什么?我们还要写 Dockerfile 吗?还能用 Docker 构建镜像吗?{#what-does-this-change-mean-for-developers}
|
||||
|
||||
<!--
|
||||
This change addresses a different environment than most folks use to interact
|
||||
with Docker. The Docker installation you’re using in development is unrelated to
|
||||
the Docker runtime inside your Kubernetes cluster. It’s confusing, we understand.
|
||||
As a developer, Docker is still useful to you in all the ways it was before this
|
||||
change was announced. The image that Docker produces isn’t really a
|
||||
Docker-specific image—it’s an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
|
||||
Any OCI-compliant image, regardless of the tool you use to build it, will look
|
||||
the same to Kubernetes. Both [containerd](https://containerd.io/) and
|
||||
[CRI-O](https://cri-o.io/) know how to pull those images and run them. This is
|
||||
why we have a standard for what containers should look like.
|
||||
-->
|
||||
此次改变所针对的环境,不同于大多数人平时与 Docker 交互时所处的环境。
|
||||
你在开发环境中用的 Docker 和你 Kubernetes 集群中的 Docker 运行时无关。
|
||||
我们知道这听起来让人困惑。
|
||||
对于开发人员,Docker 从所有角度来看仍然有用,就跟这次改变之前一样。
|
||||
Docker 构建的镜像并不是 Docker 特有的镜像——它是一个
|
||||
OCI([开放容器标准](https://opencontainers.org/))镜像。
|
||||
任一 OCI 兼容的镜像,不管它是用什么工具构建的,在 Kubernetes 的角度来看都是一样的。
|
||||
[containerd](https://containerd.io/) 和
|
||||
[CRI-O](https://cri-o.io/)
|
||||
两者都知道怎么拉取并运行这些镜像。
|
||||
这就是我们制定容器标准的原因。
|
||||
|
||||
<!--
|
||||
So, this change is coming. It’s going to cause issues for some, but it isn’t
|
||||
catastrophic, and generally it’s a good thing. Depending on how you interact
|
||||
with Kubernetes, this could mean nothing to you, or it could mean a bit of work.
|
||||
In the long run, it’s going to make things easier. If this is still confusing
|
||||
for you, that’s okay—there’s a lot going on here; Kubernetes has a lot of
|
||||
moving parts, and nobody is an expert in 100% of it. We encourage any and all
|
||||
questions regardless of experience level or complexity! Our goal is to make sure
|
||||
everyone is educated as much as possible on the upcoming changes. We hope
|
||||
this has answered most of your questions and soothed some anxieties! ❤️
|
||||
-->
|
||||
所以,改变已经发生。
|
||||
它确实带来了一些问题,但这不是一个灾难,总的说来,这还是一件好事。
|
||||
根据你操作 Kubernetes 的方式的不同,这可能对你不构成任何问题,或者也只是意味着一点点的工作量。
|
||||
从一个长远的角度看,它使得事情更简单。
|
||||
如果你还在困惑,也没问题——这里还有很多事情;
|
||||
Kubernetes 包含许多相互配合的组成部分,没有人能对它们 100% 精通。
|
||||
我们鼓励你提出任何问题,无论水平高低、问题难易。
|
||||
我们的目标是确保所有人都能在即将到来的改变中获得足够的了解。
|
||||
我们希望这已经回答了你的大部分问题,并缓解了一些焦虑!❤️
|
||||
|
||||
<!--
|
||||
Looking for more answers? Check out our accompanying [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
|
||||
-->
|
||||
还在寻求更多答案吗?请参考我们附带的
|
||||
[弃用 Dockershim 的常见问题](/zh/blog/2020/12/02/dockershim-faq/)。
|
|
@ -22,7 +22,7 @@ components.
|
|||
使用云基础设施技术,你可以在公有云、私有云或者混合云环境中运行 Kubernetes。
|
||||
Kubernetes 的信条是基于自动化的、API 驱动的基础设施,同时避免组件间紧密耦合。
|
||||
|
||||
{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是">}}
|
||||
{{< glossary_definition term_id="cloud-controller-manager" length="all" prepend="组件 cloud-controller-manager 是指云控制器管理器,">}}
|
||||
|
||||
<!--
|
||||
The cloud-controller-manager is structured using a plugin
|
||||
|
@ -96,7 +96,7 @@ hosts running inside your tenancy with the cloud provider. The node controller p
|
|||
2. 利用特定云平台的信息为 Node 对象添加注解和标签,例如节点所在的
|
||||
区域(Region)和所具有的资源(CPU、内存等等);
|
||||
3. 获取节点的网络地址和主机名;
|
||||
4. 检查节点的健康状况。如果节点无响应,控制器通过云平台 API ll 查看该节点是否
|
||||
4. 检查节点的健康状况。如果节点无响应,控制器通过云平台 API 查看该节点是否
|
||||
已从云中禁用、删除或终止。如果节点已从云中删除,则控制器从 Kubernetes 集群
|
||||
中删除 Node 对象。
|
||||
|
||||
|
|
|
@ -155,7 +155,7 @@ that horizontally scales the nodes in your cluster.)
|
|||
并使当前状态更接近期望状态。
|
||||
|
||||
(实际上有一个[控制器](https://github.com/kubernetes/autoscaler/)
|
||||
可以水平地扩展集群中的节点。请参阅
|
||||
可以水平地扩展集群中的节点。)
|
||||
|
||||
<!--
|
||||
The important point here is that the controller makes some change to bring about
|
||||
|
@ -170,7 +170,7 @@ Other control loops can observe that reported data and take their own actions.
|
|||
In the thermostat example, if the room is very cold then a different controller
|
||||
might also turn on a frost protection heater. With Kubernetes clusters, the control
|
||||
plane indirectly works with IP address management tools, storage services,
|
||||
cloud provider APIS, and other services by
|
||||
cloud provider APIs, and other services by
|
||||
[extending Kubernetes](/docs/concepts/extend-kubernetes/) to implement that.
|
||||
-->
|
||||
在温度计的例子中,如果房间很冷,那么某个控制器可能还会启动一个防冻加热器。
|
||||
|
@ -198,7 +198,7 @@ Kubernetes 采用了系统的云原生视图,并且可以处理持续的变化
|
|||
在任务执行时,集群随时都可能被修改,并且控制回路会自动修复故障。
|
||||
这意味着很可能集群永远不会达到稳定状态。
|
||||
|
||||
只要集群中控制器的在运行并且进行有效的修改,整体状态的稳定与否是无关紧要的。
|
||||
只要集群中的控制器在运行并且进行有效的修改,整体状态的稳定与否是无关紧要的。
|
||||
|
||||
<!--
|
||||
## Design
|
||||
|
|
|
@ -30,9 +30,9 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
|
|||
{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}.
|
||||
-->
|
||||
Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行你的工作负载。
|
||||
节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。每个节点都包含用于运行
|
||||
{{< glossary_tooltip text="Pod" term_id="pod" >}} 所需要的服务,这些服务由
|
||||
{{< glossary_tooltip text="控制面" term_id="control-plane" >}}负责管理。
|
||||
节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。
|
||||
每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务,
|
||||
这些 Pods 由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。
|
||||
|
||||
通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能
|
||||
只有一个节点。
|
||||
|
@ -120,7 +120,7 @@ register itself with the API server. This is the preferred pattern, used by mos
|
|||
|
||||
For self-registration, the kubelet is started with the following options:
|
||||
-->
|
||||
### 节点自注册
|
||||
### 节点自注册 {#self-registration-of-nodes}
|
||||
|
||||
当 kubelet 标志 `--register-node` 为 true(默认)时,它会尝试向 API 服务注册自己。
|
||||
这是首选模式,被绝大多数发行版选用。
|
||||
|
@ -170,7 +170,7 @@ When you want to create Node objects manually, set the kubelet flag `--register-
|
|||
You can modify Node objects regardless of the setting of `--register-node`.
|
||||
For example, you can set labels on an existing Node, or mark it unschedulable.
|
||||
-->
|
||||
### 手动节点管理
|
||||
### 手动节点管理 {#manual-node-administration}
|
||||
|
||||
你可以使用 {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}
|
||||
来创建和修改 Node 对象。
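例如(命令仅作示意,`<节点名称>` 为占位符):

```shell
# 为已有节点添加标签
kubectl label nodes <节点名称> disktype=ssd

# 将节点标记为不可调度
kubectl cordon <节点名称>
```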
|
||||
|
@ -456,8 +456,7 @@ of the node heartbeats as the cluster scales.
|
|||
#### 心跳机制 {#heartbeats}
|
||||
|
||||
Kubernetes 节点发送的心跳(Heartbeats)有助于确定节点的可用性。
|
||||
心跳有两种形式:`NodeStatus` 和 [`Lease` 对象]
|
||||
(/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io)。
|
||||
心跳有两种形式:`NodeStatus` 和 [`Lease` 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io)。
|
||||
每个节点在 `kube-node-lease`{{< glossary_tooltip term_id="namespace" text="名字空间">}}
|
||||
中都有一个与之关联的 `Lease` 对象。
|
||||
`Lease` 是一种轻量级的资源,可在集群规模扩大时提高节点心跳机制的性能。
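作为示意,你可以用下面的命令查看这些 `Lease` 对象(假设你有权访问 `kube-node-lease` 名字空间):

```shell
kubectl get leases --namespace kube-node-lease
```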
|
||||
|
|
|
@ -14,44 +14,43 @@ weight: 60
|
|||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.
|
||||
Application logs can help you understand what is happening inside your application. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams.
|
||||
-->
|
||||
应用日志可以让你了解应用内部的运行状况。日志对调试问题和监控集群活动非常有用。
|
||||
大部分现代化应用都有某种日志记录机制;同样地,大多数容器引擎也被设计成支持某种日志记录机制。
|
||||
针对容器化应用,最简单且受欢迎的日志记录方式就是写入标准输出和标准错误流。
|
||||
大部分现代化应用都有某种日志记录机制。同样地,容器引擎也被设计成支持日志记录。
|
||||
针对容器化应用,最简单且最广泛采用的日志记录方式就是写入标准输出和标准错误流。
|
||||
|
||||
<!--
|
||||
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution. For example, if a container crashes, a pod is evicted, or a node dies, you'll usually still want to access your application's logs. As such, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_. Cluster-level logging requires a separate backend to store, analyze, and query logs. Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
|
||||
However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.
|
||||
For example, you may want access your application's logs if a container crashes; a pod gets evicted; or a node dies,
|
||||
In a cluster, logs should have a separate storage and lifecycle independent of nodes, pods, or containers. This concept is called _cluster-level-logging_.
|
||||
-->
|
||||
但是,由容器引擎或运行时提供的原生功能通常不足以满足完整的日志记录方案。
|
||||
例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你仍然想访问到应用日志。
|
||||
因此,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。
|
||||
这个概念叫 _集群级的日志_ 。集群级日志方案需要一个独立的后台来存储、分析和查询日志。
|
||||
Kubernetes 没有为日志数据提供原生存储方案,但是你可以集成许多现有的日志解决方案到 Kubernetes 集群中。
|
||||
但是,由容器引擎或运行时提供的原生功能通常不足以构成完整的日志记录方案。
|
||||
例如,如果发生容器崩溃、Pod 被逐出或节点宕机等情况,你可能想访问应用日志。
|
||||
在集群中,日志应该具有独立的存储和生命周期,与节点、Pod 或容器的生命周期相独立。
|
||||
这个概念叫 _集群级的日志_ 。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
Cluster-level logging architectures are described in assumption that
|
||||
a logging backend is present inside or outside of your cluster. If you're
|
||||
not interested in having cluster-level logging, you might still find
|
||||
the description of how logs are stored and handled on the node to be useful.
|
||||
Cluster-level logging architectures require a separate backend to store, analyze, and query logs. Kubernetes
|
||||
does not provide a native storage solution for log data. Instead, there are many logging solutions that
|
||||
integrate with Kubernetes. The following sections describe how to handle and store logs on nodes.
|
||||
-->
|
||||
集群级日志架构假定在集群内部或者外部有一个日志后台。
|
||||
如果你对集群级日志不感兴趣,你仍会发现关于如何在节点上存储和处理日志的描述对你是有用的。
|
||||
集群级日志架构需要一个独立的后端用来存储、分析和查询日志。
|
||||
Kubernetes 并不为日志数据提供原生的存储解决方案。
|
||||
相反,有很多现成的日志方案可以集成到 Kubernetes 中.
|
||||
下面各节描述如何在节点上处理和存储日志。
|
||||
|
||||
<!--
|
||||
## Basic logging in Kubernetes
|
||||
|
||||
In this section, you can see an example of basic logging in Kubernetes that
|
||||
outputs data to the standard output stream. This demonstration uses
|
||||
a pod specification with a container that writes some text to standard output
|
||||
once per second.
|
||||
This example uses a `Pod` specification with a container
|
||||
to write text to the standard output stream once per second.
|
||||
-->
|
||||
## Kubernetes 中的基本日志记录
|
||||
|
||||
本节中,你会看到一个 Kubernetes 中基本日志记录的例子,该例子中数据被写入到标准输出。
|
||||
这里的示例为包含一个容器的 Pod 规约,该容器每秒钟向标准输出写入数据。
|
||||
这里的示例使用包含一个容器的 Pod 规约,每秒钟向标准输出写入数据。
|
||||
|
||||
{{< codenew file="debug/counter-pod.yaml" >}}
|
||||
|
||||
|
@ -76,7 +75,7 @@ pod/counter created
|
|||
<!--
|
||||
To fetch the logs, use the `kubectl logs` command, as follows:
|
||||
-->
|
||||
使用 `kubectl logs` 命令获取日志:
|
||||
像下面这样,使用 `kubectl logs` 命令获取日志:
|
||||
|
||||
```shell
|
||||
kubectl logs counter
|
||||
|
@ -95,10 +94,10 @@ The output is:
|
|||
```
|
||||
|
||||
<!--
|
||||
You can use `kubectl logs` to retrieve logs from a previous instantiation of a container with `--previous` flag, in case the container has crashed. If your pod has multiple containers, you should specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
|
||||
You can use `kubectl logs --previous` to retrieve logs from a previous instantiation of a container..If your pod has multiple containers, specify which container's logs you want to access by appending a container name to the command. See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs) for more details.
|
||||
-->
|
||||
一旦发生容器崩溃,你可以使用命令 `kubectl logs` 和参数 `--previous` 检索之前的容器日志。
|
||||
如果 pod 中有多个容器,你应该向该命令附加一个容器名以访问对应容器的日志。
|
||||
你可以使用命令 `kubectl logs --previous` 检索之前容器实例的日志。
|
||||
如果 Pod 中有多个容器,你应该为该命令附加容器名以访问对应容器的日志。
|
||||
详见 [`kubectl logs` 文档](/docs/reference/generated/kubectl/kubectl-commands#logs)。
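例如(Pod 名称与容器名称仅为占位符):

```shell
kubectl logs <Pod 名称> -c <容器名称> --previous
```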
|
||||
|
||||
<!--
|
||||
|
@ -111,11 +110,12 @@ You can use `kubectl logs` to retrieve logs from a previous instantiation of a c
|
|||

|
||||
|
||||
<!--
|
||||
Everything a containerized application writes to `stdout` and `stderr` is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in json format.
|
||||
A container engine handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
|
||||
For example, the Docker container engine redirects those two streams to [a logging driver](https://docs.docker.com/engine/admin/logging/overview), which is configured in Kubernetes to write to a file in JSON format.
|
||||
-->
|
||||
容器化应用写入 `stdout` 和 `stderr` 的任何数据,都会被容器引擎捕获并被重定向到某个位置。
|
||||
例如,Docker 容器引擎将这两个输出流重定向到某个
|
||||
[日志驱动](https://docs.docker.com/engine/admin/logging/overview) ,
|
||||
[日志驱动(Logging Driver)](https://docs.docker.com/engine/admin/logging/overview) ,
|
||||
该日志驱动在 Kubernetes 中配置为以 JSON 格式写入文件。
|
||||
|
||||
<!--
|
||||
|
@ -135,51 +135,48 @@ By default, if a container restarts, the kubelet keeps one terminated container
|
|||
<!--
|
||||
An important consideration in node-level logging is implementing log rotation,
|
||||
so that logs don't consume all available storage on the node. Kubernetes
|
||||
currently is not responsible for rotating logs, but rather a deployment tool
|
||||
is not responsible for rotating logs, but rather a deployment tool
|
||||
should set up a solution to address that.
|
||||
For example, in Kubernetes clusters, deployed by the `kube-up.sh` script,
|
||||
there is a [`logrotate`](https://linux.die.net/man/8/logrotate)
|
||||
tool configured to run each hour. You can also set up a container runtime to
|
||||
rotate application's logs automatically, e.g. by using Docker's `log-opt`.
|
||||
In the `kube-up.sh` script, the latter approach is used for COS image on GCP,
|
||||
and the former approach is used in any other environment. In both cases, by
|
||||
default rotation is configured to take place when log file exceeds 10MB.
|
||||
rotate application's logs automatically.
|
||||
-->
|
||||
节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上所有的可用空间。
|
||||
Kubernetes 当前并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。
|
||||
例如,在 Kubernetes 集群中,用 `kube-up.sh` 部署一个每小时运行的工具
|
||||
[`logrotate`](https://linux.die.net/man/8/logrotate)。
|
||||
你也可以设置容器 runtime 来自动地轮转应用日志,比如使用 Docker 的 `log-opt` 选项。
|
||||
在 `kube-up.sh` 脚本中,使用后一种方式来处理 GCP 上的 COS 镜像,而使用前一种方式来处理其他环境。
|
||||
这两种方式,默认日志超过 10MB 大小时都会触发日志轮转。
|
||||
节点级日志记录中,需要重点考虑实现日志的轮转,以此来保证日志不会消耗节点上全部可用空间。
|
||||
Kubernetes 并不负责轮转日志,而是通过部署工具建立一个解决问题的方案。
|
||||
例如,在用 `kube-up.sh` 部署的 Kubernetes 集群中,存在一个
|
||||
[`logrotate`](https://linux.die.net/man/8/logrotate),每小时运行一次。
|
||||
你也可以设置容器运行时来自动地轮转应用日志。
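例如,对于使用 CRI 容器运行时的节点,kubelet 本身也提供了与日志轮转相关的配置项。下面是一个示意片段(字段取值仅为示例,具体默认值可能随版本而变化):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# 单个容器日志文件的大小上限,超过后触发轮转
containerLogMaxSize: 10Mi
# 每个容器最多保留的日志文件个数
containerLogMaxFiles: 5
```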
|
||||
|
||||
<!--
|
||||
As an example, you can find detailed information about how `kube-up.sh` sets
|
||||
up logging for COS image on GCP in the corresponding [script][cosConfigureHelper].
|
||||
up logging for COS image on GCP in the corresponding
|
||||
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
|
||||
-->
|
||||
例如,你可以找到关于 `kube-up.sh` 为 GCP 环境的 COS 镜像设置日志的详细信息,
|
||||
相应的脚本在
|
||||
[这里](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。
|
||||
脚本为
|
||||
[`configure-helper` 脚本](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。
|
||||
|
||||
<!--
|
||||
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
|
||||
the basic logging example, the kubelet on the node handles the request and
|
||||
reads directly from the log file, returning the contents in the response.
|
||||
reads directly from the log file. The kubelet returns the content of the log file.
|
||||
-->
|
||||
当运行 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) 时,
|
||||
节点上的 kubelet 处理该请求并直接读取日志文件,同时在响应中返回日志文件内容。
|
||||
|
||||
<!--
|
||||
Currently, if some external system has performed the rotation,
|
||||
If an external system has performed the rotation,
|
||||
only the contents of the latest log file will be available through
|
||||
`kubectl logs`. E.g. if there's a 10MB file, `logrotate` performs
|
||||
the rotation and there are two files, one 10MB in size and one empty,
|
||||
`kubectl logs` will return an empty response.
|
||||
`kubectl logs`. For example, if there's a 10MB file, `logrotate` performs
|
||||
the rotation and there are two files: one file that is 10MB in size and a second file that is empty.
|
||||
`kubectl logs` returns the latest log file which in this example is an empty response.
|
||||
-->
|
||||
{{< note >}}
|
||||
当前,如果有其他系统机制执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。
|
||||
比如,一个 10MB 大小的文件,通过`logrotate` 执行轮转后生成两个文件,一个 10MB 大小,
|
||||
一个为空,所以 `kubectl logs` 将返回空。
|
||||
如果有外部系统执行日志轮转,那么 `kubectl logs` 仅可查询到最新的日志内容。
|
||||
比如,对于一个 10MB 大小的文件,通过 `logrotate` 执行轮转后生成两个文件,
|
||||
一个 10MB 大小,一个为空,`kubectl logs` 返回最新的日志文件,而该日志文件
|
||||
在这个例子中为空。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -194,34 +191,36 @@ that do not run in a container. For example:
|
|||
|
||||
<!--
|
||||
* The Kubernetes scheduler and kube-proxy run in a container.
|
||||
* The kubelet and container runtime, for example Docker, do not run in containers.
|
||||
* The kubelet and container runtime do not run in containers.
|
||||
-->
|
||||
* 在容器中运行的 kube-scheduler 和 kube-proxy。
|
||||
* 不在容器中运行的 kubelet 和容器运行时(例如 Docker)。
|
||||
* 不在容器中运行的 kubelet 和容器运行时。
|
||||
|
||||
<!--
|
||||
On machines with systemd, the kubelet and container runtime write to journald. If
|
||||
systemd is not present, they write to `.log` files in the `/var/log` directory.
|
||||
System components inside containers always write to the `/var/log` directory,
|
||||
bypassing the default logging mechanism. They use the [klog][klog]
|
||||
systemd is not present, the kubelet and container runtime write to `.log` files
|
||||
in the `/var/log` directory. System components inside containers always write
|
||||
to the `/var/log` directory, bypassing the default logging mechanism.
|
||||
They use the [`klog`](https://github.com/kubernetes/klog)
|
||||
logging library. You can find the conventions for logging severity for those
|
||||
components in the [development docs on logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
|
||||
-->
|
||||
在使用 systemd 机制的服务器上,kubelet 和容器 runtime 写入日志到 journald。
|
||||
如果没有 systemd,他们写入日志到 `/var/log` 目录的 `.log` 文件。
|
||||
容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。他们使用
|
||||
[klog](https://github.com/kubernetes/klog) 日志库。
|
||||
你可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)找到这些组件的日志告警级别协议。
|
||||
在使用 systemd 机制的服务器上,kubelet 和容器运行时将日志写入到 journald 中。
|
||||
如果没有 systemd,它们将日志写入到 `/var/log` 目录下的 `.log` 文件中。
|
||||
容器中的系统组件通常将日志写到 `/var/log` 目录,绕过了默认的日志机制。
|
||||
它们使用 [klog](https://github.com/kubernetes/klog) 日志库。
|
||||
你可以在[日志开发文档](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)
|
||||
找到这些组件的日志告警级别约定。
|
||||
|
||||
<!--
|
||||
Similarly to the container logs, system component logs in the `/var/log`
|
||||
Similar to the container logs, system component logs in the `/var/log`
|
||||
directory should be rotated. In Kubernetes clusters brought up by
|
||||
the `kube-up.sh` script, those logs are configured to be rotated by
|
||||
the `logrotate` tool daily or once the size exceeds 100MB.
|
||||
-->
|
||||
和容器日志类似,`/var/log` 目录中的系统组件日志也应该被轮转。
|
||||
通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate` 执行每日轮转,
|
||||
或者日志大小超过 100MB 时触发轮转。
|
||||
通过脚本 `kube-up.sh` 启动的 Kubernetes 集群中,日志被工具 `logrotate`
|
||||
执行每日轮转,或者日志大小超过 100MB 时触发轮转。
|
||||
|
||||
<!--
|
||||
## Cluster-level logging architectures
|
||||
|
@ -234,10 +233,11 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
|
|||
-->
|
||||
## 集群级日志架构
|
||||
|
||||
虽然Kubernetes没有为集群级日志记录提供原生的解决方案,但你可以考虑几种常见的方法。以下是一些选项:
|
||||
虽然 Kubernetes 没有为集群级日志记录提供原生的解决方案,但你可以考虑几种常见的方法。
|
||||
以下是一些选项:
|
||||
|
||||
* 使用在每个节点上运行的节点级日志记录代理。
|
||||
* 在应用程序的 pod 中,包含专门记录日志的 sidecar 容器。
|
||||
* 在应用程序的 Pod 中,包含专门记录日志的边车(Sidecar)容器。
|
||||
* 将日志直接从应用程序中推送到日志记录后端。
|
||||
|
||||
<!--
|
||||
|
@ -257,43 +257,35 @@ You can implement cluster-level logging by including a _node-level logging agent
|
|||
通常,日志记录代理程序是一个容器,它可以访问包含该节点上所有应用程序容器的日志文件的目录。
|
||||
|
||||
<!--
|
||||
Because the logging agent must run on every node, it's common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However the latter two approaches are deprecated and highly discouraged.
|
||||
Because the logging agent must run on every node, it's common to run the agent
|
||||
as a `DaemonSet`.
|
||||
Node-level logging creates only one agent per node, and doesn't require any changes to the applications running on the node.
|
||||
-->
|
||||
由于日志记录代理必须在每个节点上运行,它可以用 DaemonSet 副本,Pod 或 本机进程来实现。
|
||||
然而,后两种方法被弃用并且非常不别推荐。
|
||||
由于日志记录代理必须在每个节点上运行,通常可以用 `DaemonSet` 的形式运行该代理。
|
||||
节点级日志在每个节点上仅创建一个代理,不需要对节点上的应用做修改。
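下面是一个以 `DaemonSet` 形式部署节点级日志代理的最简示意(镜像名称、挂载路径等均为假设,实际部署请参考所选日志代理的官方清单):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        # 假设使用的日志代理镜像,仅作示意
        image: fluent/fluentd:v1.11
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```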
|
||||
|
||||
<!--
|
||||
Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.
|
||||
Containers write stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
|
||||
-->
|
||||
对于 Kubernetes 集群来说,使用节点级的日志代理是最常用和被推荐的方式,
|
||||
因为在每个节点上仅创建一个代理,并且不需要对节点上的应用做修改。
|
||||
但是,节点级的日志 _仅适用于应用程序的标准输出和标准错误输出_。
|
||||
容器向标准输出和标准错误输出写出数据,但在格式上并不统一。
|
||||
节点级代理
|
||||
收集这些日志并将其进行转发以完成汇总。
|
||||
|
||||
<!--
|
||||
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node.
|
||||
-->
|
||||
Kubernetes 并不指定日志代理,但是有两个可选的日志代理与 Kubernetes 发行版一起发布。
|
||||
[Stackdriver 日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/)
|
||||
适用于 Google Cloud Platform,和
|
||||
[Elasticsearch](/zh/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/)。
|
||||
你可以在专门的文档中找到更多的信息和说明。
|
||||
两者都使用 [fluentd](https://www.fluentd.org/) 与自定义配置作为节点上的代理。
|
||||
|
||||
<!--
|
||||
### Using a sidecar container with the logging agent
|
||||
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
|
||||
|
||||
You can use a sidecar container in one of the following ways:
|
||||
-->
|
||||
### 使用 sidecar 容器和日志代理
|
||||
### 使用 sidecar 容器运行日志代理 {#sidecar-container-with-logging-agent}
|
||||
|
||||
你可以通过以下方式之一使用 sidecar 容器:
|
||||
你可以通过以下方式之一使用边车(Sidecar)容器:
|
||||
|
||||
<!--
|
||||
* The sidecar container streams application logs to its own `stdout`.
|
||||
* The sidecar container runs a logging agent, which is configured to pick up logs from an application container.
|
||||
-->
|
||||
* sidecar 容器将应用程序日志传送到自己的标准输出。
|
||||
* sidecar 容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。
|
||||
* 边车容器将应用程序日志传送到自己的标准输出。
|
||||
* 边车容器运行一个日志代理,配置该日志代理以便从应用容器收集日志。
|
||||
|
||||
<!--
|
||||
#### Streaming sidecar container
|
||||
|
@ -303,17 +295,16 @@ You can use a sidecar container in one of the following ways:
|
|||
By having your sidecar containers stream to their own `stdout` and `stderr`
|
||||
streams, you can take advantage of the kubelet and the logging agent that
|
||||
already run on each node. The sidecar containers read logs from a file, a socket,
|
||||
or the journald. Each individual sidecar container prints log to its own `stdout`
|
||||
or `stderr` stream.
|
||||
or the journald. Each sidecar container prints log to its own `stdout` or `stderr` stream.
|
||||
-->
|
||||
#### 传输数据流的 sidecar 容器
|
||||
|
||||

|
||||

|
||||
|
||||
利用 sidecar 容器向自己的 `stdout` 和 `stderr` 传输流的方式,
|
||||
利用边车容器向自己的 `stdout` 和 `stderr` 传输流的方式,
|
||||
你就可以利用每个节点上的 kubelet 和日志代理来处理日志。
|
||||
sidecar 容器从文件、套接字或 journald 读取日志。
|
||||
每个 sidecar 容器打印其自己的 `stdout` 和 `stderr` 流。
|
||||
边车容器从文件、套接字或 journald 读取日志。
|
||||
每个边车容器向自己的 `stdout` 和 `stderr` 流中输出日志。
|
||||
|
||||
<!--
|
||||
This approach allows you to separate several log streams from different
|
||||
|
@ -328,29 +319,30 @@ like `kubectl logs`.
|
|||
另外,因为 `stdout`、`stderr` 由 kubelet 处理,你可以使用内置的工具 `kubectl logs`。
|
||||
|
||||
<!--
|
||||
Consider the following example. A pod runs a single container, and the container
|
||||
For example, a pod runs a single container, and the container
|
||||
writes to two different log files, using two different formats. Here's a
|
||||
configuration file for the Pod:
|
||||
-->
|
||||
考虑接下来的例子。pod 的容器向两个文件写不同格式的日志,下面是这个 pod 的配置文件:
|
||||
例如,某 Pod 中运行一个容器,该容器向两个文件写不同格式的日志。
|
||||
下面是这个 pod 的配置文件:
|
||||
|
||||
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
|
||||
|
||||
<!--
|
||||
It would be a mess to have log entries of different formats in the same log
|
||||
It is not recommended to write log entries with different formats to the same log
|
||||
stream, even if you managed to redirect both components to the `stdout` stream of
|
||||
the container. Instead, you could introduce two sidecar containers. Each sidecar
|
||||
the container. Instead, you can create two sidecar containers. Each sidecar
|
||||
container could tail a particular log file from a shared volume and then redirect
|
||||
the logs to its own `stdout` stream.
|
||||
-->
|
||||
在同一个日志流中有两种不同格式的日志条目,这有点混乱,即使你试图重定向它们到容器的 `stdout` 流。
|
||||
取而代之的是,你可以引入两个 sidecar 容器。
|
||||
每一个 sidecar 容器可以从共享卷跟踪特定的日志文件,并重定向文件内容到各自的 `stdout` 流。
|
||||
不建议在同一个日志流中写入不同格式的日志条目,即使你成功地将其重定向到容器的
|
||||
`stdout` 流。相反,你可以创建两个边车容器。每个边车容器可以从共享卷
|
||||
跟踪特定的日志文件,并将文件内容重定向到各自的 `stdout` 流。
|
||||
|
||||
<!--
|
||||
Here's a configuration file for a pod that has two sidecar containers:
|
||||
-->
|
||||
这是运行两个 sidecar 容器的 Pod 文件。
|
||||
下面是运行两个边车容器的 Pod 的配置文件:
|
||||
|
||||
{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
|
||||
|
||||
|
@ -358,12 +350,18 @@ Here's a configuration file for a pod that has two sidecar containers:
|
|||
Now when you run this pod, you can access each log stream separately by
|
||||
running the following commands:
|
||||
-->
|
||||
现在当你运行这个 Pod 时,你可以分别地访问每一个日志流,运行如下命令:
|
||||
现在当你运行这个 Pod 时,你可以运行如下命令分别访问每个日志流:
|
||||
|
||||
```shell
|
||||
kubectl logs counter count-log-1
|
||||
```
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is:
|
||||
-->
|
||||
输出为:
|
||||
|
||||
```console
|
||||
0: Mon Jan 1 00:00:00 UTC 2001
|
||||
1: Mon Jan 1 00:00:01 UTC 2001
|
||||
2: Mon Jan 1 00:00:02 UTC 2001
|
||||
|
@ -373,7 +371,13 @@ kubectl logs counter count-log-1
|
|||
```shell
|
||||
kubectl logs counter count-log-2
|
||||
```
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is:
|
||||
-->
|
||||
输出为:
|
||||
|
||||
```console
|
||||
Mon Jan 1 00:00:00 UTC 2001 INFO 0
|
||||
Mon Jan 1 00:00:01 UTC 2001 INFO 1
|
||||
Mon Jan 1 00:00:02 UTC 2001 INFO 2
|
||||
|
@ -385,7 +389,8 @@ The node-level agent installed in your cluster picks up those log streams
|
|||
automatically without any further configuration. If you like, you can configure
|
||||
the agent to parse log lines depending on the source container.
|
||||
-->
|
||||
集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。如果你愿意,你可以配置代理程序来解析源容器的日志行。
|
||||
集群中安装的节点级代理会自动获取这些日志流,而无需进一步配置。
|
||||
如果你愿意,你也可以配置代理程序来解析源容器的日志行。
|
||||
|
||||
<!--
|
||||
Note, that despite low CPU and memory usage (order of couple of millicores
|
||||
|
@ -395,113 +400,94 @@ an application that writes to a single file, it's generally better to set
|
|||
`/dev/stdout` as destination rather than implementing the streaming sidecar
|
||||
container approach.
|
||||
-->
|
||||
注意,尽管 CPU 和内存使用率都很低(以多个 cpu millicores 指标排序或者按内存的兆字节排序),
|
||||
注意,尽管 CPU 和内存使用率都很低(以多个 CPU 毫核指标排序或者按内存的兆字节排序),
|
||||
向文件写日志然后输出到 `stdout` 流仍然会成倍地增加磁盘使用率。
|
||||
如果你的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径,而不是使用流式的 sidecar 容器方式。
|
||||
如果你的应用向单一文件写日志,通常最好设置 `/dev/stdout` 作为目标路径,
|
||||
而不是使用流式的边车容器方式。
|
||||
|
||||
<!--
|
||||
Sidecar containers can also be used to rotate log files that cannot be
|
||||
rotated by the application itself. An example
|
||||
of this approach is a small container running logrotate periodically.
|
||||
rotated by the application itself. An example of this approach is a small container running logrotate periodically.
|
||||
However, it's recommended to use `stdout` and `stderr` directly and leave rotation
|
||||
and retention policies to the kubelet.
|
||||
-->
|
||||
应用本身如果不具备轮转日志文件的功能,可以通过 sidecar 容器实现。
|
||||
该方式的一个例子是运行一个定期轮转日志的容器。
|
||||
然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略交给 kubelet。
|
||||
应用本身如果不具备轮转日志文件的功能,可以通过边车容器实现。
|
||||
该方式的一个例子是运行一个小的、定期轮转日志的容器。
|
||||
然而,还是推荐直接使用 `stdout` 和 `stderr`,将日志的轮转和保留策略
|
||||
交给 kubelet。
|
||||
|
||||
<!--
|
||||
#### Sidecar container with a logging agent
|
||||
|
||||

|
||||
-->
|
||||
### 具有日志代理功能的 sidecar 容器
|
||||
### 具有日志代理功能的边车容器
|
||||
|
||||

|
||||

|
||||
|
||||
<!--
|
||||
If the node-level logging agent is not flexible enough for your situation, you
|
||||
can create a sidecar container with a separate logging agent that you have
|
||||
configured specifically to run with your application.
|
||||
-->
|
||||
如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个带有单独日志记录代理程序的
|
||||
sidecar 容器,将代理程序专门配置为与你的应用程序一起运行。
|
||||
如果节点级日志记录代理程序对于你的场景来说不够灵活,你可以创建一个
|
||||
带有单独日志记录代理的边车容器,将代理程序专门配置为与你的应用程序一起运行。
|
||||
|
||||
<!--
|
||||
{{< note >}}
|
||||
<!--
|
||||
Using a logging agent in a sidecar container can lead
|
||||
to significant resource consumption. Moreover, you won't be able to access
|
||||
those logs using `kubectl logs` command, because they are not controlled
|
||||
by the kubelet.
|
||||
{{< /note >}}
|
||||
-->
|
||||
{{< note >}}
|
||||
在 sidecar 容器中使用日志代理会导致严重的资源损耗。
|
||||
在边车容器中使用日志代理会带来严重的资源损耗。
|
||||
此外,你不能使用 `kubectl logs` 命令访问日志,因为日志并没有被 kubelet 管理。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
As an example, you could use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
|
||||
which uses fluentd as a logging agent. Here are two configuration files that
|
||||
you can use to implement this approach. The first file contains
|
||||
Here are two configuration files that you can use to implement a sidecar container with a logging agent. The first file contains
|
||||
a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
|
||||
-->
|
||||
例如,你可以使用 [Stackdriver](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/),
|
||||
它使用 fluentd 作为日志记录代理。
|
||||
以下是两个可用于实现此方法的配置文件。
|
||||
第一个文件包含配置 fluentd 的
|
||||
下面是两个配置文件,可以用来实现一个带日志代理的边车容器。
|
||||
第一个文件包含用来配置 fluentd 的
|
||||
[ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
|
||||
|
||||
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
{{< note >}}
|
||||
The configuration of fluentd is beyond the scope of this article. For
|
||||
information about configuring fluentd, see the
|
||||
[official fluentd documentation](http://docs.fluentd.org/).
|
||||
{{< /note >}}
|
||||
For information about configuring fluentd, see the [fluentd documentation](https://docs.fluentd.org/).
|
||||
-->
|
||||
{{< note >}}
|
||||
配置 fluentd 超出了本文的范围。要进一步了解如何配置 fluentd,
|
||||
请参考 [fluentd 官方文档](https://docs.fluentd.org/).
|
||||
要进一步了解如何配置 fluentd,请参考 [fluentd 官方文档](https://docs.fluentd.org/).
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The second file describes a pod that has a sidecar container running fluentd.
|
||||
The pod mounts a volume where fluentd can pick up its configuration data.
|
||||
-->
|
||||
第二个文件描述了运行 fluentd sidecar 容器的 Pod 。flutend 通过 Pod 的挂载卷获取它的配置数据。
|
||||
第二个文件描述了运行 fluentd 边车容器的 Pod 。
|
||||
flutend 通过 Pod 的挂载卷获取它的配置数据。
|
||||
|
||||
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
|
||||
|
||||
<!--
|
||||
After some time you can find log messages in the Stackdriver interface.
|
||||
In the sample configurations, you can replace fluentd with any logging agent, reading from any source inside an application container.
|
||||
-->
|
||||
一段时间后,你可以在 Stackdriver 界面看到日志消息。
|
||||
|
||||
<!--
|
||||
Remember, that this is just an example and you can actually replace fluentd
|
||||
with any logging agent, reading from any source inside an application
|
||||
container.
|
||||
-->
|
||||
记住,这只是一个例子,事实上你可以用任何一个日志代理替换 fluentd ,并从应用容器中读取任何资源。
|
||||
在示例配置中,你可以将 fluentd 替换为任何日志代理,从应用容器内
|
||||
的任何来源读取数据。
|
||||
|
||||
<!--
|
||||
### Exposing logs directly from the application
|
||||
|
||||

|
||||
-->
|
||||
|
||||
### 从应用中直接暴露日志
|
||||
|
||||

|
||||
|
||||
<!--
|
||||
You can implement cluster-level logging by exposing or pushing logs directly from
|
||||
every application; however, the implementation for such a logging mechanism
|
||||
is outside the scope of Kubernetes.
|
||||
Cluster-logging that exposes or pushes logs directly from every application is outside the scope of Kubernetes.
|
||||
-->
|
||||
通过暴露或推送每个应用的日志,你可以实现集群级日志记录;
|
||||
然而,这种日志记录机制的实现已超出 Kubernetes 的范围。
|
||||
|
||||
从各个应用中直接暴露和推送日志数据的集群日志机制
|
||||
已超出 Kubernetes 的范围。
|
||||
|
||||
|
|
|
@ -92,10 +92,10 @@ These controllers include:
|
|||
-->
|
||||
这些控制器包括:
|
||||
|
||||
* 节点控制器(Node Controller): 负责在节点出现故障时进行通知和响应。
|
||||
* 副本控制器(Replication Controller): 负责为系统中的每个副本控制器对象维护正确数量的 Pod。
|
||||
* 端点控制器(Endpoints Controller): 填充端点(Endpoints)对象(即加入 Service 与 Pod)。
|
||||
* 服务帐户和令牌控制器(Service Account & Token Controllers): 为新的命名空间创建默认帐户和 API 访问令牌.
|
||||
* 节点控制器(Node Controller): 负责在节点出现故障时进行通知和响应
|
||||
* 副本控制器(Replication Controller): 负责为系统中的每个副本控制器对象维护正确数量的 Pod
|
||||
* 端点控制器(Endpoints Controller): 填充端点(Endpoints)对象(即加入 Service 与 Pod)
|
||||
* 服务帐户和令牌控制器(Service Account & Token Controllers): 为新的命名空间创建默认帐户和 API 访问令牌
|
||||
|
||||
<!--
|
||||
### cloud-controller-manager
|
||||
|
|
|
@ -61,11 +61,11 @@ Early on, organizations ran applications on physical servers. There was no way t
|
|||
-->
|
||||
**传统部署时代:**
|
||||
|
||||
早期,组织在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。
|
||||
早期,各个组织机构在物理服务器上运行应用程序。无法为物理服务器中的应用程序定义资源边界,这会导致资源分配问题。
|
||||
例如,如果在物理服务器上运行多个应用程序,则可能会出现一个应用程序占用大部分资源的情况,
|
||||
结果可能导致其他应用程序的性能下降。
|
||||
一种解决方案是在不同的物理服务器上运行每个应用程序,但是由于资源利用不足而无法扩展,
|
||||
并且组织维护许多物理服务器的成本很高。
|
||||
并且维护许多物理服务器的成本很高。
|
||||
|
||||
<!--
|
||||
**Virtualized deployment era:**
|
||||
|
|
|
@ -255,7 +255,7 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
|
|||
-->
|
||||
### LIST 和 WATCH 过滤
|
||||
|
||||
LIST and WATCH 操作可以使用查询参数指定标签选择算符过滤一组对象。
|
||||
LIST 和 WATCH 操作可以使用查询参数指定标签选择算符过滤一组对象。
|
||||
两种需求都是允许的。(这里显示的是它们出现在 URL 查询字符串中)
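例如(仅作示意),基于等值的需求和基于集合的需求在 URL 查询字符串中大致形如下面注释中所示,与之等价的 kubectl 用法也一并列出:

```shell
# URL 查询字符串中的形式:
#   ?labelSelector=environment%3Dproduction,tier%3Dfrontend
#   ?labelSelector=environment+in+%28production%2Cqa%29
# 对应的 kubectl 用法:
kubectl get pods -l environment=production,tier=frontend
kubectl get pods -l 'environment in (production,qa)'
```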
|
||||
|
||||
<!--
|
||||
|
|
|
@ -67,7 +67,7 @@ kubectl taint nodes node1 key1=value1:NoSchedule-
|
|||
若要移除上述命令所添加的污点,你可以执行:
|
||||
|
||||
```shell
|
||||
kubectl taint nodes node1 key:NoSchedule-
|
||||
kubectl taint nodes node1 key1=value1:NoSchedule-
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -131,7 +131,7 @@ An empty `effect` matches all effects with key `key1`.
|
|||
如果一个容忍度的 `key` 为空且 operator 为 `Exists`,
|
||||
表示这个容忍度与任意的 key 、value 和 effect 都匹配,即这个容忍度能容忍任意 taint。
|
||||
|
||||
如果 `effect` 为空,则可以与所有键名 `key` 的效果相匹配。
|
||||
如果 `effect` 为空,则可以与所有键名 `key1` 的效果相匹配。
|
||||
{{< /note >}}
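上述两种情况对应的容忍度写法大致如下(仅作示意):

```yaml
tolerations:
# key 为空且 operator 为 Exists:能够容忍任意污点
- operator: "Exists"
# effect 为空:匹配键名为 key1 的所有效果
- key: "key1"
  operator: "Exists"
```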
|
||||
|
||||
<!--
|
||||
|
|
|
@ -9,11 +9,17 @@ weight: 70
|
|||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- lachie83
|
||||
- khenidak
|
||||
- aramase
|
||||
- bridgetkromhout
|
||||
title: IPv4/IPv6 dual-stack
|
||||
feature:
|
||||
title: IPv4/IPv6 dual-stack
|
||||
description: >
|
||||
Allocation of IPv4 and IPv6 addresses to Pods and Services
|
||||
|
||||
content_type: concept
|
||||
weight: 70
|
||||
-->
|
||||
|
@ -85,6 +91,20 @@ The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack
|
|||
|
||||
<!--
|
||||
To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the relevant components of your cluster, and set dual-stack cluster network assignments:
|
||||
|
||||
* kube-apiserver:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* kube-controller-manager:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` defaults to /24 for IPv4 and /64 for IPv6
|
||||
* kubelet:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
* kube-proxy:
|
||||
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
-->
|
||||
要启用 IPv4/IPv6 双协议栈,为集群的相关组件启用 `IPv6DualStack`
|
||||
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/),
|
||||
|
@ -95,8 +115,8 @@ To enable IPv4/IPv6 dual-stack, enable the `IPv6DualStack` [feature gate](/docs/
|
|||
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* kube-controller-manager:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>` 例如 `--cluster-cidr=10.244.0.0/16,fc00::/48`
|
||||
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>` 例如 `--service-cluster-ip-range=10.0.0.0/16,fd00::/108`
|
||||
* `--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* `--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>`
|
||||
* `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6` 对于 IPv4 默认为 /24,对于 IPv6 默认为 /64
|
||||
* kubelet:
|
||||
* `--feature-gates="IPv6DualStack=true"`
|
||||
|
@ -125,14 +145,14 @@ IPv6 CIDR 的一个例子:`fdXY:IJKL:MNOP:15::/64`(这里演示的是格式
|
|||
<!--
|
||||
If your cluster has dual-stack enabled, you can create {{< glossary_tooltip text="Services" term_id="service" >}} which can use IPv4, IPv6, or both.
|
||||
|
||||
The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager).
|
||||
The address family of a Service defaults to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver).
|
||||
|
||||
When you define a Service you can optionally configure it as dual stack. To specify the behavior you want, you
|
||||
set the `.spec.ipFamilyPolicy` field to one of the following values:
|
||||
-->
|
||||
如果你的集群启用了 IPv4/IPv6 双协议栈网络,则可以使用 IPv4 或 IPv6 地址来创建
|
||||
{{< glossary_tooltip text="Service" term_id="service" >}}。
|
||||
服务的地址族默认为第一个服务集群 IP 范围的地址族(通过 kube-controller-manager 的 `--service-cluster-ip-range` 参数配置)
|
||||
服务的地址族默认为第一个服务集群 IP 范围的地址族(通过 kube-apiserver 的 `--service-cluster-ip-range` 参数配置)。
|
||||
当你定义服务时,可以选择将其配置为双栈。若要指定所需的行为,你可以设置 `.spec.ipFamilyPolicy` 字段为以下值之一:
|
||||
|
||||
<!--
|
||||
|
@ -297,12 +317,12 @@ These examples demonstrate the default behavior when dual-stack is newly enabled
|
|||
```
|
||||
|
||||
<!--
|
||||
1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-controller-manager) even though `.spec.ClusterIP` is set to `None`.
|
||||
1. When dual-stack is enabled on a cluster, existing [headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors are configured by the control plane to set `.spec.ipFamilyPolicy` to `SingleStack` and set `.spec.ipFamilies` to the address family of the first service cluster IP range (configured via the `--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to `None`.
|
||||
-->
|
||||
2. 在集群上启用双栈时,带有选择算符的现有
|
||||
[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)
|
||||
由控制面设置 `.spec.ipFamilyPolicy` 为 `SingleStack`
|
||||
并设置 `.spec.ipFamilies` 为第一个服务群集 IP 范围的地址族(通过配置 kube-controller-manager 的
|
||||
并设置 `.spec.ipFamilies` 为第一个服务群集 IP 范围的地址族(通过配置 kube-apiserver 的
|
||||
`--service-cluster-ip-range` 参数),即使 `.spec.ClusterIP` 的设置值为 `None` 也如此。
|
||||
|
||||
{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
|
||||
|
@ -396,9 +416,9 @@ For [Headless Services without selectors](/docs/concepts/services-networking/ser
|
|||
若没有显式设置 `.spec.ipFamilyPolicy`,则 `.spec.ipFamilyPolicy` 字段默认设置为 `RequireDualStack`。
|
||||
|
||||
<!--
|
||||
### Type LoadBalancer
|
||||
### Service type LoadBalancer
|
||||
-->
|
||||
### LoadBalancer 类型
|
||||
### LoadBalancer 类型服务
|
||||
|
||||
<!--
|
||||
To provision a dual-stack load balancer for your Service:
|
||||
|
@ -418,7 +438,7 @@ To use a dual-stack `LoadBalancer` type Service, your cloud provider must suppor
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
## Egress Traffic
|
||||
## Egress traffic
|
||||
-->
|
||||
## 出站流量
|
||||
|
||||
|
@ -440,6 +460,4 @@ Ensure your {{< glossary_tooltip text="CNI" term_id="cni" >}} provider supports
|
|||
<!--
|
||||
* [Validate IPv4/IPv6 dual-stack](/docs/tasks/network/validate-dual-stack) networking
|
||||
-->
|
||||
* [验证 IPv4/IPv6 双协议栈](/zh/docs/tasks/network/validate-dual-stack)网络
|
||||
|
||||
|
||||
* [验证 IPv4/IPv6 双协议栈](/zh/docs/tasks/network/validate-dual-stack)网络
|
|
@ -705,7 +705,7 @@ sure the TLS secret you created came from a certificate that contains a Common
|
|||
Name (CN), also known as a Fully Qualified Domain Name (FQDN) for `https-example.foo.com`.
|
||||
-->
|
||||
在 Ingress 中引用此 Secret 将会告诉 Ingress 控制器使用 TLS 加密从客户端到负载均衡器的通道。
|
||||
你需要确保创建的 TLS Secret 创建自包含 `sslexample.foo.com` 的公用名称(CN)的证书。
|
||||
你需要确保创建的 TLS Secret 创建自包含 `https-example.foo.com` 的公用名称(CN)的证书。
|
||||
这里的公共名称也被称为全限定域名(FQDN)。
|
||||
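As an illustration, such a Secret can be created from an existing certificate/key pair with `kubectl` (a minimal sketch; the Secret name and file paths are placeholders, and the certificate is assumed to carry `https-example.foo.com` as its CN/FQDN):

```shell
# Create a TLS Secret from a certificate whose CN (FQDN) is https-example.foo.com.
# "testsecret-tls" and the file paths are illustrative placeholders.
kubectl create secret tls testsecret-tls \
  --cert=path/to/https-example.foo.com.crt \
  --key=path/to/https-example.foo.com.key
```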
|
||||
{{< note >}}
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: StatefulSets
|
||||
content_type: concept
|
||||
weight: 40
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!--
|
||||
|
@ -146,15 +146,25 @@ spec:
|
|||
```
|
||||
|
||||
<!--
|
||||
In the above example:
|
||||
|
||||
* A Headless Service, named `nginx`, is used to control the network domain.
|
||||
* The StatefulSet, named `web`, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
|
||||
* The `volumeClaimTemplates` will provide stable storage using [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) provisioned by a PersistentVolume Provisioner.
|
||||
|
||||
The name of a StatefulSet object must be a valid
|
||||
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
|
||||
|
||||
-->
|
||||
上述例子中:
|
||||
|
||||
* 名为 `nginx` 的 Headless Service 用来控制网络域名。
|
||||
* 名为 `web` 的 StatefulSet 有一个 Spec,它表明将在独立的 3 个 Pod 副本中启动 nginx 容器。
|
||||
* `volumeClaimTemplates` 将通过 PersistentVolume 制备程序(Provisioner)所制备的
|
||||
  [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/) 来提供稳定的存储。
|
||||
|
||||
StatefulSet 的命名需要遵循[DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)规范。
|
||||
|
||||
<!--
|
||||
## Pod Selector
|
||||
-->
|
||||
|
@ -217,9 +227,48 @@ StatefulSet 可以使用 [无头服务](/zh/docs/concepts/services-networking/se
|
|||
一旦每个 Pod 创建成功,就会得到一个匹配的 DNS 子域,格式为:
|
||||
`$(pod 名称).$(所属服务的 DNS 域名)`,其中所属服务由 StatefulSet 的 `serviceName` 域来设定。
|
||||
|
||||
<!--
|
||||
Depending on how DNS is configured in your cluster, you may not be able to look up the DNS
|
||||
name for a newly-run Pod immediately. This behavior can occur when other clients in the
|
||||
cluster have already sent queries for the hostname of the Pod before it was created.
|
||||
Negative caching (normal in DNS) means that the results of previous failed lookups are
|
||||
remembered and reused, even after the Pod is running, for at least a few seconds.
|
||||
|
||||
If you need to discover Pods promptly after they are created, you have a few options:
|
||||
|
||||
- Query the Kubernetes API directly (for example, using a watch) rather than relying on DNS lookups.
|
||||
- Decrease the time of caching in your Kubernetes DNS provider (typically this means editing the config map for CoreDNS, which currently caches for 30 seconds).
|
||||
|
||||
|
||||
As mentioned in the [limitations](#limitations) section, you are responsible for
|
||||
creating the [Headless Service](/docs/concepts/services-networking/service/#headless-services)
|
||||
responsible for the network identity of the pods.
|
||||
|
||||
-->
|
||||
取决于集群中 DNS 的配置方式,你可能无法立即查询到新近启动的 Pod 的 DNS 名称。
|
||||
当集群内其他客户端在 Pod 创建完成前发出 Pod 主机名查询时,就会发生这种情况。
|
||||
负缓存 (在 DNS 中较为常见) 意味着之前失败的查询结果会被记录和重用至少若干秒钟,
|
||||
即使 Pod 已经正常运行了也是如此。
|
||||
|
||||
如果需要在 Pod 被创建之后及时发现它们,有以下选项:
|
||||
|
||||
- 直接查询 Kubernetes API(比如,利用 watch 机制)而不是依赖于 DNS 查询
|
||||
- 缩短 Kubernetes DNS 驱动的缓存时长(通常这意味着修改 CoreDNS 的 ConfigMap,目前缓存时长为 30 秒)
|
||||
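For the second option above, here is a minimal sketch of where that cache setting lives, assuming a default installation in which CoreDNS reads its Corefile from the `coredns` ConfigMap in `kube-system`:

```shell
# Open the CoreDNS configuration for editing; the cache duration is the number
# after the `cache` directive in the Corefile (for example `cache 30`).
kubectl -n kube-system edit configmap coredns
```

Lowering that value (say, to `cache 5` — an illustrative number, not a recommendation) shortens how long failed lookups are remembered.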
|
||||
正如[限制](#limitations)中所述,你需要负责创建[无头服务](/zh/docs/concepts/services-networking/service/#headless-services)
|
||||
以便为 Pod 提供网络标识。
|
||||
|
||||
<!--
|
||||
Here are some examples of choices for Cluster Domain, Service name,
|
||||
StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
|
||||
|
||||
|
||||
Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain | Pod DNS | Pod Hostname |
|
||||
-------------- | ----------------- | ----------------- | -------------- | ------- | ------------ |
|
||||
cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1} |
|
||||
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
|
||||
kube.local | foo/nginx | foo/web | nginx.foo.svc.kube.local | web-{0..N-1}.nginx.foo.svc.kube.local | web-{0..N-1} |
|
||||
|
||||
-->
|
||||
下面给出一些选择集群域、服务名、StatefulSet 名、及其怎样影响 StatefulSet 的 Pod 上的 DNS 名称的示例:
|
||||
|
||||
|
@ -350,12 +399,14 @@ described [above](#deployment-and-scaling-guarantees).
|
|||
`Parallel` pod management tells the StatefulSet controller to launch or
|
||||
terminate all Pods in parallel, and to not wait for Pods to become Running
|
||||
and Ready or completely terminated prior to launching or terminating another
|
||||
Pod.
|
||||
Pod. This option only affects the behavior for scaling operations. Updates are not affected.
|
||||
|
||||
-->
|
||||
#### 并行 Pod 管理 {#parallel-pod-management}
|
||||
|
||||
`Parallel` Pod 管理让 StatefulSet 控制器并行的启动或终止所有的 Pod,
|
||||
启动或者终止其他 Pod 前,无需等待 Pod 进入 Running 和 ready 或者完全停止状态。
|
||||
这个选项只会影响伸缩操作的行为,更新则不会被影响。
|
||||
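A minimal sketch of where this field sits in a StatefulSet spec (the names, image, and replica count below are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                         # illustrative name
spec:
  podManagementPolicy: "Parallel"   # launch/terminate Pods in parallel; only affects scaling
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
```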
|
||||
<!--
|
||||
## Update Strategies
|
||||
|
@ -476,4 +527,3 @@ StatefulSet 才会开始使用被还原的模板来重新创建 Pod。
|
|||
* 示例一:[部署有状态应用](/zh/docs/tutorials/stateful-application/basic-stateful-set/)。
|
||||
* 示例二:[使用 StatefulSet 部署 Cassandra](/zh/docs/tutorials/stateful-application/cassandra/)。
|
||||
* 示例三:[运行多副本的有状态应用程序](/zh/docs/tasks/run-application/run-replicated-stateful-application/)。
|
||||
|
||||
|
|
|
@ -489,7 +489,7 @@ Processes within a privileged container get almost the same privileges that are
|
|||
Pod 中的任何容器都可以使用容器规约中的
|
||||
[安全性上下文](/zh/docs/tasks/configure-pod-container/security-context/)中的
|
||||
`privileged` 参数启用特权模式。
|
||||
这对于想要使用使用操作系统管理权能(Capabilities,如操纵网络堆栈和访问设备)
|
||||
这对于想要使用操作系统管理权能(Capabilities,如操纵网络堆栈和访问设备)
|
||||
的容器很有用。
|
||||
容器内的进程几乎可以获得与容器外的进程相同的特权。
|
||||
|
||||
|
|
|
@ -162,7 +162,7 @@ disruptions, if any, to expect.
|
|||
或托管提供商可能运行一些可能导致自愿干扰的额外服务。例如,节点软
|
||||
更新可能导致自愿干扰。另外,集群(节点)自动缩放的某些
|
||||
实现可能导致碎片整理和紧缩节点的自愿干扰。集群
|
||||
理员或托管提供商应该已经记录了各级别的自愿干扰(如果有的话)。
|
||||
管理员或托管提供商应该已经记录了各级别的自愿干扰(如果有的话)。
|
||||
|
||||
<!--
|
||||
Kubernetes offers features to help run highly available applications at the same
|
||||
|
|
|
@ -36,10 +36,10 @@ tags:
|
|||
Their primary responsibility is keeping a cluster up and running, which may involve periodic maintenance activities or upgrades.<br>
|
||||
|
||||
|
||||
**NOTE:** Cluster operators are different from the [Operator pattern](https://coreos.com/operators) that extends the Kubernetes API.
|
||||
**NOTE:** Cluster operators are different from the [Operator pattern](https://www.openshift.com/learn/topics/operators) that extends the Kubernetes API.
|
||||
-->
|
||||
|
||||
他们的主要责任是保持集群正常运行,可能需要进行周期性的维护和升级活动。<br>
|
||||
|
||||
**注意:** 集群操作者不同于[操作者模式(Operator Pattern)](https://coreos.com/operators),操作者模式是用来扩展 Kubernetes API 的。
|
||||
**注意:** 集群操作者不同于[操作者模式(Operator Pattern)](https://www.openshift.com/learn/topics/operators),操作者模式是用来扩展 Kubernetes API 的。
|
||||
|
||||
|
|
|
@ -345,5 +345,5 @@ kubectl delete deployment frontend backend
|
|||
-->
|
||||
* 进一步了解 [Service](/zh/docs/concepts/services-networking/service/)
|
||||
* 进一步了解 [ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)
|
||||
* 进一步了解 [Service 和 Pods 的 DNS](/docs/concepts/services-networking/dns-pod-service/)
|
||||
* 进一步了解 [Service 和 Pods 的 DNS](/zh/docs/concepts/services-networking/dns-pod-service/)
|
||||
|
||||
|
|
|
@ -277,7 +277,7 @@ The following file is an Ingress resource that sends traffic to your Service via
|
|||
If you are running Minikube locally, you can visit hello-world.info from your browser.
|
||||
-->
|
||||
{{< note >}}
|
||||
如果你在使用本地 Minikube 环境,你可以从浏览器中访问 hellow-world.info。
|
||||
如果你在使用本地 Minikube 环境,你可以从浏览器中访问 hello-world.info。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -396,7 +396,7 @@ The following file is an Ingress resource that sends traffic to your Service via
|
|||
-->
|
||||
{{< note >}}
|
||||
如果你在本地运行 Minikube 环境,你可以使用浏览器来访问
|
||||
hellow-world.info 和 hello-world.info/v2。
|
||||
hello-world.info 和 hello-world.info/v2。
|
||||
{{< /note >}}
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -4,7 +4,6 @@ content_type: task
|
|||
min-kubernetes-server-version: 1.5
|
||||
---
|
||||
<!--
|
||||
---
|
||||
reviewers:
|
||||
- davidopp
|
||||
- mml
|
||||
|
@ -13,7 +12,6 @@ reviewers:
|
|||
title: Safely Drain a Node
|
||||
content_type: task
|
||||
min-kubernetes-server-version: 1.5
|
||||
---
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -127,7 +125,7 @@ Once it returns (without giving an error), you can power down the node
|
|||
If you leave the node in the cluster during the maintenance operation, you need to run
|
||||
-->
|
||||
一旦它返回(没有报错),
|
||||
你就可以下电此节点(或者等价地,如果在云平台上,删除支持该节点的虚拟机)。
|
||||
你就可以下线此节点(或者等价地,如果在云平台上,删除支持该节点的虚拟机)。
|
||||
如果要在维护操作期间将节点留在集群中,则需要运行:
|
||||
|
||||
```shell
|
||||
|
@ -264,7 +262,14 @@ eviction API will never return anything other than 429 or 500.
|
|||
For example: this can happen if ReplicaSet is creating Pods for your application but
|
||||
the replacement Pods do not become `Ready`. You can also see similar symptoms if the
|
||||
last Pod evicted has a very long termination grace period.
|
||||
-->
|
||||
## 驱逐阻塞
|
||||
|
||||
在某些情况下,应用程序可能会到达一个中断状态,除了 429 或 500 之外,它将永远不会返回任何内容。
|
||||
例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个
|
||||
Pod 有很长的终止宽限期,就会发生这种情况。
|
||||
|
||||
<!--
|
||||
In this case, there are two potential solutions:
|
||||
|
||||
- Abort or pause the automated operation. Investigate the reason for the stuck application,
|
||||
|
@ -275,28 +280,18 @@ In this case, there are two potential solutions:
|
|||
Kubernetes does not specify what the behavior should be in this case; it is up to the
|
||||
application owners and cluster owners to establish an agreement on behavior in these cases.
|
||||
-->
|
||||
## 驱逐阻塞
|
||||
|
||||
在某些情况下,应用程序可能会到达一个中断状态,除了 429 或 500 之外,它将永远不会返回任何内容。
|
||||
例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个
|
||||
Pod 有很长的终止宽限期,就会发生这种情况。
|
||||
|
||||
在这种情况下,有两种可能的解决方案:
|
||||
|
||||
- 中止或暂停自动操作。调查应用程序卡住的原因,并重新启动自动化。
|
||||
- 经过适当的长时间等待后, 从集群中删除 Pod 而不是使用驱逐 API。
|
||||
- 经过适当的长时间等待后,从集群中删除 Pod 而不是使用驱逐 API。
|
||||
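A sketch of the second option (`<pod-name>` and `<namespace>` are placeholders):

```shell
# After waiting long enough, remove the stuck Pod directly instead of going
# through the eviction API.
kubectl delete pod <pod-name> --namespace <namespace>
```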
|
||||
Kubernetes 并没有具体说明在这种情况下应该采取什么行为,
|
||||
这应该由应用程序所有者和集群所有者紧密沟通,并达成对行动一致意见。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
<!--
|
||||
* Follow steps to protect your application by [configuring a Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/).
|
||||
* Learn more about [maintenance on a node](/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node).
|
||||
-->
|
||||
-->
|
||||
* 执行[配置 PDB](/zh/docs/tasks/run-application/configure-pdb/)中的各个步骤,
|
||||
保护你的应用
|
||||
* 进一步了解[节点维护](/zh/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node)。
|
||||
|
||||
|
|
|
@ -40,26 +40,25 @@ We have multiple ways to install Kompose. Our preferred method is downloading th
|
|||
|
||||
我们有很多种方式安装 Kompose。首选方式是从最新的 GitHub 发布页面下载二进制文件。
|
||||
|
||||
<!--
|
||||
## GitHub release
|
||||
{{< tabs name="install_ways" >}}
|
||||
{{% tab name="GitHub 下载" %}}
|
||||
|
||||
<!--
|
||||
Kompose is released via GitHub on a three-week cycle, you can see all current releases on the [GitHub release page](https://github.com/kubernetes/kompose/releases).
|
||||
-->
|
||||
## GitHub 发布版本
|
||||
|
||||
Kompose 通过 GitHub 发布版本,发布周期为三星期。
|
||||
Kompose 通过 GitHub 发布,发布周期为三星期。
|
||||
你可以在 [GitHub 发布页面](https://github.com/kubernetes/kompose/releases)
|
||||
上看到所有当前版本。
|
||||
|
||||
```shell
|
||||
# Linux
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-linux-amd64 -o kompose
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
|
||||
|
||||
# macOS
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-darwin-amd64 -o kompose
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose
|
||||
|
||||
# Windows
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-windows-amd64.exe -o kompose.exe
|
||||
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-windows-amd64.exe -o kompose.exe
|
||||
|
||||
chmod +x kompose
|
||||
sudo mv ./kompose /usr/local/bin/kompose
|
||||
|
@ -68,9 +67,10 @@ sudo mv ./kompose /usr/local/bin/kompose
|
|||
<!--
|
||||
Alternatively, you can download the [tarball](https://github.com/kubernetes/kompose/releases).
|
||||
-->
|
||||
或者,你可以下载 [tarball](https://github.com/kubernetes/kompose/releases)。
|
||||
或者,你可以下载 [tar 包](https://github.com/kubernetes/kompose/releases)。
|
||||
|
||||
## Go
|
||||
{{% /tab %}}
|
||||
{{% tab name="基于源代码构建" %}}
|
||||
|
||||
<!--
|
||||
Installing using `go get` pulls from the master branch with the latest development changes.
|
||||
|
@ -81,7 +81,8 @@ Installing using `go get` pulls from the master branch with the latest developme
|
|||
go get -u github.com/kubernetes/kompose
|
||||
```
|
||||
|
||||
## CentOS
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS 包" %}}
|
||||
|
||||
<!--
|
||||
Kompose is in [EPEL](https://fedoraproject.org/wiki/EPEL) CentOS repository.
|
||||
|
@ -101,7 +102,8 @@ If you have [EPEL](https://fedoraproject.org/wiki/EPEL) enabled in your system,
|
|||
sudo yum -y install kompose
|
||||
```
|
||||
|
||||
## Fedora
|
||||
{{% /tab %}}
|
||||
{{% tab name="Fedora package" %}}
|
||||
|
||||
<!--
|
||||
Kompose is in Fedora 24, 25 and 26 repositories. You can install it just like any other package.
|
||||
|
@ -112,7 +114,8 @@ Kompose 位于 Fedora 24、25 和 26 的代码仓库。你可以像安装其他
|
|||
sudo dnf -y install kompose
|
||||
```
|
||||
|
||||
## macOS
|
||||
{{% /tab %}}
|
||||
{{% tab name="Homebrew (macOS)" %}}
|
||||
|
||||
<!--
|
||||
On macOS you can install latest release via [Homebrew](https://brew.sh):
|
||||
|
@ -123,6 +126,9 @@ On macOS you can install latest release via [Homebrew](https://brew.sh):
|
|||
brew install kompose
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
## Use Kompose
|
||||
-->
|
||||
|
@ -135,129 +141,139 @@ you need is an existing `docker-compose.yml` file.
|
|||
再需几步,我们就把你从 Docker Compose 带到 Kubernetes。
|
||||
你只需要一个现有的 `docker-compose.yml` 文件。
|
||||
|
||||
1. <!--Go to the directory containing your `docker-compose.yml` file. If you don't
|
||||
have one, test using this one.-->
|
||||
进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个进行测试。
|
||||
1. <!--Go to the directory containing your `docker-compose.yml` file. If you don't
|
||||
have one, test using this one.-->
|
||||
进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个进行测试。
|
||||
|
||||
```yaml
|
||||
version: "2"
|
||||
```yaml
|
||||
version: "2"
|
||||
|
||||
services:
|
||||
services:
|
||||
|
||||
redis-master:
|
||||
image: k8s.gcr.io/redis:e2e
|
||||
ports:
|
||||
- "6379"
|
||||
redis-master:
|
||||
image: k8s.gcr.io/redis:e2e
|
||||
ports:
|
||||
- "6379"
|
||||
|
||||
redis-slave:
|
||||
image: gcr.io/google_samples/gb-redisslave:v3
|
||||
ports:
|
||||
- "6379"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
redis-slave:
|
||||
image: gcr.io/google_samples/gb-redisslave:v3
|
||||
ports:
|
||||
- "6379"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
|
||||
frontend:
|
||||
image: gcr.io/google-samples/gb-frontend:v4
|
||||
ports:
|
||||
- "80:80"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
labels:
|
||||
kompose.service.type: LoadBalancer
|
||||
```
|
||||
frontend:
|
||||
image: gcr.io/google-samples/gb-frontend:v4
|
||||
ports:
|
||||
- "80:80"
|
||||
environment:
|
||||
- GET_HOSTS_FROM=dns
|
||||
labels:
|
||||
kompose.service.type: LoadBalancer
|
||||
```
|
||||
|
||||
2. <!--Run the `kompose up` command to deploy to Kubernetes directly, or skip to
|
||||
the next step instead to generate a file to use with `kubectl`.-->
|
||||
运行 `kompose up` 命令直接部署到 Kubernetes,或者跳到下一步,生成 `kubectl` 使用的文件。
|
||||
<!--
|
||||
2. To convert the `docker-compose.yml` file to files that you can use with
|
||||
`kubectl`, run `kompose convert` and then `kubectl create -f <output file>`.
|
||||
-->
|
||||
2. 要将 `docker-compose.yml` 转换为 `kubectl` 可用的文件,请运行 `kompose convert`
|
||||
命令进行转换,然后运行 `kubectl create -f <output file>` 进行创建。
|
||||
|
||||
```bash
|
||||
$ kompose up
|
||||
We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
|
||||
```shell
|
||||
kompose convert
|
||||
```
|
||||
|
||||
INFO Successfully created Service: redis
|
||||
INFO Successfully created Service: web
|
||||
INFO Successfully created Deployment: redis
|
||||
INFO Successfully created Deployment: web
|
||||
```none
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
|
||||
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
|
||||
```
|
||||
```bash
|
||||
kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
|
||||
```
|
||||
|
||||
3. <!--To convert the `docker-compose.yml` file to files that you can use with
|
||||
`kubectl`, run `kompose convert` and then `kubectl create -f <output file>`.-->
|
||||
要将 `docker-compose.yml` 转换为 `kubectl` 可用的文件,请运行 `kompose convert` 命令进行转换,
|
||||
然后运行 `kubectl create -f <output file>` 进行创建。
|
||||
<!--
|
||||
The output is similar to:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```shell
|
||||
kompose convert
|
||||
```
|
||||
```none
|
||||
service/frontend created
|
||||
service/redis-master created
|
||||
service/redis-slave created
|
||||
deployment.apps/frontend created
|
||||
deployment.apps/redis-master created
|
||||
deployment.apps/redis-slave created
|
||||
```
|
||||
|
||||
```
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "redis-master-service.yaml" created
|
||||
INFO Kubernetes file "redis-slave-service.yaml" created
|
||||
INFO Kubernetes file "frontend-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-master-deployment.yaml" created
|
||||
INFO Kubernetes file "redis-slave-deployment.yaml" created
|
||||
```
|
||||
<!--
|
||||
Your deployments are running in Kubernetes.
|
||||
-->
|
||||
你部署的应用在 Kubernetes 中运行起来了。
|
||||
|
||||
```shell
|
||||
kubectl create -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
|
||||
```
|
||||
<!--
|
||||
3. Access your application.
|
||||
-->
|
||||
3. 访问你的应用
|
||||
|
||||
```
|
||||
service/frontend created
|
||||
service/redis-master created
|
||||
service/redis-slave created
|
||||
deployment.apps/frontend created
|
||||
deployment.apps/redis-master created
|
||||
deployment.apps/redis-slave created
|
||||
```
|
||||
<!--
|
||||
If you're already using `minikube` for your development process:
|
||||
-->
|
||||
|
||||
<!--
|
||||
Your deployments are running in Kubernetes.
|
||||
-->
|
||||
你部署的应用在 Kubernetes 中运行起来了。
|
||||
如果你在开发过程中使用 `minikube`,请执行:
|
||||
|
||||
4. <!--Access your application.-->
|
||||
访问你的应用
|
||||
```shell
|
||||
minikube service frontend
|
||||
```
|
||||
|
||||
<!--If you're already using `minikube` for your development process:-->
|
||||
<!--
|
||||
Otherwise, let's look up what IP your service is using!
|
||||
-->
|
||||
否则,我们要查看一下你的服务使用了什么 IP!
|
||||
|
||||
如果你在开发过程中使用 `minikube`,请执行:
|
||||
```shell
|
||||
kubectl describe svc frontend
|
||||
```
|
||||
|
||||
```shell
|
||||
minikube service frontend
|
||||
```
|
||||
```none
|
||||
Name: frontend
|
||||
Namespace: default
|
||||
Labels: service=frontend
|
||||
Selector: service=frontend
|
||||
Type: LoadBalancer
|
||||
IP: 10.0.0.183
|
||||
LoadBalancer Ingress: 192.0.2.89
|
||||
Port: 80 80/TCP
|
||||
NodePort: 80 31144/TCP
|
||||
Endpoints: 172.17.0.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
<!--Otherwise, let's look up what IP your service is using!-->
|
||||
否则,我们要查看一下你的服务使用了什么 IP!
|
||||
<!--
|
||||
If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.
|
||||
-->
|
||||
如果你使用的是云提供商,你的 IP 将在 `LoadBalancer Ingress` 字段给出。
|
||||
|
||||
```shell
|
||||
kubectl describe svc frontend
|
||||
```
|
||||
|
||||
```
|
||||
Name: frontend
|
||||
Namespace: default
|
||||
Labels: service=frontend
|
||||
Selector: service=frontend
|
||||
Type: LoadBalancer
|
||||
IP: 10.0.0.183
|
||||
LoadBalancer Ingress: 192.0.2.89
|
||||
Port: 80 80/TCP
|
||||
NodePort: 80 31144/TCP
|
||||
Endpoints: 172.17.0.4:80
|
||||
Session Affinity: None
|
||||
No events.
|
||||
```
|
||||
|
||||
<!--If you're using a cloud provider, your IP will be listed next to `LoadBalancer Ingress`.-->
|
||||
如果你使用的是云提供商,你的 IP 将在 `LoadBalancer Ingress` 字段给出。
|
||||
|
||||
```shell
|
||||
curl http://192.0.2.89
|
||||
```
|
||||
```shell
|
||||
curl http://192.0.2.89
|
||||
```
|
||||
|
||||
<!-- discussion -->
|
||||
|
||||
|
@ -284,29 +300,37 @@ you need is an existing `docker-compose.yml` file.
|
|||
- [`kompose down`](#kompose-down)
|
||||
|
||||
- 文档
|
||||
- [构建和推送 Docker 镜像](#构建和推送-docker-镜像)
|
||||
- [构建和推送 Docker 镜像](#build-and-push-docker-images)
|
||||
- [其他转换方式](#其他转换方式)
|
||||
- [标签](#标签)
|
||||
- [重启](#重启)
|
||||
- [Docker Compose 版本](#docker-compose-版本)
|
||||
- [标签](#labels)
|
||||
- [重启](#restart)
|
||||
- [Docker Compose 版本](#docker-compose-versions)
|
||||
|
||||
<!--
|
||||
Kompose has support for two providers: OpenShift and Kubernetes.
|
||||
You can choose a targeted provider using global option `--provider`. If no provider is specified, Kubernetes is set by default.
|
||||
-->
|
||||
Kompose 支持两种驱动:OpenShift 和 Kubernetes。
|
||||
你可以通过全局选项 `--provider` 选择驱动方式。如果没有指定,会将 Kubernetes 作为默认驱动。
|
||||
你可以通过全局选项 `--provider` 选择驱动。如果没有指定,
|
||||
会将 Kubernetes 作为默认驱动。
|
||||
|
||||
## `kompose convert`
|
||||
|
||||
<!--
|
||||
Kompose supports conversion of V1, V2, and V3 Docker Compose files into Kubernetes and OpenShift objects.
|
||||
-->
|
||||
Kompose 支持将 V1、V2 和 V3 版本的 Docker Compose 文件转换为 Kubernetes 和 OpenShift 资源对象。
|
||||
|
||||
### Kubernetes
|
||||
<!--
|
||||
### Kubernetes `kompose convert` example
|
||||
-->
|
||||
### Kubernetes `kompose convert` 示例
|
||||
|
||||
```shell
|
||||
kompose --file docker-voting.yml convert
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
WARN Unsupported key networks - ignoring
|
||||
WARN Unsupported key build - ignoring
|
||||
INFO Kubernetes file "worker-svc.yaml" created
|
||||
|
@ -325,7 +349,7 @@ INFO Kubernetes file "db-deployment.yaml" created
|
|||
ls
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
|
||||
db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml
|
||||
```
|
||||
|
@ -338,7 +362,8 @@ You can also provide multiple docker-compose files at the same time:
|
|||
```shell
|
||||
kompose -f docker-compose.yml -f docker-guestbook.yml convert
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "frontend-service.yaml" created
|
||||
INFO Kubernetes file "mlbparks-service.yaml" created
|
||||
INFO Kubernetes file "mongodb-service.yaml" created
|
||||
|
@ -368,7 +393,10 @@ When multiple docker-compose files are provided the configuration is merged. Any
|
|||
-->
|
||||
当提供多个 docker-compose 文件时,配置将会合并。任何通用的配置都将被后续文件覆盖。
|
||||
|
||||
### OpenShift
|
||||
<!--
|
||||
### OpenShift `kompose convert` example
|
||||
-->
|
||||
### OpenShift `kompose convert` 示例
|
||||
|
||||
```shell
|
||||
kompose --provider openshift --file docker-voting.yml convert
|
||||
|
@ -403,7 +431,7 @@ kompose 还支持为服务中的构建指令创建 buildconfig。
|
|||
kompose --provider openshift --file buildconfig/docker-compose.yml convert
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
WARN [foo] Service cannot be created because of missing port.
|
||||
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
|
||||
INFO OpenShift file "foo-deploymentconfig.yaml" created
|
||||
|
@ -424,15 +452,19 @@ imagestream 工件,以解决 Openshift 的这个问题:https://github.com/op
|
|||
<!--
|
||||
Kompose supports a straightforward way to deploy your "composed" application to Kubernetes or OpenShift via `kompose up`.
|
||||
-->
|
||||
Kompose 支持通过 `kompose up` 直接将你的"复合的(composed)" 应用程序部署到 Kubernetes 或 OpenShift。
|
||||
Kompose 支持通过 `kompose up` 直接将你的"复合的(composed)" 应用程序
|
||||
部署到 Kubernetes 或 OpenShift。
|
||||
|
||||
### Kubernetes
|
||||
<!--
|
||||
### Kubernetes `kompose up` example
|
||||
-->
|
||||
### Kubernetes `kompose up` 示例
|
||||
|
||||
```shell
|
||||
kompose --file ./examples/docker-guestbook.yml up
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
We are going to create Kubernetes deployments and services for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
|
||||
|
||||
|
@ -468,26 +500,27 @@ pod/redis-master-1432129712-63jn8 1/1 Running 0 4m
|
|||
pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m
|
||||
```
|
||||
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
**Note**:
|
||||
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
- Only deployments and services are generated and deployed to Kubernetes. If you need different kind of resources, use the `kompose convert` and `kubectl create -f` commands instead.
|
||||
-->
|
||||
|
||||
**注意**:
|
||||
|
||||
- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。
|
||||
- 此操作仅生成 Deployment 和 Service 对象并将其部署到 Kubernetes。
|
||||
如果需要部署其他不同类型的资源,请使用 `kompose convert` 和 `kubectl create -f` 命令。
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
### OpenShift
|
||||
<!--
|
||||
### OpenShift `kompose up` example
|
||||
-->
|
||||
### OpenShift `kompose up` 示例
|
||||
|
||||
```shell
|
||||
kompose --file ./examples/docker-guestbook.yml --provider openshift up
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application.
|
||||
If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead.
|
||||
|
||||
|
@ -508,7 +541,7 @@ Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is'
|
|||
oc get dc,svc,is
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
NAME REVISION DESIRED CURRENT TRIGGERED BY
|
||||
dc/frontend 0 1 0 config,image(frontend:v4)
|
||||
dc/redis-master 0 1 0 config,image(redis-master:e2e)
|
||||
|
@ -523,20 +556,18 @@ is/redis-master 172.30.12.200:5000/fff/redis-master
|
|||
is/redis-slave 172.30.12.200:5000/fff/redis-slave v1
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
**Note**:
|
||||
- You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
|
||||
You must have a running OpenShift cluster with a pre-configured `oc` context (`oc login`)
|
||||
-->
|
||||
**注意**:
|
||||
|
||||
- 你必须有一个运行正常的 OpenShift 集群,该集群具有预先配置的 `oc` 上下文 (`oc login`)。
|
||||
你必须有一个运行正常的 OpenShift 集群,该集群具有预先配置的 `oc` 上下文 (`oc login`)。
|
||||
{{< /note >}}
|
||||
|
||||
## `kompose down`
|
||||
|
||||
<!--
|
||||
Once you have deployed "composed" application to Kubernetes, `$ kompose down` will help you to take the application out by deleting its deployments and services. If you need to remove other resources, use the 'kubectl' command.
|
||||
-->
|
||||
|
||||
你一旦将"复合(composed)" 应用部署到 Kubernetes,`kompose down`
|
||||
命令将能帮你通过删除 Deployment 和 Service 对象来删除应用。
|
||||
如果需要删除其他资源,请使用 'kubectl' 命令。
|
||||
|
@ -554,26 +585,27 @@ INFO Successfully deleted service: frontend
|
|||
INFO Successfully deleted deployment: frontend
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
**Note**:
|
||||
|
||||
- You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
You must have a running Kubernetes cluster with a pre-configured kubectl context.
|
||||
-->
|
||||
- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
## Build and Push Docker Images
|
||||
|
||||
Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
|
||||
|
||||
- Automatically be built with Docker using the `image` key specified within your file
|
||||
- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
|
||||
|
||||
Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
|
||||
-->
|
||||
## 构建和推送 Docker 镜像 {#build-and-push-docker-images}
|
||||
|
||||
**注意**:
|
||||
|
||||
- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。
|
||||
|
||||
## 构建和推送 Docker 镜像
|
||||
|
||||
Kompose 支持构建和推送 Docker 镜像。如果 Docker Compose 文件中使用了 `build` 关键字,你的镜像将会:
|
||||
Kompose 支持构建和推送 Docker 镜像。如果 Docker Compose 文件中使用了 `build`
|
||||
关键字,你的镜像将会:
|
||||
|
||||
- 使用文件中指定的 `image` 键自动构建 Docker 镜像
|
||||
- 使用本地凭据(位于 `.docker/config`)推送到正确的 Docker 仓库
|
||||
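A minimal sketch of a Compose file that triggers this behavior (the build context `./build` and image name `docker.io/foo/bar` are illustrative, chosen to match the output shown further below):

```yaml
version: "2"

services:
  foo:
    build: "./build"          # directory containing the Dockerfile to build
    image: docker.io/foo/bar  # tag applied to the built image and pushed with local credentials
```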
|
@ -598,7 +630,7 @@ Using `kompose up` with a `build` key:
|
|||
kompose up
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
|
||||
INFO Building image 'docker.io/foo/bar' from directory 'build'
|
||||
INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
|
||||
|
@ -621,10 +653,10 @@ In order to disable the functionality, or choose to use BuildConfig generation (
|
|||
可以通过传递 `--build (local|build-config|none)` 参数来实现。
|
||||
|
||||
```shell
|
||||
# Disable building/pushing Docker images
|
||||
# 禁止构造和推送 Docker 镜像
|
||||
kompose up --build none
|
||||
|
||||
# Generate Build Config artifacts for OpenShift
|
||||
# 为 OpenShift 生成 Build Config 工件
|
||||
kompose up --provider openshift --build build-config
|
||||
```
|
||||
|
||||
|
@ -633,7 +665,7 @@ kompose up --provider openshift --build build-config
|
|||
|
||||
The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts.
|
||||
-->
|
||||
## 其他转换方式
|
||||
## 其他转换方式 {#alternative-conversions}
|
||||
|
||||
默认的 `kompose` 转换会生成 yaml 格式的 Kubernetes
|
||||
[Deployment](/zh/docs/concepts/workloads/controllers/deployment/) 和
|
||||
|
@ -646,7 +678,8 @@ The default `kompose` transformation will generate Kubernetes [Deployments](/doc
|
|||
```shell
|
||||
kompose convert -j
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "redis-svc.json" created
|
||||
INFO Kubernetes file "web-svc.json" created
|
||||
INFO Kubernetes file "redis-deployment.json" created
|
||||
|
@ -661,7 +694,8 @@ The `*-deployment.json` files contain the Deployment objects.
|
|||
```shell
|
||||
kompose convert --replication-controller
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-replicationcontroller.yaml" created
|
||||
|
@ -671,7 +705,6 @@ INFO Kubernetes file "web-replicationcontroller.yaml" created
|
|||
<!--
|
||||
The `*-replicationcontroller.yaml` files contain the Replication Controller objects. If you want to specify replicas (default is 1), use `--replicas` flag: `$ kompose convert --replication-controller --replicas 3`
|
||||
-->
|
||||
|
||||
`*-replicationcontroller.yaml` 文件包含 Replication Controller 对象。
|
||||
如果你想指定副本数(默认为 1),可以使用 `--replicas` 参数:
|
||||
`kompose convert --replication-controller --replicas 3`
|
||||
|
@ -680,7 +713,7 @@ The `*-replicationcontroller.yaml` files contain the Replication Controller obje
|
|||
kompose convert --daemon-set
|
||||
```
|
||||
|
||||
```
|
||||
```none
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-daemonset.yaml" created
|
||||
|
@ -688,17 +721,19 @@ INFO Kubernetes file "web-daemonset.yaml" created
|
|||
```
|
||||
|
||||
<!--
|
||||
The `*-daemonset.yaml` files contain the Daemon Set objects
|
||||
The `*-daemonset.yaml` files contain the DaemonSet objects
|
||||
If you want to generate a Chart to be used with [Helm](https://github.com/kubernetes/helm) simply do:
|
||||
-->
|
||||
`*-daemonset.yaml` 文件包含 Daemon Set 对象。
|
||||
`*-daemonset.yaml` 文件包含 DaemonSet 对象。
|
||||
|
||||
如果你想生成 [Helm](https://github.com/kubernetes/helm) 可用的 Chart,只需简单的执行下面的命令:
|
||||
如果你想生成 [Helm](https://github.com/kubernetes/helm) 可用的 Chart,
|
||||
只需简单的执行下面的命令:
|
||||
|
||||
```shell
|
||||
kompose convert -c
|
||||
```
|
||||
```
|
||||
|
||||
```none
|
||||
INFO Kubernetes file "web-svc.yaml" created
|
||||
INFO Kubernetes file "redis-svc.yaml" created
|
||||
INFO Kubernetes file "web-deployment.yaml" created
|
||||
|
@ -734,9 +769,10 @@ The chart structure is aimed at providing a skeleton for building your Helm char
|
|||
|
||||
For example:
|
||||
-->
|
||||
## 标签
|
||||
## 标签 {#labels}
|
||||
|
||||
`kompose` 支持 `docker-compose.yml` 文件中用于 Kompose 的标签,以便在转换时明确定义 Service 的行为。
|
||||
`kompose` 支持 `docker-compose.yml` 文件中用于 Kompose 的标签,以便
|
||||
在转换时明确定义 Service 的行为。
|
||||
|
||||
- `kompose.service.type` 定义要创建的 Service 类型。例如:
|
||||
|
||||
|
@ -761,11 +797,13 @@ For example:
|
|||
For example:
|
||||
-->
|
||||
- `kompose.service.expose` 定义是否允许从集群外部访问 Service。
|
||||
如果该值被设置为 "true",提供程序将自动设置端点,对于任何其他值,该值将被设置为主机名。
|
||||
如果该值被设置为 "true",提供程序将自动设置端点,
|
||||
对于任何其他值,该值将被设置为主机名。
|
||||
如果在 Service 中定义了多个端口,则选择第一个端口作为公开端口。
|
||||
|
||||
- 对于 Kubernetes 驱动程序,创建了一个 Ingress 资源,并且假定已经配置了相应的 Ingress 控制器。
|
||||
- 对于 OpenShift 驱动程序, 创建一个 route。
|
||||
- 如果使用 Kubernetes 驱动,会有一个 Ingress 资源被创建,并且假定
|
||||
已经配置了相应的 Ingress 控制器。
|
||||
- 如果使用 OpenShift 驱动, 则会有一个 route 被创建。
|
||||
|
||||
例如:
|
||||
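A minimal sketch of a Compose service carrying these labels (the image, port, and hostname are illustrative):

```yaml
version: "2"

services:
  web:
    image: nginx:alpine                          # illustrative image
    ports:
      - "80:80"
    labels:
      kompose.service.type: nodeport             # type of Service to create
      kompose.service.expose: "web.example.com"  # expose externally under this hostname
```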
|
||||
|
@ -793,19 +831,18 @@ The currently supported options are:
|
|||
| kompose.service.type | nodeport / clusterip / loadbalancer |
|
||||
| kompose.service.expose| true / hostname |
|
||||
-->
|
||||
|
||||
当前支持的选项有:
|
||||
|
||||
| 键 | 值 |
|
||||
|----------------------|-------------------------------------|
|
||||
| kompose.service.type | nodeport / clusterip / loadbalancer |
|
||||
| kompose.service.expose| true / hostname |
|
||||
| 键 | 值 |
|
||||
|------------------------|-------------------------------------|
|
||||
| kompose.service.type | nodeport / clusterip / loadbalancer |
|
||||
| kompose.service.expose | true / hostname |
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `kompose.service.type` label should be defined with `ports` only, otherwise `kompose` will fail.
|
||||
-->
|
||||
{{< note >}}
|
||||
`kompose.service.type` 标签应该只用`ports`来定义,否则 `kompose` 会失败。
|
||||
`kompose.service.type` 标签应该只用 `ports` 来定义,否则 `kompose` 会失败。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -813,10 +850,10 @@ The `kompose.service.type` label should be defined with `ports` only, otherwise
|
|||
|
||||
If you want to create normal pods without controllers you can use `restart` construct of docker-compose to define that. Follow table below to see what happens on the `restart` value.
|
||||
-->
|
||||
## 重启
|
||||
## 重启 {#restart}
|
||||
|
||||
如果你想创建没有控制器的普通 Pod,可以使用 docker-compose 的 `restart` 结构来定义它。
|
||||
请参考下表了解 `restart` 的不同参数。
|
||||
如果你想创建没有控制器的普通 Pod,可以使用 docker-compose 的 `restart`
|
||||
结构来指定这一行为。请参考下表了解 `restart` 的不同参数。
|
||||
|
||||
<!--
|
||||
| `docker-compose` `restart` | object created | Pod `restartPolicy` |
|
||||
|
@ -827,10 +864,10 @@ If you want to create normal pods without controllers you can use `restart` cons
|
|||
| `no` | Pod | `Never` |
|
||||
-->
|
||||
|
||||
| `docker-compose` `restart` | 创建的对象 | Pod `restartPolicy` |
|
||||
| `docker-compose` `restart` | 创建的对象 | Pod `restartPolicy` |
|
||||
|----------------------------|-------------------|---------------------|
|
||||
| `""` | 控制器对象 | `Always` |
|
||||
| `always` | 控制器对象 | `Always` |
|
||||
| `""` | 控制器对象 | `Always` |
|
||||
| `always` | 控制器对象 | `Always` |
|
||||
| `on-failure` | Pod | `OnFailure` |
|
||||
| `no` | Pod | `Never` |
|
||||
|
||||
|
@ -843,9 +880,9 @@ The controller object could be `deployment` or `replicationcontroller`, etc.
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
For e.g. `pival` service will become pod down here. This container calculated value of `pi`.
|
||||
For example, the `pival` service will become pod down here. This container calculated value of `pi`.
|
||||
-->
|
||||
例如,`pival` Service 将在这里变成 Pod。这个容器的计算值为 `pi`。
|
||||
例如,`pival` Service 将在这里变成 Pod。这个容器计算 `pi` 的取值。
|
||||
|
||||
```yaml
|
||||
version: '2'
|
||||
|
@ -858,23 +895,22 @@ services:
|
|||
```
|
||||
|
||||
<!--
|
||||
### Warning about Deployment Config's
|
||||
### Warning about Deployment Configurations
|
||||
|
||||
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to avoid multiple instances of a service from accessing a volume at the same time.
|
||||
-->
|
||||
|
||||
### 关于 Deployment Config 的提醒
|
||||
|
||||
如果 Docker Compose 文件中为服务声明了卷,Deployment (Kubernetes) 或 DeploymentConfig (OpenShift)
|
||||
的策略会从 "RollingUpdate" (默认) 变为 "Recreate"。
|
||||
如果 Docker Compose 文件中为服务声明了卷,Deployment (Kubernetes) 或
|
||||
DeploymentConfig (OpenShift) 策略会从 "RollingUpdate" (默认) 变为 "Recreate"。
|
||||
这样做的目的是为了避免服务的多个实例同时访问卷。
|
||||
|
||||
<!--
|
||||
If the Docker Compose file has service name with `_` in it (eg.`web_service`), then it will be replaced by `-` and the service name will be renamed accordingly (eg.`web-service`). Kompose does this because "Kubernetes" doesn't allow `_` in object name.
|
||||
Please note that changing service name might break some `docker-compose` files.
|
||||
-->
|
||||
如果 Docker Compose 文件中的服务名包含 `_` (例如 `web_service`),
|
||||
那么将会被替换为 `-`,服务也相应的会重命名(例如 `web-service`)。
|
||||
如果 Docker Compose 文件中的服务名包含 `_`(例如 `web_service`),
|
||||
那么将会被替换为 `-`,服务也相应的会重命名(例如 `web-service`)。
|
||||
Kompose 这样做的原因是 "Kubernetes" 不允许对象名称中包含 `_`。
|
||||
|
||||
请注意,更改服务名称可能会破坏一些 `docker-compose` 文件。
|
||||
|
@ -883,14 +919,15 @@ Kompose 这样做的原因是 "Kubernetes" 不允许对象名称中包含 `_`。
|
|||
## Docker Compose Versions
|
||||
|
||||
Kompose supports Docker Compose versions: 1, 2 and 3. We have limited support on versions 2.1 and 3.2 due to their experimental nature.
|
||||
|
||||
A full list on compatibility between all three versions is listed in our [conversion document](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md) including a list of all incompatible Docker Compose keys.
|
||||
-->
|
||||
## Docker Compose 版本
|
||||
## Docker Compose 版本 {#docker-compose-versions}
|
||||
|
||||
Kompose 支持的 Docker Compose 版本包括:1、2 和 3。有限支持 2.1 和 3.2 版本,因为它们还在实验阶段。
|
||||
Kompose 支持的 Docker Compose 版本包括:1、2 和 3。
|
||||
对 2.1 和 3.2 版本的支持还有限,因为它们还在实验阶段。
|
||||
|
||||
所有三个版本的兼容性列表请查看我们的
|
||||
[转换文档](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md),
|
||||
文档中列出了所有不兼容的 Docker Compose 关键字。
|
||||
|
||||
|
||||
|
|
|
@ -1,156 +0,0 @@
|
|||
---
|
||||
content_type: concept
|
||||
title: StackDriver 中的事件
|
||||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- piosz
|
||||
- x13n
|
||||
content_type: concept
|
||||
title: Events in Stackdriver
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Kubernetes events are objects that provide insight into what is happening
|
||||
inside a cluster, such as what decisions were made by scheduler or why some
|
||||
pods were evicted from the node. You can read more about using events
|
||||
for debugging your application in the [Application Introspection and Debugging
|
||||
](/docs/tasks/debug-application-cluster/debug-application-introspection/)
|
||||
section.
|
||||
-->
|
||||
|
||||
Kubernetes 事件是一种对象,它为用户提供了洞察集群内发生的事情的能力,
|
||||
例如调度程序做出了什么决定,或者为什么某些 Pod 被逐出节点。
|
||||
你可以在[应用程序自检和调试](/zh/docs/tasks/debug-application-cluster/debug-application-introspection/)
|
||||
中阅读有关使用事件调试应用程序的更多信息。
|
||||
|
||||
<!--
|
||||
Since events are API objects, they are stored in the apiserver on master. To
|
||||
avoid filling up master's disk, a retention policy is enforced: events are
|
||||
removed one hour after the last occurrence. To provide longer history
|
||||
and aggregation capabilities, a third party solution should be installed
|
||||
to capture events.
|
||||
-->
|
||||
因为事件是 API 对象,所以它们存储在主控节点上的 API 服务器中。
|
||||
为了避免主节点磁盘空间被填满,将强制执行保留策略:事件在最后一次发生的一小时后将会被删除。
|
||||
为了提供更长的历史记录和聚合能力,应该安装第三方解决方案来捕获事件。
|
||||
|
||||
<!--
|
||||
This article describes a solution that exports Kubernetes events to
|
||||
Stackdriver Logging, where they can be processed and analyzed.
|
||||
-->
|
||||
本文描述了一个将 Kubernetes 事件导出为 Stackdriver Logging 的解决方案,在这里可以对它们进行处理和分析。
|
||||
|
||||
<!--
|
||||
It is not guaranteed that all events happening in a cluster will be
|
||||
exported to Stackdriver. One possible scenario when events will not be
|
||||
exported is when event exporter is not running (e.g. during restart or
|
||||
upgrade). In most cases it's fine to use events for purposes like setting up
|
||||
[metrics][sdLogMetrics] and [alerts][sdAlerts], but you should be aware
|
||||
of the potential inaccuracy.
|
||||
-->
|
||||
{{< note >}}
|
||||
不能保证集群中发生的所有事件都将导出到 Stackdriver。
|
||||
事件不能导出的一种可能情况是事件导出器没有运行(例如,在重新启动或升级期间)。
|
||||
在大多数情况下,可以将事件用于设置
|
||||
[metrics](https://cloud.google.com/logging/docs/view/logs_based_metrics) 和
|
||||
[alerts](https://cloud.google.com/logging/docs/view/logs_based_metrics#creating_an_alerting_policy)
|
||||
等目的,但你应该注意其潜在的不准确性。
|
||||
{{< /note >}}
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
## Deployment
|
||||
-->
|
||||
## 部署 {#deployment}
|
||||
|
||||
### Google Kubernetes Engine
|
||||
|
||||
<!--
|
||||
In Google Kubernetes Engine, if cloud logging is enabled, event exporter
|
||||
is deployed by default to the clusters with master running version 1.7 and
|
||||
higher. To prevent disturbing your workloads, event exporter does not have
|
||||
resources set and is in the best effort QOS class, which means that it will
|
||||
be the first to be killed in the case of resource starvation. If you want
|
||||
your events to be exported, make sure you have enough resources to facilitate
|
||||
the event exporter pod. This may vary depending on the workload, but on
|
||||
average, approximately 100Mb RAM and 100m CPU is needed.
|
||||
-->
|
||||
|
||||
在 Google Kubernetes Engine 中,如果启用了云日志,那么事件导出器默认部署在主节点运行版本为 1.7 及更高版本的集群中。
|
||||
为了防止干扰你的工作负载,事件导出器没有设置资源,并且处于尽力而为的 QoS 类型中,这意味着它将在资源匮乏的情况下第一个被杀死。
|
||||
如果要导出事件,请确保有足够的资源给事件导出器 Pod 使用。
|
||||
这可能会因为工作负载的不同而有所不同,但平均而言,需要大约 100MB 的内存和 100m 的 CPU。
|
||||
|
||||
<!--
|
||||
### Deploying to the Existing Cluster
|
||||
|
||||
Deploy event exporter to your cluster using the following command:
|
||||
-->
|
||||
### 部署到现有集群
|
||||
|
||||
使用下面的命令将事件导出器部署到你的集群:
|
||||
|
||||
```shell
|
||||
kubectl create -f https://k8s.io/examples/debug/event-exporter.yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
Since event exporter accesses the Kubernetes API, it requires permissions to
|
||||
do so. The following deployment is configured to work with RBAC
|
||||
authorization. It sets up a service account and a cluster role binding
|
||||
to allow event exporter to read events. To make sure that event exporter
|
||||
pod will not be evicted from the node, you can additionally set up resource
|
||||
requests. As mentioned earlier, 100Mb RAM and 100m CPU should be enough.
|
||||
-->
|
||||
|
||||
由于事件导出器访问 Kubernetes API,因此它需要权限才能访问。
|
||||
以下的部署配置为使用 RBAC 授权。
|
||||
它设置服务帐户和集群角色绑定,以允许事件导出器读取事件。
|
||||
为了确保事件导出器 Pod 不会被从节点上驱逐,你可以另外设置资源请求。
|
||||
如前所述,100MB 内存和 100m CPU 应该就足够了。
|
||||
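A sketch of such a request on the event exporter container, using the rule-of-thumb values above:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 100Mi
```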
|
||||
{{< codenew file="debug/event-exporter.yaml" >}}
|
||||
|
||||
<!--
|
||||
## User Guide
|
||||
|
||||
Events are exported to the `GKE Cluster` resource in Stackdriver Logging.
|
||||
You can find them by selecting an appropriate option from a drop-down menu
|
||||
of available resources:
|
||||
-->
|
||||
## 用户指南 {#user-guide}
|
||||
|
||||
事件在 Stackdriver Logging 中被导出到 `GKE Cluster` 资源。
|
||||
你可以通过从可用资源的下拉菜单中选择适当的选项来找到它们:
|
||||
|
||||
<!--
|
||||
<img src="/images/docs/stackdriver-event-exporter-resource.png" alt="Events location in the Stackdriver Logging interface" width="500">
|
||||
-->
|
||||
<img src="/images/docs/stackdriver-event-exporter-resource.png" alt="Stackdriver 日志接口中事件的位置" width="500">
|
||||
|
||||
<!--
|
||||
You can filter based on the event object fields using Stackdriver Logging
|
||||
[filtering mechanism](https://cloud.google.com/logging/docs/view/advanced_filters).
|
||||
For example, the following query will show events from the scheduler
|
||||
about pods from deployment `nginx-deployment`:
|
||||
-->
|
||||
你可以使用 Stackdriver Logging 的
|
||||
[过滤机制](https://cloud.google.com/logging/docs/view/advanced_filters)
|
||||
基于事件对象字段进行过滤。
|
||||
例如,下面的查询将显示调度程序中有关 Deployment `nginx-deployment` 中的 Pod 的事件:
|
||||
|
||||
```
|
||||
resource.type="gke_cluster"
|
||||
jsonPayload.kind="Event"
|
||||
jsonPayload.source.component="default-scheduler"
|
||||
jsonPayload.involvedObject.name:"nginx-deployment"
|
||||
```
|
||||
|
||||
{{< figure src="/images/docs/stackdriver-event-exporter-filter.png" alt="在 Stackdriver 接口中过滤的事件" width="500" >}}
|
||||
|
||||
|
|
@ -1,197 +0,0 @@
|
|||
---
|
||||
content_type: concept
|
||||
title: 使用 ElasticSearch 和 Kibana 进行日志管理
|
||||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- piosz
|
||||
- x13n
|
||||
content_type: concept
|
||||
title: Logging Using Elasticsearch and Kibana
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
On the Google Compute Engine (GCE) platform, the default logging support targets
|
||||
[Stackdriver Logging](https://cloud.google.com/logging/), which is described in detail
|
||||
in the [Logging With Stackdriver Logging](/docs/user-guide/logging/stackdriver).
|
||||
-->
|
||||
在 Google Compute Engine (GCE) 平台上,默认的日志管理支持目标是
|
||||
[Stackdriver Logging](https://cloud.google.com/logging/),
|
||||
在[使用 Stackdriver Logging 管理日志](/zh/docs/tasks/debug-application-cluster/logging-stackdriver/)
|
||||
中详细描述了这一点。
|
||||
|
||||
<!--
|
||||
This article describes how to set up a cluster to ingest logs into
|
||||
[Elasticsearch](https://www.elastic.co/products/elasticsearch) and view
|
||||
them using [Kibana](https://www.elastic.co/products/kibana), as an alternative to
|
||||
Stackdriver Logging when running on GCE.
|
||||
-->
|
||||
本文介绍了如何设置一个集群,将日志导入
|
||||
[Elasticsearch](https://www.elastic.co/products/elasticsearch),并使用
|
||||
[Kibana](https://www.elastic.co/products/kibana) 查看日志,作为在 GCE 上
|
||||
运行应用时使用 Stackdriver Logging 管理日志的替代方案。
|
||||
|
||||
<!--
|
||||
You cannot automatically deploy Elasticsearch and Kibana in the Kubernetes cluster hosted on Google Kubernetes Engine. You have to deploy them manually.
|
||||
-->
|
||||
{{< note >}}
|
||||
你不能在 Google Kubernetes Engine 平台运行的 Kubernetes 集群上自动部署
|
||||
Elasticsearch 和 Kibana。你必须手动部署它们。
|
||||
{{< /note >}}
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
To use Elasticsearch and Kibana for cluster logging, you should set the
|
||||
following environment variable as shown below when creating your cluster with
|
||||
kube-up.sh:
|
||||
-->
|
||||
要使用 Elasticsearch 和 Kibana 处理集群日志,你应该在使用 kube-up.sh
|
||||
脚本创建集群时设置下面所示的环境变量:
|
||||
|
||||
```shell
|
||||
KUBE_LOGGING_DESTINATION=elasticsearch
|
||||
```
|
||||
|
||||
<!--
|
||||
You should also ensure that `KUBE_ENABLE_NODE_LOGGING=true` (which is the default for the GCE platform).
|
||||
-->
|
||||
你还应该确保设置了 `KUBE_ENABLE_NODE_LOGGING=true` (这是 GCE 平台的默认设置)。
|
||||
|
||||
<!--
|
||||
Now, when you create a cluster, a message will indicate that the Fluentd log
|
||||
collection daemons that run on each node will target Elasticsearch:
|
||||
-->
|
||||
现在,当你创建集群时,将有一条消息指示每个节点上运行的 Fluentd 日志收集守护进程
|
||||
以 ElasticSearch 为日志输出目标:
|
||||
|
||||
```shell
|
||||
cluster/kube-up.sh
|
||||
```
|
||||
|
||||
```
|
||||
...
|
||||
Project: kubernetes-satnam
|
||||
Zone: us-central1-b
|
||||
... calling kube-up
|
||||
Project: kubernetes-satnam
|
||||
Zone: us-central1-b
|
||||
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
|
||||
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
|
||||
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
|
||||
Looking for already existing resources
|
||||
Starting master and configuring firewalls
|
||||
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/zones/us-central1-b/disks/kubernetes-master-pd].
|
||||
NAME ZONE SIZE_GB TYPE STATUS
|
||||
kubernetes-master-pd us-central1-b 20 pd-ssd READY
|
||||
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip].
|
||||
+++ Logging using Fluentd to elasticsearch
|
||||
```
|
||||
|
||||
<!--
|
||||
The per-node Fluentd pods, the Elasticsearch pods, and the Kibana pods should
|
||||
all be running in the kube-system namespace soon after the cluster comes to
|
||||
life.
|
||||
-->
|
||||
每个节点的 Fluentd Pod、Elasticsearch Pod 和 Kibana Pod 都应该在集群启动后不久运行在
|
||||
kube-system 名字空间中。
|
||||
|
||||
```shell
|
||||
kubectl get pods --namespace=kube-system
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
elasticsearch-logging-v1-78nog 1/1 Running 0 2h
|
||||
elasticsearch-logging-v1-nj2nb 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-5oq0 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-6896 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-l1ds 1/1 Running 0 2h
|
||||
fluentd-elasticsearch-kubernetes-node-lz9j 1/1 Running 0 2h
|
||||
kibana-logging-v1-bhpo8 1/1 Running 0 2h
|
||||
kube-dns-v3-7r1l9 3/3 Running 0 2h
|
||||
monitoring-heapster-v4-yl332 1/1 Running 1 2h
|
||||
monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h
|
||||
```
|
||||
|
||||
<!--
|
||||
The `fluentd-elasticsearch` pods gather logs from each node and send them to
|
||||
the `elasticsearch-logging` pods, which are part of a
|
||||
[service](/docs/concepts/services-networking/service/) named `elasticsearch-logging`. These
|
||||
Elasticsearch pods store the logs and expose them via a REST API.
|
||||
The `kibana-logging` pod provides a web UI for reading the logs stored in
|
||||
Elasticsearch, and is part of a service named `kibana-logging`.
|
||||
-->
|
||||
`fluentd-elasticsearch` Pod 从每个节点收集日志并将其发送到 `elasticsearch-logging` Pod,
|
||||
该 Pod 是名为 `elasticsearch-logging` 的
|
||||
[服务](/zh/docs/concepts/services-networking/service/)的一部分。
|
||||
这些 ElasticSearch pod 存储日志,并通过 REST API 将其公开。
|
||||
`kibana-logging` pod 提供了一个用于读取 ElasticSearch 中存储的日志的 Web UI,
|
||||
它是名为 `kibana-logging` 的服务的一部分。
|
||||
|
||||
<!--
|
||||
The Elasticsearch and Kibana services are both in the `kube-system` namespace
|
||||
and are not directly exposed via a publicly reachable IP address. To reach them,
|
||||
follow the instructions for [Accessing services running in a cluster](/docs/concepts/cluster-administration/access-cluster/#accessing-services-running-on-the-cluster).
|
||||
-->
|
||||
|
||||
Elasticsearch 和 Kibana 服务都位于 `kube-system` 名字空间中,并且没有通过
|
||||
可公开访问的 IP 地址直接暴露。要访问它们,请参照
|
||||
[访问集群中运行的服务](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster)
|
||||
的说明进行操作。
|
||||
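One common route, sketched below, is the API server proxy (this assumes `kubectl` is configured for the cluster; the URL follows the standard in-cluster service proxy format):

```shell
# Start a local proxy to the API server.
kubectl proxy --port=8001

# In another terminal, reach the elasticsearch-logging service through the proxy.
curl http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/
```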
|
||||
<!--
|
||||
If you try accessing the `elasticsearch-logging` service in your browser, you'll
|
||||
see a status page that looks something like this:
|
||||
-->
|
||||
如果你想在浏览器中访问 `elasticsearch-logging` 服务,你将看到类似下面的状态页面:
|
||||
|
||||

|
||||
|
||||
<!--
|
||||
You can now type Elasticsearch queries directly into the browser, if you'd
|
||||
like. See [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)
|
||||
for more details on how to do so.
|
||||
-->
|
||||
现在你可以直接在浏览器中输入 Elasticsearch 查询,如果你愿意的话。
|
||||
请参考 [Elasticsearch 的文档](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-uri-request.html)
|
||||
以了解这样做的更多细节。
|
||||
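For instance, a URI-style search appended to the same proxied service URL might look like this (the query string is illustrative):

```shell
# Return up to 10 documents matching "kubernetes".
curl 'http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_search?q=kubernetes&size=10'
```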
|
||||
<!--
|
||||
Alternatively, you can view your cluster's logs using Kibana (again using the
|
||||
[instructions for accessing a service running in the cluster](/docs/user-guide/accessing-the-cluster/#accessing-services-running-on-the-cluster)).
|
||||
The first time you visit the Kibana URL you will be presented with a page that
|
||||
asks you to configure your view of the ingested logs. Select the option for
|
||||
timeseries values and select `@timestamp`. On the following page select the
|
||||
`Discover` tab and then you should be able to see the ingested logs.
|
||||
You can set the refresh interval to 5 seconds to have the logs
|
||||
regularly refreshed.
|
||||
-->
|
||||
|
||||
或者,你可以使用 Kibana 查看集群的日志(再次使用
|
||||
[访问集群中运行的服务的说明](/zh/docs/tasks/access-application-cluster/access-cluster/#accessing-services-running-on-the-cluster))。
|
||||
第一次访问 Kibana URL 时,将显示一个页面,要求你配置所接收日志的视图。
|
||||
选择时间序列值的选项,然后选择 `@timestamp`。
|
||||
在下面的页面中选择 `Discover` 选项卡,然后你应该能够看到所摄取的日志。
|
||||
你可以将刷新间隔设置为 5 秒,以便定期刷新日志。
|
||||
|
||||
<!--
|
||||
Here is a typical view of ingested logs from the Kibana viewer:
|
||||
-->
|
||||
|
||||
以下是从 Kibana 查看器中摄取日志的典型视图:
|
||||
|
||||

|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
Kibana opens up all sorts of powerful options for exploring your logs! For some
|
||||
ideas on how to dig into it, check out [Kibana's documentation](https://www.elastic.co/guide/en/kibana/current/discover.html).
|
||||
-->
|
||||
Kibana 为浏览你的日志提供了各种强大的选项!有关如何深入研究它的一些想法,
|
||||
请查看 [Kibana 的文档](https://www.elastic.co/guide/en/kibana/current/discover.html)。
|
||||
|
|
@ -434,6 +434,127 @@ usual.
|
|||
如果延迟(冷却)时间设置的太短,那么副本数量有可能跟以前一样出现抖动。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
## Support for resource metrics
|
||||
|
||||
Any HPA target can be scaled based on the resource usage of the pods in the scaling target.
|
||||
When defining the pod specification the resource requests like `cpu` and `memory` should
|
||||
be specified. This is used to determine the resource utilization and used by the HPA controller
|
||||
to scale the target up or down. To use resource utilization based scaling specify a metric source
|
||||
like this:
|
||||
-->
|
||||
## 对资源指标的支持 {#support-for-resource-metrics}
|
||||
|
||||
HPA 的任何目标资源都可以基于其中的 Pods 的资源用量来实现扩缩。
|
||||
在定义 Pod 规约时,类似 `cpu` 和 `memory` 这类资源请求必须被设定。
|
||||
这些设定值被用来确定资源利用量并被 HPA 控制器用来对目标资源完成扩缩操作。
|
||||
要使用基于资源利用率的扩缩,可以像下面这样指定一个指标源:
|
||||
|
||||
```yaml
|
||||
type: Resource
|
||||
resource:
|
||||
name: cpu
|
||||
target:
|
||||
type: Utilization
|
||||
averageUtilization: 60
|
||||
```

<!--
With this metric the HPA controller will keep the average utilization of the pods in the scaling
target at 60%. Utilization is the ratio between the current usage of resource to the requested
resources of the pod. See [Algorithm](#algorithm-details) for more details about how the utilization
is calculated and averaged.
-->
With this metric the HPA controller will keep the average resource utilization of the Pods in the
scaling target at 60%. Utilization is the ratio between a Pod's current resource usage and its
requested resources. See [Algorithm](#algorithm-details) for more details on how the utilization
is calculated and averaged.
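
Resource-based scaling therefore assumes that the containers in the target Pods declare resource requests. As a minimal sketch (the container name, image, and request values below are illustrative assumptions, not values taken from this page), the relevant fragment of a Pod template might look like this:

```yaml
# Hedged sketch: the HPA controller divides each container's current usage by
# these requests to compute utilization. All names and values are illustrative.
spec:
  containers:
  - name: application
    image: registry.example.com/app:1.0   # hypothetical image
    resources:
      requests:
        cpu: "250m"       # utilization = current CPU usage / 250m
        memory: "256Mi"   # utilization = current memory usage / 256Mi
```
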
{{< note >}}
<!--
Since the resource usages of all the containers are summed up the total pod utilization may not
accurately represent the individual container resource usage. This could lead to situations where
a single container might be running with high usage and the HPA will not scale out because the overall
pod usage is still within acceptable limits.
-->
Because the resource usage of all containers is summed up, the total Pod utilization may not
accurately represent the usage of the individual containers. This could lead to situations where a
single container is running with high usage, but the HPA does not scale the target out because the
overall Pod usage is still within acceptable limits.
{{< /note >}}

<!--
### Container Resource Metrics
-->
### Container Resource Metrics {#container-resource-metrics}

{{< feature-state for_k8s_version="v1.20" state="alpha" >}}

<!--
`HorizontalPodAutoscaler` also supports a container metric source where the HPA can track the
resource usage of individual containers across a set of Pods, in order to scale the target resource.
This lets you configure scaling thresholds for the containers that matter most in a particular Pod.
For example, if you have a web application and a logging sidecar, you can scale based on the resource
use of the web application, ignoring the sidecar container and its resource use.
-->
`HorizontalPodAutoscaler` also supports a container metric source, where the HPA tracks the
resource usage of individual containers across a set of Pods, in order to scale the target resource.
This lets you configure scaling thresholds for the containers that matter most in a particular Pod.
For example, if you have a web application and a logging sidecar, you can scale based on the
resource usage of the web application, ignoring the sidecar container and its resource usage.

<!--
If you revise the target resource to have a new Pod specification with a different set of containers,
you should revise the HPA spec if that newly added container should also be used for
scaling. If the specified container in the metric source is not present or only present in a subset
of the pods then those pods are ignored and the recommendation is recalculated. See [Algorithm](#algorithm-details)
for more details about the calculation. To use container resources for autoscaling define a metric
source as follows:
-->
If you revise the target resource to use a new Pod specification with a different set of containers,
you should revise the HPA spec if the newly added container should also be used for scaling.
If the container specified in the metric source is not present, or is present only in a subset of
the Pods, those Pods are ignored and the recommendation is recalculated. See
[Algorithm](#algorithm-details) for more details about the calculation. To use container resources
for autoscaling, define a metric source as follows:

```yaml
type: ContainerResource
containerResource:
  name: cpu
  container: application
  target:
    type: Utilization
    averageUtilization: 60
```

<!--
In the above example the HPA controller scales the target such that the average utilization of the cpu
in the `application` container of all the pods is 60%.
-->
In the above example, the HPA controller scales the target so that the average CPU utilization of
the `application` container across all Pods is 60%.

{{< note >}}
<!--
If you change the name of a container that a HorizontalPodAutoscaler is tracking, you can
make that change in a specific order to ensure scaling remains available and effective
whilst the change is being applied. Before you update the resource that defines the container
(such as a Deployment), you should update the associated HPA to track both the new and
old container names. This way, the HPA is able to calculate a scaling recommendation
throughout the update process.
-->
If you change the name of a container that a HorizontalPodAutoscaler is tracking, you can make
that change in a specific order to ensure that scaling remains available and effective while the
change is being applied. Before you update the resource that defines the container (such as a
Deployment), you should update the associated HPA to track both the new and the old container
names. This way, the HPA can keep calculating a scaling recommendation throughout the update process.

<!--
Once you have rolled out the container name change to the workload resource, tidy up by removing
the old container name from the HPA specification.
-->
Once you have rolled out the container name change to the workload resource, tidy up by removing
the old container name from the HPA specification.
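
As a rough illustration of that ordering, the HPA's `metrics` list could temporarily carry one `ContainerResource` entry per name; the container names below (`application-old`, `application-new`) are hypothetical placeholders rather than names used elsewhere on this page:

```yaml
# Hedged sketch: track both the old and the new container name while the rename
# is rolled out, then remove the entry for the old name once the rollout is done.
metrics:
- type: ContainerResource
  containerResource:
    name: cpu
    container: application-old   # hypothetical old container name
    target:
      type: Utilization
      averageUtilization: 60
- type: ContainerResource
  containerResource:
    name: cpu
    container: application-new   # hypothetical new container name
    target:
      type: Utilization
      averageUtilization: 60
```
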
{{< /note >}}

<!--
## Support for multiple metrics
@@ -39,7 +39,7 @@ Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes cluster
* If you are using `hack/local-up-cluster.sh`, make sure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script.
* [Install and set up kubectl v1.7 or higher](/zh/docs/tasks/tools/install-kubectl/), and make sure it is configured to connect to your Kubernetes cluster.
* Install [Helm](https://helm.sh/) v2.7.0 or newer.
  * Follow the [Helm install instructions](https://github.com/kubernetes/helm/blob/master/docs/install.md).
  * Follow the [Helm install instructions](https://helm.sh/docs/intro/install/).
  * If you already have an appropriate version of Helm installed, execute `helm init` to install Tiller, the server-side component of Helm.

<!-- steps -->