Merge pull request #41886 from Rishit-dagli/merged-main-dev-1.28
Sync `dev-1.28` branch with `main`

commit 7116bda08c
@@ -18,6 +1,7 @@ aliases:
- onlydole
- reylejano
- sftim
- seokho-son
- tengqm
sig-docs-de-owners: # Admins for German content
- bene2k1
@@ -84,8 +85,6 @@ aliases:
- Babapool
- bishal7679
- divya-mohan0209
- Garima-Negi
- verma-kunal
sig-docs-id-owners: # Admins for Indonesian content
- ariscahyadi
- danninov
@@ -1,5 +1,5 @@
---
title: "Kubernetes Architekur"
title: "Kubernetes Architektur"
weight: 30
description: >
  Hier werden die architektonischen Konzepte von Kubernetes beschrieben.
@@ -2,4 +2,4 @@ Sie benötigen einen Kubernetes-Cluster, und das Kommandozeilen-Tool kubectl mus

Oder Sie können einen dieser Kubernetes-Spielplätze benutzen:
* [Katacoda](https://www.katacoda.com/courses/kubernetes/playground)
* [Play with Kubernetes](http://labs.play-with-k8s.com/)
* [Play with Kubernetes](https://labs.play-with-k8s.com/)
@@ -1,13 +1,20 @@
---
title: " Autoscaling in Kubernetes "
title: "Autoscaling in Kubernetes"
date: 2017-11-17
slug: autoscaling-in-kubernetes
url: /blog/2017/11/Autoscaling-In-Kubernetes
---

Kubernetes allows developers to automatically adjust cluster sizes and the number of
pod replicas based on current traffic and load. These adjustments reduce the amount of
unused nodes, saving money and resources. In this talk, Marcin Wielgus of Google walks
you through the current state of pod and node autoscaling in Kubernetes: how it works,
and how to use it, including best practices for deployments in production applications.

Kubernetes allows developers to automatically adjust cluster sizes and the number of pod replicas based on current traffic and load. These adjustments reduce the amount of unused nodes, saving money and resources. In this talk, Marcin Wielgus of Google walks you through the current state of pod and node autoscaling in Kubernetes: .how it works, and how to use it, including best practices for deployments in production applications.
{{< youtube id="m3Ma3G14dJ0" title=" Autoscaling in Kubernetes [I] - Marcin Wielgus, Google" >}}

Enjoyed this talk? Join us for more exciting sessions on scaling and automating your Kubernetes clusters at KubeCon in Austin on December 6-8. [Register Now](https://www.eventbrite.com/e/kubecon-cloudnativecon-north-america-registration-37824050754?_ga=2.9666039.317115486.1510003873-1623727562.1496428006)
Enjoyed this talk? Join us for more exciting sessions on scaling and automating your
Kubernetes clusters at KubeCon in Austin on December 6-8.
<del><a href="https://www.eventbrite.com/e/kubecon-cloudnativecon-north-america-registration-37824050754">Register now</a>.</del>

Be sure to check out [Automating and Testing Production Ready Kubernetes Clusters in the Public Cloud](http://sched.co/CU64) by Ron Lipke, Senior Developer, Platform as a Service, Gannet/USA Today Network.
@@ -0,0 +1,6 @@
flowchart TD
A(Create Policy\ninstance) -->|annotate namespace\nto validate signatures| B(Create Pod)
B --> C{policy evaluation}
C --> |pass| D[fa:fa-check Admitted]
C --> |fail| E[fa:fa-xmark Not admitted]
D --> |if necessary| F[Image Pull]

File diff suppressed because one or more lines are too long
After Width: | Height: | Size: 12 KiB
@@ -0,0 +1,305 @@
---
layout: blog
title: "Verifying Container Image Signatures Within CRI Runtimes"
date: 2023-06-29
slug: container-image-signature-verification
---

**Author**: Sascha Grunert

The Kubernetes community has been signing their container image-based artifacts
since release v1.24. While the graduation of the [corresponding enhancement][kep]
from `alpha` to `beta` in v1.26 introduced signatures for the binary artifacts,
other projects followed the approach by providing image signatures for their
releases, too. This means that they either create the signatures within their
own CI/CD pipelines, for example by using GitHub actions, or rely on the
Kubernetes [image promotion][promo] process to automatically sign the images by
proposing pull requests to the [k/k8s.io][k8s.io] repository. A requirement for
using this process is that the project is part of the `kubernetes` or
`kubernetes-sigs` GitHub organization, so that they can utilize the community
infrastructure for pushing images into staging buckets.

[kep]: https://github.com/kubernetes/enhancements/issues/3031
[promo]: https://github.com/kubernetes-sigs/promo-tools/blob/e2b96dd/docs/image-promotion.md
[k8s.io]: https://github.com/kubernetes/k8s.io/tree/4b95cc2/k8s.gcr.io

Assuming that a project now produces signed container image artifacts, how can
one actually verify the signatures? It is possible to do it manually like
outlined in the [official Kubernetes documentation][docs]. The problem with this
approach is that it involves no automation at all and should be only done for
testing purposes. In production environments, tools like the [sigstore
policy-controller][policy-controller] can help with the automation. These tools
provide a higher level API by using [Custom Resource Definitions (CRD)][crd] as
well as an integrated [admission controller and webhook][admission] to verify
the signatures.

[docs]: /docs/tasks/administer-cluster/verify-signed-artifacts/#verifying-image-signatures
[policy-controller]: https://docs.sigstore.dev/policy-controller/overview
[crd]: /docs/concepts/extend-kubernetes/api-extension/custom-resources
[admission]: /docs/reference/access-authn-authz/admission-controllers

The general usage flow for an admission controller based verification is:

{{< figure src="/blog/2023/06/29/container-image-signature-verification/flow.svg" alt="Create an instance of the policy and annotate the namespace to validate the signatures. Then create the pod. The controller evaluates the policy and if it passes, then it does the image pull if necessary. If the policy evaluation fails, then it will not admit the pod." >}}

A key benefit of this architecture is simplicity: A single instance within the
cluster validates the signatures before any image pull can happen in the
container runtime on the nodes, which gets initiated by the kubelet. This
benefit also brings along the issue of separation: The node which should pull
the container image is not necessarily the same node that performs the admission. This
means that if the controller is compromised, then cluster-wide policy
enforcement can no longer be possible.

One way to solve this issue is doing the policy evaluation directly within the
[Container Runtime Interface (CRI)][cri] compatible container runtime. The
runtime is directly connected to the [kubelet][kubelet] on a node and does all
the tasks like pulling images. [CRI-O][cri-o] is one of those available runtimes
and will feature full support for container image signature verification in v1.28.

[cri]: /docs/concepts/architecture/cri
[kubelet]: /docs/reference/command-line-tools-reference/kubelet
[cri-o]: https://github.com/cri-o/cri-o

How does it work? CRI-O reads a file called [`policy.json`][policy.json], which
contains all the rules defined for container images. For example, you can define a
policy which only allows signed images `quay.io/crio/signed` for any tag or
digest like this:

[policy.json]: https://github.com/containers/image/blob/b3e0ba2/docs/containers-policy.json.5.md#sigstoresigned

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "quay.io/crio/signed": [
        {
          "type": "sigstoreSigned",
          "signedIdentity": { "type": "matchRepository" },
          "fulcio": {
            "oidcIssuer": "https://github.com/login/oauth",
            "subjectEmail": "sgrunert@redhat.com",
"caData": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUI5ekNDQVh5Z0F3SUJBZ0lVQUxaTkFQRmR4SFB3amVEbG9Ed3lZQ2hBTy80d0NnWUlLb1pJemowRUF3TXcKS2pFVk1CTUdBMVVFQ2hNTWMybG5jM1J2Y21VdVpHVjJNUkV3RHdZRFZRUURFd2h6YVdkemRHOXlaVEFlRncweQpNVEV3TURjeE16VTJOVGxhRncwek1URXdNRFV4TXpVMk5UaGFNQ294RlRBVEJnTlZCQW9UREhOcFozTjBiM0psCkxtUmxkakVSTUE4R0ExVUVBeE1JYzJsbmMzUnZjbVV3ZGpBUUJnY3Foa2pPUFFJQkJnVXJnUVFBSWdOaUFBVDcKWGVGVDRyYjNQUUd3UzRJYWp0TGszL09sbnBnYW5nYUJjbFlwc1lCcjVpKzR5bkIwN2NlYjNMUDBPSU9aZHhleApYNjljNWlWdXlKUlErSHowNXlpK1VGM3VCV0FsSHBpUzVzaDArSDJHSEU3U1hyazFFQzVtMVRyMTlMOWdnOTJqCll6QmhNQTRHQTFVZER3RUIvd1FFQXdJQkJqQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUlkKd0I1ZmtVV2xacWw2ekpDaGt5TFFLc1hGK2pBZkJnTlZIU01FR0RBV2dCUll3QjVma1VXbFpxbDZ6SkNoa3lMUQpLc1hGK2pBS0JnZ3Foa2pPUFFRREF3TnBBREJtQWpFQWoxbkhlWFpwKzEzTldCTmErRURzRFA4RzFXV2cxdENNCldQL1dIUHFwYVZvMGpoc3dlTkZaZ1NzMGVFN3dZSTRxQWpFQTJXQjlvdDk4c0lrb0YzdlpZZGQzL1Z0V0I1YjkKVE5NZWE3SXgvc3RKNVRmY0xMZUFCTEU0Qk5KT3NRNHZuQkhKCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0="
          },
"rekorPublicKeyData": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFMkcyWSsydGFiZFRWNUJjR2lCSXgwYTlmQUZ3cgprQmJtTFNHdGtzNEwzcVg2eVlZMHp1ZkJuaEM4VXIvaXk1NUdoV1AvOUEvYlkyTGhDMzBNOStSWXR3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg=="
        }
      ]
    }
  }
}
```

CRI-O has to be started to use that policy as the global source of truth:

```console
> sudo crio --log-level debug --signature-policy ./policy.json
```

CRI-O is now able to pull the image while verifying its signatures. This can be
done by using [`crictl` (cri-tools)][cri-tools], for example:

[cri-tools]: https://github.com/kubernetes-sigs/cri-tools

```console
> sudo crictl -D pull quay.io/crio/signed
DEBU[…] get image connection
DEBU[…] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:quay.io/crio/signed,Annotations:map[string]string{},},Auth:nil,SandboxConfig:nil,}
DEBU[…] PullImageResponse: &PullImageResponse{ImageRef:quay.io/crio/signed@sha256:18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a,}
Image is up to date for quay.io/crio/signed@sha256:18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a
```

The CRI-O debug logs will also indicate that the signature got successfully
validated:

```console
DEBU[…] IsRunningImageAllowed for image docker:quay.io/crio/signed:latest
DEBU[…] Using transport "docker" specific policy section quay.io/crio/signed
DEBU[…] Reading /var/lib/containers/sigstore/crio/signed@sha256=18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a/signature-1
DEBU[…] Looking for sigstore attachments in quay.io/crio/signed:sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig
DEBU[…] GET https://quay.io/v2/crio/signed/manifests/sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig
DEBU[…] Content-Type from manifest GET is "application/vnd.oci.image.manifest.v1+json"
DEBU[…] Found a sigstore attachment manifest with 1 layers
DEBU[…] Fetching sigstore attachment 1/1: sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45
DEBU[…] Downloading /v2/crio/signed/blobs/sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45
DEBU[…] GET https://quay.io/v2/crio/signed/blobs/sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45
DEBU[…] Requirement 0: allowed
DEBU[…] Overall: allowed
```

All of the defined fields like `oidcIssuer` and `subjectEmail` in the policy
have to match, while `fulcio.caData` and `rekorPublicKeyData` are the public
keys from the upstream [fulcio (OIDC PKI)][fulcio] and [rekor
(transparency log)][rekor] instances.

[fulcio]: https://github.com/sigstore/fulcio
[rekor]: https://github.com/sigstore/rekor
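
For illustration, here is one way such values could be produced. This sketch is not part of the original post; it assumes the public sigstore instances and their current API paths, which may change over time:

```console
# Fetch the Fulcio root certificate and Rekor public key (PEM) and base64-encode them
> curl -sSL https://fulcio.sigstore.dev/api/v1/rootCert | base64 -w0      # candidate value for fulcio.caData
> curl -sSL https://rekor.sigstore.dev/api/v1/log/publicKey | base64 -w0  # candidate value for rekorPublicKeyData
```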

This means that if you now invalidate the `subjectEmail` of the policy, for example to
`wrong@mail.com`:

```console
> jq '.transports.docker."quay.io/crio/signed"[0].fulcio.subjectEmail = "wrong@mail.com"' policy.json > new-policy.json
> mv new-policy.json policy.json
```

Then remove the image, since it already exists locally:

```console
> sudo crictl rmi quay.io/crio/signed
```

Now when you pull the image, CRI-O complains that the required email is wrong:

```console
> sudo crictl pull quay.io/crio/signed
FATA[…] pulling image: rpc error: code = Unknown desc = Source image rejected: Required email wrong@mail.com not found (got []string{"sgrunert@redhat.com"})
```

It is also possible to test an unsigned image against the policy. For that you
have to modify the key `quay.io/crio/signed` to something like
`quay.io/crio/unsigned`:

```console
> sed -i 's;quay.io/crio/signed;quay.io/crio/unsigned;' policy.json
```

If you now pull the container image, CRI-O will complain that no signature exists
for it:

```console
> sudo crictl pull quay.io/crio/unsigned
FATA[…] pulling image: rpc error: code = Unknown desc = SignatureValidationFailed: Source image rejected: A signature was required, but no signature exists
```

It is important to mention that CRI-O will match the
`.critical.identity.docker-reference` field within the signature to match with
the image repository. For example, if you verify the image
`registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.3`, then the corresponding
`docker-reference` should be `registry.k8s.io/kube-apiserver-amd64`:

```console
> cosign verify registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.3 \
    --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
    --certificate-oidc-issuer https://accounts.google.com \
    | jq -r '.[0].critical.identity."docker-reference"'
…

registry.k8s.io/kubernetes/kube-apiserver-amd64
```

The Kubernetes community introduced `registry.k8s.io` as proxy mirror for
various registries. Before the release of [kpromo v4.0.2][kpromo], images
had been signed with the actual mirror rather than `registry.k8s.io`:

[kpromo]: https://github.com/kubernetes-sigs/promo-tools/releases/tag/v4.0.2

```console
> cosign verify registry.k8s.io/kube-apiserver-amd64:v1.28.0-alpha.2 \
    --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
    --certificate-oidc-issuer https://accounts.google.com \
    | jq -r '.[0].critical.identity."docker-reference"'
…

asia-northeast2-docker.pkg.dev/k8s-artifacts-prod/images/kubernetes/kube-apiserver-amd64
```

The change of the `docker-reference` to `registry.k8s.io` makes it easier for
end users to validate the signatures, because they cannot know anything about the
underlying infrastructure being used. The feature to set the identity on image
signing has been added to [cosign][cosign-pr] via the flag `sign
--sign-container-identity` as well and will be part of its upcoming release.

[cosign-pr]: https://github.com/sigstore/cosign/pull/2984

The Kubernetes image pull error code `SignatureValidationFailed` got [recently added to
Kubernetes][pr-117717] and will be available from v1.28. This error code allows
end-users to understand image pull failures directly from the kubectl CLI. For
example, if you run CRI-O together with Kubernetes using the policy which requires
`quay.io/crio/unsigned` to be signed, then a pod definition like this:

[pr-117717]: https://github.com/kubernetes/kubernetes/pull/117717

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: container
      image: quay.io/crio/unsigned
```

will cause the `SignatureValidationFailed` error when applying the pod manifest:

```console
> kubectl apply -f pod.yaml
pod/pod created
```

```console
> kubectl get pods
NAME   READY   STATUS                      RESTARTS   AGE
pod    0/1     SignatureValidationFailed   0          4s
```

```console
> kubectl describe pod pod | tail -n8
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  58s                default-scheduler  Successfully assigned default/pod to 127.0.0.1
  Normal   BackOff    22s (x2 over 55s)  kubelet            Back-off pulling image "quay.io/crio/unsigned"
  Warning  Failed     22s (x2 over 55s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    9s (x3 over 58s)   kubelet            Pulling image "quay.io/crio/unsigned"
  Warning  Failed     6s (x3 over 55s)   kubelet            Failed to pull image "quay.io/crio/unsigned": SignatureValidationFailed: Source image rejected: A signature was required, but no signature exists
  Warning  Failed     6s (x3 over 55s)   kubelet            Error: SignatureValidationFailed
```

This overall behavior provides a more Kubernetes native experience and does not
rely on third party software to be installed in the cluster.

There are still a few corner cases to consider: For example, what if you want to
allow policies per namespace in the same way the policy-controller supports it?
Well, there is an upcoming CRI-O feature in v1.28 for that! CRI-O will support
the `--signature-policy-dir` / `signature_policy_dir` option, which defines the
root path for pod namespace-separated signature policies. This means that CRI-O
will look up that path and assemble a policy like `<SIGNATURE_POLICY_DIR>/<NAMESPACE>.json`,
which will be used on image pull if existing. If no pod namespace is
provided on image pull ([via the sandbox config][sandbox-config]), or the
concatenated path is non-existent, then CRI-O's global policy will be used as a
fallback.

[sandbox-config]: https://github.com/kubernetes/cri-api/blob/e5515a5/pkg/apis/runtime/v1/api.proto#L1448
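
As a sketch of what that could look like on a node (the directory path and namespace names below are made up for illustration; only the flag itself comes from the upcoming CRI-O feature):

```console
> sudo crio --signature-policy-dir /etc/crio/policies
> ls /etc/crio/policies
team-a.json  team-b.json
```

A pod created in the `team-a` namespace would then be evaluated against `/etc/crio/policies/team-a.json`, while pods in any other namespace fall back to the global policy.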

Another corner case to consider is critical for the correct signature
verification within container runtimes: The kubelet only invokes container image
pulls if the image does not already exist on disk. This means that an
unrestricted policy from Kubernetes namespace A can allow pulling an image,
while namespace B is not able to enforce the policy because it already exists on
the node. Finally, CRI-O has to verify the policy not only on image pull, but
also on container creation. This fact makes things even a bit more complicated,
because the CRI does not really pass down the user specified image reference on
container creation, but an already resolved image ID, or digest. A [small
change to the CRI][pr-118652] can help with that.

[pr-118652]: https://github.com/kubernetes/kubernetes/pull/118652

Now that everything happens within the container runtime, someone has to
maintain and define the policies to provide a good user experience around that
feature. The CRDs of the policy-controller are great, and we could imagine that
a daemon within the cluster can write the policies for CRI-O per namespace. This
would make any additional hook obsolete and moves the responsibility of
verifying the image signature to the actual instance which pulls the image. [I
evaluated][thread] other possible paths toward a better container image
signature verification within plain Kubernetes, but I could not find a great fit
for a native API. This means that I believe that a CRD is the way to go, but
users still need an instance which actually serves it.

[thread]: https://groups.google.com/g/kubernetes-sig-node/c/kgpxqcsJ7Vc/m/7X7t_ElsAgAJ

Thank you for reading this blog post! If you're interested in more, providing
feedback or asking for help, then feel free to get in touch with me directly via
[Slack (#crio)][slack] or the [SIG Node mailing list][mail].

[slack]: https://kubernetes.slack.com/messages/crio
[mail]: https://groups.google.com/forum/#!forum/kubernetes-sig-node
@@ -0,0 +1,347 @@
---
layout: blog
title: "Confidential Kubernetes: Use Confidential Virtual Machines and Enclaves to improve your cluster security"
date: 2023-07-06
slug: "confidential-kubernetes"
---

**Authors:** Fabian Kammel (Edgeless Systems), Mikko Ylinen (Intel), Tobin Feldman-Fitzthum (IBM)

In this blog post, we will introduce the concept of Confidential Computing (CC) to improve any computing environment's security and privacy properties. Further, we will show how
the Cloud-Native ecosystem, particularly Kubernetes, can benefit from the new compute paradigm.

Confidential Computing is a concept that has been introduced previously in the cloud-native world. The
[Confidential Computing Consortium](https://confidentialcomputing.io/) (CCC) is a project community in the Linux Foundation
that already worked on
[Defining and Enabling Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/85/2019/12/CCC_Overview.pdf).
In the [Whitepaper](https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf),
they provide a great motivation for the use of Confidential Computing:

> Data exists in three states: in transit, at rest, and in use. …Protecting sensitive data
> in all of its states is more critical than ever. Cryptography is now commonly deployed
> to provide both data confidentiality (stopping unauthorized viewing) and data integrity
> (preventing or detecting unauthorized changes). While techniques to protect data in transit
> and at rest are now commonly deployed, the third state - protecting data in use - is the new frontier.

Confidential Computing aims to primarily solve the problem of **protecting data in use**
by introducing a hardware-enforced Trusted Execution Environment (TEE).

## Trusted Execution Environments

For more than a decade, Trusted Execution Environments (TEEs) have been available in commercial
computing hardware in the form of [Hardware Security Modules](https://en.wikipedia.org/wiki/Hardware_security_module)
(HSMs) and [Trusted Platform Modules](https://www.iso.org/standard/50970.html) (TPMs). These
technologies provide trusted environments for shielded computations. They can
store highly sensitive cryptographic keys and carry out critical cryptographic operations
such as signing or encrypting data.

TPMs are optimized for low cost, allowing them to be integrated into mainboards and act as a
system's physical root of trust. To keep the cost low, TPMs are limited in scope, i.e., they
provide storage for only a few keys and are capable of just a small subset of cryptographic operations.

In contrast, HSMs are optimized for high performance, providing secure storage for far
more keys and offering advanced physical attack detection mechanisms. Additionally, high-end HSMs
can be programmed so that arbitrary code can be compiled and executed. The downside
is that they are very costly. A managed CloudHSM from AWS costs
[around $1.50 / hour](https://aws.amazon.com/cloudhsm/pricing/) or ~$13,500 / year.

In recent years, a new kind of TEE has gained popularity. Technologies like
[AMD SEV](https://developer.amd.com/sev/),
[Intel SGX](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html),
and [Intel TDX](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html)
provide TEEs that are closely integrated with userspace. Rather than low-power or high-performance
devices that support specific use cases, these TEEs shield normal processes or virtual machines
and can do so with relatively low overhead. These technologies each have different design goals,
advantages, and limitations, and they are available in different environments, including consumer
laptops, servers, and mobile devices.

Additionally, we should mention
[ARM TrustZone](https://www.arm.com/technologies/trustzone-for-cortex-a), which is optimized
for embedded devices such as smartphones, tablets, and smart TVs, as well as
[AWS Nitro Enclaves](https://aws.amazon.com/ec2/nitro/nitro-enclaves/), which are only available
on [Amazon Web Services](https://aws.amazon.com/) and have a different threat model compared
to the CPU-based solutions by Intel and AMD.

[IBM Secure Execution for Linux](https://www.ibm.com/docs/en/linux-on-systems?topic=virtualization-secure-execution)
lets you run your Kubernetes cluster's nodes as KVM guests within a trusted execution environment on
IBM Z series hardware. You can use this hardware-enhanced virtual machine isolation to
provide strong isolation between tenants in a cluster, with hardware attestation about the (virtual) node's integrity.

### Security properties and feature set

In the following sections, we will review the security properties and additional features
these new technologies bring to the table. Only some solutions will provide all properties;
we will discuss each technology in further detail in their respective section.

The **Confidentiality** property ensures that information cannot be viewed while it is
in use in the TEE. This provides us with the highly desired feature to secure
**data in use**. Depending on the specific TEE used, both code and data may be protected
from outside viewers. The differences in TEE architectures and in how they are used
in a cloud native context are important considerations when designing end-to-end security
for sensitive workloads with a minimal **Trusted Computing Base** (TCB) in mind. CCC has recently
worked on a [common vocabulary and supporting material](https://confidentialcomputing.io/wp-content/uploads/sites/85/2023/01/Common-Terminology-for-Confidential-Computing.pdf)
that helps to explain where confidentiality boundaries are drawn with the different TEE
architectures and how that impacts the TCB size.

Confidentiality is a great feature, but an attacker can still manipulate
or inject arbitrary code and data for the TEE to execute and, therefore, easily leak critical
information. **Integrity** guarantees a TEE owner that neither code nor data can be
tampered with while running critical computations.

**Availability** is a basic property often discussed in the context of information
security. However, this property is outside the scope of most TEEs. Usually, they can be controlled
(shut down, restarted, …) by some higher level abstraction. This could be the CPU itself, the
hypervisor, or the kernel. This is to preserve the overall system's availability,
not the TEE itself. When running in the cloud, availability is usually guaranteed by
the cloud provider in terms of Service Level Agreements (SLAs) and is not cryptographically enforceable.

Confidentiality and Integrity by themselves are only helpful in some cases. For example,
consider a TEE running in a remote cloud. How would you know the TEE is genuine and running
your intended software? It could be an imposter stealing your data as soon as you send it over.
This fundamental problem is addressed by **Attestability**. Attestation allows us to verify
the identity, confidentiality, and integrity of TEEs based on cryptographic certificates issued
from the hardware itself. This feature can also be made available to clients outside of the
confidential computing hardware in the form of remote attestation.

TEEs can hold and process information that predates or outlives the trusted environment. That
could mean across restarts, different versions, or platform migrations. Therefore **Recoverability**
is an important feature. Data and the state of a TEE need to be sealed before they are written
to persistent storage to maintain confidentiality and integrity guarantees. The access to such
sealed data needs to be well-defined. In most cases, the unsealing is bound to a TEE's identity.
Hence, recovery can only happen in the same confidential context.

This does not have to limit the flexibility of the overall system.
[AMD SEV-SNP's migration agent (MA)](https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf)
allows users to migrate a confidential virtual machine to a different host system
while keeping the security properties of the TEE intact.

## Feature comparison

These sections of the article will dive a little bit deeper into the specific implementations,
compare supported features and analyze their security properties.

### AMD SEV

AMD's [Secure Encrypted Virtualization (SEV)](https://developer.amd.com/sev/) technologies
are a set of features to enhance the security of virtual machines on AMD's server CPUs. SEV
transparently encrypts the memory of each VM with a unique key. SEV can also calculate a
signature of the memory contents, which can be sent to the VM's owner as an attestation that
the initial guest memory was not manipulated.

The second generation of SEV, known as
[Encrypted State](https://www.amd.com/system/files/TechDocs/Protecting%20VM%20Register%20State%20with%20SEV-ES.pdf)
or SEV-ES, provides additional protection from the hypervisor by encrypting all
CPU register contents when a context switch occurs.

The third generation of SEV,
[Secure Nested Paging](https://www.amd.com/system/files/TechDocs/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf)
or SEV-SNP, is designed to prevent software-based integrity attacks and reduce the risk associated with
compromised memory integrity. The basic principle of SEV-SNP integrity is that if a VM can read
a private (encrypted) memory page, it must always read the value it last wrote.

Additionally, by allowing the guest to obtain remote attestation statements dynamically,
SNP enhances the remote attestation capabilities of SEV.

AMD SEV has been implemented incrementally. New features and improvements have been added with
each new CPU generation. The Linux community makes these features available as part of the KVM hypervisor
and for host and guest kernels. The first SEV features were discussed and implemented in 2016 - see
[AMD x86 Memory Encryption Technologies](https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/kaplan)
from the 2016 Usenix Security Symposium. The latest big addition was
[SEV-SNP guest support in Linux 5.19](https://www.phoronix.com/news/AMD-SEV-SNP-Arrives-Linux-5.19).

[Confidential VMs based on AMD SEV-SNP](https://azure.microsoft.com/en-us/updates/azureconfidentialvm/)
have been available in Microsoft Azure since July 2022. Similarly, Google Cloud Platform (GCP) offers
[confidential VMs based on AMD SEV-ES](https://cloud.google.com/compute/confidential-vm/docs/about-cvm).

### Intel SGX

Intel's
[Software Guard Extensions](https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html)
have been available since 2015 and were introduced with the Skylake architecture.

SGX is an instruction set that enables users to create a protected and isolated process called
an *enclave*. It provides a reverse sandbox that protects enclaves from the operating system,
firmware, and any other privileged execution context.

The enclave memory cannot be read or written from outside the enclave, regardless of
the current privilege level and CPU mode. The only way to call an enclave function is
through a new instruction that performs several protection checks. Its memory is encrypted.
Tapping the memory or connecting the DRAM modules to another system will yield only encrypted
data. The memory encryption key randomly changes every power cycle. The key is stored
within the CPU and is not accessible.

Since the enclaves are process isolated, the operating system's libraries are not usable as is;
therefore, SGX enclave SDKs are required to compile programs for SGX. This also implies applications
need to be designed and implemented to consider the trusted/untrusted isolation boundaries.
On the other hand, applications get built with very minimal TCB.

An emerging approach to easily transition to process-based confidential computing
and avoid the need to build custom applications is to utilize library OSes. These OSes
facilitate running native, unmodified Linux applications inside SGX enclaves.
A library OS intercepts all application requests to the host OS and processes them securely
without the application knowing it's running in a TEE.

The 3rd generation Xeon CPUs (aka Ice Lake Server - "ICX") and later generations switched to using a technology called
[Total Memory Encryption - Multi-Key](https://www.intel.com/content/www/us/en/developer/articles/news/runtime-encryption-of-memory-with-intel-tme-mk.html)
(TME-MK) that uses AES-XTS, moving away from the
[Memory Encryption Engine](https://eprint.iacr.org/2016/204.pdf)
that the consumer and Xeon E CPUs used. This increased the possible
[enclave page cache](https://sgx101.gitbook.io/sgx101/sgx-bootstrap/enclave#enclave-page-cache-epc)
(EPC) size (up to 512GB/CPU) and improved performance. More info
about SGX on multi-socket platforms can be found in the
[Whitepaper](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/supporting-intel-sgx-on-mulit-socket-platforms.pdf).

A [list of supported platforms](https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873)
is available from Intel.

SGX is available on
[Azure](https://azure.microsoft.com/de-de/updates/intel-sgx-based-confidential-computing-vms-now-available-on-azure-dedicated-hosts/),
[Alibaba Cloud](https://www.alibabacloud.com/help/en/elastic-compute-service/latest/build-an-sgx-encrypted-computing-environment),
[IBM](https://cloud.ibm.com/docs/bare-metal?topic=bare-metal-bm-server-provision-sgx), and many more.

### Intel TDX

Where Intel SGX aims to protect the context of a single process,
[Intel's Trusted Domain Extensions](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html)
protect a full virtual machine and are, therefore, most closely comparable to AMD SEV.

As with SEV-SNP, guest support for TDX was [merged in Linux Kernel 5.19](https://www.phoronix.com/news/Intel-TDX-For-Linux-5.19).
However, hardware support will land with [Sapphire Rapids](https://en.wikipedia.org/wiki/Sapphire_Rapids) during 2023:
[Alibaba Cloud provides](https://www.alibabacloud.com/help/en/elastic-compute-service/latest/build-a-tdx-confidential-computing-environment)
invitational preview instances, and
[Azure has announced](https://techcommunity.microsoft.com/t5/azure-confidential-computing/preview-introducing-dcesv5-and-ecesv5-series-confidential-vms/ba-p/3800718)
its TDX preview opportunity.

## Overhead analysis

The benefits that Confidential Computing technologies provide via strong isolation and enhanced
security to customer data and workloads are not for free. Quantifying this impact is challenging and
depends on many factors: The TEE technology, the benchmark, the metrics, and the type of workload
all have a huge impact on the expected performance overhead.

Intel SGX-based TEEs are hard to benchmark, as [shown](https://arxiv.org/pdf/2205.06415.pdf)
[by](https://www.ibr.cs.tu-bs.de/users/mahhouk/papers/eurosec2021.pdf)
[different papers](https://dl.acm.org/doi/fullHtml/10.1145/3533737.3535098). The chosen SDK/library
OS, the application itself, as well as the resource requirements (especially large memory requirements)
have a huge impact on performance. A single-digit percentage overhead can be expected if an application
is well suited to run inside an enclave.

Confidential virtual machines based on AMD SEV-SNP require no changes to the executed program
and operating system and are a lot easier to benchmark. A
[benchmark from Azure and AMD](https://community.amd.com/t5/business/microsoft-azure-confidential-computing-powered-by-3rd-gen-epyc/ba-p/497796)
shows that SEV-SNP VM overhead is <10%, sometimes as low as 2%.

Although there is a performance overhead, it should be low enough to enable real-world workloads
to run in these protected environments and improve the security and privacy of our data.

## Confidential Computing compared to FHE, ZKP, and MPC

Fully Homomorphic Encryption (FHE), Zero Knowledge Proof/Protocol (ZKP), and Multi-Party
Computations (MPC) are all forms of encryption or cryptographic protocols that offer
similar security guarantees to Confidential Computing but do not require hardware support.

Fully (also partially and somewhat) homomorphic encryption allows one to perform
computations, such as addition or multiplication, on encrypted data. This provides
the property of encryption in use but does not provide integrity protection or attestation
like confidential computing does. Therefore, these two technologies can [complement each other](https://confidentialcomputing.io/2023/03/29/confidential-computing-and-homomorphic-encryption/).

Zero Knowledge Proofs or Protocols are a privacy-preserving technique (PPT) that
allows one party to prove facts about their data without revealing anything else about
the data. ZKP can be used instead of or in addition to Confidential Computing to protect
the privacy of the involved parties and their data. Similarly, Multi-Party Computation
enables multiple parties to work together on a computation, i.e., each party provides
their data to the result without leaking it to any other parties.

## Use cases of Confidential Computing

The presented Confidential Computing platforms show that both the isolation of a single container
process and, therefore, minimization of the trusted computing base and the isolation of a
full virtual machine are possible. This has already enabled a lot of interesting and secure
projects to emerge:

### Confidential Containers

[Confidential Containers](https://github.com/confidential-containers) (CoCo) is a
CNCF sandbox project that isolates Kubernetes pods inside of confidential virtual machines.

CoCo can be installed on a Kubernetes cluster with an operator.
The operator will create a set of runtime classes that can be used to deploy
pods inside an enclave on several different platforms, including
AMD SEV, Intel TDX, Secure Execution for IBM Z, and Intel SGX.
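
As a rough sketch of how such a runtime class is consumed (this example is not from the original post; the class name `kata-qemu-sev` is only illustrative, and the names actually installed depend on the operator version and platform), a pod selects it via `runtimeClassName`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-pod
spec:
  # Example runtime class assumed to be created by the CoCo operator
  runtimeClassName: kata-qemu-sev
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
```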

CoCo is typically used with signed and/or encrypted container images
which are pulled, verified, and decrypted inside the enclave.
Secrets, such as image decryption keys, are conditionally provisioned
to the enclave by a trusted Key Broker Service that validates the
hardware evidence of the TEE prior to releasing any sensitive information.

CoCo has several deployment models. Since the Kubernetes control plane
is outside the TCB, CoCo is suitable for managed environments. CoCo can
be run in virtual environments that don't support nesting with the help of an
API adaptor that starts pod VMs in the cloud. CoCo can also be run on
bare metal, providing strong isolation even in multi-tenant environments.

### Managed confidential Kubernetes

[Azure](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-node-pool-aks) and
[GCP](https://cloud.google.com/blog/products/identity-security/announcing-general-availability-of-confidential-gke-nodes)
both support the use of confidential virtual machines as worker nodes for their managed Kubernetes offerings.

Both services aim for better workload protection and security guarantees by enabling memory encryption
for container workloads. However, they don't seek to fully isolate the cluster or workloads against
the service provider or infrastructure. Specifically, they don't offer a dedicated confidential control
plane or expose attestation capabilities for the confidential cluster/nodes.
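
For example, confidential GKE nodes are requested at cluster creation time. This is a minimal sketch added for illustration; it assumes the flag and machine-type family documented by GCP, which may change:

```console
> gcloud container clusters create demo-cluster \
    --machine-type=n2d-standard-4 \
    --enable-confidential-nodes
```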

Azure also enables
[Confidential Containers](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-nodes-aks-overview)
in their managed Kubernetes offering. They support the creation based on
[Intel SGX enclaves](https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-containers-enclaves)
and [AMD SEV-based VMs](https://techcommunity.microsoft.com/t5/azure-confidential-computing/microsoft-introduces-preview-of-confidential-containers-on-azure/ba-p/3410394).

### Constellation

[Constellation](https://github.com/edgelesssys/constellation) is a Kubernetes engine that aims to
provide the best possible data security. Constellation wraps your entire Kubernetes cluster into
a single confidential context that is shielded from the underlying cloud infrastructure. Everything
inside is always encrypted, including at runtime in memory. It shields both the worker and control
plane nodes. In addition, it already integrates with popular CNCF software such as Cilium for
secure networking and provides extended CSI drivers to write data securely.

### Occlum and Gramine

[Occlum](https://occlum.io/) and [Gramine](https://gramineproject.io/) are examples of open source
library OS projects that can be used to run unmodified applications in SGX enclaves. They
are member projects under the CCC, but similar projects and products maintained by companies
also exist. With these libOS projects, existing containerized applications can be
easily converted into confidential computing enabled containers. Many curated prebuilt
containers are also available.
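
To give a feel for the workflow, here is a minimal sketch based on the Occlum documentation; exact commands, directory layout, and manifests vary by version and application and are not part of the original post:

```console
> mkdir occlum_instance && cd occlum_instance
> occlum init                    # create an enclave instance skeleton
> cp ../hello_world image/bin/   # copy the unmodified Linux binary into the enclave image
> occlum build                   # build and sign the SGX enclave
> occlum run /bin/hello_world    # run the application inside the enclave
```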

## Where are we today? Vendors, limitations, and FOSS landscape

As we hope you have seen from the previous sections, Confidential Computing is a powerful new concept
to improve security, but we are still in the (early) adoption phase. New products are
starting to emerge to take advantage of the unique properties.

Google and Microsoft are the first major cloud providers to have confidential offerings that
can run unmodified applications inside a protected boundary.
Still, these offerings are limited to compute, while end-to-end solutions for confidential
databases, cluster networking, and load balancers have to be self-managed.

These technologies provide opportunities to bring even the most
sensitive workloads into the cloud and enable them to leverage all the
tools in the CNCF landscape.

## Call to action

If you are currently working on a high-security product that struggles to run in the
public cloud due to legal requirements or are looking to bring the privacy and security
of your cloud-native project to the next level: Reach out to all the great projects
we have highlighted! Everyone is keen to improve the security of our ecosystem, and you can
play a vital role in that journey.

* [Confidential Containers](https://github.com/confidential-containers)
* [Constellation: Always Encrypted Kubernetes](https://github.com/edgelesssys/constellation)
* [Occlum](https://occlum.io/)
* [Gramine](https://gramineproject.io/)
* CCC also maintains a [list of projects](https://confidentialcomputing.io/projects/)
@@ -88,7 +88,7 @@ You should avoid using the `:latest` tag when deploying containers in production
it is harder to track which version of the image is running and more difficult to
roll back properly.

Instead, specify a meaningful tag such as `v1.42.0`.
Instead, specify a meaningful tag such as `v1.42.0` and/or a digest.
{{< /note >}}

To make sure the Pod always uses the same version of a container image, you can specify
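
To illustrate that guidance (the image name and digest below are made-up example values, not part of the changed page), a tag can be combined with a digest so the tag stays readable while the digest pins the exact image:

```yaml
spec:
  containers:
    - name: web
      # example values only
      image: registry.example/web:v1.42.0@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```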
@@ -113,6 +113,8 @@ running the same code no matter what tag changes happen at the registry.
When you (or a controller) submit a new Pod to the API server, your cluster sets the
`imagePullPolicy` field when specific conditions are met:

- if you omit the `imagePullPolicy` field, and you specify the digest for the
  container image, the `imagePullPolicy` is automatically set to `IfNotPresent`.
- if you omit the `imagePullPolicy` field, and the tag for the container image is
  `:latest`, `imagePullPolicy` is automatically set to `Always`;
- if you omit the `imagePullPolicy` field, and you don't specify the tag for the
@@ -123,7 +125,7 @@ When you (or a controller) submit a new Pod to the API server, your cluster sets

{{< note >}}
The value of `imagePullPolicy` of the container is always set when the object is
first _created_, and is not updated if the image's tag later changes.
first _created_, and is not updated if the image's tag or digest later changes.

For example, if you create a Deployment with an image whose tag is _not_
`:latest`, and later update that Deployment's image to a `:latest` tag, the
@@ -397,7 +397,7 @@ ensure your kubelet services are started with the following flags:

## Device plugin integration with the Topology Manager

{{< feature-state for_k8s_version="v1.18" state="beta" >}}
{{< feature-state for_k8s_version="v1.27" state="stable" >}}

The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology
aligned manner. In order to do this, the Device Plugin API was extended to include a
@@ -150,8 +150,7 @@ kubectl api-resources --namespaced=false
{{< feature-state for_k8s_version="1.22" state="stable" >}}

The Kubernetes control plane sets an immutable {{< glossary_tooltip text="label" term_id="label" >}}
`kubernetes.io/metadata.name` on all namespaces, provided that the `NamespaceDefaultLabelName`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
`kubernetes.io/metadata.name` on all namespaces.
The value of the label is the namespace name.
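
As an illustration (hypothetical, shortened output; not part of the changed page), the label is visible on any namespace:

```console
kubectl get namespace kube-system --show-labels
NAME          STATUS   AGE   LABELS
kube-system   Active   30d   kubernetes.io/metadata.name=kube-system
```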
@@ -220,7 +220,7 @@ are true. The following taints are built in:
as unusable. After a controller from the cloud-controller-manager initializes
this node, the kubelet removes this taint.

In case a node is to be evicted, the node controller or the kubelet adds relevant taints
In case a node is to be drained, the node controller or the kubelet adds relevant taints
with `NoExecute` effect. If the fault condition returns to normal the kubelet or node
controller can remove the relevant taint(s).
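
For example (a sketch with a hypothetical node name and output, added for illustration), a node that has become unreachable carries one of those built-in taints until the condition clears:

```console
kubectl describe node worker-1 | grep Taints
Taints:             node.kubernetes.io/unreachable:NoExecute
```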
@@ -230,7 +230,7 @@ the kubelet until communication with the API server is re-established. In the me
the pods that are scheduled for deletion may continue to run on the partitioned node.

{{< note >}}
The control plane limits the rate of adding node new taints to nodes. This rate limiting
The control plane limits the rate of adding new taints to nodes. This rate limiting
manages the number of evictions that are triggered when many nodes become unreachable at
once (for example: if there is a network disruption).
{{< /note >}}
@@ -462,7 +462,7 @@ The IP address that you choose must be a valid IPv4 or IPv6 address from within
If you try to create a Service with an invalid `clusterIP` address value, the API
server will return a 422 HTTP status code to indicate that there's a problem.

Read [avoiding collisions](#avoiding-collisions)
Read [avoiding collisions](/docs/reference/networking/virtual-ips/#avoiding-collisions)
to learn how Kubernetes helps reduce the risk and impact of two different Services
both trying to use the same IP address.
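
For illustration, this Service manifest is an added sketch (the address is an example value and must fall inside the cluster's configured Service CIDR); requesting a specific cluster IP looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  clusterIP: 10.96.100.50
  ports:
    - port: 80
      targetPort: 8080
```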
@ -787,269 +787,6 @@ metadata:
|
|||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
#### TLS support on AWS {#ssl-support-on-aws}
|
||||
|
||||
For partial TLS / SSL support on clusters running on AWS, you can add three
|
||||
annotations to a `LoadBalancer` service:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
|
||||
```
|
||||
|
||||
The first specifies the ARN of the certificate to use. It can be either a
|
||||
certificate from a third party issuer that was uploaded to IAM or one created
|
||||
within AWS Certificate Manager.
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: (https|http|ssl|tcp)
|
||||
```
|
||||
|
||||
The second annotation specifies which protocol a Pod speaks. For HTTPS and
|
||||
SSL, the ELB expects the Pod to authenticate itself over the encrypted
|
||||
connection, using a certificate.
|
||||
|
||||
HTTP and HTTPS selects layer 7 proxying: the ELB terminates
|
||||
the connection with the user, parses headers, and injects the `X-Forwarded-For`
|
||||
header with the user's IP address (Pods only see the IP address of the
|
||||
ELB at the other end of its connection) when forwarding requests.
|
||||
|
||||
TCP and SSL selects layer 4 proxying: the ELB forwards traffic without
|
||||
modifying the headers.
|
||||
|
||||
In a mixed-use environment where some ports are secured and others are left unencrypted,
|
||||
you can use the following annotations:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
|
||||
```
|
||||
|
||||
In the above example, if the Service contained three ports, `80`, `443`, and
|
||||
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
|
||||
|
||||
From Kubernetes v1.9 onwards you can use
|
||||
[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
|
||||
with HTTPS or SSL listeners for your Services.
|
||||
To see which policies are available for use, you can use the `aws` command line tool:
|
||||
|
||||
```bash
|
||||
aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
|
||||
```
|
||||
|
||||
You can then specify any one of those policies using the
|
||||
"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`"
|
||||
annotation; for example:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
|
||||
```
|
||||
|
||||
#### PROXY protocol support on AWS
|
||||
|
||||
To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)
|
||||
support for clusters running on AWS, you can use the following service
|
||||
annotation:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
|
||||
```
|
||||
|
||||
Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB
|
||||
and cannot be configured otherwise.
|
||||
|
||||
#### ELB Access Logs on AWS
|
||||
|
||||
There are several annotations to manage access logs for ELB Services on AWS.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`
|
||||
controls whether access logs are enabled.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`
|
||||
controls the interval in minutes for publishing the access logs. You can specify
|
||||
an interval of either 5 or 60 minutes.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`
|
||||
controls the name of the Amazon S3 bucket where load balancer access logs are
|
||||
stored.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`
|
||||
specifies the logical hierarchy you created for your Amazon S3 bucket.
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
# Specifies whether access logs are enabled for the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
|
||||
|
||||
# The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
|
||||
|
||||
# The name of the Amazon S3 bucket where the access logs are stored
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
|
||||
|
||||
# The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
|
||||
```
|
||||
|
||||
#### Connection Draining on AWS
|
||||
|
||||
Connection draining for Classic ELBs can be managed with the annotation
|
||||
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
|
||||
to the value of `"true"`. The annotation
|
||||
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
|
||||
also be used to set maximum time, in seconds, to keep the existing connections open before
|
||||
deregistering the instances.
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
|
||||
```
|
||||
|
||||
#### Other ELB annotations
|
||||
|
||||
There are other annotations to manage Classic Elastic Load Balancers that are described below.
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
# The time, in seconds, that the connection is allowed to be idle (no data has been sent
|
||||
# over the connection) before it is closed by the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
|
||||
|
||||
# Specifies whether cross-zone load balancing is enabled for the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
|
||||
|
||||
# A comma-separated list of key-value pairs which will be recorded as
|
||||
# additional tags in the ELB.
|
||||
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
|
||||
|
||||
# The number of successive successful health checks required for a backend to
|
||||
# be considered healthy for traffic. Defaults to 2, must be between 2 and 10
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
|
||||
|
||||
# The number of unsuccessful health checks required for a backend to be
|
||||
# considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
|
||||
|
||||
# The approximate interval, in seconds, between health checks of an
|
||||
# individual instance. Defaults to 10, must be between 5 and 300
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
|
||||
|
||||
# The amount of time, in seconds, during which no response means a failed
|
||||
# health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
|
||||
# value. Defaults to 5, must be between 2 and 60
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
|
||||
|
||||
# A list of existing security groups to be configured on the ELB created. Unlike the annotation
|
||||
# service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other
|
||||
# security groups previously assigned to the ELB and also overrides the creation
|
||||
# of a uniquely generated security group for this ELB.
|
||||
# The first security group ID on this list is used as a source to permit incoming traffic to
|
||||
# target worker nodes (service traffic and health checks).
|
||||
# If multiple ELBs are configured with the same security group ID, only a single permit line
|
||||
# will be added to the worker node security groups; that means that if you delete any
|
||||
# of those ELBs it will remove the single permit line and block access for all ELBs that shared the same security group ID.
|
||||
# This can cause a cross-service outage if not used properly
|
||||
service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
|
||||
|
||||
# A list of additional security groups to be added to the created ELB. This leaves the uniquely
|
||||
# generated security group in place, which ensures that every ELB
|
||||
# has a unique security group ID and a matching permit line to allow traffic to the target worker nodes
|
||||
# (service traffic and health checks).
|
||||
# Security groups defined here can be shared between services.
|
||||
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
|
||||
|
||||
# A comma separated list of key-value pairs which are used
|
||||
# to select the target nodes for the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
|
||||
```
|
||||
|
||||
#### Network Load Balancer support on AWS {#aws-nlb-support}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.15" state="beta" >}}
|
||||
|
||||
To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`.
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
NLB only works with certain instance classes; see the
|
||||
[AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
|
||||
on Elastic Load Balancing for a list of supported instance types.
|
||||
{{< /note >}}
|
||||
|
||||
Unlike Classic Elastic Load Balancers, Network Load Balancers (NLBs) forward the
|
||||
client's IP address through to the node. If a Service's `.spec.externalTrafficPolicy`
|
||||
is set to `Cluster`, the client's IP address is not propagated to the end
|
||||
Pods.
|
||||
|
||||
By setting `.spec.externalTrafficPolicy` to `Local`, the client IP address is
|
||||
propagated to the end Pods, but this could result in uneven distribution of
|
||||
traffic. Nodes without any Pods for a particular LoadBalancer Service will fail
|
||||
the NLB Target Group's health check on the auto-assigned
|
||||
`.spec.healthCheckNodePort` and not receive any traffic.
|
||||
|
||||
In order to achieve even traffic, either use a DaemonSet or specify a
|
||||
[pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
|
||||
so that the Service's Pods are not co-located on the same node.
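
As a rough sketch, a Deployment backing the Service could declare such an anti-affinity rule as follows; the `app` label, names, and image are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nlb-backend                 # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nlb-backend
  template:
    metadata:
      labels:
        app: nlb-backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nlb-backend
            topologyKey: kubernetes.io/hostname   # at most one Pod of this app per node
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9          # placeholder image
```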
|
||||
|
||||
You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
|
||||
annotation.
|
||||
|
||||
In order for client traffic to reach instances behind an NLB, the Node security
|
||||
groups are modified with the following IP rules:
|
||||
|
||||
| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
|
||||
|------|----------|---------|------------|---------------------|
|
||||
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
|
||||
| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
|
||||
| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
|
||||
|
||||
To limit which client IPs can access the Network Load Balancer,
|
||||
specify `loadBalancerSourceRanges`.
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
loadBalancerSourceRanges:
|
||||
- "143.231.0.0/16"
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
If `.spec.loadBalancerSourceRanges` is not set, Kubernetes
|
||||
allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have
|
||||
public IP addresses, be aware that non-NLB traffic can also reach all instances
|
||||
in those modified security groups.
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
Further documentation on annotations for Elastic IPs and other common use-cases may be found
|
||||
in the [AWS Load Balancer Controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/).
|
||||
|
||||
### `type: ExternalName` {#externalname}
|
||||
|
||||
|
||||
|
@ -1129,6 +866,9 @@ either:
|
|||
* For IPv4 endpoints, the DNS system creates A records.
|
||||
* For IPv6 endpoints, the DNS system creates AAAA records.
|
||||
|
||||
When you define a headless Service without a selector, the `port` must
|
||||
match the `targetPort`.
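
As a rough sketch, a headless Service without a selector could look like this; the name and port number are illustrative, and the endpoints would be supplied separately (for example, by an EndpointSlice you manage):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db            # illustrative name
spec:
  clusterIP: None              # headless Service
  ports:
  - port: 5432
    targetPort: 5432           # must match port when there is no selector
```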
|
||||
|
||||
## Discovering services
|
||||
|
||||
For clients running inside your cluster, Kubernetes supports two primary modes of
|
||||
|
|
|
@ -422,7 +422,7 @@ For example:
|
|||
1. Preheat oven to 350˚F
|
||||
|
||||
1. Prepare the batter, and pour into springform pan.
|
||||
`{{</* note */>}}Grease the pan for best results.{{</* /note */>}}`
|
||||
{{</* note */>}}Grease the pan for best results.{{</* /note */>}}
|
||||
|
||||
1. Bake for 20-25 minutes or until set.
|
||||
|
||||
|
|
|
@ -10,43 +10,50 @@ weight: 80
|
|||
---
|
||||
|
||||
<!-- overview -->
|
||||
Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together.
|
||||
Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted
|
||||
to users through the use of policies which combine attributes together.
|
||||
|
||||
<!-- body -->
|
||||
## Policy File Format
|
||||
|
||||
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup.
|
||||
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC`
|
||||
on startup.
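
For example, on a cluster where the API server runs as a static Pod (as kubeadm sets up), the flags might be added as in the fragment below; the manifest location and policy file path are illustrative assumptions, and the policy file must be readable by the kube-apiserver process:

```yaml
# Fragment of a kube-apiserver static Pod manifest
# (for example, /etc/kubernetes/manifests/kube-apiserver.yaml on kubeadm clusters)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --authorization-mode=ABAC
    - --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl   # illustrative path
    # ...other flags unchanged
```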
|
||||
|
||||
The file format is [one JSON object per line](https://jsonlines.org/). There
|
||||
The file format is [one JSON object per line](https://jsonlines.org/). There
|
||||
should be no enclosing list or map, only one map per line.
|
||||
|
||||
Each line is a "policy object", where each such object is a map with the following
|
||||
properties:
|
||||
|
||||
- Versioning properties:
|
||||
- `apiVersion`, type string; valid values are "abac.authorization.kubernetes.io/v1beta1". Allows versioning and conversion of the policy format.
|
||||
- `kind`, type string: valid values are "Policy". Allows versioning and conversion of the policy format.
|
||||
- `spec` property set to a map with the following properties:
|
||||
- Subject-matching properties:
|
||||
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the username of the authenticated user.
|
||||
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user. `system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all unauthenticated requests.
|
||||
- Resource-matching properties:
|
||||
- `apiGroup`, type string; an API group.
|
||||
- Ex: `apps`, `networking.k8s.io`
|
||||
- Wildcard: `*` matches all API groups.
|
||||
- `namespace`, type string; a namespace.
|
||||
- Ex: `kube-system`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- `resource`, type string; a resource type
|
||||
- Ex: `pods`, `deployments`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- Non-resource-matching properties:
|
||||
- `nonResourcePath`, type string; non-resource request paths.
|
||||
- Ex: `/version` or `/apis`
|
||||
- Wildcard:
|
||||
- `*` matches all non-resource requests.
|
||||
- `/foo/*` matches all subpaths of `/foo/`.
|
||||
- `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list, and watch operations, Non-resource-matching policy only applies to get operation.
|
||||
- Versioning properties:
|
||||
- `apiVersion`, type string; valid values are "abac.authorization.kubernetes.io/v1beta1". Allows versioning
|
||||
and conversion of the policy format.
|
||||
- `kind`, type string: valid values are "Policy". Allows versioning and conversion of the policy format.
|
||||
- `spec` property set to a map with the following properties:
|
||||
- Subject-matching properties:
|
||||
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the
|
||||
username of the authenticated user.
|
||||
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user.
|
||||
`system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all
|
||||
unauthenticated requests.
|
||||
- Resource-matching properties:
|
||||
- `apiGroup`, type string; an API group.
|
||||
- Ex: `apps`, `networking.k8s.io`
|
||||
- Wildcard: `*` matches all API groups.
|
||||
- `namespace`, type string; a namespace.
|
||||
- Ex: `kube-system`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- `resource`, type string; a resource type
|
||||
- Ex: `pods`, `deployments`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- Non-resource-matching properties:
|
||||
- `nonResourcePath`, type string; non-resource request paths.
|
||||
- Ex: `/version` or `/apis`
|
||||
- Wildcard:
|
||||
- `*` matches all non-resource requests.
|
||||
- `/foo/*` matches all subpaths of `/foo/`.
|
||||
- `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list,
|
||||
and watch operations, and that the Non-resource-matching policy only applies to the get operation.
|
||||
|
||||
{{< note >}}
|
||||
An unset property is the same as a property set to the zero value for its type
|
||||
|
@ -61,7 +68,7 @@ REST interface.
|
|||
|
||||
A request has attributes which correspond to the properties of a policy object.
|
||||
|
||||
When a request is received, the attributes are determined. Unknown attributes
|
||||
When a request is received, the attributes are determined. Unknown attributes
|
||||
are set to the zero value of its type (e.g. empty string, 0, false).
|
||||
|
||||
A property set to `"*"` will match any value of the corresponding attribute.
|
||||
|
@ -95,42 +102,49 @@ exposed via the `nonResourcePath` property in a policy (see [examples](#examples
|
|||
To inspect the HTTP calls involved in a specific kubectl operation you can turn
|
||||
up the verbosity:
|
||||
|
||||
kubectl --v=8 version
|
||||
```shell
|
||||
kubectl --v=8 version
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
1. Alice can do anything to all resources:
|
||||
1. Alice can do anything to all resources:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
|
||||
```
|
||||
2. The Kubelet can read any pods:
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
|
||||
```
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
3. The Kubelet can read and write events:
|
||||
1. The kubelet can read any pods:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}
|
||||
```
|
||||
4. Bob can just read pods in namespace "projectCaribou":
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
5. Anyone can make read-only requests to all non-resource paths:
|
||||
1. The kubelet can read and write events:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}
|
||||
```
|
||||
|
||||
1. Bob can just read pods in namespace "projectCaribou":
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
|
||||
1. Anyone can make read-only requests to all non-resource paths:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
```
|
||||
```
|
||||
|
||||
[Complete file example](https://releases.k8s.io/v{{< skew currentPatchVersion >}}/pkg/auth/authorizer/abac/example_policy_file.jsonl)
|
||||
|
||||
## A quick note on service accounts
|
||||
|
||||
Every service account has a corresponding ABAC username, and that service account's username is generated according to the naming convention:
|
||||
Every service account has a corresponding ABAC username, and that service account's username is generated
|
||||
according to the naming convention:
|
||||
|
||||
```shell
|
||||
system:serviceaccount:<namespace>:<serviceaccountname>
|
||||
|
@ -142,7 +156,7 @@ Creating a new namespace leads to the creation of a new service account in the f
|
|||
system:serviceaccount:<namespace>:default
|
||||
```
|
||||
|
||||
For example, if you wanted to grant the default service account (in the `kube-system` namespace) full
|
||||
For example, if you wanted to grant the default service account (in the `kube-system` namespace) full
|
||||
privilege to the API using ABAC, you would add this line to your policy file:
|
||||
|
||||
```json
|
||||
|
@ -150,6 +164,3 @@ privilege to the API using ABAC, you would add this line to your policy file:
|
|||
```
|
||||
|
||||
The apiserver will need to be restarted to pick up the new policy lines.
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -736,15 +736,22 @@ for more information.
|
|||
|
||||
### SecurityContextDeny {#securitycontextdeny}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.0" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="deprecated" >}}
|
||||
|
||||
{{< caution >}}
|
||||
This admission controller plugin is **outdated** and **incomplete**; it may be
|
||||
unusable or not do what you would expect. It was originally designed to prevent
|
||||
the use of some, but not all, security-sensitive fields. Indeed, fields like
|
||||
`privileged` were not filtered at creation, and the plugin was not updated with
|
||||
the most recent fields and new APIs, such as the `ephemeralContainers` field for a
|
||||
Pod.
|
||||
The Kubernetes project recommends that you **do not use** the
|
||||
`SecurityContextDeny` admission controller.
|
||||
|
||||
The `SecurityContextDeny` admission controller plugin is deprecated and disabled
|
||||
by default. It will be removed in a future version. If you choose to enable the
|
||||
`SecurityContextDeny` admission controller plugin, you must enable the
|
||||
`SecurityContextDeny` feature gate as well.
|
||||
|
||||
The `SecurityContextDeny` admission plugin is deprecated because it is outdated
|
||||
and incomplete; it may be unusable or not do what you would expect. As
|
||||
implemented, this plugin is unable to restrict all security-sensitive attributes
|
||||
of the Pod API. For example, the `privileged` and `ephemeralContainers` fields
|
||||
were never restricted by this plugin.
|
||||
|
||||
The [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
|
||||
plugin enforcing the [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
|
||||
|
|
|
@ -180,7 +180,7 @@ Kubernetes provides built-in signers that each have a well-known `signerName`:
|
|||
1. Permitted subjects - organizations are exactly `["system:nodes"]`, common name starts with "`system:node:`".
|
||||
1. Permitted x509 extensions - honors key usage and DNSName/IPAddress subjectAltName extensions, forbids EmailAddress and
|
||||
URI subjectAltName extensions, drops other extensions. At least one DNS or IP subjectAltName must be present.
|
||||
1. Permitted key usages - `["key encipherment", "digital signature", "client auth"]` or `["digital signature", "client auth"]`.
|
||||
1. Permitted key usages - `["key encipherment", "digital signature", "server auth"]` or `["digital signature", "server auth"]`.
|
||||
1. Expiration/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum
|
||||
of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
|
||||
1. CA bit allowed/disallowed - not allowed.
|
||||
|
|
|
@ -99,6 +99,7 @@ each source also represents a single path within that volume. The three sources
|
|||
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
|
||||
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
|
||||
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
|
||||
The kubelet also refreshes that token before the token expires.
|
||||
The token is bound to the specific Pod and has the kube-apiserver as its audience.
|
||||
This mechanism superseded an earlier mechanism that added a volume based on a Secret,
|
||||
where the Secret represented the ServiceAccount for the Pod, but did not expire.
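
To illustrate the `serviceAccountToken` projected volume source itself (separate from the automatically injected volume), a minimal Pod sketch might look like this; the names, image, and expiration are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-client                 # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9 # placeholder image
    volumeMounts:
    - name: api-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: api-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600    # illustrative lifespan
```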
|
||||
|
|
|
@ -706,7 +706,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
ClusterIP range is subdivided. Dynamically allocated ClusterIP addresses are assigned preferentially
|
||||
from the upper range allowing users to assign static ClusterIPs from the lower range with a low
|
||||
risk of collision. See
|
||||
[Avoiding collisions](/docs/concepts/services-networking/service/#avoiding-collisions)
|
||||
[Avoiding collisions](/docs/reference/networking/virtual-ips/#avoiding-collisions)
|
||||
for more details.
|
||||
- `SizeMemoryBackedVolumes`: Enable kubelets to determine the size limit for
|
||||
memory-backed volumes (mainly `emptyDir` volumes).
|
||||
|
|
|
@ -704,7 +704,7 @@ When this annotation is set, the Kubernetes components will "stand-down" and the
|
|||
|
||||
### statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name}
|
||||
|
||||
Type: Annotation
|
||||
Type: Label
|
||||
|
||||
Example: `statefulset.kubernetes.io/pod-name: "mystatefulset-7"`
|
||||
|
||||
|
@ -1406,6 +1406,422 @@ To learn more about NFD and its components go to its official
|
|||
[documentation](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/).
|
||||
{{< /note >}}
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-emit-interval}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
the load balancer for a Service based on this annotation. The value determines
|
||||
how often the load balancer writes log entries. For example, if you set the value
|
||||
to 5, access logs are published at 5-minute intervals (the permitted values are 5 and 60).
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-access-log-enabled (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-enabled}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "false"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
the load balancer for a Service based on this annotation. Access logging is enabled
|
||||
if you set the annotation to "true".
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-s3-bucket-name}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: example`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
the load balancer for a Service based on this annotation. The load balancer
|
||||
writes logs to an S3 bucket with the name you specify.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-s3-bucket-prefix}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "/example"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
the load balancer for a Service based on this annotation. The load balancer
|
||||
writes log objects with the prefix that you specify.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags (beta) {#service-beta-kubernetes-io-aws-load-balancer-additional-resource-tags}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=demo,Project=example"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
tags (an AWS concept) for a load balancer based on the comma-separated key/value
|
||||
pairs in the value of this annotation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-alpn-policy (beta) {#service-beta-kubernetes-io-aws-load-balancer-alpn-policy}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Optional`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-attributes (beta) {#service-beta-kubernetes-io-aws-load-balancer-attributes}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-attributes: "deletion_protection.enabled=true"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-backend-protocol (beta) {#service-beta-kubernetes-io-aws-load-balancer-backend-protocol}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
the load balancer listener based on the value of this annotation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled (beta) {#service-beta-kubernetes-io-aws-load-balancer-connection-draining-enabled}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "false"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
the load balancer based on this annotation. The load balancer's connection draining
|
||||
setting depends on the value you set.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout (beta) {#service-beta-kubernetes-io-aws-load-balancer-connection-draining-timeout}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
If you configure [connection draining](#service-beta-kubernetes-io-aws-load-balancer-connection-draining-enabled)
|
||||
for a Service of `type: LoadBalancer`, and you use the AWS cloud, the integration configures
|
||||
the draining period based on this annotation. The value you set determines the draining
|
||||
timeout in seconds.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-ip-address-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-ip-address-type}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout (beta) {#service-beta-kubernetes-io-aws-load-balancer-connection-idle-timeout}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The load balancer has a configured idle
|
||||
timeout period (in seconds) that applies to its connections. If no data has been
|
||||
sent or received by the time that the idle timeout period elapses, the load balancer
|
||||
closes the connection.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled (beta) {#service-beta-kubernetes-io-aws-load-balancer-cross-zone-load-balancing-enabled}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. If you set this annotation to "true",
|
||||
each load balancer node distributes requests evenly across the registered targets
|
||||
in all enabled [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones).
|
||||
If you disable cross-zone load balancing, each load balancer node distributes requests
|
||||
evenly across the registered targets in its availability zone only.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-eip-allocations (beta) {#service-beta-kubernetes-io-aws-load-balancer-eip-allocations}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-01bcdef23bcdef456,eipalloc-def1234abc4567890"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The value is a comma-separated list
|
||||
of elastic IP address allocation IDs.
|
||||
|
||||
This annotation is only relevant for Services of `type: LoadBalancer`, where
|
||||
the load balancer is an AWS Network Load Balancer.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-extra-security-groups (beta) {#service-beta-kubernetes-io-aws-load-balancer-extra-security-groups}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-12abcd3456,sg-34dcba6543"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value is a comma-separated
|
||||
list of extra AWS VPC security groups to configure for the load balancer.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-healthy-threshold}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value specifies the number of
|
||||
successive successful health checks required for a backend to be considered healthy
|
||||
for traffic.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-interval}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "30"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value specifies the interval,
|
||||
in seconds, between health check probes made by the load balancer.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-path (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-path}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthcheck`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value determines the
|
||||
path part of the URL that is used for HTTP health checks.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-port (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-port}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "24"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value determines which
|
||||
port the load balancer connects to when performing health checks.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-protocol}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value determines how the
|
||||
load balancer checks the health of backend targets.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-timeout}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "3"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value specifies the number
|
||||
of seconds before a probe that hasn't yet succeeded is automatically treated as
|
||||
having failed.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-unhealthy-threshold}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The annotation value specifies the number of
|
||||
successive unsuccessful health checks required for a backend to be considered unhealthy
|
||||
for traffic.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-internal (beta) {#service-beta-kubernetes-io-aws-load-balancer-internal}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The cloud controller manager integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. When you set this annotation to "true",
|
||||
the integration configures an internal load balancer.
|
||||
|
||||
If you use the [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/),
|
||||
see [`service.beta.kubernetes.io/aws-load-balancer-scheme`](#service-beta-kubernetes-io-aws-load-balancer-scheme).
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules (beta) {#service-beta-kubernetes-io-aws-load-balancer-manage-backend-security-group-rules}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-name (beta) {#service-beta-kubernetes-io-aws-load-balancer-name}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-name: my-elb`
|
||||
|
||||
Used on: Service
|
||||
|
||||
If you set this annotation on a Service, and you also annotate that Service with
|
||||
`service.beta.kubernetes.io/aws-load-balancer-type: "external"`, and you use the
|
||||
[AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
in your cluster, then the AWS load balancer controller sets the name of that load
|
||||
balancer to the value you set for _this_ annotation.
|
||||
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-nlb-target-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-nlb-target-type}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses (beta) {#service-beta-kubernetes-io-aws-load-balancer-private-ipv4-addresses}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "198.51.100.0,198.51.100.64"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-proxy-protocol (beta) {#service-beta-kubernetes-io-aws-load-balancer-proxy-protocol}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The official Kubernetes integration with AWS elastic load balancing configures
|
||||
a load balancer based on this annotation. The only permitted value is `"*"`,
|
||||
which indicates that the load balancer should wrap TCP connections to the backend
|
||||
Pod with the PROXY protocol.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-scheme (beta) {#service-beta-kubernetes-io-aws-load-balancer-scheme}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-scheme: internal`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/load-balancer-source-ranges (deprecated) {#service-beta-kubernetes-io-load-balancer-source-ranges}
|
||||
|
||||
Example: `service.beta.kubernetes.io/load-balancer-source-ranges: "192.0.2.0/25"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation. You should set `.spec.loadBalancerSourceRanges` for the Service instead.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-ssl-cert (beta) {#service-beta-kubernetes-io-aws-load-balancer-ssl-cert}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The official integration with AWS elastic load balancing configures TLS for a Service of
|
||||
`type: LoadBalancer` based on this annotation. The value of the annotation is the
|
||||
Amazon Resource Name (ARN) of the X.509 certificate that the load balancer listener should
|
||||
use.
|
||||
|
||||
(The TLS protocol is based on an older technology that abbreviates to SSL.)
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy (beta) {#service-beta-kubernetes-io-aws-load-balancer-ssl-negotiation-policy}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01`
|
||||
|
||||
The official integration with AWS elastic load balancing configures TLS for a Service of
|
||||
`type: LoadBalancer` based on this annotation. The value of the annotation is the name
|
||||
of an AWS policy for negotiating TLS with a client peer.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-ssl-ports (beta) {#service-beta-kubernetes-io-aws-load-balancer-ssl-ports}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"`
|
||||
|
||||
The official integration with AWS elastic load balancing configures TLS for a Service of
|
||||
`type: LoadBalancer` based on this annotation. The value of the annotation is either `"*"`,
|
||||
which means that all the load balancer's ports should use TLS, or it is a comma separated
|
||||
list of port numbers.
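
As a hedged illustration of how these TLS-related annotations are commonly combined on a single Service (the ARN, ports, and names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tls-service            # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/example"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # illustrative selector
  ports:
  - name: https
    port: 443
    targetPort: 8080
```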
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-subnets (beta) {#service-beta-kubernetes-io-aws-load-balancer-subnets}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-subnets: "private-a,private-b"`
|
||||
|
||||
Kubernetes' official integration with AWS uses this annotation to configure a
|
||||
load balancer and determine in which AWS availability zones to deploy the managed
|
||||
load balancing service. The value is either a comma separated list of subnet names, or a
|
||||
comma separated list of subnet IDs.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-target-group-attributes (beta) {#service-beta-kubernetes-io-aws-load-balancer-target-group-attributes}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "stickiness.enabled=true,stickiness.type=source_ip"`
|
||||
|
||||
Used on: Service
|
||||
|
||||
The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
|
||||
uses this annotation.
|
||||
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
|
||||
in the AWS load balancer controller documentation.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-target-node-labels (beta) {#service-beta-kubernetes-io-aws-target-node-labels}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "kubernetes.io/os=Linux,topology.kubernetes.io/region=us-east-2"`
|
||||
|
||||
Kubernetes' official integration with AWS uses this annotation to determine which
|
||||
nodes in your cluster should be considered as valid targets for the load balancer.
|
||||
|
||||
### service.beta.kubernetes.io/aws-load-balancer-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-type}
|
||||
|
||||
Example: `service.beta.kubernetes.io/aws-load-balancer-type: external`
|
||||
|
||||
Kubernetes' official integrations with AWS use this annotation to determine
|
||||
whether the AWS cloud provider integration should manage a Service of
|
||||
`type: LoadBalancer`.
|
||||
|
||||
There are two permitted values:
|
||||
|
||||
`nlb`
|
||||
: the cloud controller manager configures a Network Load Balancer
|
||||
|
||||
`external`
|
||||
: the cloud controller manager does not configure any load balancer
|
||||
|
||||
If you deploy a Service of `type: LoadBalancer` on AWS, and you don't set any
|
||||
`service.beta.kubernetes.io/aws-load-balancer-type` annotation,
|
||||
the AWS integration deploys a classic Elastic Load Balancer by default.
|
||||
|
||||
When you set this annotation to `external` on a Service of `type: LoadBalancer`,
|
||||
and your cluster has a working deployment of the AWS Load Balancer controller,
|
||||
then the AWS Load Balancer controller attempts to deploy a load balancer based
|
||||
on the Service specification.
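
A minimal sketch of such a Service, assuming the AWS Load Balancer controller is installed in the cluster (the name, selector, and annotation values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo                      # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
```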
|
||||
|
||||
{{< caution >}}
|
||||
Do not modify or add the `service.beta.kubernetes.io/aws-load-balancer-type` annotation
|
||||
on an existing Service object. See the AWS documentation on this topic for more
|
||||
details.
|
||||
{{< /caution >}}
|
||||
|
||||
### pod-security.kubernetes.io/enforce
|
||||
|
||||
Type: Label
|
||||
|
|
|
@ -83,9 +83,9 @@ The **discovery.k8s.io/v1beta1** API version of EndpointSlice is no longer serve
|
|||
* Migrate manifests and API clients to use the **discovery.k8s.io/v1** API version, available since v1.21.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes in **discovery.k8s.io/v1**:
|
||||
* use per Endpoint `nodeName` field instead of deprecated `topology["kubernetes.io/hostname"]` field
|
||||
* use per Endpoint `zone` field instead of deprecated `topology["topology.kubernetes.io/zone"]` field
|
||||
* `topology` is replaced with the `deprecatedTopology` field which is not writable in v1
|
||||
* use per Endpoint `nodeName` field instead of deprecated `topology["kubernetes.io/hostname"]` field
|
||||
* use per Endpoint `zone` field instead of deprecated `topology["topology.kubernetes.io/zone"]` field
|
||||
* `topology` is replaced with the `deprecatedTopology` field which is not writable in v1
|
||||
|
||||
#### Event {#event-v125}
|
||||
|
||||
|
@ -94,14 +94,20 @@ The **events.k8s.io/v1beta1** API version of Event is no longer served as of v1.
|
|||
* Migrate manifests and API clients to use the **events.k8s.io/v1** API version, available since v1.19.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes in **events.k8s.io/v1**:
|
||||
* `type` is limited to `Normal` and `Warning`
|
||||
* `involvedObject` is renamed to `regarding`
|
||||
* `action`, `reason`, `reportingController`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events
|
||||
* use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.count` instead of the deprecated `count` field (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingController` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingInstance` instead of the deprecated `source.host` field (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* `type` is limited to `Normal` and `Warning`
|
||||
* `involvedObject` is renamed to `regarding`
|
||||
* `action`, `reason`, `reportingController`, and `reportingInstance` are required
|
||||
when creating new **events.k8s.io/v1** Events
|
||||
* use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed
|
||||
to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field
|
||||
(which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.count` instead of the deprecated `count` field
|
||||
(which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingController` instead of the deprecated `source.component` field
|
||||
(which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingInstance` instead of the deprecated `source.host` field
|
||||
(which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
|
||||
|
||||
#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v125}
|
||||
|
||||
|
@ -117,11 +123,14 @@ The **policy/v1beta1** API version of PodDisruptionBudget is no longer served as
|
|||
* Migrate manifests and API clients to use the **policy/v1** API version, available since v1.21.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes in **policy/v1**:
|
||||
* an empty `spec.selector` (`{}`) written to a `policy/v1` PodDisruptionBudget selects all pods in the namespace (in `policy/v1beta1` an empty `spec.selector` selected no pods). An unset `spec.selector` selects no pods in either API version.
|
||||
* an empty `spec.selector` (`{}`) written to a `policy/v1` PodDisruptionBudget selects all
|
||||
pods in the namespace (in `policy/v1beta1` an empty `spec.selector` selected no pods).
|
||||
An unset `spec.selector` selects no pods in either API version.
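
For example, under **policy/v1** an empty selector now matches every Pod in the namespace (the names here are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: all-pods-pdb              # illustrative name
  namespace: example
spec:
  minAvailable: 1
  selector: {}                    # empty selector: in policy/v1 this selects all Pods in the namespace
```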
|
||||
|
||||
#### PodSecurityPolicy {#psp-v125}
|
||||
|
||||
PodSecurityPolicy in the **policy/v1beta1** API version is no longer served as of v1.25, and the PodSecurityPolicy admission controller will be removed.
|
||||
PodSecurityPolicy in the **policy/v1beta1** API version is no longer served as of v1.25,
|
||||
and the PodSecurityPolicy admission controller will be removed.
|
||||
|
||||
Migrate to [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
|
||||
or a [3rd party admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/).
|
||||
|
@ -142,17 +151,20 @@ The **v1.22** release stopped serving the following deprecated API versions:
|
|||
|
||||
#### Webhook resources {#webhook-resources-v122}
|
||||
|
||||
The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration and ValidatingWebhookConfiguration is no longer served as of v1.22.
|
||||
The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration
|
||||
and ValidatingWebhookConfiguration is no longer served as of v1.22.
|
||||
|
||||
* Migrate manifests and API clients to use the **admissionregistration.k8s.io/v1** API version, available since v1.16.
|
||||
* All existing persisted objects are accessible via the new APIs
|
||||
* Notable changes:
|
||||
* `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
|
||||
* `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
|
||||
* `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
|
||||
* `webhooks[*].sideEffects` default value is removed, and the field made required, and only `None` and `NoneOnDryRun` are permitted for v1
|
||||
* `webhooks[*].admissionReviewVersions` default value is removed and the field made required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
|
||||
* `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
|
||||
* `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
|
||||
* `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
|
||||
* `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
|
||||
* `webhooks[*].sideEffects` default value is removed, and the field made required,
|
||||
and only `None` and `NoneOnDryRun` are permitted for v1
|
||||
* `webhooks[*].admissionReviewVersions` default value is removed and the field made
|
||||
required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
|
||||
* `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
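
A sketch of an **admissionregistration.k8s.io/v1** webhook configuration reflecting the changes listed above (the names and backend Service are illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validation           # illustrative name
webhooks:
- name: validate.example.com         # must be unique within the list
  admissionReviewVersions: ["v1"]    # now required
  sideEffects: None                  # now required; only None or NoneOnDryRun permitted
  failurePolicy: Fail                # the v1 default
  clientConfig:
    service:
      name: webhook-service          # illustrative backend Service
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```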
|
||||
|
||||
#### CustomResourceDefinition {#customresourcedefinition-v122}
|
||||
|
||||
|
@ -161,16 +173,19 @@ The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition is
|
|||
* Migrate manifests and API clients to use the **apiextensions.k8s.io/v1** API version, available since v1.16.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes:
|
||||
* `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
|
||||
* `spec.version` is removed in v1; use `spec.versions` instead
|
||||
* `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
|
||||
* `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
|
||||
* `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
|
||||
* `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
|
||||
* `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
|
||||
* `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects, and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
|
||||
* `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
|
||||
* In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1 (fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
|
||||
* `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
|
||||
* `spec.version` is removed in v1; use `spec.versions` instead
|
||||
* `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
|
||||
* `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
|
||||
* `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
|
||||
* `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
|
||||
* `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
|
||||
* `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects,
|
||||
and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
|
||||
* `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects;
|
||||
it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
|
||||
* In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1
|
||||
(fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
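
A minimal **apiextensions.k8s.io/v1** CustomResourceDefinition sketch reflecting these changes (the group and resource names are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # illustrative name
spec:
  group: example.com
  scope: Namespaced                # no longer defaulted; must be set explicitly
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:             # required in v1; must be a structural schema
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```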
|
||||
|
||||
#### APIService {#apiservice-v122}
|
||||
|
||||
|
@ -189,11 +204,12 @@ The **authentication.k8s.io/v1beta1** API version of TokenReview is no longer se
|
|||
|
||||
#### SubjectAccessReview resources {#subjectaccessreview-resources-v122}
|
||||
|
||||
The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview, SelfSubjectAccessReview, SubjectAccessReview, and SelfSubjectRulesReview is no longer served as of v1.22.
|
||||
The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview,
|
||||
SelfSubjectAccessReview, SubjectAccessReview, and SelfSubjectRulesReview is no longer served as of v1.22.
|
||||
|
||||
* Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6.
|
||||
* Notable changes:
|
||||
* `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709))
|
||||
* `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709))
|
||||
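As a rough sketch of that rename, a v1 SubjectAccessReview uses the plural `groups` field; the user and resource details below are hypothetical:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com      # hypothetical subject
  groups:                     # was the singular spec.group in v1beta1
  - system:authenticated
  - developers
  resourceAttributes:
    namespace: default
    verb: get
    resource: pods
```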
|
||||
#### CertificateSigningRequest {#certificatesigningrequest-v122}
|
||||
|
||||
|
@ -202,13 +218,15 @@ The **certificates.k8s.io/v1beta1** API version of CertificateSigningRequest is
|
|||
* Migrate manifests and API clients to use the **certificates.k8s.io/v1** API version, available since v1.19.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes in `certificates.k8s.io/v1`:
|
||||
* For API clients requesting certificates:
|
||||
* `spec.signerName` is now required (see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)), and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
|
||||
* `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
|
||||
* For API clients approving or signing certificates:
|
||||
* `status.conditions` may not contain duplicate types
|
||||
* `status.conditions[*].status` is now required
|
||||
* `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks
|
||||
* For API clients requesting certificates:
|
||||
* `spec.signerName` is now required
|
||||
(see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)),
|
||||
and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
|
||||
* `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
|
||||
* For API clients approving or signing certificates:
|
||||
* `status.conditions` may not contain duplicate types
|
||||
* `status.conditions[*].status` is now required
|
||||
* `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks
|
||||
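Putting those requirements together, a v1 CertificateSigningRequest might look roughly like the sketch below; the base64-encoded request is a placeholder, and the signer shown is one of the built-in Kubernetes signers:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-user-csr                                  # hypothetical name
spec:
  request: <base64-encoded PKCS#10 certificate request>   # placeholder, not valid as-is
  signerName: kubernetes.io/kube-apiserver-client         # spec.signerName is now required
  usages:                                                 # now required; no duplicates, known usages only
  - client auth
```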
|
||||
#### Lease {#lease-v122}
|
||||
|
||||
|
@ -225,11 +243,12 @@ The **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions of Ing
|
|||
* Migrate manifests and API clients to use the **networking.k8s.io/v1** API version, available since v1.19.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes:
|
||||
* `spec.backend` is renamed to `spec.defaultBackend`
|
||||
* The backend `serviceName` field is renamed to `service.name`
|
||||
* Numeric backend `servicePort` fields are renamed to `service.port.number`
|
||||
* String backend `servicePort` fields are renamed to `service.port.name`
|
||||
* `pathType` is now required for each specified path. Options are `Prefix`, `Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.
|
||||
* `spec.backend` is renamed to `spec.defaultBackend`
|
||||
* The backend `serviceName` field is renamed to `service.name`
|
||||
* Numeric backend `servicePort` fields are renamed to `service.port.number`
|
||||
* String backend `servicePort` fields are renamed to `service.port.name`
|
||||
* `pathType` is now required for each specified path. Options are `Prefix`,
|
||||
`Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.
|
||||
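A minimal networking.k8s.io/v1 Ingress reflecting these renames might look like the following sketch; the `fallback` and `app` Services are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  defaultBackend:              # was spec.backend in v1beta1
    service:
      name: fallback           # was serviceName
      port:
        number: 80             # was a numeric servicePort
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix       # now required; use ImplementationSpecific to keep the old undefined behavior
        backend:
          service:
            name: app
            port:
              name: http       # was a string servicePort
```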
|
||||
#### IngressClass {#ingressclass-v122}
|
||||
|
||||
|
@ -241,7 +260,8 @@ The **networking.k8s.io/v1beta1** API version of IngressClass is no longer serve
|
|||
|
||||
#### RBAC resources {#rbac-resources-v122}
|
||||
|
||||
The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding, Role, and RoleBinding is no longer served as of v1.22.
|
||||
The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding,
|
||||
Role, and RoleBinding is no longer served as of v1.22.
|
||||
|
||||
* Migrate manifests and API clients to use the **rbac.authorization.k8s.io/v1** API version, available since v1.8.
|
||||
* All existing persisted objects are accessible via the new APIs
|
||||
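Migration here is typically just a change of `apiVersion`; for example, a simple namespaced Role under the v1 API (a sketch with a hypothetical rule):

```yaml
apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: pod-reader          # hypothetical
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```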
|
@ -285,9 +305,11 @@ The **extensions/v1beta1** and **apps/v1beta2** API versions of DaemonSet are no
|
|||
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes:
|
||||
* `spec.templateGeneration` is removed
|
||||
* `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
|
||||
* `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `extensions/v1beta1` was `OnDelete`)
|
||||
* `spec.templateGeneration` is removed
|
||||
* `spec.selector` is now required and immutable after creation; use the existing
|
||||
template labels as the selector for seamless upgrades
|
||||
* `spec.updateStrategy.type` now defaults to `RollingUpdate`
|
||||
(the default in `extensions/v1beta1` was `OnDelete`)
|
||||
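To illustrate, an apps/v1 DaemonSet sketch with the now-required selector spelled out and the update strategy made explicit; the names and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                  # hypothetical
spec:
  selector:                         # now required and immutable
    matchLabels:
      app: node-agent               # reuse the existing template labels for a seamless upgrade
  updateStrategy:
    type: RollingUpdate             # new default; set OnDelete explicitly to keep the old behavior
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/node-agent:1.0   # hypothetical image
```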
|
||||
#### Deployment {#deployment-v116}
|
||||
|
||||
|
@ -296,11 +318,15 @@ The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions
|
|||
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes:
|
||||
* `spec.rollbackTo` is removed
|
||||
* `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
|
||||
* `spec.progressDeadlineSeconds` now defaults to `600` seconds (the default in `extensions/v1beta1` was no deadline)
|
||||
* `spec.revisionHistoryLimit` now defaults to `10` (the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
|
||||
* `maxSurge` and `maxUnavailable` now default to `25%` (the default in `extensions/v1beta1` was `1`)
|
||||
* `spec.rollbackTo` is removed
|
||||
* `spec.selector` is now required and immutable after creation; use the existing
|
||||
template labels as the selector for seamless upgrades
|
||||
* `spec.progressDeadlineSeconds` now defaults to `600` seconds
|
||||
(the default in `extensions/v1beta1` was no deadline)
|
||||
* `spec.revisionHistoryLimit` now defaults to `10`
|
||||
(the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
|
||||
* `maxSurge` and `maxUnavailable` now default to `25%`
|
||||
(the default in `extensions/v1beta1` was `1`)
|
||||
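And similarly for an apps/v1 Deployment, a sketch with the selector made explicit and the new defaults shown as comments; the names and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                          # hypothetical
spec:
  replicas: 3
  selector:                          # now required and immutable
    matchLabels:
      app: web                       # must match the template labels below
  # progressDeadlineSeconds: 600     # new default (extensions/v1beta1 had no deadline)
  # revisionHistoryLimit: 10         # new default (apps/v1beta1 kept 2, extensions/v1beta1 kept all)
  strategy:
    rollingUpdate:
      maxSurge: 25%                  # new default (was 1)
      maxUnavailable: 25%            # new default (was 1)
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25            # hypothetical image tag
```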
|
||||
#### StatefulSet {#statefulset-v116}
|
||||
|
||||
|
@ -309,8 +335,10 @@ The **apps/v1beta1** and **apps/v1beta2** API versions of StatefulSet are no lon
|
|||
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes:
|
||||
* `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
|
||||
* `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `apps/v1beta1` was `OnDelete`)
|
||||
* `spec.selector` is now required and immutable after creation;
|
||||
use the existing template labels as the selector for seamless upgrades
|
||||
* `spec.updateStrategy.type` now defaults to `RollingUpdate`
|
||||
(the default in `apps/v1beta1` was `OnDelete`)
|
||||
|
||||
#### ReplicaSet {#replicaset-v116}
|
||||
|
||||
|
@ -319,7 +347,7 @@ The **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions
|
|||
* Migrate manifests and API clients to use the **apps/v1** API version, available since v1.9.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes:
|
||||
* `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
|
||||
* `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
|
||||
|
||||
#### PodSecurityPolicy {#psp-v116}
|
||||
|
||||
|
|
|
@ -69,6 +69,13 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
|
|||
#
|
||||
apiVersion: kubelet.config.k8s.io/v1beta1
|
||||
kind: KubeletConfiguration
|
||||
authentication:
|
||||
anonymous:
|
||||
enabled: false
|
||||
webhook:
|
||||
enabled: false
|
||||
authorization:
|
||||
mode: AlwaysAllow
|
||||
cgroupDriver: systemd
|
||||
address: 127.0.0.1
|
||||
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
|
||||
|
@ -298,7 +305,7 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
|
|||
https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
|
||||
https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
|
||||
```
|
||||
|
||||
|
||||
- Set `${HOST0}` to the IP address of the host you are testing.
|
||||
|
||||
|
||||
|
|
|
@ -8,11 +8,11 @@ weight: 140
|
|||
|
||||
This page shows how to configure liveness, readiness and startup probes for containers.
|
||||
|
||||
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) uses liveness probes to know when to
|
||||
restart a container. For example, liveness probes could catch a deadlock,
|
||||
where an application is running, but unable to make progress. Restarting a
|
||||
container in such a state can help to make the application more available
|
||||
despite bugs.
|
||||
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) uses
|
||||
liveness probes to know when to restart a container. For example, liveness
|
||||
probes could catch a deadlock, where an application is running, but unable to
|
||||
make progress. Restarting a container in such a state can help to make the
|
||||
application more available despite bugs.
|
||||
|
||||
A common pattern for liveness probes is to use the same low-cost HTTP endpoint
|
||||
as for readiness probes, but with a higher failureThreshold. This ensures that the pod
|
||||
|
@ -24,7 +24,7 @@ One use of this signal is to control which Pods are used as backends for Service
|
|||
When a Pod is not ready, it is removed from Service load balancers.
|
||||
|
||||
The kubelet uses startup probes to know when a container application has started.
|
||||
If such a probe is configured, it disables liveness and readiness checks until
|
||||
If such a probe is configured, liveness and readiness probes do not start until
|
||||
it succeeds, making sure those probes don't interfere with the application startup.
|
||||
This can be used to adopt liveness checks on slow starting containers, avoiding them
|
||||
getting killed by the kubelet before they are up and running.
|
||||
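Putting the liveness, readiness, and startup pieces together, a sketch of a Pod that reuses one low-cost HTTP endpoint for all three probes (the image, port, and thresholds below are assumptions, not the only valid values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0     # hypothetical image
    ports:
    - containerPort: 8080
    startupProbe:                 # liveness and readiness checks wait until this succeeds
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30        # allows up to 30 * 10s = 300s for a slow start
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    livenessProbe:                # same endpoint as readiness, but a higher failureThreshold
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 6
```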
|
@ -397,7 +397,9 @@ have a number of fields that you can use to more precisely control the behavior
|
|||
liveness and readiness checks:
|
||||
|
||||
* `initialDelaySeconds`: Number of seconds after the container has started before startup,
|
||||
liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
|
||||
liveness or readiness probes are initiated. If a startup probe is defined, liveness and
|
||||
readiness probe delays do not begin until the startup probe has succeeded.
|
||||
Defaults to 0 seconds. Minimum value is 0.
|
||||
* `periodSeconds`: How often (in seconds) to perform the probe. Default to 10 seconds.
|
||||
The minimum value is 1.
|
||||
* `timeoutSeconds`: Number of seconds after which the probe times out.
|
||||
|
|
|
@ -470,7 +470,7 @@ for scaling down which allows a 100% of the currently running replicas to be rem
|
|||
means the scaling target can be scaled down to the minimum allowed replicas.
|
||||
For scaling up, there is no stabilization window. When the metrics indicate that the target should be
|
||||
scaled up, the target is scaled up immediately. There are 2 policies where 4 pods or 100% of the currently
|
||||
running replicas will be added every 15 seconds till the HPA reaches its steady state.
|
||||
running replicas may at most be added every 15 seconds till the HPA reaches its steady state.
|
||||
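Those defaults correspond roughly to the following `behavior` stanza on an autoscaling/v2 HorizontalPodAutoscaler; you would only spell this out when you want to change it:

```yaml
# fragment of a HorizontalPodAutoscaler (autoscaling/v2) spec
behavior:
  scaleUp:
    stabilizationWindowSeconds: 0      # no stabilization window when scaling up
    selectPolicy: Max                  # pick whichever policy allows the larger change
    policies:
    - type: Pods
      value: 4
      periodSeconds: 15
    - type: Percent
      value: 100
      periodSeconds: 15
  scaleDown:
    stabilizationWindowSeconds: 300    # default downscale stabilization window
    policies:
    - type: Percent
      value: 100                       # allows all current replicas to be removed
      periodSeconds: 15
```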
|
||||
### Example: change downscale stabilization window
|
||||
|
||||
|
|
|
@ -61,17 +61,17 @@ Markdown doesn't have strict rules about how to process lists. When we moved
|
|||
from Jekyll to Hugo, we broke some lists. To fix them, keep the following in
|
||||
mind:
|
||||
|
||||
- Make sure you indent sub-list items **2 spaces**.
|
||||
- Make sure you indent sub-list items **2 spaces**.
|
||||
|
||||
- To end a list and start another, you need a HTML comment block on a new line
|
||||
- To end a list and start another, you need an HTML comment block on a new line
|
||||
between the lists, flush with the left-hand border. The first list won't end
|
||||
otherwise, no matter how many blank lines you put between it and the second.
|
||||
|
||||
### Bullet lists
|
||||
|
||||
- This is a list item
|
||||
* This is another list item in the same list
|
||||
- You can mix `-` and `*`
|
||||
- This is a list item.
|
||||
* This is another list item in the same list.
|
||||
- You can mix `-` and `*`.
|
||||
- To make a sub-item, indent two spaces.
|
||||
- This is a sub-sub-item. Indent two more spaces.
|
||||
- Another sub-item.
|
||||
|
@ -93,37 +93,38 @@ mind:
|
|||
- And a sub-list after some block-level content
|
||||
|
||||
- A bullet list item can contain a numbered list.
|
||||
1. Numbered sub-list item 1
|
||||
2. Numbered sub-list item 2
|
||||
1. Numbered sub-list item 1
|
||||
1. Numbered sub-list item 2
|
||||
|
||||
### Numbered lists
|
||||
|
||||
1. This is a list item
|
||||
2. This is another list item in the same list. The number you use in Markdown
|
||||
does not necessarily correlate to the number in the final output. By
|
||||
convention, we keep them in sync.
|
||||
3. {{<note>}}
|
||||
For single-digit numbered lists, using two spaces after the period makes
|
||||
interior block-level content line up better along tab-stops.
|
||||
{{</note>}}
|
||||
1. This is a list item
|
||||
1. This is another list item in the same list. The number you use in Markdown
|
||||
does not necessarily correlate to the number in the final output. By
|
||||
convention, we keep them in sync.
|
||||
|
||||
{{<note>}}
|
||||
For single-digit numbered lists, using two spaces after the period makes
|
||||
interior block-level content line up better along tab-stops.
|
||||
{{</note>}}
|
||||
|
||||
<!-- separate lists -->
|
||||
|
||||
1. This is a new list. With Hugo, you need to use a HTML comment to separate
|
||||
two consecutive lists. **The HTML comment needs to be at the left margin.**
|
||||
2. Numbered lists can have paragraphs or block elements within them.
|
||||
1. This is a new list. With Hugo, you need to use an HTML comment to separate
|
||||
two consecutive lists. **The HTML comment needs to be at the left margin.**
|
||||
1. Numbered lists can have paragraphs or block elements within them.
|
||||
|
||||
Indent the content to be the same as the first line of the bullet
|
||||
point. **This paragraph and the code block line up with the `N` in
|
||||
`Numbered` above.**
|
||||
Indent the content to be the same as the first line of the bullet
|
||||
point. **This paragraph and the code block line up with the `N` in
|
||||
`Numbered` above.**
|
||||
|
||||
```bash
|
||||
ls -l
|
||||
```
|
||||
```bash
|
||||
ls -l
|
||||
```
|
||||
|
||||
- And a sub-list after some block-level content. This is at the same
|
||||
"level" as the paragraph and code block above, despite being indented
|
||||
more.
|
||||
- And a sub-list after some block-level content. This is at the same
|
||||
"level" as the paragraph and code block above, despite being indented
|
||||
more.
|
||||
|
||||
### Tab lists
|
||||
|
||||
|
@ -218,11 +219,13 @@ source for this page).
|
|||
## Links
|
||||
|
||||
To format a link, put the link text inside square brackets, followed by the
|
||||
link target in parentheses. [Link to Kubernetes.io](https://kubernetes.io/) or
|
||||
[Relative link to Kubernetes.io](/)
|
||||
link target in parentheses.
|
||||
|
||||
- `[Link to Kubernetes.io](https://kubernetes.io/)` or
|
||||
- `[Relative link to Kubernetes.io](/)`
|
||||
|
||||
You can also use HTML, but it is not preferred.
|
||||
<a href="https://kubernetes.io/">Link to Kubernetes.io</a>
|
||||
For example, `<a href="https://kubernetes.io/">Link to Kubernetes.io</a>`.
|
||||
|
||||
## Images
|
||||
|
||||
|
@ -251,7 +254,6 @@ You can also use HTML for images, but it is not preferred.
|
|||
|
||||
<img src="/images/pencil.png" alt="pencil icon" />
|
||||
|
||||
|
||||
## Tables
|
||||
|
||||
Simple tables have one row per line, and columns are separated by `|`
|
||||
|
@ -299,7 +301,7 @@ graph TD;
|
|||
{{</*/ mermaid */>}}
|
||||
```
|
||||
|
||||
Produces:
|
||||
Produces:
|
||||
|
||||
{{< mermaid >}}
|
||||
graph TD;
|
||||
|
@ -323,7 +325,7 @@ sequenceDiagram
|
|||
{{</*/ mermaid */>}}
|
||||
```
|
||||
|
||||
Produces:
|
||||
Produces:
|
||||
|
||||
{{< mermaid >}}
|
||||
sequenceDiagram
|
||||
|
@ -337,7 +339,7 @@ sequenceDiagram
|
|||
Alice->John: Yes... John, how are you?
|
||||
{{</ mermaid >}}
|
||||
|
||||
<br>More [examples](https://mermaid-js.github.io/mermaid/#/examples) from the official docs.
|
||||
You can check more [examples](https://mermaid-js.github.io/mermaid/#/examples) from the official docs.
|
||||
|
||||
## Sidebars and Admonitions
|
||||
|
||||
|
@ -358,7 +360,6 @@ A sidebar offsets text visually, but without the visual prominence of
|
|||
> ```bash
|
||||
> sudo dmesg
|
||||
> ```
|
||||
>
|
||||
|
||||
### Admonitions
|
||||
|
||||
|
@ -376,13 +377,10 @@ You can have multiple paragraphs and block-level elements inside an admonition.
|
|||
The reader should proceed with caution.
|
||||
{{< /caution >}}
|
||||
|
||||
|
||||
{{< warning >}}
|
||||
Warnings point out something that could cause harm if ignored.
|
||||
{{< /warning >}}
|
||||
|
||||
|
||||
|
||||
## Includes
|
||||
|
||||
To add shortcodes to includes.
|
||||
|
|
|
@ -304,6 +304,7 @@ If you want to use minikube again to learn more about Kubernetes, you don't need
|
|||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Tutorial to _[deploy your first app on Kubernetes with kubectl](/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/)_.
|
||||
* Learn more about [Deployment objects](/docs/concepts/workloads/controllers/deployment/).
|
||||
* Learn more about [Deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/).
|
||||
* Learn more about [Service objects](/docs/concepts/services-networking/service/).
|
||||
|
|
|
@ -112,6 +112,7 @@ description: |-
|
|||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a id="deploy-an-app"></a>
|
||||
<h3>Deploy an app</h3>
|
||||
<p>Let’s deploy our first app on Kubernetes with the <code>kubectl create deployment</code> command. We need to provide the deployment name and app image location (include the full repository URL for images hosted outside Docker Hub).</p>
|
||||
<p><b><code>kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1</code></b></p>
|
||||
|
|
|
@ -65,7 +65,7 @@ description: |-
|
|||
<h3>Services and Labels</h3>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>A Service routes traffic across a set of Pods. Services are the abstraction that allows pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) are handled by Kubernetes Services.</p>
|
||||
|
@ -76,7 +76,7 @@ description: |-
|
|||
<li>Classify an object using tags</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
@ -97,7 +97,8 @@ description: |-
|
|||
<h3>Create a new Service</h3>
|
||||
<p>Let’s verify that our application is running. We’ll use the <code>kubectl get</code> command and look for existing Pods:</p>
|
||||
<p><code><b>kubectl get pods</b></code></p>
|
||||
<p>If no pods are running then it means the interactive environment is still reloading its previous state. Please wait a couple of seconds and list the Pods again. You can continue once you see the one Pod running.</p>
|
||||
<p>If no Pods are running then it means the objects from the previous tutorials were cleaned up. In this case, go back and recreate the deployment from the <a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro#deploy-an-app">Using kubectl to create a Deployment</a> tutorial.
|
||||
Please wait a couple of seconds and list the Pods again. You can continue once you see the one Pod running.</p>
|
||||
<p>Next, let’s list the current Services from our cluster:</p>
|
||||
<p><code><b>kubectl get services</b></code></p>
|
||||
<p>We have a Service called <tt>kubernetes</tt> that is created by default when minikube starts the cluster.
|
||||
|
|
|
@ -113,7 +113,8 @@ description: |-
|
|||
<p>To list your deployments use the <code>get deployments</code> subcommand:
|
||||
<code><b>kubectl get deployments</b></code></p>
|
||||
<p>The output should be similar to:</p>
|
||||
<pre>NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
<pre>
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
kubernetes-bootcamp 1/1 1 1 11m
|
||||
</pre>
|
||||
<p>We should have 1 Pod. If not, run the command again. This shows:</p>
|
||||
|
|
|
@ -78,10 +78,9 @@ releases may also occur in between these.
|
|||
|
||||
| Monthly Patch Release | Cherry Pick Deadline | Target date |
|
||||
| --------------------- | -------------------- | ----------- |
|
||||
| May 2023 | 2023-05-12 | 2023-05-17 |
|
||||
| June 2023 | 2023-06-09 | 2023-06-14 |
|
||||
| July 2023 | 2023-07-07 | 2023-07-12 |
|
||||
| August 2023 | 2023-08-04 | 2023-08-09 |
|
||||
| September 2023 | 2023-09-08 | 2023-09-13 |
|
||||
|
||||
## Detailed Release History for Active Branches
|
||||
|
||||
|
|
|
@ -0,0 +1,135 @@
|
|||
---
|
||||
title: Prácticas Recomendadas de Configuración
|
||||
content_type: concept
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
Este documento destaca y consolida las prácticas recomendadas de configuración que se presentan
|
||||
a lo largo de la guía del usuario, la documentación de Introducción y los ejemplos.
|
||||
|
||||
Este es un documento vivo. Si se te ocurre algo que no está en esta lista pero que puede ser útil
|
||||
a otros, no dudes en crear un _issue_ o enviar un PR.
|
||||
|
||||
<!-- body -->
|
||||
## Consejos Generales de Configuración
|
||||
|
||||
- Al definir configuraciones, especifica la última versión estable de la API.
|
||||
|
||||
- Los archivos de configuración deben almacenarse en el control de versiones antes de enviarse al clúster. Este
|
||||
le permite revertir rápidamente un cambio de configuración si es necesario. También ayuda a
|
||||
la recreación y restauración del clúster.
|
||||
|
||||
- Escribe tus archivos de configuración usando YAML en lugar de JSON. Aunque estos formatos pueden utilizarse
|
||||
indistintamente en casi todos los escenarios, YAML tiende a ser más amigable con el usuario.
|
||||
|
||||
- Agrupa los objetos relacionados en un solo archivo siempre que tenga sentido. Un archivo suele ser más fácil de
|
||||
administrar que varios. Ver el archivo
|
||||
[guestbook-all-in-one.yaml](https://github.com/kubernetes/examples/tree/master/guestbook/all-in-one/guestbook-all-in-one.yaml)
|
||||
como un ejemplo de esta sintaxis.
|
||||
|
||||
- Ten en cuenta también que se pueden llamar muchos comandos `kubectl` en un directorio. Por ejemplo, puedes llamar
|
||||
`kubectl apply` en un directorio de archivos de configuración.
|
||||
|
||||
- No especifiques valores predeterminados innecesariamente: una configuración simple y mínima hará que los errores sean menos probables.
|
||||
|
||||
- Coloca descripciones de objetos en anotaciones, para permitir una mejor introspección.
|
||||
|
||||
## "Naked" Pods vs ReplicaSets, Deployments y Jobs {#naked-pods-vs-replicasets-deployments-and-jobs}
|
||||
|
||||
- No usar "Naked" Pods (es decir, Pods no vinculados a un [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) o a un
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/)) si puedes evitarlo. Los Naked Pods
|
||||
no se reprogramarán en caso de falla de un nodo.
|
||||
|
||||
Un Deployment, que crea un ReplicaSet para garantizar que la cantidad deseada de Pods esté
|
||||
siempre disponible, y especifica una estrategia para reemplazar los Pods (como
|
||||
[RollingUpdate](/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)), es
|
||||
casi siempre preferible a crear Pods directamente, excepto por algunos explícitos
|
||||
[`restartPolicy: Never`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) escenarios.
|
||||
Un [Job](/docs/concepts/workloads/controllers/job/) también puede ser apropiado.
|
||||
|
||||
## Servicios
|
||||
|
||||
- Crea un [Service](/docs/concepts/services-networking/service/) antes de tus cargas de trabajo de backend correspondientes
|
||||
(Deployments o ReplicaSets) y antes de cualquier carga de trabajo que necesite acceder a él.
|
||||
Cuando Kubernetes inicia un contenedor, proporciona variables de entorno que apuntan a todos los _Services_
|
||||
que se estaban ejecutando cuando se inició el contenedor. Por ejemplo, si existe un _Service_ llamado `foo`,
|
||||
todos los contenedores obtendrán las siguientes variables en su entorno inicial:
|
||||
|
||||
```shell
|
||||
FOO_SERVICE_HOST=<el host en el que se ejecuta el Service>
|
||||
FOO_SERVICE_PORT=<el puerto en el que se ejecuta el Service>
|
||||
```
|
||||
|
||||
\* Esto implica un requisito de ordenamiento - cualquier `Service` al que un `Pod` quiera acceder debe ser
|
||||
creado antes del `Pod` en sí mismo, de lo contrario, las variables de entorno no se completarán.
|
||||
El DNS no tiene esta restricción.
|
||||
|
||||
- Un [cluster add-on](/docs/concepts/cluster-administration/addons/) opcional (aunque muy recomendable)
|
||||
es un servidor DNS. El servidor DNS observa la API de Kubernetes en busca de nuevos `Servicios` y crea un conjunto
|
||||
de registros DNS para cada uno. Si el DNS se ha habilitado en todo el clúster, todos los `Pods` deben ser
|
||||
capaces de hacer la resolución de nombres de `Services` automáticamente.
|
||||
|
||||
- No especifiques un `hostPort` para un Pod a menos que sea absolutamente necesario. Cuando vinculas un Pod a un
|
||||
`hostPort`, limita la cantidad de lugares en los que se puede agendar el Pod, porque cada combinación <`hostIP`,
|
||||
`hostPort`, `protocol`> debe ser única. Si no especificas el `hostIP` y
|
||||
`protocol` explícitamente, Kubernetes usará `0.0.0.0` como el `hostIP` predeterminado y `TCP` como el
|
||||
`protocol` por defecto.
|
||||
|
||||
Si solo necesitas acceder al puerto con fines de depuración, puedes utilizar el
|
||||
[apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls)
|
||||
o [`kubectl port-forward`](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
|
||||
|
||||
Si necesitas exponer explícitamente el puerto de un Pod en el nodo, considera usar un
|
||||
[NodePort](/docs/concepts/services-networking/service/#type-nodeport) Service antes de recurrir a
|
||||
`hostPort`.
|
||||
|
||||
- Evita usar `hostNetwork`, por las mismas razones que `hostPort`.
|
||||
|
||||
- Usa [headless Services](/docs/concepts/services-networking/service/#headless-services)
|
||||
(que tiene un `ClusterIP` de `None`) para el descubrimiento de servicios cuando no necesites
|
||||
balanceo de carga `kube-proxy`.
|
||||
|
||||
## Usando Labels
|
||||
|
||||
- Define y usa [labels](/docs/concepts/overview/working-with-objects/labels/) que identifiquen
|
||||
__atributos semánticos__ de tu aplicación o Deployment, como `{ app.kubernetes.io/name:
|
||||
MyApp, tier: frontend, phase: test, deployment: v3 }`. Puedes utilizar estas labels para seleccionar los
|
||||
Pods apropiados para otros recursos; por ejemplo, un Service que selecciona todos los
|
||||
Pods `tier: frontend`, o todos los componentes `phase: test` de `app.kubernetes.io/name: MyApp`.
|
||||
Revisa el [libro de visitas](https://github.com/kubernetes/examples/tree/master/guestbook/)
|
||||
para ver ejemplos de este enfoque.
|
||||
|
||||
Un Service puede hacer que abarque múltiples Deployments omitiendo las labels específicas de la versión de su
|
||||
selector. Cuando necesites actualizar un servicio en ejecución sin downtime, usa un
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/).
|
||||
|
||||
Un estado deseado de un objeto se describe mediante una implementación, y si los cambios a esa especificación son
|
||||
_aplicados_, el controlador de implementación cambia el estado actual al estado deseado en un
|
||||
ritmo controlado.
|
||||
|
||||
- Use las [labels comunes de Kubernetes](/docs/concepts/overview/working-with-objects/common-labels/)
|
||||
para casos de uso común. Estas labels estandarizadas enriquecen los metadatos de una manera que permite que las herramientas,
|
||||
incluyendo `kubectl` y el [dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard),
|
||||
trabajen de forma interoperable.
|
||||
|
||||
- Puedes manipular las labels para la depuración. Debido a que los controladores de Kubernetes (como ReplicaSet) y
|
||||
los Services coinciden con los Pods usando labels de selector, eliminar las labels relevantes de un Pod
|
||||
hará que deje de ser considerado por un controlador o de recibir tráfico de un Service. Si quitas
|
||||
las labels de un Pod existente, su controlador creará un nuevo Pod para ocupar su lugar. Esto es una
|
||||
forma útil de depurar un Pod previamente "vivo" en un entorno de "cuarentena". Para eliminar interactivamente
|
||||
o agregar labels, usa [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
|
||||
|
||||
## Usando kubectl
|
||||
|
||||
- Usa `kubectl apply -f <directorio>`. Esto busca la configuración de Kubernetes en todos los `.yaml`,
|
||||
`.yml`, y `.json` en `<directorio>` y lo pasa a `apply`.
|
||||
|
||||
- Usa selectores de labels para las operaciones `get` y `delete` en lugar de nombres de objetos específicos. Ve las
|
||||
secciones en [selectores de labels](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
|
||||
y [usar labels de forma eficaz](/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively).
|
||||
|
||||
- Usa `kubectl create deployment` y `kubectl expose` para crear rápidamente Deployments y Services
|
||||
de un solo contenedor.
|
||||
Consulta [Usar un Service para Acceder a una Aplicación en un Clúster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
|
||||
para un ejemplo.
|
|
@ -83,7 +83,7 @@ Esto es útil para futuras introspecciones, por ejemplo para comprobar qué coma
|
|||
|
||||
A continuación, ejecuta el comando `kubectl get deployments`. La salida debe ser parecida a la siguiente:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 1s
|
||||
```
|
||||
|
@ -126,7 +126,7 @@ deployment "nginx-deployment" successfully rolled out
|
|||
|
||||
Ejecuta de nuevo el comando `kubectl get deployments` unos segundos más tarde:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 18s
|
||||
```
|
||||
|
@ -136,7 +136,7 @@ la última plantilla Pod) y están disponibles (el estado del Pod tiene el valor
|
|||
|
||||
Para ver el ReplicaSet (`rs`) creado por el Deployment, ejecuta el comando `kubectl get rs`:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-75675f5897 3 3 3 18s
|
||||
```
|
||||
|
@ -146,7 +146,7 @@ genera de forma aleatoria y usa el pod-template-hash como semilla.
|
|||
|
||||
Para ver las etiquetas generadas automáticamente en cada pod, ejecuta el comando `kubectl get pods --show-labels`. Se devuelve la siguiente salida:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
|
|
|
@ -89,7 +89,7 @@ Avant de commencer, assurez-vous que votre cluster Kubernetes est opérationnel.
|
|||
1. Exécutez `kubectl get deployments` pour vérifier si le déploiement a été créé.
|
||||
Si le déploiement est toujours en cours de création, la sortie est similaire à:
|
||||
|
||||
```shell
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 0/3 0 0 1s
|
||||
```
|
||||
|
@ -122,7 +122,7 @@ Avant de commencer, assurez-vous que votre cluster Kubernetes est opérationnel.
|
|||
1. Exécutez à nouveau `kubectl get deployments` quelques secondes plus tard.
|
||||
La sortie est similaire à ceci:
|
||||
|
||||
```text
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 18s
|
||||
```
|
||||
|
@ -143,7 +143,7 @@ Avant de commencer, assurez-vous que votre cluster Kubernetes est opérationnel.
|
|||
1. Pour voir les labels générées automatiquement pour chaque Pod, exécutez `kubectl get pods --show-labels`.
|
||||
La sortie est similaire à ceci:
|
||||
|
||||
```text
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
|
|
|
@ -218,10 +218,10 @@ Pour revenir à ce contexte, exécutez la commande suivante: `kubectl config use
|
|||
#### Spécifier la version de Kubernetes
|
||||
|
||||
Vous pouvez spécifier la version de Kubernetes pour Minikube à utiliser en ajoutant la chaîne `--kubernetes-version` à la commande `minikube start`.
|
||||
Par exemple, pour exécuter la version {{< param "fullversion" >}}, procédez comme suit:
|
||||
Par exemple, pour exécuter la version {{< skew currentPatchVersion >}}, procédez comme suit:
|
||||
|
||||
```shell
|
||||
minikube start --kubernetes-version {{< param "fullversion" >}}
|
||||
minikube start --kubernetes-version v{{< skew currentPatchVersion >}}
|
||||
```
|
||||
|
||||
#### Spécification du pilote de machine virtuelle
|
||||
|
|
|
@ -35,10 +35,10 @@ Vous devez utiliser une version de kubectl qui différe seulement d'une version
|
|||
|
||||
Pour télécharger une version spécifique, remplacez `$(curl -s https://dl.k8s.io/release/stable.txt)` avec la version spécifique.
|
||||
|
||||
Par exemple, pour télécharger la version {{< param "fullversion" >}} sur Linux, tapez :
|
||||
Par exemple, pour télécharger la version {{< skew currentPatchVersion >}} sur Linux, tapez :
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/linux/amd64/kubectl
|
||||
```
|
||||
|
||||
2. Rendez le binaire kubectl exécutable.
|
||||
|
@ -115,10 +115,10 @@ kubectl version --client
|
|||
|
||||
Pour télécharger une version spécifique, remplacez `$(curl -Ls https://dl.k8s.io/release/stable.txt)` avec la version spécifique.
|
||||
|
||||
Par exemple, pour télécharger la version {{< param "fullversion" >}} sur macOS, tapez :
|
||||
Par exemple, pour télécharger la version {{< skew currentPatchVersion >}} sur macOS, tapez :
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/amd64/kubectl
|
||||
```
|
||||
|
||||
2. Rendez le binaire kubectl exécutable.
|
||||
|
@ -180,12 +180,12 @@ Si vous êtes sur MacOS et que vous utilisez le gestionnaire de paquets [Macport
|
|||
|
||||
### Installer le binaire kubectl avec curl sur Windows
|
||||
|
||||
1. Téléchargez la dernière version {{< param "fullversion" >}} depuis [ce lien](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
|
||||
1. Téléchargez la dernière version {{< skew currentPatchVersion >}} depuis [ce lien](https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe).
|
||||
|
||||
Ou si vous avez `curl` installé, utilisez cette commande:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
Pour connaître la dernière version stable (par exemple, en scripting), jetez un coup d'oeil à [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
|
||||
|
|
|
@ -2,4 +2,4 @@ Vous devez disposer d'un cluster Kubernetes et l'outil de ligne de commande kube
|
|||
Si vous ne possédez pas déjà de cluster, vous pouvez en créer un en utilisant [Minikube](/docs/setup/minikube), ou vous pouvez utiliser l'un de ces environnements Kubernetes:
|
||||
|
||||
* [Killercoda](https://killercoda.com/playgrounds/scenario/kubernetes)
|
||||
* [Play with Kubernetes](http://labs.play-with-k8s.com/)
|
||||
* [Play with Kubernetes](https://labs.play-with-k8s.com/)
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
---
|
||||
title: अवधारणाएँ
|
||||
main_menu: true
|
||||
content_type: concept
|
||||
weight: 40
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
अवधारणा अनुभाग आपको कुबेरनेट्स प्रणाली के हिस्सों के बारे में जानने में मदद करता है जिसका उपयोग कुबेरनेट्स आपके {{< glossary_tooltip text="क्लस्टर" term_id="cluster" length="all" >}} का प्रतिनिधित्व करने के लिए करता है, और कुबेरनेट्स कार्यप्रणाली की गहरी समझ प्राप्त करने में आपकी मदद करता है।
|
||||
|
||||
<!-- body -->
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: एनोटेशन (Annotation)
|
||||
id: annotation
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/overview/working-with-objects/annotations
|
||||
short_description: >
|
||||
एक की-वैल्यू पेयर जिसका उपयोग मनमाने ढंग से गैर-पहचान वाले मेटाडेटा को ऑब्जेक्ट से जोड़ने के लिए किया जाता है।
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
एक की-वैल्यू पेयर जिसका उपयोग मनमाने ढंग से गैर-पहचान वाले मेटाडेटा को ऑब्जेक्ट से जोड़ने के लिए किया जाता है।
|
||||
|
||||
<!--more-->
|
||||
|
||||
एनोटेट किया गया मेटाडेटा छोटा या बड़ा, संरचित या असंरचित हो सकता है, और इसमें ऐसे वर्ण हो सकते हैं जिनकी {{< glossary_tooltip text="लेबल" term_id="label" >}} में अनुमति नहीं है। आप क्लाइंट जैसे टूल और लाइब्रेरी के द्वारा मेटाडेटा पुनर्प्राप्त कर सकते हैं।
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: ऐप कंटेनर (App Container)
|
||||
id: app-container
|
||||
date: 2019-02-12
|
||||
full_link:
|
||||
short_description: >
|
||||
एक कंटेनर एक कार्यभार का हिस्सा चलाने के लिए प्रयोग किया जाता है। इनिट कंटेनर के साथ तुलना करें।
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- workload
|
||||
---
|
||||
एप्लिकेशन कंटेनर (या ऐप कंटेनर) एक {{<glossary_tooltip text="पॉड" term_id="pod" >}} में {{< glossary_tooltip text="कंटेनर" term_id="container" >}} होता हैं, जो किसी भी {{< glossary_tooltip text="इनिट कंटेनर" term_id="init-container" >}} के पूरा हो जाने के बाद शुरू होते हैं।
|
||||
|
||||
<!--more-->
|
||||
|
||||
एक इनिट कंटेनर आपको इनिशियलाइज़ेशन विवरण को अलग करने देता है जो समग्र {{< glossary_tooltip text="कार्यभार" term_id="workload" >}} के लिए महत्वपूर्ण हैं, और एप्लिकेशन कंटेनर शुरू हो जाने के बाद इसे चालू रखने की आवश्यकता नहीं है। यदि किसी पॉड में कोई इनिट कंटेनर कॉन्फ़िगर नहीं है, तो उस पॉड के सभी कंटेनर ऐप कंटेनर हैं।
|
|
@ -0,0 +1,16 @@
|
|||
---
|
||||
title: सीग्रुप (cgroup,control group)
|
||||
id: cgroup
|
||||
date: 2019-06-25
|
||||
full_link:
|
||||
short_description: >
|
||||
वैकल्पिक संसाधन अलगाव, लेखांकन और सीमाओं के साथ Linux प्रक्रियाओं का एक समूह।
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
वैकल्पिक संसाधन अलगाव, लेखांकन और सीमाओं के साथ Linux प्रक्रियाओं का एक समूह।
|
||||
|
||||
<!--more-->
|
||||
|
||||
सीग्रुप एक लिनक्स कर्नेल सुविधा है जो प्रक्रियाओं के संग्रह के लिए रिसोर्स उपयोग (CPU, मेमोरी, डिस्क I/O, नेटवर्क) को सीमित, जिम्मेदार और अलग करती है।
|
|
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: कंटेनर (Container)
|
||||
id: container
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/containers/
|
||||
short_description: >
|
||||
एक हल्की और पोर्टेबल निष्पादन योग्य इमेज जिसमें सॉफ़्टवेयर और उसकी सभी निर्भरताएँ होती हैं।
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
- workload
|
||||
---
|
||||
एक हल्की और पोर्टेबल निष्पादन योग्य इमेज जिसमें सॉफ़्टवेयर और उसकी सभी निर्भरताएँ होती हैं।
|
||||
<!--more-->
|
||||
|
||||
विभिन्न क्लाउड या OS वातावरणों में डिप्लॉयमेंट को आसान बनाने और आसान स्केलिंग के लिए कंटेनर अंतर्निहित होस्ट इन्फ्रास्ट्रक्चर से ऍप्लिकेशन्स को अलग करते हैं।
|
||||
कंटेनर के अंदर चलने वाले एप्लिकेशन को कन्टेनराइज़्ड एप्लिकेशन कहा जाता है। इन ऍप्लिकेशन्स और उनकी निर्भरताओं को एक कंटेनर इमेज में बंडल करने की प्रक्रिया को कन्टेनराइज़ेशन कहा जाता है।
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: कंटेनरडी (containerd)
|
||||
id: containerd
|
||||
date: 2019-05-14
|
||||
full_link: https://containerd.io/docs/
|
||||
short_description: >
|
||||
सादगी, मजबूती और सुवाह्यता पर जोर देने वाला कंटेनर रनटाइम
|
||||
aka:
|
||||
tags:
|
||||
- tool
|
||||
---
|
||||
|
||||
सादगी, मजबूती और सुवाह्यता पर जोर देने वाला कंटेनर रनटाइम
|
||||
|
||||
<!--more-->
|
||||
|
||||
कंटेनरडी एक {{< glossary_tooltip text="कंटेनर" term_id="container" >}} रनटाइम है जो Linux या Windows पर एक डैमन के रूप में चलता है। कंटेनर डी कंटेनर इमेजेस को लाने और संग्रहीत करने, कंटेनरों को निष्पादित करने, नेटवर्क एक्सेस प्रदान करने, आदि का ध्यान रखता है।
|
|
@ -0,0 +1,22 @@
|
|||
---
|
||||
title: API सर्वर
|
||||
id: kube-apiserver
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/overview/components/#kube-apiserver
|
||||
short_description: >
|
||||
एक कंट्रोल प्लेन घटक जो कुबेरनेट्स API की सेवाएं प्रदान करता है।।
|
||||
|
||||
aka:
|
||||
- kube-apiserver
|
||||
tags:
|
||||
- architecture
|
||||
- fundamental
|
||||
---
|
||||
API सर्वर कुबेरनेट्स {{< glossary_tooltip text="कंट्रोल प्लेन" term_id="control-plane" >}} का एक घटक है जो कुबेरनेट्स API को उजागर करता है।
|
||||
API सर्वर कुबेरनेट्स कंट्रोल प्लेन का फ्रंट एंड है।
|
||||
|
||||
<!--more-->
|
||||
|
||||
कुबेरनेट्स API सर्वर का मुख्य कार्यान्वयन [kube-apiserver](/docs/reference/generated/kube-apiserver/) है।
|
||||
kube-apiserver को हॉरिज़ॉन्टल रूप से स्केल करने के लिए बनाया गया है अर्थात, आप अधिक इंस्टेंस को डिप्लॉय करके स्केल कर सकते हैं।
|
||||
आप kube-apiserver के कई इंस्टेंस चला सकते हैं और उनके बीच ट्रैफ़िक को संतुलित कर सकते हैं।
|
|
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: क्यूबकण्ट्रोल (Kubectl)
|
||||
id: kubectl
|
||||
date: 2018-04-12
|
||||
full_link: /docs/user-guide/kubectl-overview/
|
||||
short_description: >
|
||||
कुबेरनेट्स क्लस्टर के साथ संचार करने के लिए एक कमांड लाइन उपकरण।
|
||||
aka:
|
||||
- kubectl
|
||||
tags:
|
||||
- tool
|
||||
- fundamental
|
||||
---
|
||||
कुबेरनेट्स API का उपयोग करके कुबेरनेट्स क्लस्टर के {{< glossary_tooltip text="कण्ट्रोल प्लेन" term_id="control-plane">}} के साथ संचार करने के लिए कमांड लाइन उपकरण।
|
||||
|
||||
<!--more-->
|
||||
|
||||
आप कुबेरनेट्स ऑब्जेक्ट को बनाने, निरीक्षण करने, अपडेट करने और हटाने के लिए `kubectl` का उपयोग कर सकते हैं।
|
|
@ -0,0 +1,17 @@
|
|||
---
|
||||
title: क्यूबलेट
|
||||
id: kubelet
|
||||
date: 2018-04-12
|
||||
full_link: /docs/reference/generated/kubelet
|
||||
short_description: >
|
||||
एक एजेंट जो क्लस्टर में प्रत्येक नोड पर चलता है। यह सुनिश्चित करता है कि कंटेनर पॉड में चल रहे हैं।
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
एक एजेंट जो क्लस्टर में प्रत्येक {{< glossary_tooltip text="नोड" term_id="node" >}} पर चलता है। यह सुनिश्चित करता है कि {{<glossary_tooltip text="कंटेनर" term_id="container" >}} एक {{<glossary_tooltip text="पॉड" term_id="pod" >}} में चल रहे हैं।
|
||||
|
||||
<!--more-->
|
||||
|
||||
क्यूबलेट को विभिन्न तंत्रों के माध्यम से पॉडस्पेक्स (PodSpec) का एक समूह प्राप्त होता है और यह सुनिश्चित करता हैं कि इन पॉडस्पेक्स में वर्णित कंटेनर चल रहे हैं और स्वस्थ हैं। क्यूबलेट उन कंटेनरों का प्रबंधन नहीं करता है जो कुबेरनेट्स द्वारा नहीं बनाए गए थे।
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: संसाधन कोटा (Resource Quotas)
|
||||
id: resource-quota
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/policy/resource-quotas/
|
||||
short_description: >
|
||||
प्रति नेमस्पेस पर कुल संसाधन खपत को सीमित करने वाली बाधाएं प्रदान करता है।
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
- operation
|
||||
- architecture
|
||||
---
|
||||
प्रति {{< glossary_tooltip text="नेमस्पेस" term_id="namespace" >}} पर कुल संसाधन खपत को सीमित करने वाली बाधाएं (contraints) प्रदान करता है।
|
||||
|
||||
<!--more-->
|
||||
|
||||
किसी नेमस्पेस में बनाई जा सकने वाली ऑब्जेक्ट्स की मात्रा को उनके प्रकार के अनुसार सीमित करता है, साथ ही उस परियोजना के संसाधनों द्वारा उपभोग किए जा सकने वाले कंप्यूट संसाधनों की कुल मात्रा को भी सीमित करता है।
|
|
@ -0,0 +1,15 @@
|
|||
---
|
||||
title: यूआईडी (UID)
|
||||
id: uid
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/overview/working-with-objects/names
|
||||
short_description: >
|
||||
विशिष्ट ऑब्जेक्ट्स की पहचान करने के लिए एक कुबेरनेट्स सिस्टम-जनित स्ट्रिंग।
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
विशिष्ट ऑब्जेक्ट्स की पहचान करने के लिए एक कुबेरनेट्स सिस्टम-जनित स्ट्रिंग।
|
||||
<!--more-->
|
||||
|
||||
एक कुबेरनेट्स क्लस्टर के पूरे जीवनकाल में बनाई गई प्रत्येक ऑब्जेक्ट का एक अलग UID होता है। इसका उद्देश्य समान इकाइयों की ऐतिहासिक घटनाओं के बीच अंतर करना है।
|
|
@ -135,7 +135,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
|
|||
enabled=1
|
||||
gpgcheck=1
|
||||
repo_gpgcheck=1
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
EOF
|
||||
yum install -y kubectl
|
||||
{{< /tab >}}
|
||||
|
|
|
@ -169,7 +169,7 @@ CPU = resourceScoringFunction((2+1),8)
|
|||
= rawScoringFunction(37.5)
|
||||
= 3
|
||||
|
||||
NodeScore = (7 * 5) + (5 * 1) + (3 * 3) / (5 + 1 + 3)
|
||||
NodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)
|
||||
= 5
|
||||
|
||||
|
||||
|
@ -209,7 +209,7 @@ CPU = resourceScoringFunction((2+6),8)
|
|||
= rawScoringFunction(100)
|
||||
= 10
|
||||
|
||||
NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3)
|
||||
NodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)
|
||||
= 7
|
||||
|
||||
```
|
||||
|
|
|
@ -719,14 +719,14 @@ _field_ lainnya yang bersinggungan di dalam **overlay** berbeda. Di bawah ini me
|
|||
```shell
|
||||
mkdir dev
|
||||
cat <<EOF > dev/kustomization.yaml
|
||||
bases:
|
||||
resources:
|
||||
- ../base
|
||||
namePrefix: dev-
|
||||
EOF
|
||||
|
||||
mkdir prod
|
||||
cat <<EOF > prod/kustomization.yaml
|
||||
bases:
|
||||
resources:
|
||||
- ../base
|
||||
namePrefix: prod-
|
||||
EOF
|
||||
|
|
|
@ -24,7 +24,7 @@ Kubernetesはv1.20より新しいバージョンで、コンテナランタイ
|
|||
ここで議論になっているのは2つの異なる場面についてであり、それが混乱の原因になっています。Kubernetesクラスターの内部では、Container runtimeと呼ばれるものがあり、それはImageをPullし起動する役目を持っています。Dockerはその選択肢として人気があります(他にはcontainerdやCRI-Oが挙げられます)が、しかしDockerはそれ自体がKubernetesの一部として設計されているわけではありません。これが問題の原因となっています。
|
||||
|
||||
お分かりかと思いますが、ここで”Docker”と呼んでいるものは、ある1つのものではなく、その技術的な体系の全体であり、その一部には"containerd"と呼ばれるものもあり、これはそれ自体がハイレベルなContainer runtimeとなっています。Dockerは素晴らしいもので、便利です。なぜなら、多くのUXの改善がされており、それは人間が開発を行うための操作を簡単にしているのです。しかし、それらはKubernetesに必要なものではありません。Kubernetesは人間ではないからです。
|
||||
このhuman-friendlyな抽象化レイヤが作られてために、結果としてはKubernetesクラスターはDockershimと呼ばれるほかのツールを使い、本当に必要な機能つまりcontainerdを利用してきました。これは素晴らしいとは言えません。なぜなら、我々がメンテする必要のあるものが増えますし、それは問題が発生する要因ともなります。今回の変更で実際に行われることというのは、Dockershimを最も早い場合でv1.23のリリースでkubeletから除外することです。その結果として、Dockerのサポートがなくなるということなのです。
|
||||
このhuman-friendlyな抽象化レイヤーが作られたために、結果としてはKubernetesクラスターはDockershimと呼ばれるほかのツールを使い、本当に必要な機能つまりcontainerdを利用してきました。これは素晴らしいとは言えません。なぜなら、我々がメンテする必要のあるものが増えますし、それは問題が発生する要因ともなります。今回の変更で実際に行われることというのは、Dockershimを最も早い場合でv1.23のリリースでkubeletから除外することです。その結果として、Dockerのサポートがなくなるということなのです。
|
||||
ここで、containerdがDockerに含まれているなら、なぜDockershimが必要なのかと疑問に思われる方もいるでしょう。
|
||||
|
||||
DockerはCRI([Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/))に準拠していません。もしそうであればshimは必要ないのですが、現実はそうでありません。
|
||||
|
|
|
@ -84,7 +84,11 @@ cgroup v2はcgroup v1とは違うAPIを利用しているため、cgroupファ
|
|||
* サードパーティーの監視またはセキュリティエージェントはcgroupファイルシステムに依存していることがあります。
|
||||
エージェントをcgroup v2をサポートしているバージョンに更新してください。
|
||||
* Podやコンテナを監視するために[cAdvisor](https://github.com/google/cadvisor)をスタンドアローンのDaemonSetとして起動している場合、v0.43.0以上に更新してください。
|
||||
* JDKを利用している場合、[cgroup v2を完全にサポートしている](https://bugs.openjdk.org/browse/JDK-8230305)JDK 11.0.16以降、またはJDK15以降を利用することが望ましいです。
|
||||
* Javaアプリケーションをデプロイする場合は、完全にcgroup v2をサポートしているバージョンを利用してください:
|
||||
* [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372、11.0.16、15以降
|
||||
* [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01、11.0.16.0、17.0.4.0、18.0.2.0以降
|
||||
* [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15以降
|
||||
* [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs)パッケージを利用している場合は、利用するバージョンがv1.5.1以上であることを確認してください。
|
||||
|
||||
## Linux Nodeのcgroupバージョンを特定する {#check-cgroup-version}
|
||||
|
||||
|
|
|
@ -8,7 +8,7 @@ aliases:
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
本ドキュメントは、APIサーバーとKubernetesクラスター間の通信経路をまとめたものです。
|
||||
本ドキュメントは、{{< glossary_tooltip term_id="kube-apiserver" text="APIサーバー" >}}とKubernetes{{< glossary_tooltip text="クラスター" term_id="cluster" length="all" >}}間の通信経路をまとめたものです。
|
||||
その目的は、信頼できないネットワーク上(またはクラウドプロバイダー上の完全なパブリックIP)でクラスターが実行できるよう、ユーザーがインストールをカスタマイズしてネットワーク構成を強固にできるようにすることです。
|
||||
|
||||
<!-- body -->
|
||||
|
@ -18,10 +18,10 @@ aliases:
|
|||
Kubernetesには「ハブアンドスポーク」というAPIパターンがあります。ノード(またはノードが実行するPod)からのすべてのAPIの使用は、APIサーバーで終了します。他のコントロールプレーンコンポーネントは、どれもリモートサービスを公開するようには設計されていません。APIサーバーは、1つ以上の形式のクライアント[認証](/ja/docs/reference/access-authn-authz/authentication/)が有効になっている状態で、セキュアなHTTPSポート(通常は443)でリモート接続をリッスンするように設定されています。
|
||||
特に[匿名リクエスト](/ja/docs/reference/access-authn-authz/authentication/#anonymous-requests)や[サービスアカウントトークン](/ja/docs/reference/access-authn-authz/authentication/#service-account-token)が許可されている場合は、1つ以上の[認可](/docs/reference/access-authn-authz/authorization/)形式を有効にする必要があります。
|
||||
|
||||
ノードは、有効なクライアント認証情報とともに、APIサーバーに安全に接続できるように、クラスターのパブリックルート証明書でプロビジョニングされる必要があります。適切なやり方は、kubeletに提供されるクライアント認証情報が、クライアント証明書の形式であることです。kubeletクライアント証明書の自動プロビジョニングについては、[kubelet TLSブートストラップ](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)を参照してください。
|
||||
ノードは、有効なクライアント認証情報とともに、APIサーバーに安全に接続できるように、クラスターのパブリックルート{{< glossary_tooltip text="証明書" term_id="certificate" >}}でプロビジョニングされる必要があります。適切なやり方は、kubeletに提供されるクライアント認証情報が、クライアント証明書の形式であることです。kubeletクライアント証明書の自動プロビジョニングについては、[kubelet TLSブートストラップ](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)を参照してください。
|
||||
|
||||
APIサーバーに接続したいPodは、サービスアカウントを利用することで、安全に接続することができます。これにより、Podのインスタンス化時に、Kubernetesはパブリックルート証明書と有効なBearerトークンを自動的にPodに挿入します。
|
||||
`kubernetes`サービス(`デフォルト`の名前空間)は、APIサーバー上のHTTPSエンドポイントに(`kube-proxy`経由で)リダイレクトされる仮想IPアドレスで構成されます。
|
||||
APIサーバーに接続したい{{< glossary_tooltip text="Pod" term_id="pod" >}}は、サービスアカウントを利用することで、安全に接続することができます。これにより、Podのインスタンス化時に、Kubernetesはパブリックルート証明書と有効なBearerトークンを自動的にPodに挿入します。
|
||||
`kubernetes`サービス(`デフォルト`の名前空間)は、APIサーバー上のHTTPSエンドポイントに(`{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}`経由で)リダイレクトされる仮想IPアドレスで構成されます。
|
||||
|
||||
また、コントロールプレーンのコンポーネントは、セキュアなポートを介してAPIサーバーとも通信します。
|
||||
|
||||
|
@ -30,7 +30,7 @@ APIサーバーに接続したいPodは、サービスアカウントを利用
|
|||
## コントロールプレーンからノードへの通信 {#control-plane-to-node}
|
||||
|
||||
コントロールプレーン(APIサーバー)からノードへの主要な通信経路は2つあります。
|
||||
1つ目は、APIサーバーからクラスター内の各ノードで実行されるkubeletプロセスへの通信経路です。
|
||||
1つ目は、APIサーバーからクラスター内の各ノードで実行される{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}プロセスへの通信経路です。
|
||||
2つ目は、APIサーバーの _プロキシー_ 機能を介した、APIサーバーから任意のノード、Pod、またはサービスへの通信経路です。
|
||||
|
||||
### APIサーバーからkubeletへの通信 {#api-server-to-kubelet}
|
||||
|
@ -56,7 +56,7 @@ APIサーバーからノード、Pod、またはサービスへの接続は、
|
|||
|
||||
### SSHトンネル {#ssh-tunnels}
|
||||
|
||||
Kubernetesは、コントロールプレーンからノードへの通信経路を保護するために、SSHトンネルをサポートしています。この構成では、APIサーバーがクラスター内の各ノードへのSSHトンネルを開始(ポート22でリッスンしているSSHサーバーに接続)し、kubelet、ノード、Pod、またはサービス宛てのすべてのトラフィックをトンネル経由で渡します。
|
||||
Kubernetesは、コントロールプレーンからノードへの通信経路を保護するために、[SSHトンネル](https://www.ssh.com/academy/ssh/tunneling)をサポートしています。この構成では、APIサーバーがクラスター内の各ノードへのSSHトンネルを開始(ポート22でリッスンしているSSHサーバーに接続)し、kubelet、ノード、Pod、またはサービス宛てのすべてのトラフィックをトンネル経由で渡します。
|
||||
このトンネルにより、ノードが稼働するネットワークの外部にトラフィックが公開されないようになります。
|
||||
|
||||
{{< note >}}
|
||||
|
@ -73,3 +73,12 @@ Konnectivityサービスを有効にすると、コントロールプレーン
|
|||
|
||||
[Konnectivityサービスのセットアップ](/docs/tasks/extend-kubernetes/setup-konnectivity/)に従って、クラスターにKonnectivityサービスをセットアップしてください。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* [Kubernetesコントロールプレーンコンポーネント](/ja/docs/concepts/overview/components/#control-plane-components)について読む。
|
||||
* [HubsとSpokeモデル](https://book.kubebuilder.io/multiversion-tutorial/conversion-concepts.html#hubs-spokes-and-other-wheel-metaphors)について学習する。
|
||||
* [クラスターのセキュリティ](/ja/docs/tasks/administer-cluster/securing-a-cluster/)について学習する。
|
||||
* [Kubernetes API](/ja/docs/concepts/overview/kubernetes-api/)について学習する。
|
||||
* [Konnectivityサービスを設定する](/docs/tasks/extend-kubernetes/setup-konnectivity/)
|
||||
* [Port Forwardingを使用してクラスター内のアプリケーションにアクセスする](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
|
||||
* [Podログを調べます](/ja/docs/tasks/debug/debug-application/debug-running-pod/#examine-pod-logs)と[kubectl port-forwardを使用します](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod)について学習する。
|
|
@ -29,7 +29,7 @@ weight: 60
|
|||
|
||||
下記は、アノテーション内で記録できる情報の例です。
|
||||
|
||||
* 宣言的設定レイヤによって管理されているフィールド。これらのフィールドをアノテーションとして割り当てることで、クライアントもしくはサーバによってセットされたデフォルト値、オートサイジングやオートスケーリングシステムによってセットされたフィールドや、自動生成のフィールドなどと区別することができます。
|
||||
* 宣言的設定レイヤーによって管理されているフィールド。これらのフィールドをアノテーションとして割り当てることで、クライアントもしくはサーバによってセットされたデフォルト値、オートサイジングやオートスケーリングシステムによってセットされたフィールドや、自動生成のフィールドなどと区別することができます。
|
||||
|
||||
* ビルド、リリースやタイムスタンプのようなイメージの情報、リリースID、gitのブランチ、PR番号、イメージハッシュ、レジストリアドレスなど
|
||||
|
||||
|
|
|
@ -147,7 +147,7 @@ cpu = resourceScoringFunction((2+1),8)
|
|||
= rawScoringFunction(37.5)
|
||||
= 3 # floor(37.5/10)
|
||||
|
||||
NodeScore = (7 * 5) + (5 * 1) + (3 * 3) / (5 + 1 + 3)
|
||||
NodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)
|
||||
= 5
|
||||
```
|
||||
|
||||
|
@ -186,7 +186,7 @@ cpu = resourceScoringFunction((2+6),8)
|
|||
= rawScoringFunction(100)
|
||||
= 10
|
||||
|
||||
NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3)
|
||||
NodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)
|
||||
= 7
|
||||
|
||||
```
|
||||
|
|
|
@ -6,7 +6,7 @@ weight: 70
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
IPアドレスまたはポートのレベル(OSI参照モデルのレイヤ3または4)でトラフィックフローを制御したい場合、クラスター内の特定のアプリケーションにKubernetesのネットワークポリシーを使用することを検討してください。ネットワークポリシーはアプリケーション中心の構造であり、{{<glossary_tooltip text="Pod" term_id="pod">}}がネットワークを介して多様な「エンティティ」(「Endpoint」や「Service」のようなKubernetesに含まれる特定の意味を持つ共通の用語との重複を避けるため、ここではエンティティという単語を使用します。)と通信する方法を指定できます。
|
||||
IPアドレスまたはポートのレベル(OSI参照モデルのレイヤー3または4)でトラフィックフローを制御したい場合、クラスター内の特定のアプリケーションにKubernetesのネットワークポリシーを使用することを検討してください。ネットワークポリシーはアプリケーション中心の構造であり、{{<glossary_tooltip text="Pod" term_id="pod">}}がネットワークを介して多様な「エンティティ」(「Endpoint」や「Service」のようなKubernetesに含まれる特定の意味を持つ共通の用語との重複を避けるため、ここではエンティティという単語を使用します。)と通信する方法を指定できます。
|
||||
|
||||
Podが通信できるエンティティは以下の3つの識別子の組み合わせによって識別されます。
|
||||
|
||||
|
@ -206,7 +206,7 @@ SCTPプロトコルのネットワークポリシーをサポートする{{< glo
|
|||
## ネットワークポリシーでできないこと(少なくともまだ)
|
||||
|
||||
Kubernetes1.20現在、ネットワークポリシーAPIに以下の機能は存在しません。
|
||||
しかし、オペレーティングシステムのコンポーネント(SELinux、OpenVSwitch、IPTablesなど)、レイヤ7の技術(Ingressコントローラー、サービスメッシュ実装)、もしくはアドミッションコントローラーを使用して回避策を実装できる場合があります。
|
||||
しかし、オペレーティングシステムのコンポーネント(SELinux、OpenVSwitch、IPTablesなど)、レイヤー7の技術(Ingressコントローラー、サービスメッシュ実装)、もしくはアドミッションコントローラーを使用して回避策を実装できる場合があります。
|
||||
Kubernetesのネットワークセキュリティを初めて使用する場合は、ネットワークポリシーAPIを使用して以下のユーザーストーリーを(まだ)実装できないことに注意してください。これらのユーザーストーリーの一部(全てではありません)は、ネットワークポリシーAPIの将来のリリースで活発に議論されています。
|
||||
|
||||
- クラスター内トラフィックを強制的に共通ゲートウェイを通過させる(これは、サービスメッシュもしくは他のプロキシで提供するのが最適な場合があります)。
|
||||
|
|
|
@ -466,7 +466,7 @@ Split-HorizonなDNS環境において、ユーザーは2つのServiceを外部
|
|||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
networking.gke.io /load-balancer-type: "Internal"
|
||||
networking.gke.io/load-balancer-type: "Internal"
|
||||
[...]
|
||||
```
|
||||
|
||||
|
|
|
@ -50,7 +50,7 @@ PVは静的か動的どちらかでプロビジョニングされます。
|
|||
|
||||
ストレージクラスに基づいたストレージの動的プロビジョニングを有効化するには、クラスター管理者が`DefaultStorageClass`[アドミッションコントローラー](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)をAPIサーバーで有効化する必要があります。
|
||||
これは例えば、`DefaultStorageClass`がAPIサーバーコンポーネントの`--enable-admission-plugins`フラグのコンマ区切りの順序付きリストの中に含まれているかで確認できます。
|
||||
APIサーバーのコマンドラインフラグの詳細については[kube-apiserver](/docs/admin/kube-apiserver/)のドキュメントを参照してください。
|
||||
APIサーバーのコマンドラインフラグの詳細については[kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)のドキュメントを参照してください。
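確認方法の一例としては、kube-apiserverの起動フラグに`DefaultStorageClass`が含まれているかを調べる方法があります(マニフェストのパスや確認手順は環境によって異なります):

```shell
# kube-apiserverをstatic Podとして実行している場合の確認例(パスは環境依存)
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
# 出力例:
#   - --enable-admission-plugins=NodeRestriction,DefaultStorageClass
```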
|
||||
|
||||
### バインディング
|
||||
|
||||
|
@ -367,7 +367,7 @@ KubernetesはPersistentVolumesの2つの`volumeModes`をサポートしていま
|
|||
|
||||
`volumeMode`の値を`Block`に設定してボリュームをRAWブロックデバイスとして使用します。
|
||||
このようなボリュームは、ファイルシステムを持たないブロックデバイスとしてPodに提示されます。
|
||||
このモードは、Podとボリュームの間のファイルシステムレイヤなしにボリュームにアクセスする可能な限り最速の方法をPodに提供するのに便利です。一方で、Pod上で実行しているアプリケーションはRAWブロックデバイスの扱い方を知っていなければなりません。
|
||||
このモードは、Podとボリュームの間のファイルシステムレイヤーなしにボリュームにアクセスする可能な限り最速の方法をPodに提供するのに便利です。一方で、Pod上で実行しているアプリケーションはRAWブロックデバイスの扱い方を知っていなければなりません。
|
||||
Pod内で`volumeMode: Block`とともにボリュームを使用する例としては、[Raw Block Volume Support](#raw-block-volume-support)を参照してください。
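参考までに、`volumeMode: Block`を指定したPersistentVolumeの最小限のスケッチを以下に示します(デバイスパス、容量、ノード名は説明用の仮の値です):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv                   # 仮の名前
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                # ファイルシステムを作成せず、RAWブロックデバイスとして提供する
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /dev/xvdf                # 仮のデバイスパス
  nodeAffinity:                    # localボリュームではnodeAffinityの指定が必要
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1                 # 仮のノード名
```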
|
||||
|
||||
### アクセスモード
|
||||
|
|
|
@ -72,7 +72,7 @@ Deploymentによって作成されたReplicaSetを管理しないでください
|
|||
2. Deploymentが作成されたことを確認するために、`kubectl get deployments`を実行してください。
|
||||
|
||||
Deploymentがまだ作成中の場合、コマンドの実行結果は以下のとおりです。
|
||||
```shell
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 0/3 0 0 1s
|
||||
```
|
||||
|
@ -95,14 +95,14 @@ Deploymentによって作成されたReplicaSetを管理しないでください
|
|||
|
||||
4. 数秒後、再度`kubectl get deployments`を実行してください。
|
||||
コマンドの実行結果は以下のとおりです。
|
||||
```shell
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3/3 3 3 18s
|
||||
```
|
||||
Deploymentが3つ全てのレプリカを作成して、全てのレプリカが最新(Podが最新のPodテンプレートを含んでいる)になり、利用可能となっていることを確認してください。
|
||||
|
||||
5. Deploymentによって作成されたReplicaSet(`rs`)を確認するには`kubectl get rs`を実行してください。コマンドの実行結果は以下のとおりです:
|
||||
```shell
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-75675f5897 3 3 3 18s
|
||||
```
|
||||
|
@ -118,11 +118,11 @@ Deploymentによって作成されたReplicaSetを管理しないでください
|
|||
|
||||
6. 各Podにラベルが自動的に付けられるのを確認するには`kubectl get pods --show-labels`を実行してください。
|
||||
コマンドの実行結果は以下のとおりです:
|
||||
```shell
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
```
|
||||
作成されたReplicaSetは`nginx`Podを3つ作成することを保証します。
|
||||
|
||||
|
|
|
@ -45,11 +45,11 @@ job.batch/pi created
|
|||
{{< tab name="kubectl describe job pi" codelang="bash" >}}
|
||||
Name: pi
|
||||
Namespace: default
|
||||
Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
|
||||
Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
|
||||
job-name=pi
|
||||
Annotations: kubectl.kubernetes.io/last-applied-configuration:
|
||||
{"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":...
|
||||
Selector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
|
||||
Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
|
||||
batch.kubernetes.io/job-name=pi
|
||||
...
|
||||
Annotations: batch.kubernetes.io/job-tracking: ""
|
||||
Parallelism: 1
|
||||
Completions: 1
|
||||
Start Time: Mon, 02 Dec 2019 15:20:11 +0200
|
||||
|
@ -57,8 +57,8 @@ Completed At: Mon, 02 Dec 2019 15:21:16 +0200
|
|||
Duration: 65s
|
||||
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
|
||||
Pod Template:
|
||||
Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
|
||||
job-name=pi
|
||||
Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
|
||||
batch.kubernetes.io/job-name=pi
|
||||
Containers:
|
||||
pi:
|
||||
Image: perl:5.34.0
|
||||
|
@ -75,24 +75,24 @@ Pod Template:
|
|||
Events:
|
||||
Type Reason Age From Message
|
||||
---- ------ ---- ---- -------
|
||||
Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7
|
||||
Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4
|
||||
Normal Completed 18s job-controller Job completed
|
||||
{{< /tab >}}
|
||||
{{< tab name="kubectl get job pi -o yaml" codelang="bash" >}}
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
annotations:
|
||||
kubectl.kubernetes.io/last-applied-configuration: |
|
||||
{"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":{"spec":{"containers":[{"command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"image":"perl","name":"pi"}],"restartPolicy":"Never"}}}}
|
||||
creationTimestamp: "2022-06-15T08:40:15Z"
|
||||
annotations: batch.kubernetes.io/job-tracking: ""
|
||||
...
|
||||
creationTimestamp: "2022-11-10T17:53:53Z"
|
||||
generation: 1
|
||||
labels:
|
||||
controller-uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
job-name: pi
|
||||
batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
batch.kubernetes.io/job-name: pi
|
||||
name: pi
|
||||
namespace: default
|
||||
resourceVersion: "987"
|
||||
uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
resourceVersion: "4751"
|
||||
uid: 204fb678-040b-497f-9266-35ffa8716d14
|
||||
spec:
|
||||
backoffLimit: 4
|
||||
completionMode: NonIndexed
|
||||
|
@ -100,14 +100,14 @@ spec:
|
|||
parallelism: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
controller-uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
suspend: false
|
||||
template:
|
||||
metadata:
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
controller-uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
job-name: pi
|
||||
batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
|
||||
batch.kubernetes.io/job-name: pi
|
||||
spec:
|
||||
containers:
|
||||
- command:
|
||||
|
@ -116,7 +116,7 @@ spec:
|
|||
- -wle
|
||||
- print bpi(2000)
|
||||
image: perl:5.34.0
|
||||
imagePullPolicy: Always
|
||||
imagePullPolicy: IfNotPresent
|
||||
name: pi
|
||||
resources: {}
|
||||
terminationMessagePath: /dev/termination-log
|
||||
|
@ -128,8 +128,9 @@ spec:
|
|||
terminationGracePeriodSeconds: 30
|
||||
status:
|
||||
active: 1
|
||||
ready: 1
|
||||
startTime: "2022-06-15T08:40:15Z"
|
||||
ready: 0
|
||||
startTime: "2022-11-10T17:53:57Z"
|
||||
uncountedTerminatedPods: {}
|
||||
{{< /tab >}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
@ -138,7 +139,7 @@ Jobの完了したPodを確認するには、`kubectl get pods`を使います
|
|||
Jobに属するPodの一覧を機械可読形式で出力するには、下記のコマンドを使います:
|
||||
|
||||
```shell
|
||||
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
|
||||
pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')
|
||||
echo $pods
|
||||
```
|
||||
|
||||
|
@ -156,6 +157,12 @@ pi-5rwd7
|
|||
kubectl logs $pods
|
||||
```
|
||||
|
||||
Jobの標準出力を確認するもう一つの方法は:
|
||||
|
||||
```shell
|
||||
kubectl logs jobs/pi
|
||||
```
|
||||
|
||||
出力結果はこのようになります:
|
||||
|
||||
```
|
||||
|
@ -165,10 +172,14 @@ kubectl logs $pods
|
|||
## Job spec(仕様)の書き方 {#writing-a-job-spec}
|
||||
|
||||
他のKubernetesオブジェクトの設定ファイルと同様に、Jobにも`apiVersion`、`kind`、`metadata`フィールドが必要です。
|
||||
Jobの名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)である必要があります。
|
||||
|
||||
コントロールプレーンがJobのために新しいPodを作成するとき、Jobの`.metadata.name`はそれらのPodに名前をつけるための基礎の一部になります。Jobの名前は有効な[DNSサブドメイン名](/ja/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names)である必要がありますが、これはPodのホスト名に予期しない結果をもたらす可能性があります。最高の互換性を得るためには、名前は[DNSラベル](/ja/docs/concepts/overview/working-with-objects/names/#dns-label-names)のより限定的な規則に従うべきです。名前がDNSサブドメインの場合でも、名前は63文字以下でなければなりません。
|
||||
|
||||
Jobには[`.spec`セクション](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)も必要です。
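最小限のJobマニフェストのイメージは次のとおりです(名前、イメージ、コマンドは説明用の仮のものです):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job                # 仮の名前(有効なDNSサブドメイン名である必要があります)
spec:
  template:                        # .spec.templateは必須フィールド
    spec:
      containers:
      - name: hello
        image: busybox:1.36
        command: ["sh", "-c", "echo Hello from the Job"]
      restartPolicy: Never         # JobのPodに指定できるのはNeverまたはOnFailureのみ
  backoffLimit: 4
```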
|
||||
|
||||
### Jobラベル
|
||||
Jobラベルの`job-name`と`controller-uid`の接頭辞は`batch.kubernetes.io/`となります。
|
||||
|
||||
### Podテンプレート {#pod-template}
|
||||
|
||||
`.spec.template`は`.spec`の唯一の必須フィールドです。
|
||||
|
@ -238,11 +249,14 @@ Jobで実行するのに適したタスクは主に3種類あります:
|
|||
- `Indexed`: Jobに属するPodはそれぞれ、0から`.spec.completions-1`の範囲内の完了インデックスを取得できます。インデックスは下記の三つの方法で取得できます。
|
||||
- Podアノテーション`batch.kubernetes.io/job-completion-index`。
|
||||
- Podホスト名の一部として、`$(job-name)-$(index)`の形式になっています。
|
||||
インデックス付きJob(Indexed Job)と{{< glossary_tooltip term_id="Service" >}}を一緒に使用すると、Jobに属するPodはお互いにDNSを介して確定的ホスト名で通信できます。
|
||||
インデックス付きJob(Indexed Job)と{{< glossary_tooltip term_id="Service" >}}を一緒に使用すると、Jobに属するPodはお互いにDNSを介して確定的ホスト名で通信できます。この設定方法の詳細は[Pod間通信を使用したJob](https://kubernetes.io/docs/tasks/job/job-with-pod-to-pod-communication/)を参照してください。
|
||||
- コンテナ化されたタスクの環境変数`JOB_COMPLETION_INDEX`。
|
||||
|
||||
インデックスごとに、成功したPodが一つ存在すると、Jobの完了となります。完了モードの使用方法の詳細については、
|
||||
[静的な処理の割り当てを使用した並列処理のためのインデックス付きJob](/ja/docs/tasks/job/indexed-parallel-processing-static/)を参照してください。めったに発生しませんが、同じインデックスを取得して稼働し始めるPodも存在する可能性があります。ただし、完了数にカウントされるのはそのうちの一つだけです。
|
||||
各インデックスに1つずつ正常に完了したPodがあると、Jobは完了したとみなされます。このモードの使い方については、[静的な処理の割り当てを使用した並列処理のためのインデックス付きJob](/ja/docs/tasks/job/indexed-parallel-processing-static/)を参照してください。
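`Indexed`完了モードを使うJobのスケッチは次のとおりです(名前やイメージは説明用の仮のものです):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-example            # 仮の名前
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed          # 各Podに0〜4の完了インデックスが割り当てられる
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        # 環境変数JOB_COMPLETION_INDEXから自身のインデックスを読み取る
        command: ["sh", "-c", "echo My index is $JOB_COMPLETION_INDEX"]
```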
|
||||
|
||||
{{< note >}}
|
||||
めったに発生しませんが、同じインデックスに対して複数のPodが起動することがあります。(Nodeの障害、kubeletの再起動、Podの立ち退きなど)。この場合、正常に完了した最初のPodだけ完了数にカウントされ、Jobのステータスが更新されます。同じインデックスに対して実行中または完了した他のPodは、検出されるとJobコントローラーによって削除されます。
|
||||
{{< /note >}}
|
||||
|
||||
## Podとコンテナの障害対策 {#handling-pod-and-container-failures}
|
||||
|
||||
|
@ -251,10 +265,16 @@ Pod内のコンテナは、その中のプロセスが0以外の終了コード
|
|||
|
||||
Podがノードからキックされた(ノードがアップグレード、再起動、削除されたなど)、または`.spec.template.spec.restartPolicy = "Never"`と設定されたときにPodに属するコンテナが失敗したなど、様々な理由でPod全体が故障することもあります。Podに障害が発生すると、Jobコントローラーは新しいPodを起動します。つまりアプリケーションは新しいPodで再起動された場合の処理を行う必要があります。特に、過去に実行した際に生じた一時ファイル、ロック、不完全な出力などを処理する必要があります。
|
||||
|
||||
デフォルトでは、それぞれのPodの失敗は`.spec.backoffLimit`にカウントされます。詳しくは[Pod失敗のバックオフポリシー](#pod-backoff-failure-policy)をご覧ください。しかし、[JobのPod失敗ポリシー](#pod-failure-policy)を設定することで、Pod失敗の処理をカスタマイズすることができます。
|
||||
|
||||
`.spec.parallelism = 1`、`.spec.completions = 1`と`.spec.template.spec.restartPolicy = "Never"`を指定しても、同じプログラムが2回起動されることもありますので注意してください。
|
||||
|
||||
`.spec.parallelism`と`.spec.completions`を両方とも2以上指定した場合、複数のPodが同時に実行される可能性があります。そのため、Podは並行処理を行えるようにする必要があります。
|
||||
|
||||
[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)の`PodDisruptionConditions`と`JobPodFailurePolicy`の両方が有効で、`.spec.podFailurePolicy`フィールドが設定されている場合、Jobコントローラーは終了するPod(`.metadata.deletionTimestamp`フィールドが設定されているPod)を、そのPodが終了する(`.status.phase`が`Failed`または`Succeeded`になる)までは失敗とはみなしません。ただし、Jobコントローラーは、終了が明らかになるとすみやかに代わりのPodを作成します。Podが終了すると、Jobコントローラーはこの終了したPodを考慮に入れて、該当のJobの`.backoffLimit`と`.podFailurePolicy`を評価します。
|
||||
|
||||
これらの要件のいずれかが満たされていない場合、Jobコントローラーは、そのPodが後に`phase: "Succeeded"`で終了する場合でも、終了するPodを即時に失敗として数えます。
|
||||
|
||||
### Pod失敗のバックオフポリシー {#pod-backoff-failure-policy}
|
||||
|
||||
設定の論理エラーなどにより、Jobが数回再試行した後に失敗状態にしたい場合があります。`.spec.backoffLimit`を設定すると、失敗したと判断するまでの再試行回数を指定できます。バックオフ制限はデフォルトで6に設定されています。Jobに属していて失敗したPodはJobコントローラーにより再作成され、バックオフ遅延は指数関数的に増加し(10秒、20秒、40秒…)、最大6分まで増加します。
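たとえば、失敗したPodの数(`.spec.backoffLimit`にカウントされる回数)は次のように確認できます(Job名`example-job`は仮のものです):

```shell
# Jobのステータスから失敗したPodの数を取得する
kubectl get job example-job -o jsonpath='{.status.failed}'

# イベントや失敗理由を含む詳細な状態を確認する
kubectl describe job example-job
```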
|
||||
|
@ -272,13 +292,63 @@ Podがノードからキックされた(ノードがアップグレード、再
|
|||
`restartPolicy = "OnFailure"`が設定されたJobはバックオフ制限に達すると、属するPodは全部終了されるので注意してください。これにより、Jobの実行ファイルのデバッグ作業が難しくなる可能性があります。失敗したJobからの出力が不用意に失われないように、Jobのデバッグ作業をする際は`restartPolicy = "Never"`を設定するか、ロギングシステムを使用することをお勧めします。
|
||||
{{< /note >}}
|
||||
|
||||
## Pod失敗ポリシー {#pod-failure-policy}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.26" state="beta" >}}
|
||||
|
||||
{{< note >}}
|
||||
クラスターで`JobPodFailurePolicy`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)が有効になっている場合のみ、Jobに対してPod失敗ポリシーを設定することができます。さらにPod失敗ポリシーでPodの中断条件を検知して処理できるように、`PodDisruptionConditions`フィーチャーゲートを有効にすることが推奨されます。([Podの中断条件](/docs/concepts/workloads/pods/disruptions#pod-disruption-conditions)を参照してください)。どちらのフィーチャーゲートもKubernetes 1.27で利用可能です。
|
||||
{{< /note >}}
|
||||
|
||||
`.spec.podFailurePolicy`フィールドで定義されるPod失敗ポリシーを使用すると、コンテナの終了コードとPodの条件に基づいてクラスターがPodの失敗を処理できるようになります。
|
||||
|
||||
状況によっては、Podの失敗を処理するときに、Jobの`.spec.backoffLimit`に基づいた[Pod失敗のバックオフポリシー](#pod-backoff-failure-policy)が提供する制御よりも、Podの失敗処理に対してより良い制御を求めるかもしれません。これらはいくつかの使用例です:
|
||||
|
||||
- 不要なPodの再起動を回避してワークロードの実行コストを最適化するために、Podの1つがソフトウェアバグを示す終了コードで失敗するとすぐにJobを終了させることができます。
|
||||
- 中断が発生してもJobが完了するように、中断によって発生したPodの失敗({{< glossary_tooltip text="preemption" term_id="preemption" >}}、{{< glossary_tooltip text="APIを起点とした退避" term_id="api-eviction" >}}、{{< glossary_tooltip text="taint" term_id="taint" >}}を起点とした立ち退き)を無視し、`.spec.backoffLimit`のリトライ回数にカウントしないようにすることができます。
|
||||
|
||||
上記のユースケースを満たすために、`.spec.podFailurePolicy`フィールドでPod失敗ポリシーを設定できます。このポリシーは、コンテナの終了コードとPodの条件に基づいてPodの失敗を処理できます。
|
||||
|
||||
以下は、`podFailurePolicy`を定義するJobのマニフェストです:
|
||||
|
||||
{{< codenew file="controllers/job-pod-failure-policy-example.yaml" >}}
|
||||
|
||||
上記の例では、Pod失敗ポリシーの最初のルールは、`main`コンテナが42の終了コードで失敗した場合、そのJobを失敗とマークすることを指定しています。以下は特に `main`コンテナに関するルールです:
|
||||
|
||||
- 終了コード0はコンテナが成功したことを意味します。
|
||||
- 終了コード42は**Job全体**が失敗したことを意味します。
|
||||
- それ以外の終了コードは、コンテナが失敗したこと、つまりPod全体が失敗したことを示します。再起動の合計回数が`backoffLimit`未満であれば、Podは再作成されます。`backoffLimit`に達した場合、**Job全体**が失敗したことになります。
|
||||
|
||||
{{< note >}}
|
||||
Podテンプレートで`restartPolicy: Never`を指定しているため、kubeletはその特定のPodの`main`コンテナを再起動しません。
|
||||
{{< /note >}}
|
||||
|
||||
Pod失敗ポリシーの2つ目のルールでは、`DisruptionTarget`という条件で失敗したPodに対してIgnoreアクションを指定することで、Podの中断が`.spec.backoffLimit`によるリトライの制限にカウントされないようにします。
|
||||
|
||||
{{< note >}}
|
||||
Pod失敗ポリシーまたはPod失敗のバックオフポリシーのいずれかによってJobが失敗し、そのJobが複数のPodを実行している場合、KubernetesはそのJob内の保留中または実行中のすべてのPodを終了します。
|
||||
{{< /note >}}
|
||||
|
||||
これらはAPIの要件と機能です:
|
||||
- `.spec.podFailurePolicy`フィールドをJobに使いたい場合は、そのJobのPodテンプレートの`restartPolicy`を`Never`に設定する必要があります。
|
||||
- `spec.podFailurePolicy.rules`で指定したPod失敗ポリシーのルールが順番に評価されます。あるPodの失敗がルールに一致すると、残りのルールは無視されます。Pod失敗に一致するルールがない場合は、デフォルトの処理が適用されます。
|
||||
- `spec.podFailurePolicy.rules[*].containerName`を指定することで、ルールを特定のコンテナに制限することができます。指定しない場合、ルールはすべてのコンテナに適用されます。指定する場合は、Pod テンプレート内のコンテナ名または`initContainer`名のいずれかに一致する必要があります。
|
||||
- `spec.podFailurePolicy.rules[*].action`では、Podの失敗がルールに一致したときに実行されるアクションを指定できます。指定可能な値は以下のとおりです。
|
||||
- `FailJob`: PodのJobを`Failed`としてマークし、実行中の Pod をすべて終了させる必要があることを示します。
|
||||
  - `Ignore`: `.spec.backoffLimit`のカウンターは加算されず、代替のPodを作成すべきであることを示します。
|
||||
- `Count`: Podがデフォルトの方法で処理されるべきであることを示します。`.spec.backoffLimit`のカウンターが加算されます。
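ルールとアクションの組み合わせのイメージは、次のようなスケッチになります(終了コード、コンテナ名、条件タイプは説明用の例です):

```yaml
# Jobの.spec配下の抜粋(説明用のスケッチ)
podFailurePolicy:
  rules:
  - action: FailJob                # この条件に一致したらJob全体を失敗とする
    onExitCodes:
      containerName: main          # 仮のコンテナ名(省略するとすべてのコンテナが対象)
      operator: In
      values: [42]
  - action: Ignore                 # 中断による失敗はbackoffLimitにカウントしない
    onPodConditions:
    - type: DisruptionTarget
```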
|
||||
|
||||
{{< note >}}
|
||||
`PodFailurePolicy`を使用すると、Jobコントローラーは`Failed`フェーズのPodのみにマッチします。削除タイムスタンプを持つPodで、終了フェーズ(`Failed`または`Succeeded`)にないものは、まだ終了中と見なされます。これは、終了中のPodは終了フェーズに達するまで[追跡ファイナライザー](#job-tracking-with-finalizers)を保持することを意味します。Kubernetes 1.27以降、Kubeletは削除されたPodを終了フェーズに遷移させます(参照:[Podのフェーズ](/ja/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase))。これにより、Jobコントローラーは削除されたPodからファイナライザーを削除できます。
|
||||
{{< /note >}}
|
||||
|
||||
## Jobの終了とクリーンアップ {#job-termination-and-cleanup}
|
||||
|
||||
Jobが完了すると、それ以上Podは作成されませんが、[通常](#pod-backoff-failure-policy)Podが削除されることもありません。
|
||||
これらを残しておくと、完了したPodのログを確認でき、エラーや警告などの診断出力を確認できます。
|
||||
またJobオブジェクトはJob完了後も残っているため、状態を確認することができます。古いJobの状態を把握した上で、削除するかどうかはユーザー次第です。Jobを削除するには`kubectl` (例:`kubectl delete jobs/pi`または`kubectl delete -f ./job.yaml`)を使います。`kubectl`でJobを削除する場合、Jobが作成したPodも全部削除されます。
|
||||
|
||||
デフォルトでは、Jobは中断されることなく実行できますが、Podが失敗した場合(`restartPolicy=Never`)、またはコンテナがエラーで終了した場合(`restartPolicy=OnFailure`)のみ、前述の`.spec.backoffLimit`で決まった回数まで再試行します。`.spec.backoffLimit`に達すると、Jobが失敗とマークされ、実行中のPodもすべて終了されます。
|
||||
デフォルトでは、Podが失敗しない(`restartPolicy=Never`)またはコンテナがエラーで終了しない(`restartPolicy=OnFailure`)限り、Jobは中断されることなく実行されます。`.spec.backoffLimit`に達するとそのJobは失敗と見なされ、実行中のPodはすべて終了します。
|
||||
|
||||
Jobを終了させるもう一つの方法は、活動期間を設定することです。
|
||||
Jobの`.spec.activeDeadlineSeconds`フィールドに秒数を設定することで、活動期間を設定できます。
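たとえば、次のように指定します(名前や秒数は説明用の例です):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-deadline          # 仮の名前
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100       # Job全体の実行時間がこの秒数を超えると失敗になる
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.36
        command: ["sh", "-c", "sleep 300"]
```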
|
||||
|
@ -376,29 +446,32 @@ Jobオブジェクトは、Podの確実な並列実行をサポートするた
|
|||
ここでは、上記のトレードオフをまとめてあり、それぞれ2~4列目に対応しています。
|
||||
またパターン名のところは、例やより詳しい説明が書いてあるページへのリンクになっています。
|
||||
|
||||
| パターン | 単一Jobオブジェクト | Podが作業項目より少ない? | アプリを修正せずに使用できる? |
|
||||
| ----------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|
|
||||
| [作業項目ごとにPodを持つキュー] | ✓ | | 時々 |
|
||||
| [Pod数可変のキュー] | ✓ | ✓ | |
|
||||
| [静的な処理の割り当てを使用したインデックス付きJob] | ✓ | | ✓ |
|
||||
| [Jobテンプレート拡張] | | | ✓ |
|
||||
| パターン | 単一Jobオブジェクト | Podが作業項目より少ない? | アプリを修正せずに使用できる? |
|
||||
| --------------------------------------------------- | :-----------------: | :-----------------------: | :----------------------------: |
|
||||
| [作業項目ごとにPodを持つキュー] | ✓ | | 時々 |
|
||||
| [Pod数可変のキュー] | ✓ | ✓ | |
|
||||
| [静的な処理の割り当てを使用したインデックス付きJob] | ✓ | | ✓ |
|
||||
| [Jobテンプレート拡張] | | | ✓ |
|
||||
| [Pod間通信を使用したJob] | ✓ | 時々 | 時々 |
|
||||
|
||||
`.spec.completions`で完了数を指定する場合、Jobコントローラーより作成された各Podは同一の[`spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)を持ちます。これは、このタスクのすべてのPodが同じコマンドライン、同じイメージ、同じボリューム、そして(ほぼ)同じ環境変数を持つことを意味します。これらのパターンは、Podが異なる作業をするためのさまざまな配置方法になります。
|
||||
|
||||
この表は、各パターンで必要な`.spec.parallelism`と`.spec.completions`の設定を示しています。
|
||||
ここで、`W`は作業項目の数を表しています。
|
||||
|
||||
| パターン | `.spec.completions` | `.spec.parallelism` |
|
||||
| ----------------------------------------- |:-------------------:|:--------------------:|
|
||||
| [作業項目ごとにPodを持つキュー] | W | 任意 |
|
||||
| [Pod数可変のキュー] | null | 任意 |
|
||||
| [静的な処理の割り当てを使用したインデックス付きJob] | W | 任意 |
|
||||
| [Jobテンプレート拡張] | 1 | 1であるべき |
|
||||
| パターン | `.spec.completions` | `.spec.parallelism` |
|
||||
| --------------------------------------------------- | :-----------------: | :-----------------: |
|
||||
| [作業項目ごとにPodを持つキュー] | W | 任意 |
|
||||
| [Pod数可変のキュー] | null | 任意 |
|
||||
| [静的な処理の割り当てを使用したインデックス付きJob] | W | 任意 |
|
||||
| [Jobテンプレート拡張] | 1 | 1であるべき |
|
||||
| [Pod間通信を使用したJob] | W | W |
|
||||
|
||||
[作業項目ごとにPodを持つキュー]: /docs/tasks/job/coarse-parallel-processing-work-queue/
|
||||
[Pod数可変のキュー]: /docs/tasks/job/fine-parallel-processing-work-queue/
|
||||
[静的な処理の割り当てを使用したインデックス付きJob]: /ja/docs/tasks/job/indexed-parallel-processing-static/
|
||||
[Jobテンプレート拡張]: /docs/tasks/job/parallel-processing-expansion/
|
||||
[Pod間通信を使用したJob]: /docs/tasks/job/job-with-pod-to-pod-communication/
|
||||
|
||||
## 高度な使い方 {#advanced-usage}
|
||||
|
||||
|
@ -413,7 +486,7 @@ Jobを一時停止するには、Jobの`.spec.suspend`フィールドをtrueに
|
|||
|
||||
一時停止状態のJobを再開すると、`.status.startTime`フィールドの値は現在時刻にリセットされます。これはつまり、Jobが一時停止して再開すると、`.spec.activeDeadlineSeconds`タイマーは停止してリセットされることになります。
|
||||
|
||||
Jobを中断すると、稼働中のPodは全部削除されることを忘れないでください。Jobが中断されると、PodはSIGTERMシグナルを受信して[終了されます](/ja/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。Podのグレースフル終了の猶予期間がカウントダウンされ、この期間内に、Podはこのシグナルを処理しなければなりません。場合により、その後のために処理状況を保存したり、変更を元に戻したりする処理が含まれます。この方法で終了したPodは`completions`数にカウントされません。
|
||||
Jobを中断すると、状態が`Completed`ではない実行中のPodはすべてSIGTERMシグナルを受信して[終了されます](/ja/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。Podのグレースフル終了の猶予期間がカウントダウンされ、この期間内に、Podはこのシグナルを処理しなければなりません。場合により、その後のために処理状況を保存したり、変更を元に戻したりする処理が含まれます。この方法で終了したPodは`completions`数にカウントされません。
|
||||
|
||||
下記は一時停止状態のままで作成されたJobの定義例になります:
|
||||
|
||||
|
@ -435,6 +508,20 @@ spec:
|
|||
...
|
||||
```
|
||||
|
||||
コマンドラインを使ってJobにパッチを当てることで、Jobの一時停止状態を切り替えることもできます。
|
||||
|
||||
活動中のJobを一時停止する:
|
||||
|
||||
```shell
|
||||
kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":true}}'
|
||||
```
|
||||
|
||||
一時停止中のJobを再開する:
|
||||
|
||||
```shell
|
||||
kubectl patch job/myjob --type=strategic --patch '{"spec":{"suspend":false}}'
|
||||
```
|
||||
|
||||
Jobのstatusセクションで、Jobが停止中なのか、過去に停止したことがあるかを判断できます:
|
||||
|
||||
```shell
|
||||
|
@ -479,20 +566,15 @@ Events:
|
|||
|
||||
### 可変スケジューリング命令 {#mutable-scheduling-directives}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
|
||||
|
||||
{{< note >}}
|
||||
この機能を使うためには、[APIサーバー](/docs/reference/command-line-tools-reference/kube-apiserver/)上で`JobMutableNodeSchedulingDirectives`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にする必要があります。
|
||||
デフォルトで有効になっています。
|
||||
{{< /note >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
|
||||
ほとんどの場合、並列Jobには、すべてのPodを同じゾーンで実行する、すべてのPodをGPUモデルxまたはyのどちらか一方(両方の混在は不可)で実行する、といった制約を付けて実行することが望まれます。
|
||||
|
||||
[suspend](#suspending-a-job)フィールドは、これらの機能を実現するための第一歩です。suspendを使うと、カスタムキューコントローラーはJobをいつ開始すべきかを決定できます。しかし、Jobの一時停止が解除された後は、カスタムキューコントローラーはJob内のPodの実際の配置場所に影響を与えられません。
|
||||
|
||||
この機能により、Jobが開始される前にスケジューリング命令を更新でき、カスタムキューコントローラーがPodの配置に影響を与えることができると同時に、実際のPodとノードの割り当てをkube-schedulerにオフロードすることができます。これは一時停止されたJobの中で、一度も一時停止解除されたことのないJobに対してのみ許可されます。
|
||||
この機能により、Jobが開始する前にスケジューリング命令を更新でき、カスタムキューコントローラーがPodの配置に影響を与えることができるようになります。同時に実際のPodからNodeへの割り当てをkube-schedulerにオフロードする能力を提供します。これは一時停止されたJobの中で、一度も一時停止解除されたことのないJobに対してのみ許可されます。
|
||||
|
||||
JobのPodテンプレートで更新可能なフィールドはnodeAffinity、nodeSelector、tolerations、labelsとannotationsです。
|
||||
JobのPodテンプレートで更新可能なフィールドはnodeAffinity、nodeSelector、tolerations、labelsとannotations、[スケジューリングゲート](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)です。
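たとえば、一時停止中でまだ一度も再開されていないJobに対しては、次のようなパッチでnodeSelectorを更新できます(Job名やラベルの値は説明用の仮のものです):

```shell
# 一時停止中のJobのPodテンプレートにnodeSelectorを追加する
kubectl patch job/myjob --type=strategic --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"topology.kubernetes.io/zone":"zone-a"}}}}}'
```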
|
||||
|
||||
### 独自のPodセレクターを指定 {#specifying-your-own-pod-selector}
|
||||
|
||||
|
@ -521,11 +603,11 @@ metadata:
|
|||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
|
||||
batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
|
||||
...
|
||||
```
|
||||
|
||||
次に、`new`という名前で新しくJobを作成し、同じセレクターを明示的に指定します。既存のPodも`controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`ラベルが付いているので、同じく`new`Jobによってコントロールされます。
|
||||
次に、`new`という名前で新しくJobを作成し、同じセレクターを明示的に指定します。既存のPodも`batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`ラベルが付いているので、同じく`new`Jobによってコントロールされます。
|
||||
|
||||
通常システムが自動的に生成するセレクターを使用しないため、新しいJobで `manualSelector: true`を指定する必要があります。
|
||||
|
||||
|
@ -538,7 +620,7 @@ spec:
|
|||
manualSelector: true
|
||||
selector:
|
||||
matchLabels:
|
||||
controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
|
||||
batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
|
||||
...
|
||||
```
|
||||
|
||||
|
@ -546,26 +628,25 @@ spec:
|
|||
|
||||
### FinalizerによるJob追跡 {#job-tracking-with-finalizers}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
|
||||
|
||||
{{< note >}}
|
||||
この機能を使うためには、[APIサーバー](/docs/reference/command-line-tools-reference/kube-apiserver/)と[コントローラーマネージャー](/docs/reference/command-line-tools-reference/kube-controller-manager/)で`JobTrackingWithFinalizers`
|
||||
[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にする必要があります。
|
||||
|
||||
有効にした場合、コントロールプレーンは下記に示す機能で新しいJobを追跡します。この機能が有効になる前に作成されたJobは影響を受けません。ユーザーとして実感できる唯一の違いは、コントロールプレーンのJob完了ステータス追跡がより正確になるということだけです。
|
||||
`JobTrackingWithFinalizers`機能が無効になっている時に作成されたJobについては、コントロールプレーンを1.26にアップグレードしても、ファイナライザーを使用してJobを追跡しません。
|
||||
{{< /note >}}
|
||||
|
||||
この機能が有効でない場合、Job {{< glossary_tooltip term_id="controller" >}}はクラスター内に存在するPodを数えてJobステータスを追跡します。つまり`succeeded`Podと`failed`Podのカウンターを保持します。
|
||||
しかし、Podは以下のような理由で削除されることもあります:
|
||||
- Nodeがダウンしたときに、孤立した(Orphan)Podを削除するガベージコレクター。
|
||||
- 閾値に達すると、(`Succeeded`または`Failed`フェーズで)終了したPodを削除するガベージコレクター。
|
||||
- Jobに属するPodの人為的な削除。
|
||||
- 外部コントローラー(Kubernetesの一部として提供されていない)によるPodの削除や置き換え。
|
||||
コントロールプレーンは任意のJobに属するPodを追跡し、そのPodがAPIサーバーから削除されたかどうか認識します。そのためJobコントローラはファイナライザー`batch.kubernetes.io/job-tracking`を持つPodを作成します。コントローラーがファイナライザーを削除するのは、PodがJobステータスに反映された後なので、他のコントローラーやユーザがPodを削除することができます。
|
||||
|
||||
クラスターで`JobTrackingWithFinalizers`機能を有効にすると、コントロールプレーンは任意のJobに属するPodを追跡し、そのようなPodがAPIサーバーから削除された場合に通知します。そのために、Jobコントローラーは`batch.kubernetes.io/job-tracking`Finalizerを持つPodを作成します。コントローラーはPodがJobステータスに計上された後にのみFinalizerを削除し、他のコントローラーやユーザーによるPodの削除を可能にします。
|
||||
Kubernetes 1.26にアップグレードする前、またはフィーチャーゲート`JobTrackingWithFinalizers`が有効になる前に作成されたJobは、Podファイナライザーを使用せずに追跡されます。Job{{< glossary_tooltip term_id="controller" text="コントローラー" >}}は、クラスタに存在するPodのみに基づいて、`succeeded`Podと`failed`Podのステータスカウンタを更新します。クラスタからPodが削除されると、コントロールプレーンはJobの進捗を見失う可能性があります。
|
||||
|
||||
Jobコントローラーは、新しいJobに対してのみ新しいアルゴリズムを使用します。この機能が有効になる前に作成されたJobは影響を受けません。JobコントローラーがPod FinalizerでJob追跡しているかどうかは、Jobが`batch.kubernetes.io/job-tracking`というアノテーションを持っているかどうかで判断できます。
|
||||
このアノテーションを手動で追加または削除しては**いけません**。
|
||||
Jobが`batch.kubernetes.io/job-tracking`というアノテーションを持っているかどうかをチェックすることで、コントロールプレーンがPodファイナライザーを使ってJobを追跡しているかどうかを判断できます。Jobからこのアノテーションを手動で追加したり削除したりしては**いけません**。代わりに、JobがPodファイナライザーを使用して追跡されていることを確認するために、Jobを再作成することができます。
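アノテーションの確認は、たとえば次のように行えます(Job名`myjob`は仮のものです):

```shell
# Jobのアノテーション一覧を表示し、batch.kubernetes.io/job-trackingが含まれるか確認する
kubectl get job myjob -o jsonpath='{.metadata.annotations}'
```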
|
||||
|
||||
### 伸縮可能なインデックス付きJob {#elastic-indexed-jobs}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
|
||||
|
||||
`.spec.parallelism`と`.spec.completions`の両方を、`.spec.parallelism` == `.spec.completions`となるように変更することで、インデックス付きJobを増減させることができます。[APIサーバー](/docs/reference/command-line-tools-reference/kube-apiserver/)の`ElasticIndexedJob`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)が無効になっている場合、`.spec.completions`は不変です。
|
||||
|
||||
伸縮可能なインデックス付きJobの使用例としては、MPI、Horovod、Ray、PyTorchトレーニングジョブなど、インデックス付きJobのスケーリングを必要とするバッチワークロードがあります。
|
||||
|
||||
## 代替案 {#alternatives}
|
||||
|
||||
|
@ -601,3 +682,4 @@ Replication Controllerは、終了することが想定されていないPod(Web
|
|||
* [終了したJobの自動クリーンアップ](#clean-up-finished-jobs-automatically)のリンクから、クラスターが完了または失敗したJobをどのようにクリーンアップするかをご確認ください。
|
||||
* `Job`はKubernetes REST APIの一部です。JobのAPIを理解するために、{{< api-reference page="workload-resources/job-v1" >}}オブジェクトの定義をお読みください。
|
||||
* UNIXツールの`cron`と同様に、スケジュールに基づいて実行される一連のJobを定義するために使用できる[`CronJob`](/ja/docs/concepts/workloads/controllers/cron-jobs/)についてお読みください。
|
||||
* 段階的な[例](/docs/tasks/job/pod-failure-policy/)に基づいて、`PodFailurePolicy`を使用して、回復可能なPod失敗と回復不可能なPod失敗の処理を構成する方法を練習します。
|
||||
|
|
|
@ -1,4 +1,90 @@
|
|||
---
|
||||
title: 新しいコンテンツの貢献
|
||||
content_type: concept
|
||||
main_menu: true
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
このセクションでは、新しいコンテンツの貢献を行う前に知っておくべき情報を説明します。
|
||||
<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
|
||||
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->
|
||||
{{< mermaid >}}
|
||||
flowchart LR
|
||||
subgraph second[始める前に]
|
||||
direction TB
|
||||
S[ ] -.-
|
||||
A[CNCF CLAに署名] --> B[Gitブランチを選択]
|
||||
B --> C[言語ごとにPR]
|
||||
C --> F[コントリビューターのための<br>ツールをチェックアウト]
|
||||
end
|
||||
subgraph first[貢献の基本]
|
||||
direction TB
|
||||
T[ ] -.-
|
||||
D[ドキュメントをMarkdownで書き<br>Hugoでサイトをビルド] --- E[GitHubにあるソース]
|
||||
E --- G[複数の言語のドキュメントを含む<br>'/content/../docs'フォルダー]
|
||||
G --- H[Hugoのpage content<br>typesやshortcodeをレビュー]
|
||||
end
|
||||
|
||||
|
||||
first ----> second
|
||||
|
||||
|
||||
classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
|
||||
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
|
||||
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
|
||||
class A,B,C,D,E,F,G,H grey
|
||||
class S,T spacewhite
|
||||
class first,second white
|
||||
{{</ mermaid >}}
|
||||
|
||||
***図 - 新しいコンテンツ提供の貢献方法***
|
||||
|
||||
上記の図は新しいコンテンツを申請する前に知っておくべき情報を示しています。
|
||||
詳細については以下で説明します。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## 貢献の基本
|
||||
|
||||
- KubernetesのドキュメントはMarkdownで書き、Kubernetesのウェブサイトは[Hugo](https://gohugo.io/)を使ってビルドします。
|
||||
- Kubernetesのドキュメントは、Markdownのスタイルとして[CommonMark](https://commonmark.org)を使用しています。
|
||||
- ソースは[GitHub](https://github.com/kubernetes/website)にあります。Kubernetesのドキュメントは`/content/en/docs/`にあります。リファレンスドキュメントの一部は、`update-imported-docs/`ディレクトリ内のスクリプトから自動的に生成されます。
|
||||
- [Page content types](/docs/contribute/style/page-content-types/)にHugoによるドキュメントのコンテンツの見え方を記述しています。
|
||||
- Kubernetesのドキュメントに貢献するのに[Docsy shortcode](https://www.docsy.dev/docs/adding-content/shortcodes/)や[カスタムのHugo shortcode](/docs/contribute/style/hugo-shortcodes/)が使えます。
|
||||
- 標準のHugoのshortcodeに加えて、多数の[カスタムのHugo shortcode](/docs/contribute/style/hugo-shortcodes/)を使用してコンテンツの見え方をコントロールしています。
|
||||
- ドキュメントのソースは`/content/`内にある複数の言語で利用できます。各言語はそれぞれ[ISO 639-1標準](https://www.loc.gov/standards/iso639-2/php/code_list.php)で定義された2文字のコードの名前のフォルダを持ちます。たとえば、英語のドキュメントのソースは`/content/en/docs/`内に置かれています。
|
||||
- 複数言語でのドキュメントへの貢献や新しい翻訳の開始に関する情報については、[Kubernetesのドキュメントを翻訳する](/docs/contribute/localization)を参照してください。
|
||||
|
||||
## 始める前に {#before-you-begin}
|
||||
|
||||
### CNCF CLAに署名する {#sign-the-cla}
|
||||
|
||||
すべてのKubernetesのコントリビューターは、[コントリビューターガイド](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md)を読み、[Contributor License Agreement(コントリビューターライセンス契約、CLA)への署名](https://github.com/kubernetes/community/blob/master/CLA.md)を**必ず行わなければなりません**。
|
||||
|
||||
CLAへの署名が完了していないコントリビューターからのpull requestは、自動化されたテストで失敗します。名前とメールアドレスは`git config`コマンドで表示されるものに一致し、gitの名前とメールアドレスはCNCF CLAで使われたものに一致しなければなりません。
|
||||
|
||||
### どのGitブランチを使用するかを選ぶ
|
||||
|
||||
pull requestをオープンするときは、どのブランチをベースにして作業するかをあらかじめ知っておく必要があります。
|
||||
|
||||
シナリオ | ブランチ
|
||||
:---------|:------------
|
||||
現在のリリースに対する既存または新しい英語のコンテンツ | `main`
|
||||
機能変更のリリースに対するコンテンツ | 機能変更が含まれるメジャーおよびマイナーバージョンに対応する、`dev-<version>`というパターンのブランチを使います。たとえば、機能変更が`v{{< skew nextMinorVersion >}}`に含まれる場合、ドキュメントの変更は``dev-{{< skew nextMinorVersion >}}``ブランチに追加します。
|
||||
他の言語内のコンテンツ(翻訳) | 各翻訳対象の言語のルールに従います。詳しい情報は、[翻訳のブランチ戦略](/docs/contribute/localization/#branching-strategy)を読んでください。
|
||||
|
||||
それでも選ぶべきブランチがわからないときは、Slack上の`#sig-docs`チャンネルで質問してください。
|
||||
|
||||
{{< note >}}
|
||||
すでにpull requestを作成していて、ベースブランチが間違っていたことに気づいた場合は、作成者であるあなただけがベースブランチを変更できます。
|
||||
{{< /note >}}
|
||||
|
||||
### 言語ごとのPR
|
||||
|
||||
pull requestはPRごとに1つの言語に限定してください。複数の言語に同一の変更を行う必要がある場合は、言語ごとに別々のPRを作成してください。
|
||||
|
||||
## コントリビューターのためのツール
|
||||
|
||||
`kubernetes/website`リポジトリ内の[doc contributors tools](https://github.com/kubernetes/website/tree/master/content/en/docs/doc-contributor-tools)ディレクトリには、コントリビューターとしての旅を楽にしてくれるツールがあります。
|
||||
|
|
|
@ -1,54 +0,0 @@
|
|||
---
|
||||
title: 新しいコンテンツの貢献の概要
|
||||
linktitle: 概要
|
||||
content_type: concept
|
||||
main_menu: true
|
||||
weight: 5
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
このセクションでは、新しいコンテンツの貢献を行う前に知っておくべき情報を説明します。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## 貢献の基本
|
||||
|
||||
- KubernetesのドキュメントはMarkdownで書き、Kubernetesのウェブサイトは[Hugo](https://gohugo.io/)を使ってビルドします。
|
||||
- ソースは[GitHub](https://github.com/kubernetes/website)にあります。Kubernetesのドキュメントは`/content/en/docs/`にあります。リファレンスドキュメントの一部は、`update-imported-docs/`ディレクトリ内のスクリプトから自動的に生成されます。
|
||||
- [Page content types](/docs/contribute/style/page-content-types/)にHugoによるドキュメントのコンテンツの見え方を記述しています。
|
||||
- 標準のHugoのshortcodeに加えて、多数の[カスタムのHugo shortcode](/docs/contribute/style/hugo-shortcodes/)を使用してコンテンツの見え方をコントロールしています。
|
||||
- ドキュメントのソースは`/content/`内にある複数の言語で利用できます。各言語はそれぞれ[ISO 639-1標準](https://www.loc.gov/standards/iso639-2/php/code_list.php)で定義された2文字のコードの名前のフォルダを持ちます。たとえば、英語のドキュメントのソースは`/content/en/docs/`内に置かれています。
|
||||
- 複数言語でのドキュメントへの貢献や新しい翻訳の開始に関する情報については、[Kubernetesのドキュメントを翻訳する](/docs/contribute/localization)を参照してください。
|
||||
|
||||
## 始める前に {#before-you-begin}
|
||||
|
||||
### CNCF CLAに署名する {#sign-the-cla}
|
||||
|
||||
すべてのKubernetesのコントリビューターは、[コントリビューターガイド](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md)を読み、[Contributor License Agreement(コントリビューターライセンス契約、CLA)への署名](https://github.com/kubernetes/community/blob/master/CLA.md)を**必ず行わなければなりません**。
|
||||
|
||||
CLAへの署名が完了していないコントリビューターからのpull requestは、自動化されたテストで失敗します。名前とメールアドレスは`git config`コマンドで表示されるものに一致し、gitの名前とメールアドレスはCNCF CLAで使われたものに一致しなければなりません。
|
||||
|
||||
### どのGitブランチを使用するかを選ぶ
|
||||
|
||||
pull requestをオープンするときは、どのブランチをベースにして作業するかをあらかじめ知っておく必要があります。
|
||||
|
||||
シナリオ | ブランチ
|
||||
:---------|:------------
|
||||
現在のリリースに対する既存または新しい英語のコンテンツ | `master`
|
||||
機能変更のリリースに対するコンテンツ | 機能変更が含まれるメジャーおよびマイナーバージョンに対応する、`dev-<version>`というパターンのブランチを使います。たとえば、機能変更が`v{{< skew nextMinorVersion >}}`に含まれる場合、ドキュメントの変更は``dev-{{< skew nextMinorVersion >}}``ブランチに追加します。
|
||||
他の言語内のコンテンツ(翻訳) | 各翻訳対象の言語のルールに従います。詳しい情報は、[翻訳のブランチ戦略](/docs/contribute/localization/#branching-strategy)を読んでください。
|
||||
|
||||
それでも選ぶべきブランチがわからないときは、Slack上の`#sig-docs`チャンネルで質問してください。
|
||||
|
||||
{{< note >}}
|
||||
すでにpull requestを作成していて、ベースブランチが間違っていたことに気づいた場合は、作成者であるあなただけがベースブランチを変更できます。
|
||||
{{< /note >}}
|
||||
|
||||
### 言語ごとのPR
|
||||
|
||||
pull requestはPRごとに1つの言語に限定してください。複数の言語に同一の変更を行う必要がある場合は、言語ごとに別々のPRを作成してください。
|
||||
|
||||
## コントリビューターのためのツール
|
||||
|
||||
`kubernetes/website`リポジトリ内の[doc contributors tools](https://github.com/kubernetes/website/tree/master/content/en/docs/doc-contributor-tools)ディレクトリには、コントリビューターとしての旅を楽にしてくれるツールがあります。
|
|
@ -642,7 +642,7 @@ contexts:
|
|||
current-context: my-cluster
|
||||
```
|
||||
|
||||
相対的なコマンドパスは、設定ファイルのディレクトリーからの相対的なものとして解釈されます。KUBECONFIGが`/home/jane/kubeconfig`に設定されていて、execコマンドが`./bin/example-client-go-exec-plugin`の場合、バイナリー`/home/jane/bin/example-client-go-exec-plugin`が実行されます。
|
||||
相対的なコマンドパスは、設定ファイルのディレクトリーからの相対的なものとして解釈されます。KUBECONFIGが`/home/jane/kubeconfig`に設定されていて、execコマンドが`./bin/example-client-go-exec-plugin`の場合、バイナリ`/home/jane/bin/example-client-go-exec-plugin`が実行されます。
|
||||
|
||||
```yaml
|
||||
- name: my-user
|
||||
|
|
|
@ -75,7 +75,7 @@ APIのバージョンが異なると、安定性やサポートのレベルも
|
|||
|
||||
## APIグループ
|
||||
|
||||
[API groups](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md)で、KubernetesのAPIを簡単に拡張することができます。
|
||||
[API groups](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md)で、KubernetesのAPIを簡単に拡張することができます。
|
||||
APIグループは、RESTパスとシリアル化されたオブジェクトの`apiVersion`フィールドで指定されます。
|
||||
|
||||
KubernetesにはいくつかのAPIグループがあります:
|
||||
|
@ -112,4 +112,4 @@ Kubernetesはシリアライズされた状態を、APIリソースとして{{<
|
|||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- [API conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#api-conventions)をもっと知る
|
||||
- [aggregator](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/aggregated-api-servers.md)の設計ドキュメントを読む
|
||||
- [aggregator](https://git.k8s.io/design-proposals-archive/api-machinery/aggregated-api-servers.md)の設計ドキュメントを読む
|
||||
|
|
|
@ -4,6 +4,7 @@ title: はじめに
|
|||
main_menu: true
|
||||
weight: 20
|
||||
content_type: concept
|
||||
no_list: true
|
||||
card:
|
||||
name: setup
|
||||
weight: 20
|
||||
|
@ -17,26 +18,39 @@ card:
|
|||
<!-- overview -->
|
||||
|
||||
このセクションではKubernetesをセットアップして動かすための複数のやり方について説明します。
|
||||
Kubernetesをインストールする際には、メンテナンスの容易さ、セキュリティ、制御、利用可能なリソース、クラスターの運用及び管理に必要な専門知識に基づいてインストレーションタイプを選んでください。
|
||||
Kubernetesをインストールする際には、メンテナンスの容易さ、セキュリティ、制御、利用可能なリソース、クラスターの運用および管理に必要な専門知識に基づいてインストレーションタイプを選んでください。
|
||||
|
||||
Kubernetesクラスターをローカルマシン、クラウド、データセンターにデプロイするために、[Kubernetesをダウンロード](/releases/download/)できます。
|
||||
|
||||
Kubernetesクラスターはローカルマシン、クラウド、オンプレのデータセンターにデプロイすることもできますし、マネージドのKubernetesクラスターを選択することもできます。複数のクラウドプロバイダーやベアメタルの環境に跨ったカスタムソリューションもあります。
|
||||
|
||||
{{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}}や{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}のようないくつかの[Kubernetesのコンポーネント](/ja/docs/concepts/overview/components/)も、[コンテナイメージ](/releases/download/#container-images)としてクラスター内にデプロイできます。
|
||||
|
||||
可能であればコンテナイメージとしてKubernetesのコンポーネントを実行し、それらのコンポーネントをKubernetesで管理するようにすることを**推奨**します。
|
||||
コンテナを実行するコンポーネント(特にkubelet)は、このカテゴリーには含まれません。
|
||||
|
||||
Kubernetesクラスターを自分で管理するのを望まないなら、[認定プラットフォーム](/ja/docs/setup/production-environment/turnkey-solutions/)をはじめとする、マネージドのサービスを選択することもできます。
|
||||
複数のクラウドやベアメタル環境にまたがった、その他の標準あるいはカスタムのソリューションもあります。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## 環境について学ぶ
|
||||
|
||||
Kubernetesについて学んでいる場合、Kubernetesコミュニティにサポートされているツールや、Kubernetesクラスターをローカルマシンにセットアップするエコシステム内のツールを使いましょう。
|
||||
[ツールのインストール](/ja/docs/tasks/tools/)を参照してください。
|
||||
|
||||
## プロダクション環境
|
||||
|
||||
[プロダクション環境](/ja/docs/setup/production-environment/)用のソリューションを評価する際には、Kubernetesクラスター(または*抽象概念*)の運用においてどの部分を自分で管理し、どの部分をプロバイダーに任せるのかを考慮してください。
|
||||
|
||||
## 本番環境
|
||||
自分で管理するクラスターであれば、Kubernetesをデプロイするための公式にサポートされているツールは[kubeadm](/ja/docs/setup/production-environment/tools/kubeadm/)です。
|
||||
|
||||
本番環境用のソリューションを評価する際には、Kubernetesクラスター(または抽象レイヤー)の運用においてどの部分を自分で管理し、どの部分をプロバイダーに任せるのかを考慮してください。
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
[Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes)プロバイダーの一覧については、[Kubernetes パートナー](https://kubernetes.io/ja/partners/#conformance)を参照してください。
|
||||
- [Kubernetesのダウンロード](/releases/download/)
|
||||
- `kubectl`を含む、ツールのダウンロードと[インストール](/ja/docs/tasks/tools/)
|
||||
- 新しいクラスターのための[コンテナランタイム](/ja/docs/setup/production-environment/container-runtimes/)の選択
|
||||
- クラスターセットアップの[ベストプラクティス](/ja/docs/setup/best-practices/)を学ぶ
|
||||
|
||||
Kubernetesは、その{{< glossary_tooltip term_id="control-plane" text="コントロールプレーン" >}}がLinux上で実行されるよう設計されています。
|
||||
クラスター内では、Linux上でも、Windowsを含めた別のオペレーティングシステム上でも、アプリケーションを実行できます。
|
||||
|
||||
- [Windowsノードのクラスターのセットアップ](/docs/concepts/windows/)について学ぶ
|
||||
|
|
|
@ -129,9 +129,10 @@ kube-apiserverには2つのバックエンドが用意されています。
|
|||
|
||||
クラスターのコントロールプレーンでkube-apiserverをPodとして動作させている場合は、監査記録が永久化されるように、ポリシーファイルとログファイルの場所に`hostPath`をマウントすることを忘れないでください。
|
||||
例えば:
|
||||
```shell
|
||||
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
|
||||
--audit-log-path=/var/log/audit.log
|
||||
|
||||
```yaml
|
||||
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
|
||||
- --audit-log-path=/var/log/kubernetes/audit/audit.log
|
||||
```
|
||||
|
||||
それからボリュームをマウントします:
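マウントの指定は、たとえば次のようなイメージです(パスは上記のフラグに合わせた説明用の例です):

```yaml
volumeMounts:
  - mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit
    readOnly: true
  - mountPath: /var/log/kubernetes/audit/
    name: audit-log
    readOnly: false
```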
|
||||
|
|
|
@ -7,7 +7,7 @@ weight: 15
|
|||
<!-- overview -->
|
||||
|
||||
アプリケーションを拡張し、信頼性の高いサービスを提供するために、デプロイ時にアプリケーションがどのように動作するかを理解する必要があります。
|
||||
コンテナ、[Pod](/docs/concepts/workloads/pods/)、[Service](/docs/concepts/services-networking/service/)、クラスター全体の特性を調べることにより、Kubernetesクラスターのアプリケーションパフォーマンスを調査することができます。
|
||||
コンテナ、[Pod](/ja/docs/concepts/workloads/pods/)、[Service](/ja/docs/concepts/services-networking/service/)、クラスター全体の特性を調べることにより、Kubernetesクラスターのアプリケーションパフォーマンスを調査することができます。
|
||||
Kubernetesは、これらの各レベルでアプリケーションのリソース使用に関する詳細な情報を提供します。
|
||||
この情報により、アプリケーションのパフォーマンスを評価し、ボトルネックを取り除くことで全体のパフォーマンスを向上させることができます。
|
||||
|
||||
|
@ -16,7 +16,7 @@ Kubernetesは、これらの各レベルでアプリケーションのリソー
|
|||
Kubernetesでは、アプリケーションの監視は1つの監視ソリューションに依存することはありません。
|
||||
新しいクラスターでは、[リソースメトリクス](#resource-metrics-pipeline)または[フルメトリクス](#full-metrics-pipeline)パイプラインを使用してモニタリング統計を収集することができます。
|
||||
|
||||
## リソースメトリクスパイプライン
|
||||
## リソースメトリクスパイプライン {#resource-metrics-pipeline}
|
||||
|
||||
リソースメトリックパイプラインは、[Horizontal Pod Autoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/)コントローラーなどのクラスターコンポーネントや、`kubectl top`ユーティリティに関連する限定的なメトリックセットを提供します。
|
||||
|
||||
|
@ -31,7 +31,7 @@ kubeletは各Podを構成するコンテナに変換し、コンテナランタ
|
|||
そして、集約されたPodリソース使用統計情報を、metrics-server Resource Metrics APIを通じて公開します。
|
||||
このAPIは、kubeletの認証済みおよび読み取り専用ポート上の `/metrics/resource/v1beta1` で提供されます。
|
||||
|
||||
## フルメトリクスパイプライン
|
||||
## フルメトリクスパイプライン {#full-metrics-pipeline}
|
||||
|
||||
フルメトリクスパイプラインは、より豊富なメトリクスにアクセスすることができます。
|
||||
Kubernetesは、Horizontal Pod Autoscalerなどのメカニズムを使用して、現在の状態に基づいてクラスターを自動的にスケールまたは適応させることによって、これらのメトリクスに対応することができます。
|
||||
|
|
|
@ -194,7 +194,7 @@ StatefulSetがスケールアップした場合や、次のPodがPersistentVolum
|
|||
## クライアントトラフィックを送信する
|
||||
|
||||
テストクエリーをMySQLマスター(ホスト名 `mysql-0.mysql`)に送信するには、
|
||||
`mysql:5.7`イメージを使って一時的なコンテナを実行し、`mysql`クライアントバイナリーを実行します。
|
||||
`mysql:5.7`イメージを使って一時的なコンテナを実行し、`mysql`クライアントバイナリを実行します。
|
||||
|
||||
```shell
|
||||
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
|
||||
|
|
|
@ -24,6 +24,8 @@ content_type: concept
|
|||
|
||||
## 設定
|
||||
|
||||
* [例: Javaのマイクロサービスの設定](/docs/tutorials/configuration/configure-java-microservice/)
|
||||
|
||||
* [ConfigMapを用いたRedisの設定](/ja/docs/tutorials/configuration/configure-redis-using-configmap/)
|
||||
|
||||
## ステートレスアプリケーション
|
||||
|
@ -34,29 +36,26 @@ content_type: concept
|
|||
|
||||
## ステートフルアプリケーション
|
||||
|
||||
* [StatefulSetの基本](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
* [StatefulSetの基本](/ja/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [例: 永続ボリュームを使ったWordPressとMySQLのデプロイ](/ja/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
* [例: Persistent Volumeを使用したWordpressとMySQLをデプロイする](/ja/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
|
||||
* [例: Stateful Setsを使ったCassandraのデプロイ](/docs/tutorials/stateful-application/cassandra/)
|
||||
* [例: Stateful Setを使用したCassandraのデプロイ](/ja/docs/tutorials/stateful-application/cassandra/)
|
||||
|
||||
* [CP(一貫性+分断耐性)分散システムZooKeeperの実行](/docs/tutorials/stateful-application/zookeeper/)
|
||||
|
||||
## クラスター
|
||||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/)
|
||||
|
||||
* [seccomp](/docs/tutorials/clusters/seccomp/)
|
||||
|
||||
## サービス
|
||||
|
||||
* [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
|
||||
* [Source IPを使う](/ja/docs/tutorials/services/source-ip/)
|
||||
|
||||
## セキュリティ
|
||||
|
||||
* [クラスターレベルのPod Securityの標準の適用](/docs/tutorials/security/cluster-level-pss/)
|
||||
* [NamespaceレベルのPod Securityの標準の適用](/docs/tutorials/security/ns-level-pss/)
|
||||
* [AppArmor](/docs/tutorials/security/apparmor/)
|
||||
* [seccomp](/docs/tutorials/security/seccomp/)
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
チュートリアルのページタイプについての情報は、[Content Page Types](/docs/contribute/style/page-content-types/)を参照してください。
|
||||
|
||||
|
||||
|
|
|
@ -0,0 +1,304 @@
|
|||
---
|
||||
title: クラスターレベルでのPodセキュリティの標準の適用
|
||||
content_type: tutorial
|
||||
weight: 10
|
||||
---
|
||||
|
||||
{{% alert title="Note" %}}
|
||||
このチュートリアルは、新しいクラスターにのみ適用されます。
|
||||
{{% /alert %}}
|
||||
|
||||
Podセキュリティアドミッション(PSA)は、[ベータへ進み](/blog/2021/12/09/pod-security-admission-beta/)、v1.23以降でデフォルトで有効になっています。
|
||||
Podセキュリティアドミッションは、Podが作成される際に、[Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards/)の適用の認可を制御するものです。
|
||||
このチュートリアルでは、クラスター内の全ての名前空間に標準設定を適用することで、クラスターレベルで`baseline` Podセキュリティの標準を強制する方法を示します。
|
||||
|
||||
Podセキュリティの標準を特定の名前空間に適用するには、[名前空間レベルでのPodセキュリティの標準の適用](/ja/docs/tutorials/security/ns-level-pss/)を参照してください。
|
||||
|
||||
v{{< skew currentVersion >}}以外のKubernetesバージョンを実行している場合は、そのバージョンのドキュメントを確認してください。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
ワークステーションに以下をインストールしてください:
|
||||
|
||||
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
|
||||
- [kubectl](/ja/docs/tasks/tools/)
|
||||
|
||||
このチュートリアルでは、完全な制御下にあるKubernetesクラスターの何を設定できるかをデモンストレーションします。
|
||||
コントロールプレーンを設定できない管理されたクラスターのPodセキュリティアドミッションに対しての設定方法を知りたいのであれば、[名前空間レベルでのPodセキュリティの標準の適用](/ja/docs/tutorials/security/ns-level-pss/)を参照してください。
|
||||
|
||||
## 適用する正しいPodセキュリティの標準の選択
|
||||
|
||||
[Podのセキュリティアドミッション](/ja/docs/concepts/security/pod-security-admission/)は、以下のモードでビルトインの[Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards/)の適用を促します: `enforce`、`audit`、`warn`。
|
||||
設定に最適なPodセキュリティの標準を選択するにあたって助けになる情報を収集するために、以下を行ってください:
|
||||
|
||||
1. Podセキュリティの標準を適用していないクラスターを作成します:
|
||||
|
||||
```shell
|
||||
kind create cluster --name psa-wo-cluster-pss
|
||||
```
|
||||
出力は次のようになります:
|
||||
```
|
||||
Creating cluster "psa-wo-cluster-pss" ...
|
||||
✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
|
||||
✓ Preparing nodes 📦
|
||||
✓ Writing configuration 📜
|
||||
✓ Starting control-plane 🕹️
|
||||
✓ Installing CNI 🔌
|
||||
✓ Installing StorageClass 💾
|
||||
Set kubectl context to "kind-psa-wo-cluster-pss"
|
||||
You can now use your cluster with:
|
||||
|
||||
kubectl cluster-info --context kind-psa-wo-cluster-pss
|
||||
|
||||
Thanks for using kind! 😊
|
||||
```
|
||||
|
||||
1. kubectl contextを新しいクラスターにセットします:
|
||||
|
||||
```shell
|
||||
kubectl cluster-info --context kind-psa-wo-cluster-pss
|
||||
```
|
||||
出力は次のようになります:
|
||||
|
||||
```
|
||||
Kubernetes control plane is running at https://127.0.0.1:61350
|
||||
|
||||
CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
|
||||
|
||||
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
||||
```
|
||||
|
||||
1. クラスター内の名前空間の一覧を取得します:
|
||||
|
||||
```shell
|
||||
kubectl get ns
|
||||
```
|
||||
出力は次のようになります:
|
||||
```
|
||||
NAME STATUS AGE
|
||||
default Active 9m30s
|
||||
kube-node-lease Active 9m32s
|
||||
kube-public Active 9m32s
|
||||
kube-system Active 9m32s
|
||||
local-path-storage Active 9m26s
|
||||
```
|
||||
|
||||
1. 異なるPodセキュリティの標準が適用されたときに何が起きるかを理解するために、`--dry-run=server`を使います:
|
||||
|
||||
1. privileged
|
||||
```shell
|
||||
kubectl label --dry-run=server --overwrite ns --all \
|
||||
pod-security.kubernetes.io/enforce=privileged
|
||||
```
|
||||
|
||||
出力は次のようになります:
|
||||
```
|
||||
namespace/default labeled
|
||||
namespace/kube-node-lease labeled
|
||||
namespace/kube-public labeled
|
||||
namespace/kube-system labeled
|
||||
namespace/local-path-storage labeled
|
||||
```
|
||||
2. baseline
|
||||
```shell
|
||||
kubectl label --dry-run=server --overwrite ns --all \
|
||||
pod-security.kubernetes.io/enforce=baseline
|
||||
```
|
||||
|
||||
出力は次のようになります:
|
||||
```
|
||||
namespace/default labeled
|
||||
namespace/kube-node-lease labeled
|
||||
namespace/kube-public labeled
|
||||
Warning: existing pods in namespace "kube-system" violate the new PodSecurity enforce level "baseline:latest"
|
||||
Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes
|
||||
Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes
|
||||
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged
|
||||
namespace/kube-system labeled
|
||||
namespace/local-path-storage labeled
|
||||
```
|
||||
|
||||
3. restricted
|
||||
```shell
|
||||
kubectl label --dry-run=server --overwrite ns --all \
|
||||
pod-security.kubernetes.io/enforce=restricted
|
||||
```
|
||||
|
||||
出力は次のようになります:
|
||||
```
|
||||
namespace/default labeled
|
||||
namespace/kube-node-lease labeled
|
||||
namespace/kube-public labeled
|
||||
Warning: existing pods in namespace "kube-system" violate the new PodSecurity enforce level "restricted:latest"
|
||||
Warning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities, runAsNonRoot != true, seccompProfile
|
||||
Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true
|
||||
Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile
|
||||
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile
|
||||
namespace/kube-system labeled
|
||||
Warning: existing pods in namespace "local-path-storage" violate the new PodSecurity enforce level "restricted:latest"
|
||||
Warning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, seccompProfile
|
||||
namespace/local-path-storage labeled
|
||||
```
|
||||
|
||||
この出力から、`privileged` Podセキュリティの標準を適用すると、名前空間のどれにも警告が示されないことに気付くでしょう。
|
||||
これに対し、`baseline`と`restricted`の標準ではどちらも、とりわけ`kube-system`名前空間に対して警告が示されています。
|
||||
|
||||
## モード、バージョン、標準のセット
|
||||
|
||||
このセクションでは、`latest`バージョンに以下のPodセキュリティの標準を適用します:
|
||||
|
||||
* `enforce`モードで`baseline`標準。
|
||||
* `warn`および`audit`モードで`restricted`標準。
|
||||
|
||||
`baseline` Podセキュリティの標準は、免除リストを短く保ちながら既知の特権昇格を防ぐことができる、バランスの取れた妥協点を提供します。
|
||||
|
||||
加えて、`kube-system`内のPodが失敗するのを防ぐために、この名前空間はPodセキュリティの標準の適用から免除します。
|
||||
|
||||
環境にPodセキュリティアドミッションを実装する際には、以下の点を考慮してください:
|
||||
|
||||
1. クラスターに適用されるリスク状況に基づくと、`restricted`のようにより厳格なPodセキュリティの標準のほうが、より良い選択肢かもしれません。
|
||||
1. `kube-system`名前空間の免除は、Podがその名前空間で`privileged`として実行するのを許容することになります。
|
||||
実世界で使うにあたっては、最小権限の原則に従って`kube-system`へのアクセスを制限する厳格なRBACポリシーを適用することを、Kubernetesプロジェクトは強く推奨します。
|
||||
上記の標準を実装するには、次のようにします:
|
||||
1. 目的のPodセキュリティの標準を実装するために、Podセキュリティアドミッションコントローラーで利用可能な設定ファイルを作成します:
|
||||
|
||||
```
|
||||
mkdir -p /tmp/pss
|
||||
cat <<EOF > /tmp/pss/cluster-level-pss.yaml
|
||||
apiVersion: apiserver.config.k8s.io/v1
|
||||
kind: AdmissionConfiguration
|
||||
plugins:
|
||||
- name: PodSecurity
|
||||
configuration:
|
||||
apiVersion: pod-security.admission.config.k8s.io/v1
|
||||
kind: PodSecurityConfiguration
|
||||
defaults:
|
||||
enforce: "baseline"
|
||||
enforce-version: "latest"
|
||||
audit: "restricted"
|
||||
audit-version: "latest"
|
||||
warn: "restricted"
|
||||
warn-version: "latest"
|
||||
exemptions:
|
||||
usernames: []
|
||||
runtimeClasses: []
|
||||
namespaces: [kube-system]
|
||||
EOF
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
`pod-security.admission.config.k8s.io/v1`設定はv1.25+での対応です。
|
||||
v1.23とv1.24では[v1beta1](https://v1-24.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/)を使用してください。
|
||||
v1.22では[v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/)を使用してください。
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
1. クラスターの作成中にこのファイルを取り込むAPIサーバーを設定します:
|
||||
|
||||
```
|
||||
cat <<EOF > /tmp/pss/cluster-config.yaml
|
||||
kind: Cluster
|
||||
apiVersion: kind.x-k8s.io/v1alpha4
|
||||
nodes:
|
||||
- role: control-plane
|
||||
kubeadmConfigPatches:
|
||||
- |
|
||||
kind: ClusterConfiguration
|
||||
apiServer:
|
||||
extraArgs:
|
||||
admission-control-config-file: /etc/config/cluster-level-pss.yaml
|
||||
extraVolumes:
|
||||
- name: accf
|
||||
hostPath: /etc/config
|
||||
mountPath: /etc/config
|
||||
readOnly: false
|
||||
pathType: "DirectoryOrCreate"
|
||||
extraMounts:
|
||||
- hostPath: /tmp/pss
|
||||
containerPath: /etc/config
|
||||
# optional: if set, the mount is read-only.
|
||||
# default false
|
||||
readOnly: false
|
||||
# optional: if set, the mount needs SELinux relabeling.
|
||||
# default false
|
||||
selinuxRelabel: false
|
||||
# optional: set propagation mode (None, HostToContainer or Bidirectional)
|
||||
# see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
|
||||
# default None
|
||||
propagation: None
|
||||
EOF
|
||||
```
|
||||
|
||||
{{<note>}}
|
||||
macOSでDocker DesktopとKinDを利用している場合は、**Preferences > Resources > File Sharing**のメニュー項目からShared Directoryとして`/tmp`を追加できます。
|
||||
{{</note>}}
|
||||
|
||||
1. 目的のPodセキュリティの標準を適用するために、Podセキュリティアドミッションを使うクラスターを作成します:
|
||||
|
||||
```shell
|
||||
kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml
|
||||
```
|
||||
出力は次のようになります:
|
||||
```
|
||||
Creating cluster "psa-with-cluster-pss" ...
|
||||
✓ Ensuring node image (kindest/node:v{{< skew currentVersion >}}.0) 🖼
|
||||
✓ Preparing nodes 📦
|
||||
✓ Writing configuration 📜
|
||||
✓ Starting control-plane 🕹️
|
||||
✓ Installing CNI 🔌
|
||||
✓ Installing StorageClass 💾
|
||||
Set kubectl context to "kind-psa-with-cluster-pss"
|
||||
You can now use your cluster with:
|
||||
|
||||
kubectl cluster-info --context kind-psa-with-cluster-pss
|
||||
|
||||
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
|
||||
```
|
||||
|
||||
1. kubectlをこのクラスターに向けます:
|
||||
```shell
|
||||
kubectl cluster-info --context kind-psa-with-cluster-pss
|
||||
```
|
||||
出力は次のようになります:
|
||||
```
|
||||
Kubernetes control plane is running at https://127.0.0.1:63855
|
||||
CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
|
||||
|
||||
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
||||
```
|
||||
|
||||
1. デフォルトの名前空間にPodを作成します:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
|
||||
```
|
||||
|
||||
Podは正常に開始されますが、出力には警告が含まれます:
|
||||
```
|
||||
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
|
||||
pod/nginx created
|
||||
```
|
||||
|
||||
## 後片付け
|
||||
|
||||
では、上記で作成したクラスターを、以下のコマンドを実行して削除します:
|
||||
|
||||
```shell
|
||||
kind delete cluster --name psa-with-cluster-pss
|
||||
```
|
||||
```shell
|
||||
kind delete cluster --name psa-wo-cluster-pss
|
||||
```
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- 前出の一連の手順を一度に全て行うために[シェルスクリプト](/examples/security/kind-with-cluster-level-baseline-pod-security.sh)を実行します:
|
||||
1. クラスターレベルの設定に基づきPodセキュリティの標準を作成します。
|
||||
2. APIサーバーでこの設定を取り込むようにファイルを作成します。
|
||||
3. この設定のAPIサーバーを立てるクラスターを作成します。
|
||||
4. この新しいクラスターにkubectl contextをセットします。
|
||||
5. 最小限のPod YAMLファイルを作成します。
|
||||
6. 新しいクラスター内でPodを作成するために、このファイルを適用します。
|
||||
- [Podのセキュリティアドミッション](/ja/docs/concepts/security/pod-security-admission/)
|
||||
- [Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards/)
|
||||
- [名前空間レベルでのPodセキュリティの標準の適用](/ja/docs/tutorials/security/ns-level-pss/)
|
|
@ -0,0 +1,434 @@
|
|||
---
|
||||
title: アプリケーションをServiceに接続する
|
||||
content_type: tutorial
|
||||
weight: 20
|
||||
---
|
||||
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
## コンテナに接続するためのKubernetesモデル
|
||||
|
||||
さて、継続的に実行され、複製されたアプリケーションができたので、これをネットワーク上に公開できます。
|
||||
|
||||
Kubernetesは、Podがどのホストに配置されているかにかかわらず、ほかのPodと通信できることを前提としています。
|
||||
Kubernetesは各Podにそれぞれ固有のクラスタープライベートなIPアドレスを付与するので、Pod間のリンクや、コンテナのポートとホストのポートのマップを明示的に作成する必要はありません。
|
||||
これは、Pod内のコンテナは全てlocalhost上でお互いのポートに到達でき、クラスター内の全てのPodはNATなしに互いを見られるということを意味します。このドキュメントの残りの部分では、このようなネットワークモデルの上で信頼性のあるServiceを実行する方法について、詳しく述べていきます。
|
||||
|
||||
このチュートリアルでは概念のデモンストレーションのために、シンプルなnginx Webサーバーを例として使います。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Podをクラスターへ公開
|
||||
|
||||
これは前出の例でも行いましたが、もう一度やってみて、ネットワークからの観点に着目してみましょう。
|
||||
nginx Podを作成し、コンテナのポート指定も記載します:
|
||||
|
||||
{{< codenew file="service/networking/run-my-nginx.yaml" >}}
|
||||
|
||||
この設定により、クラスター内のどのノードからもこのPodにアクセスできるようになります。Podを実行中のノードを確認してみましょう:
|
||||
|
||||
```shell
|
||||
kubectl apply -f ./run-my-nginx.yaml
|
||||
kubectl get pods -l run=my-nginx -o wide
|
||||
```
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE IP NODE
|
||||
my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m
|
||||
my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd
|
||||
```
|
||||
|
||||
PodのIPアドレスを確認します:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
|
||||
POD_IP
|
||||
[map[ip:10.244.3.4]]
|
||||
[map[ip:10.244.2.5]]
|
||||
```
|
||||
|
||||
あなたのクラスター内のどのノードにもsshで入ることができて、双方のIPアドレスに対して問い合わせるために`curl`のようなツールを使えるようにしておくのがよいでしょう。
|
||||
各コンテナはノード上でポート80を*使っておらず*、トラフィックをPodに流すための特別なNATルールもなんら存在しないことに注意してください。
|
||||
つまり、同じノード上で全て同じ`containerPort`を使う複数のnginx Podを実行でき、Podに割り当てられたIPアドレスを使って、クラスター内のほかのどのPodあるいはノードからもそれらにアクセスできます。
|
||||
背後にあるPodにフォワードするためにホストNode上の特定のポートを充てたいというのであれば、それも可能です。とはいえ、ネットワークモデルではそのようなことをする必要がありません。
|
||||
|
||||
興味があれば、さらなる詳細について
|
||||
[Kubernetesネットワークモデル](/ja/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)
|
||||
を読んでください。
|
||||
|
||||
## Serviceの作成
|
||||
|
||||
さて、フラットなクラスター全体のアドレス空間内でnginxを実行中のPodが得られました。
|
||||
理論的にはこれらのPodと直接対話することは可能ですが、ノードが死んでしまった時には何が起きるでしょうか?
|
||||
ノードと一緒にPodは死に、Deploymentが新しいPodを異なるIPアドレスで作成します。
|
||||
これがServiceが解決する問題です。
|
||||
|
||||
KubernetesのServiceは、全て同じ機能を提供する、クラスター内のどこかで実行するPodの論理的な集合を定義した抽象物です。
|
||||
作成時に各Serviceは固有のIPアドレス(clusterIPとも呼ばれます)を割り当てられます。
|
||||
このアドレスはServiceのライフスパンと結び付けられており、Serviceが生きている間は変わりません。
|
||||
PodはServiceと対話するように設定でき、Serviceへの通信は、そのServiceのメンバーであるいずれかのPodへ自動的に負荷分散されます。
|
||||
|
||||
`kubectl expose`を使って、2つのnginxレプリカのためのServiceを作成できます:
|
||||
|
||||
```shell
|
||||
kubectl expose deployment/my-nginx
|
||||
```
|
||||
```
|
||||
service/my-nginx exposed
|
||||
```
|
||||
|
||||
これは`kubectl apply -f`を以下のyamlに対して実行するのと同じです:
|
||||
|
||||
{{< codenew file="service/networking/nginx-svc.yaml" >}}
|
||||
|
||||
この指定は、`run: my-nginx`ラベルの付いた任意のPod上のTCPポート80を宛先とし、それを抽象化されたServiceポート(`targetPort`はコンテナがトラフィックを許可するポート、`port`は抽象化されたServiceポートで、ほかのPodがServiceにアクセスするのに使う任意のポートです)で公開するServiceを作成します。
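
この関係を示す抜粋のスケッチです(値はあくまで一例で、実際の定義は上記のnginx-svc.yamlを参照してください):

```yaml
# Serviceのports部分の抜粋(一例)
ports:
- port: 8080        # ほかのPodがServiceへアクセスする際に使う抽象化されたポート
  targetPort: 80    # 背後のコンテナ(Pod)がトラフィックを受け付けるポート
  protocol: TCP
```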
|
||||
Service定義内でサポートされているフィールドのリストを見るには、[Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) APIオブジェクトを参照してください。
|
||||
Serviceを確認してみましょう:
|
||||
|
||||
```shell
|
||||
kubectl get svc my-nginx
|
||||
```
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
my-nginx ClusterIP 10.0.162.149 <none> 80/TCP 21s
|
||||
```
|
||||
|
||||
前述したとおり、ServiceはPodのグループに支えられています。
|
||||
これらのPodは{{<glossary_tooltip term_id="endpoint-slice" text="EndpointSlices">}}を通して公開されています。
|
||||
Serviceのセレクターは継続的に評価され、その結果はServiceに接続されているEndpointSliceに{{< glossary_tooltip text="labels" term_id="label" >}}を使って「投稿(POST)」されます。
|
||||
|
||||
Podが死ぬと、エンドポイントとして含まれていたEndpointSliceからそのPodは自動的に削除されます。
|
||||
Serviceのセレクターにマッチする新しいPodが、Serviceのために自動的にEndpointSliceに追加されます。
|
||||
エンドポイントを確認し、IPアドレスが最初のステップで作成したPodと同じであることに注目してください:
|
||||
|
||||
```shell
|
||||
kubectl describe svc my-nginx
|
||||
```
|
||||
```
|
||||
Name: my-nginx
|
||||
Namespace: default
|
||||
Labels: run=my-nginx
|
||||
Annotations: <none>
|
||||
Selector: run=my-nginx
|
||||
Type: ClusterIP
|
||||
IP Family Policy: SingleStack
|
||||
IP Families: IPv4
|
||||
IP: 10.0.162.149
|
||||
IPs: 10.0.162.149
|
||||
Port: <unset> 80/TCP
|
||||
TargetPort: 80/TCP
|
||||
Endpoints: 10.244.2.5:80,10.244.3.4:80
|
||||
Session Affinity: None
|
||||
Events: <none>
|
||||
```
|
||||
```shell
|
||||
kubectl get endpointslices -l kubernetes.io/service-name=my-nginx
|
||||
```
|
||||
```
|
||||
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
|
||||
my-nginx-7vzhx IPv4 80 10.244.2.5,10.244.3.4 21s
|
||||
```
|
||||
|
||||
今や、あなたのクラスター内のどのノードからもnginx Serviceに`<CLUSTER-IP>:<PORT>`でcurlを使用してアクセスできるはずです。Service IPは完全に仮想であり、物理的なケーブルで接続されるものではありません。どのように動作しているのか興味があれば、さらなる詳細について[サービスプロキシー](/ja/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies)を読んでください。
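
例えば、前述の出力のCLUSTER-IPを使う場合は次のようになります(IPアドレスは環境によって異なります):

```shell
# クラスター内の任意のノード上で実行する例
curl http://10.0.162.149:80
```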
|
||||
|
||||
## Serviceへのアクセス
|
||||
|
||||
KubernetesはServiceを探す2つの主要なモードとして、環境変数とDNSをサポートしています。
|
||||
前者はすぐに動かせるのに対し、後者は[CoreDNSクラスターアドオン](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns)が必要です。
|
||||
|
||||
{{< note >}}
|
||||
もしServiceの環境変数が望ましくないなら(想定しているプログラムの環境変数と競合する可能性がある、処理する変数が多すぎる、DNSだけ使いたい、など)、[pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)で`enableServiceLinks`のフラグを`false`にすることで、このモードを無効化できます。
|
||||
{{< /note >}}
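
参考までに、`enableServiceLinks`を無効にするPod定義の抜粋スケッチを示します(あくまで一例です):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  enableServiceLinks: false  # Serviceの環境変数を注入しない
  containers:
  - name: nginx
    image: nginx
```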
|
||||
|
||||
### 環境変数
|
||||
|
||||
PodをNode上で実行する時、kubeletはアクティブなServiceのそれぞれに環境変数のセットを追加します。
|
||||
これは順序問題を生みます。なぜそうなるかの理由を見るために、実行中のnginx Podの環境を調査してみましょう(Podの名前は環境によって異なります):
|
||||
|
||||
```shell
|
||||
kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
|
||||
```
|
||||
```
|
||||
KUBERNETES_SERVICE_HOST=10.0.0.1
|
||||
KUBERNETES_SERVICE_PORT=443
|
||||
KUBERNETES_SERVICE_PORT_HTTPS=443
|
||||
```
|
||||
|
||||
Serviceについて何も言及がないことに注意してください。
|
||||
これは、Serviceの前にレプリカを作成したからです。
|
||||
このようにすることでの不利益のもう1つは、スケジューラーが同一のマシンに両方のPodを置く可能性があることです(もしそのマシンが死ぬと全Serviceがダウンしてしまいます)。
|
||||
2つのPodを殺し、Deploymentがそれらを再作成するのを待つことで、これを正しいやり方にできます。
|
||||
今回は、レプリカの*前に*Serviceが存在します。
|
||||
これにより、正しい環境変数が得られるだけでなく、(全てのノードが等しいキャパシティを持っている場合)スケジューラーレベルでPodがノード間に分散されます:
|
||||
|
||||
```shell
|
||||
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
|
||||
|
||||
kubectl get pods -l run=my-nginx -o wide
|
||||
```
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE IP NODE
|
||||
my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd
|
||||
my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m
|
||||
```
|
||||
|
||||
Podが、いったん殺されて再作成された後、異なる名前を持ったことに気付いたでしょうか。
|
||||
|
||||
```shell
|
||||
kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
|
||||
```
|
||||
```
|
||||
KUBERNETES_SERVICE_PORT=443
|
||||
MY_NGINX_SERVICE_HOST=10.0.162.149
|
||||
KUBERNETES_SERVICE_HOST=10.0.0.1
|
||||
MY_NGINX_SERVICE_PORT=80
|
||||
KUBERNETES_SERVICE_PORT_HTTPS=443
|
||||
```
|
||||
|
||||
### DNS
|
||||
|
||||
Kubernetesは、DNS名をほかのServiceに自動的に割り当てる、DNSクラスターアドオンServiceを提供しています。
|
||||
クラスター上でそれを実行しているならば、確認できます:
|
||||
|
||||
```shell
|
||||
kubectl get services kube-dns --namespace=kube-system
|
||||
```
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8m
|
||||
```
|
||||
|
||||
本セクションの以降では、長寿命のIPアドレス(my-nginx)を持つServiceと、そのIPアドレスに名前を割り当てているDNSサーバーがあることを想定しています。
|
||||
ここではCoreDNSクラスターアドオン(アプリケーション名`kube-dns`)を使い、標準的な手法(例えば`gethostbyname()`)を使ってクラスター内の任意のPodからServiceと対話してみます。
|
||||
CoreDNSが動作していない時には、
|
||||
[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes)
|
||||
や[CoreDNSのインストール](/ja/docs/tasks/administer-cluster/coredns/#installing-coredns)を参照して有効化してください。
|
||||
テストするために、別のcurlアプリケーションを実行してみましょう:
|
||||
|
||||
```shell
|
||||
kubectl run curl --image=radial/busyboxplus:curl -i --tty
|
||||
```
|
||||
```
|
||||
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
|
||||
Hit enter for command prompt
|
||||
```
|
||||
|
||||
次にenterを押し、`nslookup my-nginx`を実行します:
|
||||
|
||||
```shell
|
||||
[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx
|
||||
Server: 10.0.0.10
|
||||
Address 1: 10.0.0.10
|
||||
|
||||
Name: my-nginx
|
||||
Address 1: 10.0.162.149
|
||||
```
|
||||
|
||||
## Serviceのセキュア化
|
||||
|
||||
これまではクラスター内からnginxサーバーだけにアクセスしてきました。
|
||||
Serviceをインターネットに公開する前に、通信経路がセキュアかどうかを確かめたいところです。
|
||||
そのためには次のようなものが必要です:
|
||||
|
||||
* https用の自己署名証明書(まだ本人証明を用意していない場合)
|
||||
* 証明書を使うよう設定されたnginxサーバー
|
||||
* 証明書をPodからアクセスできるようにする[Secret](/ja/docs/concepts/configuration/secret/)
|
||||
|
||||
これら全ては[nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/)から取得できます。
|
||||
go環境とmakeツールのインストールが必要です
|
||||
(もしこれらをインストールしたくないときには、後述の手動手順に従ってください)。簡潔には:
|
||||
|
||||
```shell
|
||||
make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt
|
||||
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
|
||||
```
|
||||
```
|
||||
secret/nginxsecret created
|
||||
```
|
||||
```shell
|
||||
kubectl get secrets
|
||||
```
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
nginxsecret kubernetes.io/tls 2 1m
|
||||
```
|
||||
configmapも同様:
|
||||
```shell
|
||||
kubectl create configmap nginxconfigmap --from-file=default.conf
|
||||
```
|
||||
```
|
||||
configmap/nginxconfigmap created
|
||||
```
|
||||
```shell
|
||||
kubectl get configmaps
|
||||
```
|
||||
```
|
||||
NAME DATA AGE
|
||||
nginxconfigmap 1 114s
|
||||
```
|
||||
以下に示すのは、makeを実行したときに問題が発生する場合(例えばWindowsなど)の手動手順です:
|
||||
|
||||
```shell
|
||||
# 公開鍵と秘密鍵のペアを作成する
|
||||
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
|
||||
# 鍵をbase64エンコーディングに変換する
|
||||
cat /d/tmp/nginx.crt | base64
|
||||
cat /d/tmp/nginx.key | base64
|
||||
```
|
||||
|
||||
以下のようなyamlファイルを作成するために、前のコマンドからの出力を使います。
|
||||
base64エンコードされた値は、全て1行で記述する必要があります。
|
||||
|
||||
```yaml
|
||||
apiVersion: "v1"
|
||||
kind: "Secret"
|
||||
metadata:
|
||||
name: "nginxsecret"
|
||||
namespace: "default"
|
||||
type: kubernetes.io/tls
|
||||
data:
|
||||
tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hVzU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRKYQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWmpNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5Ci82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBpNXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTRaNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKTUI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZEV3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyMDhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGSW0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0UrKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMApwRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4vbVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvTHFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdFZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
|
||||
tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVEZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFhWm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QOUxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZMMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcGNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YXBKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHWWVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdwR1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZytGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBacDRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWVtN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DSmRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZhRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtGcVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelRyMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQVB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9JeXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYnRnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppSXlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZNS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIwMllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMjZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUtpbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZIaHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNRVVV6bUlCRkwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdmxTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVdUTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1VbGc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K"
|
||||
```
|
||||
では、このファイルを使ってSecretを作成します:
|
||||
|
||||
```shell
|
||||
kubectl apply -f nginxsecrets.yaml
|
||||
kubectl get secrets
|
||||
```
|
||||
```
|
||||
NAME TYPE DATA AGE
|
||||
nginxsecret kubernetes.io/tls 2 1m
|
||||
```
|
||||
|
||||
Secretにある証明書を使ってhttpsサーバーを開始するために、nginxレプリカを変更します。また、Serviceは(80および443の)両方のポートを公開するようにします:
|
||||
|
||||
{{< codenew file="service/networking/nginx-secure-app.yaml" >}}
|
||||
|
||||
nginx-secure-appマニフェストの注目すべきポイント:
|
||||
|
||||
- DeploymentとServiceの指定の両方が同じファイルに含まれています。
|
||||
- [nginxサーバー](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/default.conf)は、ポート80でHTTPトラフィック、ポート443でHTTPSトラフィックをサービスし、nginx Serviceは両方のポートを公開します。
|
||||
- 各コンテナは、`/etc/nginx/ssl`にマウントされたボリューム経由で鍵にアクセスできます。
|
||||
これはnginxサーバーが開始する*前*にセットアップされます(該当部分の抜粋スケッチをこのリストの後に示します)。
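
該当部分のイメージとしては、おおよそ次のような抜粋になります(あくまでスケッチで、イメージ名などは実際のnginx-secure-app.yamlを参照してください):

```yaml
# SecretをボリュームとしてマウントするDeployment specの抜粋(一例)
spec:
  containers:
  - name: nginxhttps
    image: <nginx-https-image>   # 実際のイメージはnginx-secure-app.yamlを参照
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/nginx/ssl
  volumes:
  - name: secret-volume
    secret:
      secretName: nginxsecret
```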
|
||||
|
||||
```shell
|
||||
kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
|
||||
```
|
||||
|
||||
この時点で、任意のノードからnginxサーバーに到達できます。
|
||||
|
||||
```shell
|
||||
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
|
||||
POD_IP
|
||||
[map[ip:10.244.3.5]]
|
||||
```
|
||||
|
||||
```shell
|
||||
node $ curl -k https://10.244.3.5
|
||||
...
|
||||
<h1>Welcome to nginx!</h1>
|
||||
```
|
||||
|
||||
最後の手順でcurlに`-k`パラメーターを与えていることに注意してください。
|
||||
これは、証明書の生成時点ではnginxを実行中のPodについて何もわからないので、curlにCNameのミスマッチを無視するよう指示する必要があるからです。
|
||||
Serviceを作成することで、証明書で使われているCNameと、PodがServiceルックアップ時に使う実際のDNS名とがリンクされます。
|
||||
Podからこれをテストしてみましょう(単純化のため同じSecretが再利用されるので、ServiceにアクセスするのにPodが必要なのはnginx.crtだけです):
|
||||
|
||||
{{< codenew file="service/networking/curlpod.yaml" >}}
|
||||
|
||||
```shell
|
||||
kubectl apply -f ./curlpod.yaml
|
||||
kubectl get pods -l app=curlpod
|
||||
```
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
curl-deployment-1515033274-1410r 1/1 Running 0 1m
|
||||
```
|
||||
```shell
|
||||
kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt
|
||||
...
|
||||
<title>Welcome to nginx!</title>
|
||||
...
|
||||
```
|
||||
|
||||
## Serviceの公開
|
||||
|
||||
アプリケーションのいくつかの部分においては、Serviceを外部IPアドレスで公開したいと思うかもしれません。
|
||||
Kubernetesはこれに対して2つのやり方をサポートしています: NodePortとLoadBalancerです。
|
||||
前のセクションで作成したServiceではすでに`NodePort`を使っていたので、ノードにパブリックIPアドレスがあれば、nginx HTTPSレプリカもトラフィックをインターネットでサービスする準備がすでに整っています。
|
||||
|
||||
```shell
|
||||
kubectl get svc my-nginx -o yaml | grep nodePort -C 5
|
||||
uid: 07191fb3-f61a-11e5-8ae5-42010af00002
|
||||
spec:
|
||||
clusterIP: 10.0.162.149
|
||||
ports:
|
||||
- name: http
|
||||
nodePort: 31704
|
||||
port: 8080
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
- name: https
|
||||
nodePort: 32453
|
||||
port: 443
|
||||
protocol: TCP
|
||||
targetPort: 443
|
||||
selector:
|
||||
run: my-nginx
|
||||
```
|
||||
```shell
|
||||
kubectl get nodes -o yaml | grep ExternalIP -C 1
|
||||
- address: 104.197.41.11
|
||||
type: ExternalIP
|
||||
allocatable:
|
||||
--
|
||||
- address: 23.251.152.56
|
||||
type: ExternalIP
|
||||
allocatable:
|
||||
...
|
||||
|
||||
$ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
|
||||
...
|
||||
<h1>Welcome to nginx!</h1>
|
||||
```
|
||||
|
||||
では、クラウドロードバランサーを使うために、Serviceを再作成してみましょう。
|
||||
`my-nginx`の`Type`を`NodePort`から`LoadBalancer`に変更してください:
|
||||
|
||||
```shell
|
||||
kubectl edit svc my-nginx
|
||||
kubectl get svc my-nginx
|
||||
```
|
||||
```
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
my-nginx LoadBalancer 10.0.162.149 xx.xxx.xxx.xxx 8080:30163/TCP 21s
|
||||
```
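
なお、`kubectl edit`による対話的な編集の代わりに、例えば次のような`kubectl patch`コマンドでタイプを変更することもできます(一例です):

```shell
kubectl patch svc my-nginx -p '{"spec": {"type": "LoadBalancer"}}'
```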
|
||||
```
|
||||
curl https://<EXTERNAL-IP> -k
|
||||
...
|
||||
<title>Welcome to nginx!</title>
|
||||
```
|
||||
|
||||
`EXTERNAL-IP`列のIPアドレスが、パブリックインターネットで利用可能なものになっています。
|
||||
`CLUSTER-IP`はクラスター/プライベートクラウドネットワーク内でのみ利用可能なものです。
|
||||
|
||||
AWSにおいては、`LoadBalancer`タイプは、IPアドレスではなく(長い)ホスト名を使うELBを作成することに注意してください。
|
||||
これは標準の`kubectl get svc`の出力に合わせるには長すぎ、実際それを見るには`kubectl describe service my-nginx`を使う必要があります。
|
||||
これは以下のような見た目になります:
|
||||
|
||||
```shell
|
||||
kubectl describe service my-nginx
|
||||
...
|
||||
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* [Serviceを利用したクラスター内のアプリケーションへのアクセス](/ja/docs/tasks/access-application-cluster/service-access-application-cluster/)を学びます。
|
||||
* [Serviceを使用してフロントエンドをバックエンドに接続する](/ja/docs/tasks/access-application-cluster/connecting-frontend-backend/)方法を学びます。
|
||||
* [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)を学びます。
|
|
@ -0,0 +1,28 @@
|
|||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: job-pod-failure-policy-example
|
||||
spec:
|
||||
completions: 12
|
||||
parallelism: 3
|
||||
template:
|
||||
spec:
|
||||
restartPolicy: Never
|
||||
containers:
|
||||
- name: main
|
||||
image: docker.io/library/bash:5
|
||||
command: ["bash"] # example command simulating a bug which triggers the FailJob action
|
||||
args:
|
||||
- -c
|
||||
- echo "Hello world!" && sleep 5 && exit 42
|
||||
backoffLimit: 6
|
||||
podFailurePolicy:
|
||||
rules:
|
||||
- action: FailJob
|
||||
onExitCodes:
|
||||
containerName: main # optional
|
||||
operator: In # one of: In, NotIn
|
||||
values: [42]
|
||||
- action: Ignore # one of: Ignore, FailJob, Count
|
||||
onPodConditions:
|
||||
- type: DisruptionTarget # indicates Pod disruption
|
|
@ -1,113 +1,297 @@
|
|||
---
|
||||
title: Usando kubectl para criar uma implantação
|
||||
title: Usando kubectl para criar um Deployment
|
||||
weight: 10
|
||||
description: |-
|
||||
Aprenda sobre objetos Deployment do Kubernetes.
|
||||
Implante seu primeiro aplicativo no Kubernetes utilizando o kubectl.
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
<html lang="pt-BR">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Objetivos</h3>
|
||||
<ul>
|
||||
<li> Saiba mais sobre implantações de aplicativos. </li>
|
||||
<li> Implante seu primeiro aplicativo no Kubernetes com o kubectl. </li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="col-md-8">
|
||||
<h3>Objetivos</h3>
|
||||
<ul>
|
||||
<li> Saiba mais sobre implantações de aplicativos.</li>
|
||||
<li> Implante seu primeiro aplicativo no Kubernetes com o kubectl.
|
||||
</li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Implantações do Kubernetes</h3>
|
||||
<p>
|
||||
Assim que o seu cluster Kubernetes estiver em execução você pode implementar seu aplicativo em contêiners nele.
|
||||
Para fazer isso, você precisa criar uma configuração do tipo <b> Deployment </b> do Kubernetes. O Deployment define como criar e
|
||||
atualizar instâncias do seu aplicativo. Depois de criar um Deployment, o Master do Kubernetes
|
||||
agenda as instâncias do aplicativo incluídas nesse Deployment para ser executado em nós individuais do Cluster.
|
||||
</p>
|
||||
<div class="col-md-8">
|
||||
<h3>Deployments do Kubernetes</h3>
|
||||
<p>
|
||||
Assim que o seu <a href="/pt-br/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/">cluster
|
||||
Kubernetes estiver em execução</a> você pode implantar seu aplicativo
|
||||
contêinerizado nele.
|
||||
Para fazer isso, você precisa criar um objeto <b>Deployment</b> do Kubernetes.
|
||||
O Deployment instrui o Kubernetes sobre como criar e atualizar instâncias
|
||||
do seu aplicativo. Depois de criar um Deployment, a camada de gerenciamento
|
||||
do Kubernetes aloca as instâncias do aplicativo incluídas nesse Deployment
|
||||
para serem executadas em nós individuais do cluster.
|
||||
</p>
|
||||
|
||||
<p> Depois que as instâncias do aplicativo são criadas, um Controlador do Kubernetes Deployment monitora continuamente essas instâncias.
|
||||
Se o nó que hospeda uma instância ficar inativo ou for excluído, o controlador de Deployment substituirá a instância por uma instância em outro nó no cluster.
|
||||
<b> Isso fornece um mecanismo de autocorreção para lidar com falhas ou manutenção da máquina. </b> </p>
|
||||
<p>
|
||||
Depois que as instâncias do aplicativo são criadas, o controlador de
|
||||
Deployment do Kubernetes monitora continuamente essas instâncias.
|
||||
Se o nó em que uma instância está alocada ficar indisponível ou for
|
||||
excluído, o controlador de Deployment substituirá a instância por uma
|
||||
instância em outro nó no cluster.
|
||||
<b>Isso fornece um mecanismo de autocorreção para lidar com falhas ou
|
||||
manutenção da máquina.</b>
|
||||
</p>
|
||||
|
||||
<p>Em um mundo de pré-orquestração, os scripts de instalação costumavam ser usados para iniciar aplicativos, mas não permitiam a recuperação de falha da máquina.
|
||||
Ao criar suas instâncias de aplicativo e mantê-las em execução entre nós, as implantações do Kubernetes fornecem uma abordagem fundamentalmente diferente para o gerenciamento de aplicativos. </p>
|
||||
<p>
|
||||
Em um mundo de pré-orquestração, os scripts de instalação eram utilizados
|
||||
para iniciar aplicativos, mas não permitiam a recuperação de falha da máquina.
|
||||
Ao criar suas instâncias de aplicativo e mantê-las em execução entre nós,
|
||||
os Deployments do Kubernetes fornecem uma abordagem fundamentalmente
|
||||
diferente para o gerenciamento de aplicativos.
|
||||
</p>
|
||||
</div>
|
||||
|
||||
</div>
|
||||
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>Resumo:</h3>
|
||||
<ul>
|
||||
<li>Deployments</li>
|
||||
<li>Kubectl</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
O tipo Deployment é responsável por criar e atualizar instâncias de seu aplicativo
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>Resumo:</h3>
|
||||
<ul>
|
||||
<li>Deployments</li>
|
||||
<li>Kubectl</li>
|
||||
</ul>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Implantar seu primeiro aplicativo no Kubernetes</h2>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
O objeto Deployment é responsável por criar e atualizar instâncias
|
||||
de seu aplicativo
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Implante seu primeiro aplicativo no
|
||||
Kubernetes</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img
|
||||
src="/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg">
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
Você pode criar e gerenciar uma implantação usando a interface
|
||||
de linha de comando do Kubernetes, o <b>kubectl</b>.
|
||||
O kubectl usa a API do Kubernetes para interagir com o cluster.
|
||||
Neste módulo, você aprenderá os comandos Kubectl mais comuns
|
||||
necessários para criar Deployments que executam seus aplicativos
|
||||
em um cluster Kubernetes.
|
||||
</p>
|
||||
|
||||
<p>
|
||||
Quando você cria um Deployment, você precisa especificar a imagem
|
||||
de contêiner para seu aplicativo e o número de réplicas que deseja executar.
|
||||
Você pode alterar essas informações posteriormente, atualizando seu Deployment;
|
||||
os Módulos <a href="/pt-br/docs/tutorials/kubernetes-basics/scale/scale-intro/">5</a>
|
||||
e <a href="/docs/tutorials/kubernetes-basics/update/update-intro/">6</a>
|
||||
do bootcamp explicam como você pode dimensionar e atualizar seus Deployments.
|
||||
</p>
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i> Os aplicativos precisam ser empacotados em um dos formatos de
|
||||
contêiner suportados para serem implantados no Kubernetes </i></p>
|
||||
</div>
|
||||
<br>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
Para criar seu primeiro Deployment, você usará o aplicativo hello-node
|
||||
empacotado em um contêiner que utiliza o NGINX para repetir todas as requisições.
|
||||
(Se você ainda não tentou criar o aplicativo hello-node e implantá-lo
|
||||
usando um contêiner, você pode fazer isso primeiro seguindo as
|
||||
instruções do <a href="/pt/docs/tutorials/hello-minikube/">tutorial Olá, Minikube!</a>).
|
||||
</p>
|
||||
|
||||
<p>Você pode criar e gerenciar uma implantação usando a interface de linha de comando do Kubernetes, <b> Kubectl </b>.
|
||||
O Kubectl usa a API Kubernetes para interagir com o cluster. Neste módulo, você aprenderá os comandos Kubectl mais comuns necessários para criar implantações que executam seus aplicativos em um cluster Kubernetes.</p>
|
||||
<p>
|
||||
Você precisará ter o kubectl instalado também. Se você precisar de
|
||||
instruções de instalação, veja
|
||||
<a href="/pt-br/docs/tasks/tools/#kubectl">instalando ferramentas</a>.
|
||||
</p>
|
||||
|
||||
<p>Quando você cria um Deployment, você precisa especificar a imagem do contêiner para seu aplicativo e o número de réplicas que deseja executar.
|
||||
Você pode alterar essas informações posteriormente, atualizando sua implantação; Módulos<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/">5</a> e <a href="/docs/tutorials/kubernetes-basics/update/update-intro/">6</a> do bootcamp explica como você pode dimensionar e atualizar suas implantações.</p>
|
||||
<p>
|
||||
Agora que você já sabe o que são Deployments, vamos implantar
|
||||
nosso primeiro aplicativo!
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i> Os aplicativos precisam ser empacotados em um dos formatos de contêiner suportados para serem implantados no Kubernetes</i></p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h3>Noções básicas do kubectl</h3>
|
||||
<p>O formato comum de um comando kubectl é: <code>kubectl <i>ação recurso</i></code></p>
|
||||
<p>
|
||||
Isto executa a <em>ação</em> especificada (como por exemplo <i>create</i>,
|
||||
<i>describe</i> ou <i>delete</i>) no <em>recurso</em>
|
||||
especificado (por exemplo, <tt>node</tt> ou <tt>deployment</tt>).
|
||||
Você pode utilizar <code>--help</code> após o subcomando
|
||||
para obter informações adicionais sobre parâmetros permitidos
|
||||
(por exemplo, <code>kubectl get nodes --help</code>).
|
||||
</p>
|
||||
<p>Verifique que o kubectl está configurado para comunicar-se com seu
|
||||
cluster rodando o comando <b><code>kubectl version</code></b>.</p>
|
||||
<p>Certifique-se de que o kubectl está instalado e que você consegue ver
|
||||
as versões do cliente e do servidor.</p>
|
||||
<p>Para visualizar os nós do cluster, execute o comando <b><code>kubectl
|
||||
get nodes</code></b>.</p>
|
||||
<p>
|
||||
Você verá os nós disponíveis. Posteriormente, o Kubernetes irá escolher
|
||||
onde implantar nossa aplicação baseado nos recursos disponíveis nos nós.
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<h3>Implante uma aplicação</h3>
|
||||
<p>
|
||||
Vamos implantar nossa primeira aplicação no Kubernetes utilizando
|
||||
o comando <code>kubectl create deployment</code>. Precisaremos
|
||||
fornecer o nome do Deployment e a localização da imagem de contêiner
|
||||
do aplicativo (inclua a URL completa do repositório para images
|
||||
hospedadas fora do Docker Hub).
|
||||
</p>
|
||||
|
||||
<p><b><code>kubectl create deployment kubernetes-bootcamp
|
||||
--image=gcr.io/google-samples/kubernetes-bootcamp:v1</code></b></p>
|
||||
|
||||
<p>
|
||||
Excelente! Você acabou de implantar sua primeira aplicação através
|
||||
da criação de um Deployment. Este comando efetuou algumas ações
|
||||
para você:
|
||||
</p>
|
||||
<ul>
|
||||
<li>
|
||||
buscou um nó utilizável onde a instância da aplicação pode ser executada
|
||||
(temos somente um nó disponível)
|
||||
</li>
|
||||
<li>alocou a aplicação para rodar naquele nó</li>
|
||||
<li>
|
||||
configurou o cluster para realocar a instância em um novo nó sempre
|
||||
que necessário
|
||||
</li>
|
||||
</ul>
|
||||
<p>
|
||||
Para listar seus Deployments existentes, utilize o comando
|
||||
<code>kubectl get deployments</code>:
|
||||
</p>
|
||||
<p><b><code>kubectl get deployments</code></b></p>
|
||||
<p>
|
||||
Podemos observar que há um Deployment rodando uma única instância
|
||||
da sua aplicação. A instância está executando dentro de um contêiner
|
||||
no seu nó.
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<h3>Visualize o aplicativo</h3>
|
||||
<p>
|
||||
Pods que rodam dentro do Kubernetes estão rodando em uma rede privada e isolada.
|
||||
Por padrão, eles são visíveis a outros Pods e Services dentro do mesmo
|
||||
cluster do Kubernetes, mas não de fora daquela rede.
|
||||
Ao usarmos <code>kubectl</code>, estamos interagindo através de um
|
||||
<i>endpoint</i> de API para comunicar-nos com a nossa aplicação.
|
||||
</p>
|
||||
<p>
|
||||
Iremos discutir outras opções de como expor sua aplicação fora do
|
||||
cluster do Kubernetes no Módulo 4.
|
||||
</p>
|
||||
<p>
|
||||
O comando <code>kubectl</code> pode criar um proxy que encaminha
|
||||
comunicações para dentro da rede privada que engloba todo o cluster. O
|
||||
proxy pode ser encerrado utilizando a sequência control-C e não irá
|
||||
imprimir nenhum tipo de saída enquanto estiver rodando.
|
||||
</p>
|
||||
<p>
|
||||
<strong>Você precisa abrir uma segunda janela do terminal para executar
|
||||
o proxy.</strong>
|
||||
</p>
|
||||
<p>
|
||||
<b><code>kubectl proxy</code></b>
|
||||
</p>
|
||||
<p>
|
||||
Agora temos uma conexão entre nosso <i>host</i> (o terminal online) e o
|
||||
cluster do Kubernetes. O proxy habilita acesso direto à API através
|
||||
destes terminais.
|
||||
</p>
|
||||
<p>
|
||||
Você pode ver todas as APIs hospedadas através do <i>endpoint</i> do
|
||||
proxy. Por exemplo, podemos obter a versão diretamente através da API
|
||||
utilizando o comando <code>curl</code>:
|
||||
</p>
|
||||
<p>
|
||||
<b><code>curl http://localhost:8001/version</code></b>
|
||||
</p>
|
||||
<div class="alert alert-info note callout" role="alert">
|
||||
<strong>Nota:</strong> se a porta 8001 não estiver acessível, certifique-se
|
||||
de que o comando <code>kubectl proxy</code> que você iniciou acima está
|
||||
rodando no segundo terminal.
|
||||
</div>
|
||||
<p>
|
||||
O servidor da API irá automaticamente criar um <i>endpoint</i> para cada
|
||||
Pod, baseado no nome do Pod, que também estará acessível através do proxy.
|
||||
</p>
|
||||
<p>
|
||||
Primeiro, precisaremos obter o nome do Pod. Iremos armazená-lo na
|
||||
variável de ambiente <tt>POD_NAME</tt>:
|
||||
</p>
|
||||
<p><b><code>export POD_NAME=$(kubectl get pods -o go-template --template
|
||||
'{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')</code></b><br/>
|
||||
<b><code>echo Nome do Pod: $POD_NAME</code></b></p>
|
||||
<p>Você pode acessar o Pod através da API encaminhada, rodando o comando:</p>
|
||||
<p><b><code>curl
|
||||
http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/</code></b>
|
||||
</p>
|
||||
<p>
|
||||
Para que o novo Deployment esteja acessível sem utilizar o proxy, um
|
||||
Service é requerido. Isto será explicado nos próximos módulos.
|
||||
</p>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker.(Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do <a href="/pt/docs/tutorials/hello-minikube/">tutorial Olá, Minikube!</a>).
|
||||
<p>
|
||||
|
||||
<p>Agora que você sabe o que são implantações (Deployment), vamos para o tutorial online e implantar nosso primeiro aplicativo!</p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/pt/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" role="button">Iniciar tutorial interativo<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
</div>
|
||||
<div class="row">
|
||||
<p>
|
||||
Assim que você finalizar este tutorial, vá para <a
|
||||
href="/pt-br/docs/tutorials/kubernetes-basics/explore/explore-intro/"
|
||||
title="Visualizando Pods e Nós">Visualizando Pods e Nós</a>.
|
||||
</p>
|
||||
</div>
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
|
|
|
@ -5,4 +5,4 @@
|
|||
или вы можете использовать одну из песочниц Kubernetes:
|
||||
|
||||
* [Killercoda](https://killercoda.com/playgrounds/scenario/kubernetes)
|
||||
* [Play with Kubernetes](http://labs.play-with-k8s.com/)
|
||||
* [Play with Kubernetes](https://labs.play-with-k8s.com/)
|
||||
|
|
|
@ -16,4 +16,4 @@ Một công cụ giúp bạn sử dụng các OCI container runtime với Kubern
|
|||
CRI-O là một thực thi của {{< glossary_tooltip term_id="cri" >}} để cho phép sử dụng các {{< glossary_tooltip text="container" term_id="container" >}} runtime cái mà tương thích với Open Container Initiative (OCI)
|
||||
[runtime spec](http://www.github.com/opencontainers/runtime-spec).
|
||||
|
||||
Triển khai CRI-O cho phép Kuberentes sử dụng bất kì OCI-compliant runtime như container runtime để chạy {{< glossary_tooltip text="Pods" term_id="pod" >}}, và để lấy CRI container image từ các remote registry.
|
||||
Triển khai CRI-O cho phép Kubernetes sử dụng bất kì OCI-compliant runtime như container runtime để chạy {{< glossary_tooltip text="Pods" term_id="pod" >}}, và để lấy CRI container image từ các remote registry.
|
||||
|
|
|
@ -33,10 +33,10 @@ Bạn cần phải sử dụng phiên bản kubectl sai lệch không quá một
|
|||
|
||||
Để tải về phiên bản cụ thể, hãy thay thế phần `$(curl -LS https://dl.k8s.io/release/stable.txt)` trong câu lệnh với một phiên bản cụ thể.
|
||||
|
||||
Ví dụ như, để tải về phiên bản {{< param "fullversion" >}} trên Linux, hãy nhập như sau:
|
||||
Ví dụ như, để tải về phiên bản {{< skew currentPatchVersion >}} trên Linux, hãy nhập như sau:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/linux/amd64/kubectl
|
||||
```
|
||||
|
||||
2. Tạo kubectl binary thực thi.
|
||||
|
@ -108,10 +108,10 @@ Nếu bạn đang sử dụng Ubuntu hoặc distro Linux khác hỗ trợ trình
|
|||
|
||||
Để tải về phiên bản cụ thể, hãy thay thế phần `$(curl -LS https://dl.k8s.io/release/stable.txt)` trong câu lệnh với phiên bản cụ thể.
|
||||
|
||||
Ví dụ, để tải về phiên bản {{< param "fullversion" >}} trên macOS, gõ:
|
||||
Ví dụ, để tải về phiên bản {{< skew currentPatchVersion >}} trên macOS, gõ:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/amd64/kubectl
|
||||
```
|
||||
|
||||
2. Tạo kubectl binary thực thi.
|
||||
|
@ -173,12 +173,12 @@ Nếu bạn đang trên macOS và sử dụng trình quản lý gói [Macports](
|
|||
|
||||
### Cài đặt kubectl binary với curl trên Windows
|
||||
|
||||
1. Tải về phiên bản mới nhất {{< param "fullversion" >}} từ [đường dẫn này](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
|
||||
1. Tải về phiên bản mới nhất {{< skew currentPatchVersion >}} từ [đường dẫn này](https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe).
|
||||
|
||||
Hoặc nếu bạn đã cài đặt `curl`, hãy sử dụng câu lệnh sau:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
Để tìm ra phiên bản ổn định mới nhất, hãy xem [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
|
||||
|
|
|
@ -0,0 +1,425 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 的取证容器检查点"
|
||||
date: 2022-12-05
|
||||
slug: forensic-container-checkpointing-alpha
|
||||
---
|
||||
|
||||
**作者:** [Adrian Reber](https://github.com/adrianreber) (Red Hat)
|
||||
<!--
|
||||
**Authors:** Adrian Reber (Red Hat)
|
||||
-->
|
||||
|
||||
<!--
|
||||
Forensic container checkpointing is based on [Checkpoint/Restore In
|
||||
Userspace](https://criu.org/) (CRIU) and allows the creation of stateful copies
|
||||
of a running container without the container knowing that it is being
|
||||
checkpointed. The copy of the container can be analyzed and restored in a
|
||||
sandbox environment multiple times without the original container being aware
|
||||
of it. Forensic container checkpointing was introduced as an alpha feature in
|
||||
Kubernetes v1.25.
|
||||
-->
|
||||
取证容器检查点(Forensic container checkpointing)基于 [CRIU][criu](Checkpoint/Restore In Userspace ,用户空间的检查点/恢复),
|
||||
并允许创建正在运行的容器的有状态副本,而容器不知道它正在被检查。容器的副本,可以在沙箱环境中被多次分析和恢复,而原始容器并不知道。
|
||||
取证容器检查点是作为一个 alpha 特性在 Kubernetes v1.25 中引入的。
|
||||
|
||||
<!--
|
||||
## How does it work?
|
||||
-->
|
||||
## 工作原理
|
||||
|
||||
<!--
|
||||
With the help of CRIU it is possible to checkpoint and restore containers.
|
||||
CRIU is integrated in runc, crun, CRI-O and containerd and forensic container
|
||||
checkpointing as implemented in Kubernetes uses these existing CRIU
|
||||
integrations.
|
||||
-->
|
||||
在 CRIU 的帮助下,检查(checkpoint)和恢复容器成为可能。CRIU 集成在 runc、crun、CRI-O 和 containerd 中,
|
||||
而在 Kubernetes 中实现的取证容器检查点使用这些现有的 CRIU 集成。
|
||||
|
||||
<!--
|
||||
## Why is it important?
|
||||
-->
|
||||
## 这一特性为何重要?
|
||||
|
||||
<!--
|
||||
With the help of CRIU and the corresponding integrations it is possible to get
|
||||
all information and state about a running container on disk for later forensic
|
||||
analysis. Forensic analysis might be important to inspect a suspicious
|
||||
container without stopping or influencing it. If the container is really under
|
||||
attack, the attacker might detect attempts to inspect the container. Taking a
|
||||
checkpoint and analysing the container in a sandboxed environment offers the
|
||||
possibility to inspect the container without the original container and maybe
|
||||
attacker being aware of the inspection.
|
||||
-->
|
||||
借助 CRIU 和相应的集成,可以获得磁盘上正在运行的容器的所有信息和状态,供以后进行取证分析。
|
||||
取证分析对于在不阻止或影响可疑容器的情况下,对其进行检查可能很重要。如果容器确实受到攻击,攻击者可能会检测到检查容器的企图。
|
||||
获取检查点并在沙箱环境中分析容器,提供了在原始容器和可能的攻击者不知道检查的情况下检查容器的可能性。
|
||||
|
||||
<!--
|
||||
In addition to the forensic container checkpointing use case, it is also
|
||||
possible to migrate a container from one node to another node without loosing
|
||||
the internal state. Especially for stateful containers with long initialization
|
||||
times restoring from a checkpoint might save time after a reboot or enable much
|
||||
faster startup times.
|
||||
-->
|
||||
除了取证容器检查点用例,还可以在不丢失内部状态的情况下,将容器从一个节点迁移到另一个节点。
|
||||
特别是对于初始化时间长的有状态容器,从检查点恢复,可能会节省重新启动后的时间,或者实现更快的启动时间。
|
||||
|
||||
<!--
|
||||
## How do I use container checkpointing?
|
||||
-->
|
||||
## 如何使用容器检查点?
|
||||
|
||||
<!--
|
||||
The feature is behind a [feature gate][container-checkpoint-feature-gate], so
|
||||
make sure to enable the `ContainerCheckpoint` gate before you can use the new
|
||||
feature.
|
||||
-->
|
||||
该功能在[特性门控][container-checkpoint-feature-gate]后面,因此在使用这个新功能之前,
|
||||
请确保启用了 `ContainerCheckpoint` 特性门控。
|
||||
|
||||
<!--
|
||||
The runtime must also support container checkpointing:
|
||||
|
||||
* containerd: support is currently under discussion. See containerd
|
||||
pull request [#6965][containerd-checkpoint-restore-pr] for more details.
|
||||
|
||||
* CRI-O: v1.25 has support for forensic container checkpointing.
|
||||
-->
|
||||
|
||||
运行时还必须支持容器检查点:
|
||||
|
||||
* containerd:相关支持目前正在讨论中。有关更多详细信息,请参见 [containerd pull request #6965][containerd-checkpoint-restore-pr]。
|
||||
* CRI-O:v1.25 支持取证容器检查点。
|
||||
|
||||
<!--
|
||||
### Usage example with CRI-O
|
||||
-->
|
||||
## CRI-O 的使用示例
|
||||
|
||||
<!--
|
||||
To use forensic container checkpointing in combination with CRI-O, the runtime
|
||||
needs to be started with the command-line option `--enable-criu-support=true`.
|
||||
For Kubernetes, you need to run your cluster with the `ContainerCheckpoint`
|
||||
feature gate enabled. As the checkpointing functionality is provided by CRIU it
|
||||
is also necessary to install CRIU. Usually runc or crun depend on CRIU and
|
||||
therefore it is installed automatically.
|
||||
-->
|
||||
要将取证容器检查点与 CRI-O 结合使用,需要使用命令行选项 `--enable-criu-support=true` 启动运行时。
|
||||
Kubernetes 方面,你需要在启用 `ContainerCheckpoint` 特性门控的情况下运行你的集群。
|
||||
由于检查点功能是由 CRIU 提供的,因此也有必要安装 CRIU。
|
||||
通常 runc 或 crun 依赖于 CRIU,因此它是自动安装的。
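
下面是一个假设性的示例,展示这两个前提条件大致如何满足(具体的启动方式取决于你的发行版和部署工具):

```shell
# 以支持 CRIU 的方式启动 CRI-O(示例)
crio --enable-criu-support=true

# 在 kubelet 上启用 ContainerCheckpoint 特性门控(示例)
kubelet --feature-gates=ContainerCheckpoint=true
```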
|
||||
|
||||
<!--
|
||||
It is also important to mention that at the time of writing the checkpointing functionality is
|
||||
to be considered as an alpha level feature in CRI-O and Kubernetes and the
|
||||
security implications are still under consideration.
|
||||
-->
|
||||
值得一提的是,在编写本文时,检查点功能被认为是 CRI-O 和 Kubernetes 中的一个 alpha 级特性,其安全影响仍在评估之中。
|
||||
|
||||
<!--
|
||||
Once containers and pods are running it is possible to create a checkpoint.
|
||||
[Checkpointing](https://kubernetes.io/docs/reference/node/kubelet-checkpoint-api/)
|
||||
is currently only exposed on the **kubelet** level. To checkpoint a container,
|
||||
you can run `curl` on the node where that container is running, and trigger a
|
||||
checkpoint:
|
||||
|
||||
```shell
|
||||
curl -X POST "https://localhost:10250/checkpoint/namespace/podId/container"
|
||||
```
|
||||
-->
|
||||
一旦容器和 pod 开始运行,就可以创建一个检查点。[检查点][kubelet-checkpoint-api]目前只在 **kubelet** 级别暴露。
|
||||
要为一个容器创建检查点,可以在运行该容器的节点上运行 `curl`,并触发一个检查点:
|
||||
|
||||
```shell
|
||||
curl -X POST "https://localhost:10250/checkpoint/namespace/podId/container"
|
||||
```
|
||||
|
||||
<!--
|
||||
For a container named *counter* in a pod named *counters* in a namespace named
|
||||
*default* the *kubelet* API endpoint is reachable at:
|
||||
|
||||
```shell
|
||||
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter"
|
||||
```
|
||||
-->
|
||||
对于 **default** 命名空间中 **counters** Pod 中名为 **counter** 的容器,可通过以下方式访问 **kubelet** API 端点:
|
||||
|
||||
```shell
|
||||
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter"
|
||||
```
|
||||
|
||||
<!--
|
||||
For completeness the following `curl` command-line options are necessary to
|
||||
have `curl` accept the *kubelet*'s self signed certificate and authorize the
|
||||
use of the *kubelet* `checkpoint` API:
|
||||
|
||||
```shell
|
||||
--insecure --cert /var/run/kubernetes/client-admin.crt --key /var/run/kubernetes/client-admin.key
|
||||
```
|
||||
-->
|
||||
为了完整起见,以下 `curl` 命令行选项对于让 `curl` 接受 **kubelet** 的自签名证书并授权使用
|
||||
**kubelet** 检查点 API 是必要的:
|
||||
|
||||
```shell
|
||||
--insecure --cert /var/run/kubernetes/client-admin.crt --key /var/run/kubernetes/client-admin.key
|
||||
```
|
||||
|
||||
<!--
|
||||
Triggering this **kubelet** API will request the creation of a checkpoint from
|
||||
CRI-O. CRI-O requests a checkpoint from your low-level runtime (for example,
|
||||
`runc`). Seeing that request, `runc` invokes the `criu` tool
|
||||
to do the actual checkpointing.
|
||||
|
||||
Once the checkpointing has finished the checkpoint should be available at
|
||||
`/var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar`
|
||||
|
||||
You could then use that tar archive to restore the container somewhere else.
|
||||
-->
|
||||
触发这个 **kubelet** API 将从 CRI-O 请求创建一个检查点,CRI-O 从你的低级运行时(例如 `runc`)请求一个检查点。
|
||||
看到这个请求,`runc` 调用 `criu` 工具来执行实际的检查点操作。
|
||||
|
||||
检查点操作完成后,检查点应该位于
|
||||
`/var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar`
|
||||
|
||||
然后,你可以使用 tar 归档文件在其他地方恢复容器。
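
例如,你可以在节点上先列出生成的检查点归档,并查看其内容(文件名取决于 Pod、容器名称和时间戳,以下仅供参考):

```shell
ls /var/lib/kubelet/checkpoints/
tar tf /var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar
```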
|
||||
|
||||
<!--
|
||||
### Restore a checkpointed container outside of Kubernetes (with CRI-O) {#restore-checkpointed-container-standalone}
|
||||
-->
|
||||
### 在 Kubernetes 外恢复检查点容器(使用 CRI-O)
|
||||
|
||||
<!--
|
||||
With the checkpoint tar archive it is possible to restore the container outside
|
||||
of Kubernetes in a sandboxed instance of CRI-O. For better user experience
|
||||
during restore, I recommend that you use the latest version of CRI-O from the
|
||||
*main* CRI-O GitHub branch. If you're using CRI-O v1.25, you'll need to
|
||||
manually create certain directories Kubernetes would create before starting the
|
||||
container.
|
||||
-->
|
||||
使用检查点 tar 归档文件,可以在 Kubernetes 之外的 CRI-O 沙箱实例中恢复容器。
|
||||
为了在恢复过程中获得更好的用户体验,建议你使用 CRI-O GitHub 的 **main** 分支中最新版本的 CRI-O。
|
||||
如果你使用的是 CRI-O v1.25,你需要在启动容器之前手动创建 Kubernetes 会创建的某些目录。
|
||||
<!--
|
||||
The first step to restore a container outside of Kubernetes is to create a pod sandbox
|
||||
using *crictl*:
|
||||
|
||||
```shell
|
||||
crictl runp pod-config.json
|
||||
```
|
||||
-->
|
||||
在 Kubernetes 外恢复容器的第一步是使用 **crictl** 创建一个 pod 沙箱:
|
||||
|
||||
```shell
|
||||
crictl runp pod-config.json
|
||||
```
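
这里的 `pod-config.json` 是传给 `crictl` 的 Pod 沙箱配置。下面是一个假设性的最小示例(字段取值仅供参考):

```json
{
  "metadata": {
    "name": "counters",
    "namespace": "default",
    "uid": "restored-counters-uid",
    "attempt": 1
  },
  "linux": {}
}
```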
|
||||
|
||||
<!--
|
||||
Then you can restore the previously checkpointed container into the newly created pod sandbox:
|
||||
|
||||
```shell
|
||||
crictl create <POD_ID> container-config.json pod-config.json
|
||||
```
|
||||
-->
|
||||
然后,你可以将之前的检查点容器恢复到新创建的 pod 沙箱中:
|
||||
|
||||
```shell
|
||||
crictl create <POD_ID> container-config.json pod-config.json
|
||||
```
|
||||
|
||||
<!--
|
||||
Instead of specifying a container image in a registry in `container-config.json`
|
||||
you need to specify the path to the checkpoint archive that you created earlier:
|
||||
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"name": "counter"
|
||||
},
|
||||
"image":{
|
||||
"image": "/var/lib/kubelet/checkpoints/<checkpoint-archive>.tar"
|
||||
}
|
||||
}
|
||||
```
|
||||
-->
|
||||
在 `container-config.json` 中,你不需要指定注册表中的容器镜像,而是需要指定你之前创建的检查点归档文件的路径:
|
||||
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"name": "counter"
|
||||
},
|
||||
"image":{
|
||||
"image": "/var/lib/kubelet/checkpoints/<checkpoint-archive>.tar"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
Next, run `crictl start <CONTAINER_ID>` to start that container, and then a
|
||||
copy of the previously checkpointed container should be running.
|
||||
-->
|
||||
接下来,运行 `crictl start <CONTAINER_ID>` 来启动该容器,然后应该会运行先前检查点容器的副本。
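
例如(`<CONTAINER_ID>` 是上一步 `crictl create` 返回的 ID):

```shell
crictl start <CONTAINER_ID>
crictl ps
```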
|
||||
|
||||
<!--
|
||||
### Restore a checkpointed container within of Kubernetes {#restore-checkpointed-container-k8s}
|
||||
-->
|
||||
### 在 Kubernetes 中恢复检查点容器
|
||||
|
||||
<!--
|
||||
To restore the previously checkpointed container directly in Kubernetes it is
|
||||
necessary to convert the checkpoint archive into an image that can be pushed to
|
||||
a registry.
|
||||
-->
|
||||
要在 Kubernetes 中直接恢复之前的检查点容器,需要将检查点归档文件转换成可以推送到注册中心的镜像。
|
||||
|
||||
<!--
|
||||
One possible way to convert the local checkpoint archive consists of the
|
||||
following steps with the help of [buildah](https://buildah.io/):
|
||||
|
||||
```shell
|
||||
newcontainer=$(buildah from scratch)
|
||||
buildah add $newcontainer /var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar /
|
||||
buildah config --annotation=io.kubernetes.cri-o.annotations.checkpoint.name=<container-name> $newcontainer
|
||||
buildah commit $newcontainer checkpoint-image:latest
|
||||
buildah rm $newcontainer
|
||||
```
|
||||
-->
|
||||
转换本地检查点存档的一种方法包括在 [buildah][buildah] 的帮助下执行以下步骤:
|
||||
|
||||
```shell
|
||||
newcontainer=$(buildah from scratch)
|
||||
buildah add $newcontainer /var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar /
|
||||
buildah config --annotation=io.kubernetes.cri-o.annotations.checkpoint.name=<container-name> $newcontainer
|
||||
buildah commit $newcontainer checkpoint-image:latest
|
||||
buildah rm $newcontainer
|
||||
```
|
||||
|
||||
<!--
|
||||
The resulting image is not standardized and only works in combination with
|
||||
CRI-O. Please consider this image format as pre-alpha. There are ongoing
|
||||
[discussions][image-spec-discussion] to standardize the format of checkpoint
|
||||
images like this. Important to remember is that this not yet standardized image
|
||||
format only works if CRI-O has been started with `--enable-criu-support=true`.
|
||||
The security implications of starting CRI-O with CRIU support are not yet clear
|
||||
and therefore the functionality as well as the image format should be used with
|
||||
care.
|
||||
-->
|
||||
生成的镜像未经标准化,只能与 CRI-O 结合使用。请将此镜像格式视为 pre-alpha 格式。
|
||||
社区正在[讨论][image-spec-discussion]如何标准化这样的检查点镜像格式。
|
||||
重要的是要记住,这种尚未标准化的镜像格式只有在 CRI-O 已经用`--enable-criu-support=true` 启动时才有效。
|
||||
在 CRIU 支持下启动 CRI-O 的安全影响尚不清楚,因此应谨慎使用功能和镜像格式。
|
||||
|
||||
<!--
|
||||
Now, you'll need to push that image to a container image registry. For example:
|
||||
|
||||
```shell
|
||||
buildah push localhost/checkpoint-image:latest container-image-registry.example/user/checkpoint-image:latest
|
||||
```
|
||||
-->
|
||||
现在,你需要将该镜像推送到容器镜像注册中心。例如:
|
||||
|
||||
```shell
|
||||
buildah push localhost/checkpoint-image:latest container-image-registry.example/user/checkpoint-image:latest
|
||||
```
|
||||
|
||||
<!--
|
||||
To restore this checkpoint image (`container-image-registry.example/user/checkpoint-image:latest`), the
|
||||
image needs to be listed in the specification for a Pod. Here's an example
|
||||
manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
namePrefix: example-
|
||||
spec:
|
||||
containers:
|
||||
- name: <container-name>
|
||||
image: container-image-registry.example/user/checkpoint-image:latest
|
||||
nodeName: <destination-node>
|
||||
```
|
||||
-->
|
||||
要恢复此检查点镜像(container-image-registry.example/user/checkpoint-image:latest),
|
||||
该镜像需要在 Pod 的规约中列出。下面是一个清单示例:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
namePrefix: example-
|
||||
spec:
|
||||
containers:
|
||||
- name: <container-name>
|
||||
image: container-image-registry.example/user/checkpoint-image:latest
|
||||
nodeName: <destination-node>
|
||||
```
|
||||
|
||||
<!--
|
||||
Kubernetes schedules the new Pod onto a node. The kubelet on that node
|
||||
instructs the container runtime (CRI-O in this example) to create and start a
|
||||
container based on an image specified as `registry/user/checkpoint-image:latest`.
|
||||
CRI-O detects that `registry/user/checkpoint-image:latest`
|
||||
is a reference to checkpoint data rather than a container image. Then,
|
||||
instead of the usual steps to create and start a container,
|
||||
CRI-O fetches the checkpoint data and restores the container from that
|
||||
specified checkpoint.
|
||||
-->
|
||||
Kubernetes 将新的 Pod 调度到一个节点上。该节点上的 kubelet 指示容器运行时(本例中为 CRI-O)
|
||||
基于指定为 `registry/user/checkpoint-image:latest` 的镜像创建并启动容器。
|
||||
CRI-O 检测到 `registry/user/checkpoint-image:latest` 是对检查点数据的引用,而不是容器镜像。
|
||||
然后,与创建和启动容器的通常步骤不同,CRI-O 获取检查点数据,并从指定的检查点恢复容器。
|
||||
|
||||
<!--
|
||||
The application in that Pod would continue running as if the checkpoint had not been taken;
|
||||
within the container, the application looks and behaves like any other container that had been
|
||||
started normally and not restored from a checkpoint.
|
||||
-->
|
||||
该 Pod 中的应用程序将继续运行,就像检查点未被获取一样;在该容器中,
|
||||
应用程序的外观和行为,与正常启动且未从检查点恢复的任何其他容器相似。
|
||||
|
||||
<!--
|
||||
With these steps, it is possible to replace a Pod running on one node
|
||||
with a new equivalent Pod that is running on a different node,
|
||||
and without losing the state of the containers in that Pod.
|
||||
-->
|
||||
通过这些步骤,可以用在不同节点上运行的新的等效 Pod,替换在一个节点上运行的 Pod,而不会丢失该 Pod中容器的状态。
|
||||
|
||||
<!--
|
||||
## How do I get involved?
|
||||
-->
|
||||
## 如何参与?
|
||||
|
||||
<!--
|
||||
You can reach SIG Node by several means:
|
||||
|
||||
* Slack: [#sig-node](https://kubernetes.slack.com/messages/sig-node)
|
||||
* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-node)
|
||||
-->
|
||||
你可以通过多种方式参与 SIG Node:
|
||||
|
||||
* Slack: [#sig-node][sig-node]
|
||||
* [Mailing list][Mailing list]
|
||||
|
||||
<!--
|
||||
## Further reading
|
||||
-->
|
||||
## 延伸阅读
|
||||
|
||||
<!--
|
||||
Please see the follow-up article [Forensic container
|
||||
analysis][forensic-container-analysis] for details on how a container checkpoint
|
||||
can be analyzed.
|
||||
-->
|
||||
有关如何分析容器检查点的详细信息,请参阅后续文章[取证容器分析][forensic-container-analysis]。
|
||||
|
||||
[forensic-container-analysis]: /zh-cn/blog/2023/03/10/forensic-container-analysis/
|
||||
[criu]: https://criu.org/
|
||||
[containerd-checkpoint-restore-pr]: https://github.com/containerd/containerd/pull/6965
|
||||
[container-checkpoint-feature-gate]: https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
|
||||
[image-spec-discussion]: <https://github.com/opencontainers/image-spec/issues/962>
|
||||
[kubelet-checkpoint-api]: <https://kubernetes.io/docs/reference/node/kubelet-checkpoint-api/>
|
||||
[buildah]: <https://buildah.io/>
|
||||
[sig-node]: <https://kubernetes.slack.com/messages/sig-node>
|
||||
[Mailing list]: <https://groups.google.com/forum/#!forum/kubernetes-sig-node>
|
|
@ -0,0 +1,558 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 的容器检查点分析"
|
||||
date: 2023-03-10
|
||||
slug: forensic-container-analysis
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Forensic container analysis"
|
||||
date: 2023-03-10
|
||||
slug: forensic-container-analysis
|
||||
-->
|
||||
|
||||
**作者:** [Adrian Reber](https://github.com/adrianreber) (Red Hat)
|
||||
<!--
|
||||
**Authors:** Adrian Reber (Red Hat)
|
||||
-->
|
||||
|
||||
**译者**:[Paco Xu](https://github.com/pacoxu) (Daocloud)
|
||||
|
||||
<!--
|
||||
In my previous article, [Forensic container checkpointing in
|
||||
Kubernetes][forensic-blog], I introduced checkpointing in Kubernetes
|
||||
and how it has to be setup and how it can be used. The name of the
|
||||
feature is Forensic container checkpointing, but I did not go into
|
||||
any details how to do the actual analysis of the checkpoint created by
|
||||
Kubernetes. In this article I want to provide details how the
|
||||
checkpoint can be analyzed.
|
||||
-->
|
||||
在我之前的文章 [Kubernetes 中的取证容器检查点][forensic-blog] 中,我介绍了检查点以及如何创建和使用它。
|
||||
该特性的名称是取证容器检查点,但我没有详细介绍如何对 Kubernetes 创建的检查点进行实际分析。
|
||||
在本文中,我想提供如何分析检查点的详细信息。
|
||||
|
||||
<!--
|
||||
Checkpointing is still an alpha feature in Kubernetes and this article
|
||||
wants to provide a preview how the feature might work in the future.
|
||||
-->
|
||||
检查点仍然是 Kubernetes 中的一个 alpha 功能,本文希望提供该功能未来如何工作的预览。
|
||||
|
||||
<!--
|
||||
## Preparation
|
||||
-->
|
||||
## 准备
|
||||
|
||||
<!--
|
||||
Details about how to configure Kubernetes and the underlying CRI implementation
|
||||
to enable checkpointing support can be found in my [Forensic container
|
||||
checkpointing in Kubernetes][forensic-blog] article.
|
||||
|
||||
As an example I prepared a container image (`quay.io/adrianreber/counter:blog`)
|
||||
which I want to checkpoint and then analyze in this article. This container allows
|
||||
me to create files in the container and also store information in memory which
|
||||
I later want to find in the checkpoint.
|
||||
-->
|
||||
有关如何配置 Kubernetes 和底层 CRI 实现以启用检查点支持的详细信息,请参阅 [Kubernetes 中的取证容器检查点][forensic-blog]文章。
|
||||
|
||||
作为示例,我准备了一个容器镜像(`quay.io/adrianreber/counter:blog`),我想对其进行检查点,然后在本文中进行分析。
|
||||
这个容器允许我在容器中创建文件,并将信息存储在内存中,稍后我想在检查点中找到这些信息。
|
||||
|
||||
<!--
|
||||
To run that container I need a pod, and for this example I am using the following Pod manifest:
|
||||
-->
|
||||
要运行该容器,我需要一个 pod,在本示例中,我使用以下 Pod 清单:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: counters
|
||||
spec:
|
||||
containers:
|
||||
- name: counter
|
||||
image: quay.io/adrianreber/counter:blog
|
||||
```
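
假设上述清单被保存为 `counters.yaml`(文件名只是示例),可以这样创建该 Pod:

```shell
kubectl apply -f counters.yaml
```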
|
||||
|
||||
<!--
|
||||
This results in a container called `counter` running in a pod called `counters`.
|
||||
|
||||
Once the container is running I am performing following actions with that
|
||||
container:
|
||||
-->
|
||||
这会导致一个名为 `counter` 的容器在名为 `counters` 的 Pod 中运行。
|
||||
|
||||
容器运行后,我将对该容器执行以下操作:
|
||||
|
||||
```console
|
||||
$ kubectl get pod counters --template '{{.status.podIP}}'
|
||||
10.88.0.25
|
||||
$ curl 10.88.0.25:8088/create?test-file
|
||||
$ curl 10.88.0.25:8088/secret?RANDOM_1432_KEY
|
||||
$ curl 10.88.0.25:8088
|
||||
```
|
||||
|
||||
<!--
|
||||
The first access creates a file called `test-file` with the content `test-file`
|
||||
in the container and the second access stores my secret information
|
||||
(`RANDOM_1432_KEY`) somewhere in the container's memory. The last access just
|
||||
adds an additional line to the internal log file.
|
||||
|
||||
The last step before I can analyze the checkpoint it to tell Kubernetes to create
|
||||
the checkpoint. As described in the previous article this requires access to the
|
||||
*kubelet* only `checkpoint` API endpoint.
|
||||
|
||||
For a container named *counter* in a pod named *counters* in a namespace named
|
||||
*default* the *kubelet* API endpoint is reachable at:
|
||||
-->
|
||||
1. 第一次访问在容器中创建一个名为 `test-file` 的文件,其内容为 `test-file`;
|
||||
2. 第二次访问将我的秘密信息(`RANDOM_1432_KEY`)存储在容器内存中的某处;
|
||||
3. 最后一次访问在内部日志文件中添加了一行。
|
||||
|
||||
在分析检查点之前的最后一步是告诉 Kubernetes 创建检查点。如上一篇文章所述,这需要访问仅由 **kubelet** 提供的 `checkpoint` API 端点。
|
||||
|
||||
对于 **default** 命名空间中 **counters** Pod 中名为 **counter** 的容器,
|
||||
可通过以下方式访问 **kubelet** API 端点:
|
||||
|
||||
<!--
|
||||
```shell
|
||||
# run this on the node where that Pod is executing
|
||||
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter"
|
||||
```
|
||||
-->
|
||||
```shell
|
||||
# 在运行 Pod 的节点上运行这条命令
|
||||
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter"
|
||||
```
|
||||
|
||||
<!--
|
||||
For completeness the following `curl` command-line options are necessary to
|
||||
have `curl` accept the *kubelet*'s self signed certificate and authorize the
|
||||
use of the *kubelet* `checkpoint` API:
|
||||
-->
|
||||
为了完整起见,还需要以下 `curl` 命令行选项,以便让 `curl` 接受 **kubelet** 的自签名证书,并获得使用 **kubelet** `checkpoint` API 的授权:
|
||||
|
||||
```shell
|
||||
--insecure --cert /var/run/kubernetes/client-admin.crt --key /var/run/kubernetes/client-admin.key
|
||||
```
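
把这些选项与前面的命令组合起来,一个完整的调用大致如下(证书路径取决于集群的具体配置,此处沿用上面的示例路径):

```shell
curl -X POST "https://localhost:10250/checkpoint/default/counters/counter" \
  --insecure \
  --cert /var/run/kubernetes/client-admin.crt \
  --key /var/run/kubernetes/client-admin.key
```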
|
||||
|
||||
<!--
|
||||
Once the checkpointing has finished the checkpoint should be available at
|
||||
`/var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar`
|
||||
-->
|
||||
检查点操作完成后,检查点应该位于 `/var/lib/kubelet/checkpoints/checkpoint-<pod-name>_<namespace-name>-<container-name>-<timestamp>.tar`。
|
||||
|
||||
<!--
|
||||
In the following steps of this article I will use the name `checkpoint.tar`
|
||||
when analyzing the checkpoint archive.
|
||||
-->
|
||||
在本文的以下步骤中,我将在分析检查点归档时使用名称 `checkpoint.tar`。
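
下面是一个假设性的示例,演示如何把检查点归档复制为 `checkpoint.tar`(文件名中的 `<timestamp>` 以节点上实际生成的文件为准):

```shell
# 在创建检查点的节点上执行
cp /var/lib/kubelet/checkpoints/checkpoint-counters_default-counter-<timestamp>.tar checkpoint.tar
```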
|
||||
|
||||
<!--
|
||||
## Checkpoint archive analysis using `checkpointctl`
|
||||
-->
|
||||
## 使用 `checkpointctl` 进行检查点归档分析
|
||||
|
||||
<!--
|
||||
To get some initial information about the checkpointed container I am using the
|
||||
tool [checkpointctl][checkpointctl] like this:
|
||||
-->
|
||||
我使用工具 [checkpointctl][checkpointctl] 获取有关检查点容器的一些初始信息,如下所示:
|
||||
|
||||
```console
|
||||
$ checkpointctl show checkpoint.tar --print-stats
|
||||
+-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
|
||||
| CONTAINER | IMAGE | ID | RUNTIME | CREATED | ENGINE | IP | CHKPT SIZE | ROOT FS DIFF SIZE |
|
||||
+-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
|
||||
| counter | quay.io/adrianreber/counter:blog | 059a219a22e5 | runc | 2023-03-02T06:06:49 | CRI-O | 10.88.0.23 | 8.6 MiB | 3.0 KiB |
|
||||
+-----------+----------------------------------+--------------+---------+---------------------+--------+------------+------------+-------------------+
|
||||
CRIU dump statistics
|
||||
+---------------+-------------+--------------+---------------+---------------+---------------+
|
||||
| FREEZING TIME | FROZEN TIME | MEMDUMP TIME | MEMWRITE TIME | PAGES SCANNED | PAGES WRITTEN |
|
||||
+---------------+-------------+--------------+---------------+---------------+---------------+
|
||||
| 100809 us | 119627 us | 11602 us | 7379 us | 7800 | 2198 |
|
||||
+---------------+-------------+--------------+---------------+---------------+---------------+
|
||||
```
|
||||
|
||||
<!--
|
||||
This gives me already some information about the checkpoint in that checkpoint
|
||||
archive. I can see the name of the container, information about the container
|
||||
runtime and container engine. It also lists the size of the checkpoint (`CHKPT
|
||||
SIZE`). This is mainly the size of the memory pages included in the checkpoint,
|
||||
but there is also information about the size of all changed files in the
|
||||
container (`ROOT FS DIFF SIZE`).
|
||||
-->
|
||||
这展示了有关该检查点归档中的检查点的一些信息。我们可以看到容器的名称、有关容器运行时和容器引擎的信息。
|
||||
它还列出了检查点的大小(`CHKPT SIZE`)。
|
||||
这主要是检查点中包含的内存页的大小,同时也有有关容器中所有更改文件的大小的信息(`ROOT FS DIFF SIZE`)。
|
||||
|
||||
<!--
|
||||
The additional parameter `--print-stats` decodes information in the checkpoint
|
||||
archive and displays them in the second table (*CRIU dump statistics*). This
|
||||
information is collected during checkpoint creation and gives an overview how much
|
||||
time CRIU needed to checkpoint the processes in the container and how many
|
||||
memory pages were analyzed and written during checkpoint creation.
|
||||
-->
|
||||
使用附加参数 `--print-stats` 可以解码检查点归档中的信息并将其显示在第二个表中(**CRIU 转储统计信息**)。
|
||||
此信息是在检查点创建期间收集的,并概述了 CRIU 对容器中的进程生成检查点所需的时间以及在检查点创建期间分析和写入了多少内存页。
|
||||
|
||||
<!--
|
||||
## Digging deeper
|
||||
-->
|
||||
## 深入挖掘
|
||||
<!--
|
||||
With the help of `checkpointctl` I am able to get some high level information
|
||||
about the checkpoint archive. To be able to analyze the checkpoint archive
|
||||
further I have to extract it. The checkpoint archive is a *tar* archive and can
|
||||
be extracted with the help of `tar xf checkpoint.tar`.
|
||||
|
||||
Extracting the checkpoint archive will result in following files and directories:
|
||||
|
||||
* `bind.mounts` - this file contains information about bind mounts and is needed
|
||||
during restore to mount all external files and directories at the right location
|
||||
* `checkpoint/` - this directory contains the actual checkpoint as created by
|
||||
CRIU
|
||||
* `config.dump` and `spec.dump` - these files contain metadata about the container
|
||||
which is needed during restore
|
||||
* `dump.log` - this file contains the debug output of CRIU created during
|
||||
checkpointing
|
||||
* `stats-dump` - this file contains the data which is used by `checkpointctl`
|
||||
to display dump statistics (`--print-stats`)
|
||||
* `rootfs-diff.tar` - this file contains all changed files on the container's
|
||||
file-system
|
||||
-->
|
||||
借助 `checkpointctl`,我可以获得有关检查点归档的一些概要信息。为了能够进一步分析检查点归档,我必须将其解压。
|
||||
检查点归档是 **tar** 归档文件,可以借助 `tar xf checkpoint.tar` 进行解压。
|
||||
|
||||
解压检查点归档后,会得到以下文件和目录:
|
||||
|
||||
* `bind.mounts` - 该文件包含有关绑定挂载的信息,在恢复期间需要用它将所有外部文件和目录挂载到正确的位置。
|
||||
* `checkpoint/` - 该目录包含 CRIU 创建的实际检查点数据。
|
||||
* `config.dump` 和 `spec.dump` - 这些文件包含恢复期间所需的有关容器的元数据。
|
||||
* `dump.log` - 该文件包含在检查点期间创建的 CRIU 的调试输出。
|
||||
* `stats-dump` - 此文件包含 `checkpointctl` 用于通过 `--print-stats` 显示转储统计信息的数据。
|
||||
* `rootfs-diff.tar` - 该文件包含容器文件系统上所有已更改的文件。
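
作为参考,下面是一个假设性的解压示例;在单独的目录中解包可以避免弄乱当前目录:

```shell
mkdir checkpoint-analysis
cd checkpoint-analysis
tar xf ../checkpoint.tar
ls
# 预期会看到 bind.mounts、checkpoint/、config.dump、dump.log、rootfs-diff.tar、spec.dump、stats-dump
```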
|
||||
|
||||
<!--
|
||||
### File-system changes - `rootfs-diff.tar`
|
||||
-->
|
||||
### 文件系统的变更 - `rootfs-diff.tar`
|
||||
|
||||
<!--
|
||||
The first step to analyze the container's checkpoint further is to look at
|
||||
the files that have changed in my container. This can be done by looking at the
|
||||
file `rootfs-diff.tar`:
|
||||
-->
|
||||
进一步分析容器检查点的第一步是查看容器中发生变化的文件。这可以通过查看 `rootfs-diff.tar` 文件来完成:
|
||||
|
||||
```console
|
||||
$ tar xvf rootfs-diff.tar
|
||||
home/counter/logfile
|
||||
home/counter/test-file
|
||||
```
|
||||
|
||||
<!--
|
||||
Now the files that changed in the container can be studied:
|
||||
-->
|
||||
现在你可以检查容器内已更改的文件。
|
||||
|
||||
```console
|
||||
$ cat home/counter/logfile
|
||||
10.88.0.1 - - [02/Mar/2023 06:07:29] "GET /create?test-file HTTP/1.1" 200 -
|
||||
10.88.0.1 - - [02/Mar/2023 06:07:40] "GET /secret?RANDOM_1432_KEY HTTP/1.1" 200 -
|
||||
10.88.0.1 - - [02/Mar/2023 06:07:43] "GET / HTTP/1.1" 200 -
|
||||
$ cat home/counter/test-file
|
||||
test-file
|
||||
```
|
||||
|
||||
<!--
|
||||
Compared to the container image (`quay.io/adrianreber/counter:blog`) this
|
||||
container is based on, I can see that the file `logfile` contains information
|
||||
about all access to the service the container provides and the file `test-file`
|
||||
was created just as expected.
|
||||
|
||||
With the help of `rootfs-diff.tar` it is possible to inspect all files that
|
||||
were created or changed compared to the base image of the container.
|
||||
-->
|
||||
与该容器所基于的容器镜像(`quay.io/adrianreber/counter:blog`)相比,
|
||||
可以看到文件 `logfile` 包含对容器所提供服务的所有访问记录,而文件 `test-file` 也如预期那样被创建了。
|
||||
|
||||
在 `rootfs-diff.tar` 的帮助下,可以检查相对于容器基础镜像而言被创建或修改的所有文件。
|
||||
|
||||
<!--
|
||||
### Analyzing the checkpointed processes - `checkpoint/`
|
||||
-->
|
||||
### 分析检查点进程 - `checkpoint/`
|
||||
|
||||
<!--
|
||||
The directory `checkpoint/` contains data created by CRIU while checkpointing
|
||||
the processes in the container. The content in the directory `checkpoint/`
|
||||
consists of different [image files][image-files] which can be analyzed with the
|
||||
help of the tool [CRIT][crit] which is distributed as part of CRIU.
|
||||
|
||||
First lets get an overview of the processes inside of the container:
|
||||
-->
|
||||
目录 `checkpoint/` 包含 CRIU 在容器内对进程进行检查点时创建的数据。
|
||||
目录 `checkpoint/` 的内容由各种[镜像文件][image-files] 组成,可以使用作为 CRIU 一部分分发的 [CRIT][crit] 工具进行分析。
|
||||
|
||||
首先,我们来了解一下容器内部进程的概况:
|
||||
|
||||
```console
|
||||
$ crit show checkpoint/pstree.img | jq .entries[].pid
|
||||
1
|
||||
7
|
||||
8
|
||||
```
|
||||
|
||||
<!--
|
||||
This output means that I have three processes inside of the container's PID
|
||||
namespace with the PIDs: 1, 7, 8
|
||||
|
||||
This is only the view from the inside of the container's PID namespace. During
|
||||
restore exactly these PIDs will be recreated. From the outside of the
|
||||
container's PID namespace the PIDs will change after restore.
|
||||
|
||||
The next step is to get some additional information about these three processes:
|
||||
-->
|
||||
此输出意味着容器的 PID 命名空间内有 3 个进程(PID 为 1、7 和 8)。
|
||||
|
||||
这只是从容器 PID 命名空间内部看到的视图。在恢复期间,会重新创建与此完全相同的 PID。而从容器 PID 命名空间外部来看,恢复后这些 PID 会发生变化。
|
||||
|
||||
下一步是获取有关这三个进程的更多信息。
|
||||
|
||||
```console
|
||||
$ crit show checkpoint/core-1.img | jq .entries[0].tc.comm
|
||||
"bash"
|
||||
$ crit show checkpoint/core-7.img | jq .entries[0].tc.comm
|
||||
"counter.py"
|
||||
$ crit show checkpoint/core-8.img | jq .entries[0].tc.comm
|
||||
"tee"
|
||||
```
|
||||
|
||||
<!--
|
||||
This means the three processes in my container are `bash`, `counter.py` (a Python
|
||||
interpreter) and `tee`. For details about the parent child relations of these processes there
|
||||
is more data to be analyzed in `checkpoint/pstree.img`.
|
||||
|
||||
Let's compare the so far collected information to the still running container:
|
||||
-->
|
||||
这意味着容器内的三个进程是 `bash`、`counter.py`(Python 解释器)和 `tee`。
|
||||
`checkpoint/pstree.img` 中还有更多数据可供分析,可以从中获取这些进程之间父子关系的详细信息。
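
例如,可以用类似下面的方式直接查看父子进程关系(这里的 `jq` 过滤器只是一个示意,字段名以你所用 CRIU 版本的实际输出为准):

```console
$ crit show checkpoint/pstree.img | jq '.entries[] | {pid: .pid, ppid: .ppid}'
```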
|
||||
|
||||
让我们将目前为止收集到的信息与仍在运行的容器进行比较。
|
||||
|
||||
```console
|
||||
$ crictl inspect --output go-template --template "{{(index .info.pid)}}" 059a219a22e56
|
||||
722520
|
||||
$ ps auxf | grep -A 2 722520
|
||||
fedora 722520 \_ bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile
|
||||
fedora 722541 \_ /usr/bin/python3 /home/counter/counter.py
|
||||
fedora 722542 \_ /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /home/counter/logfile
|
||||
$ cat /proc/722520/comm
|
||||
bash
|
||||
$ cat /proc/722541/comm
|
||||
counter.py
|
||||
$ cat /proc/722542/comm
|
||||
tee
|
||||
```
|
||||
|
||||
<!--
|
||||
In this output I am first retrieving the PID of the first process in the
|
||||
container and then I am looking for that PID and child processes on the system
|
||||
where the container is running. I am seeing three processes and the first one is
|
||||
"bash" which is PID 1 inside of the containers PID namespace. Then I am looking
|
||||
at `/proc/<PID>/comm` and I can find the exact same value
|
||||
as in the checkpoint image.
|
||||
|
||||
Important to remember is that the checkpoint will contain the view from within the
|
||||
container's PID namespace because that information is important to restore the
|
||||
processes.
|
||||
|
||||
One last example of what `crit` can tell us about the container is the information
|
||||
about the UTS namespace:
|
||||
-->
|
||||
在此输出中,我们首先获取容器中第一个进程的 PID,然后在运行该容器的系统上查找该 PID 及其子进程。
|
||||
你应该看到三个进程,第一个进程是 `bash`,容器 PID 命名空间中的 PID 为 1。
|
||||
然后查看 `/proc/<PID>/comm`,可以找到与检查点镜像完全相同的值。
|
||||
|
||||
需要记住的重点是,检查点包含的是从容器 PID 命名空间内部看到的视图,因为这些信息对于恢复进程非常重要。
|
||||
|
||||
关于 `crit` 能告诉我们的容器信息,最后一个例子是 UTS 命名空间的信息:
|
||||
|
||||
```console
|
||||
$ crit show checkpoint/utsns-12.img
|
||||
{
|
||||
"magic": "UTSNS",
|
||||
"entries": [
|
||||
{
|
||||
"nodename": "counters",
|
||||
"domainname": "(none)"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
This tells me that the hostname inside of the UTS namespace is `counters`.
|
||||
|
||||
For every resource CRIU collected during checkpointing the `checkpoint/`
|
||||
directory contains corresponding image files which can be analyzed with the help
|
||||
of `crit`.
|
||||
-->
|
||||
这里输出表示 UTS 命名空间中的主机名是 `counters`。
|
||||
|
||||
对于 CRIU 在检查点创建期间收集的每种资源,`checkpoint/` 目录中都包含相应的镜像文件,可以使用 `crit` 来分析这些镜像文件。
|
||||
|
||||
<!--
|
||||
#### Looking at the memory pages
|
||||
-->
|
||||
#### 查看内存页面
|
||||
|
||||
<!--
|
||||
In addition to the information from CRIU that can be decoded with the help
|
||||
of CRIT, there are also files containing the raw memory pages written by
|
||||
CRIU to disk:
|
||||
-->
|
||||
除了可以借助 CRIT 解码的 CRIU 信息之外,还有包含 CRIU 写入磁盘的原始内存页的文件:
|
||||
|
||||
```console
|
||||
$ ls checkpoint/pages-*
|
||||
checkpoint/pages-1.img checkpoint/pages-2.img checkpoint/pages-3.img
|
||||
```
|
||||
|
||||
<!--
|
||||
When I initially used the container I stored a random key (`RANDOM_1432_KEY`)
|
||||
somewhere in the memory. Let see if I can find it:
|
||||
-->
|
||||
当我最初使用该容器时,我在内存中的某个位置存储了一个随机密钥(`RANDOM_1432_KEY`)。让我看看是否能找到它:
|
||||
|
||||
```console
|
||||
$ grep -ao RANDOM_1432_KEY checkpoint/pages-*
|
||||
checkpoint/pages-2.img:RANDOM_1432_KEY
|
||||
```
|
||||
|
||||
<!--
|
||||
And indeed, there is my data. This way I can easily look at the content
|
||||
of all memory pages of the processes in the container, but it is also
|
||||
important to remember that anyone that can access the checkpoint
|
||||
archive has access to all information that was stored in the memory of the
|
||||
container's processes.
|
||||
-->
|
||||
确实有我的数据。通过这种方式,我可以轻松查看容器中进程的所有内存页面的内容,
|
||||
但需要注意的是,能够访问检查点归档的任何人,都可以获取曾存储在容器进程内存中的所有信息。
|
||||
|
||||
<!--
|
||||
#### Using gdb for further analysis
|
||||
-->
|
||||
#### 使用 gdb 进行进一步分析
|
||||
|
||||
<!--
|
||||
Another possibility to look at the checkpoint images is `gdb`. The CRIU repository
|
||||
contains the script [coredump][criu-coredump] which can convert a checkpoint
|
||||
into a coredump file:
|
||||
-->
|
||||
查看检查点镜像的另一种方法是 `gdb`。CRIU 存储库包含脚本 [coredump][criu-coredump],它可以将检查点转换为 coredump 文件:
|
||||
|
||||
```console
|
||||
$ /home/criu/coredump/coredump-python3
|
||||
$ ls -al core*
|
||||
core.1 core.7 core.8
|
||||
```
|
||||
|
||||
<!--
|
||||
Running the `coredump-python3` script will convert the checkpoint images into
|
||||
one coredump file for each process in the container. Using `gdb` I can also look
|
||||
at the details of the processes:
|
||||
-->
|
||||
运行 `coredump-python3` 脚本会将检查点镜像转换为 coredump 文件,容器中的每个进程对应一个文件。使用 `gdb` 还可以查看这些进程的详细信息:
|
||||
|
||||
```console
|
||||
$ echo info registers | gdb --core checkpoint/core.1 -q
|
||||
|
||||
[New LWP 1]
|
||||
|
||||
Core was generated by `bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile'.
|
||||
|
||||
#0 0x00007fefba110198 in ?? ()
|
||||
(gdb)
|
||||
rax 0x3d 61
|
||||
rbx 0x8 8
|
||||
rcx 0x7fefba11019a 140667595587994
|
||||
rdx 0x0 0
|
||||
rsi 0x7fffed9c1110 140737179816208
|
||||
rdi 0xffffffff 4294967295
|
||||
rbp 0x1 0x1
|
||||
rsp 0x7fffed9c10e8 0x7fffed9c10e8
|
||||
r8 0x1 1
|
||||
r9 0x0 0
|
||||
r10 0x0 0
|
||||
r11 0x246 582
|
||||
r12 0x0 0
|
||||
r13 0x7fffed9c1170 140737179816304
|
||||
r14 0x0 0
|
||||
r15 0x0 0
|
||||
rip 0x7fefba110198 0x7fefba110198
|
||||
eflags 0x246 [ PF ZF IF ]
|
||||
cs 0x33 51
|
||||
ss 0x2b 43
|
||||
ds 0x0 0
|
||||
es 0x0 0
|
||||
fs 0x0 0
|
||||
gs 0x0 0
|
||||
```
|
||||
|
||||
<!--
In this example I can see the value of all registers as they were during
|
||||
checkpointing and I can also see the complete command-line of my container's PID
|
||||
1 process: `bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile`
-->
|
||||
|
||||
在这个例子中,我可以看到检查点创建时所有寄存器的值,还可以看到容器的 PID 1 进程的完整命令行:
|
||||
`bash -c /home/counter/counter.py 2>&1 | tee /home/counter/logfile`。
|
||||
|
||||
<!--
|
||||
## Summary
|
||||
-->
|
||||
## 总结
|
||||
|
||||
<!--
|
||||
With the help of container checkpointing, it is possible to create a
|
||||
checkpoint of a running container without stopping the container and without the
|
||||
container knowing that it was checkpointed. The result of checkpointing a
|
||||
container in Kubernetes is a checkpoint archive; using different tools like
|
||||
`checkpointctl`, `tar`, `crit` and `gdb` the checkpoint can be analyzed. Even
|
||||
with simple tools like `grep` it is possible to find information in the
|
||||
checkpoint archive.
|
||||
|
||||
The different examples I have shown in this article how to analyze a checkpoint
|
||||
are just the starting point. Depending on your requirements it is possible to
|
||||
look at certain things in much more detail, but this article should give you an
|
||||
introduction how to start the analysis of your checkpoint.
|
||||
-->
|
||||
借助容器检查点,可以在不停止容器且在容器不知情的情况下,为正在运行的容器创建检查点。
|
||||
在 Kubernetes 中对容器创建一个检查点的结果是检查点存档文件;
|
||||
使用不同的工具,如 `checkpointctl`、`tar`、`crit` 和 `gdb`,可以分析检查点。
|
||||
即使使用像 `grep` 这样的简单工具,也可以在检查点存档中找到信息。
|
||||
|
||||
我在本文中展示的这些分析检查点的示例只是一个起点。
|
||||
根据你的需求,你可以对其中的某些内容做更深入的分析,而本文旨在介绍如何开始分析你的检查点。
|
||||
|
||||
<!--
|
||||
## How do I get involved?
|
||||
-->
|
||||
## 如何参与?
|
||||
|
||||
<!--
|
||||
You can reach SIG Node by several means:
|
||||
-->
|
||||
你可以通过多种方式联系到 SIG Node。
|
||||
|
||||
* Slack: [#sig-node][slack-sig-node]
|
||||
* Slack: [#sig-security][slack-sig-security]
|
||||
* [邮件列表][sig-node-ml]
|
||||
|
||||
[forensic-blog]: https://kubernetes.io/zh-cn/blog/2022/12/05/forensic-container-checkpointing-alpha/
|
||||
[checkpointctl]: https://github.com/checkpoint-restore/checkpointctl
|
||||
[image-files]: https://criu.org/Images
|
||||
[crit]: https://criu.org/CRIT
|
||||
[slack-sig-node]: https://kubernetes.slack.com/messages/sig-node
|
||||
[slack-sig-security]: https://kubernetes.slack.com/messages/sig-security
|
||||
[sig-node-ml]: https://groups.google.com/forum/#!forum/kubernetes-sig-node
|
||||
[criu-coredump]: https://github.com/checkpoint-restore/criu/tree/criu-dev/coredump
|
|
@ -0,0 +1,308 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.27:为 NodePort Service 分配端口时避免冲突"
|
||||
date: 2023-05-11
|
||||
slug: nodeport-dynamic-and-static-allocation
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Kubernetes 1.27: Avoid Collisions Assigning Ports to NodePort Services"
|
||||
date: 2023-05-11
|
||||
slug: nodeport-dynamic-and-static-allocation
|
||||
-->
|
||||
|
||||
<!--
|
||||
**Author:** Xu Zhenglun (Alibaba)
|
||||
-->
|
||||
**作者:** Xu Zhenglun (Alibaba)
|
||||
|
||||
**译者:** [Michael Yao](https://github.com/windsonsea) (DaoCloud)
|
||||
|
||||
<!--
|
||||
In Kubernetes, a Service can be used to provide a unified traffic endpoint for
|
||||
applications running on a set of Pods. Clients can use the virtual IP address (or _VIP_) provided
|
||||
by the Service for access, and Kubernetes provides load balancing for traffic accessing
|
||||
different back-end Pods, but a ClusterIP type of Service is limited to providing access to
|
||||
nodes within the cluster, while traffic from outside the cluster cannot be routed.
|
||||
One way to solve this problem is to use a `type: NodePort` Service, which sets up a mapping
|
||||
to a specific port of all nodes in the cluster, thus redirecting traffic from the
|
||||
outside to the inside of the cluster.
|
||||
-->
|
||||
在 Kubernetes 中,对于以一组 Pod 运行的应用,Service 可以为其提供统一的流量端点。
|
||||
客户端可以使用 Service 提供的虚拟 IP 地址(或 **VIP**)进行访问,
|
||||
Kubernetes 为访问不同的后端 Pod 的流量提供负载均衡能力,
|
||||
但 ClusterIP 类型的 Service 仅限于供集群内的节点来访问,
|
||||
而来自集群外的流量无法被路由。解决这个难题的一种方式是使用 `type: NodePort` Service,
|
||||
这种服务会在集群所有节点上为特定端口建立映射关系,从而将来自集群外的流量重定向到集群内。
|
||||
|
||||
<!--
|
||||
## How Kubernetes allocates node ports to Services?
|
||||
|
||||
When a `type: NodePort` Service is created, its corresponding port(s) are allocated in one
|
||||
of two ways:
|
||||
|
||||
- **Dynamic** : If the Service type is `NodePort` and you do not set a `nodePort`
|
||||
value explicitly in the `spec` for that Service, the Kubernetes control plane will
|
||||
automatically allocate an unused port to it at creation time.
|
||||
|
||||
- **Static** : In addition to the dynamic auto-assignment described above, you can also
|
||||
explicitly assign a port that is within the nodeport port range configuration.
|
||||
-->
|
||||
## Kubernetes 如何为 Services 分配节点端口?
|
||||
|
||||
当 `type: NodePort` Service 被创建时,其所对应的端口将以下述两种方式之一分配:
|
||||
|
||||
- **动态分配**:如果 Service 类型是 `NodePort` 且你没有为 Service 显式设置 `nodePort` 值,
|
||||
Kubernetes 控制面将在创建时自动为其分配一个未使用的端口。
|
||||
|
||||
- **静态分配**:除了上述动态自动分配,你还可以显式指定 nodeport 端口范围配置内的某端口。
|
||||
|
||||
<!--
|
||||
The value of `nodePort` that you manually assign must be unique across the whole cluster.
|
||||
Attempting to create a Service of `type: NodePort` where you explicitly specify a node port that
|
||||
was already allocated results in an error.
|
||||
-->
|
||||
你手动分配的 `nodePort` 值在整个集群范围内一定不能重复。
|
||||
如果尝试在创建 `type: NodePort` Service 时显式指定已分配的节点端口,将产生错误。
|
||||
|
||||
<!--
|
||||
## Why do you need to reserve ports of NodePort Service?
|
||||
|
||||
Sometimes, you may want to have a NodePort Service running on well-known ports
|
||||
so that other components and users inside or outside the cluster can use them.
|
||||
-->
|
||||
## 为什么需要保留 NodePort Service 的端口?
|
||||
|
||||
有时你可能想要 NodePort Service 运行在众所周知的端口上,
|
||||
以便集群内外的其他组件和用户可以使用这些端口。
|
||||
|
||||
<!--
|
||||
In some complex cluster deployments with a mix of Kubernetes nodes and other servers on the same network,
|
||||
it may be necessary to use some pre-defined ports for communication. In particular, some fundamental
|
||||
components cannot rely on the VIPs that back `type: LoadBalancer` Services
|
||||
because the virtual IP address mapping implementation for that cluster also relies on
|
||||
these foundational components.
|
||||
|
||||
Now suppose you need to expose a Minio object storage service on Kubernetes to clients
|
||||
running outside the Kubernetes cluster, and the agreed port is `30009`, we need to
|
||||
create a Service as follows:
|
||||
-->
|
||||
在某些复杂的集群部署场景中,同一网络上混合部署了 Kubernetes 节点和其他服务器,
|
||||
可能有必要使用某些预定义的端口进行通信。特别是,某些基础组件无法使用用来支撑
|
||||
`type: LoadBalancer` Service 的 VIP,因为针对集群实现的虚拟 IP 地址映射也依赖这些基础组件。
|
||||
|
||||
现在假设你需要在 Kubernetes 上将一个 Minio 对象存储服务暴露给运行在 Kubernetes 集群外的客户端,
|
||||
协商后的端口是 `30009`,我们需要创建以下 Service:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: minio
|
||||
spec:
|
||||
ports:
|
||||
- name: api
|
||||
nodePort: 30009
|
||||
port: 9000
|
||||
protocol: TCP
|
||||
targetPort: 9000
|
||||
selector:
|
||||
app: minio
|
||||
type: NodePort
|
||||
```
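
作为参考,下面是一个假设性的操作示例(清单文件名只是示意),用于创建该 Service 并确认分配到的节点端口:

```shell
# 假设上述清单保存为 minio-service.yaml
kubectl apply -f minio-service.yaml
kubectl get service minio -o jsonpath='{.spec.ports[0].nodePort}'
# 预期输出:30009
```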
|
||||
|
||||
<!--
|
||||
However, as mentioned before, if the port (30009) required for the `minio` Service is not reserved,
|
||||
and another `type: NodePort` (or possibly `type: LoadBalancer`) Service is created and dynamically
|
||||
allocated before or concurrently with the `minio` Service, TCP port 30009 might be allocated to that
|
||||
other Service; if so, creation of the `minio` Service will fail due to a node port collision.
|
||||
-->
|
||||
然而如前文所述,如果 `minio` Service 所需的端口 (30009) 未被预留,
|
||||
且另一个 `type: NodePort`(或者也可能是 `type: LoadBalancer`)Service
|
||||
在 `minio` Service 之前或与之同时被创建并动态分配了端口,那么 TCP 端口 30009 就可能被分配给那个 Service;
|
||||
如果出现这种情况,`minio` Service 的创建将由于节点端口冲突而失败。
|
||||
|
||||
<!--
|
||||
## How can you avoid NodePort Service port conflicts?
|
||||
Kubernetes 1.24 introduced changes for `type: ClusterIP` Services, dividing the CIDR range for cluster
|
||||
IP addresses into two blocks that use different allocation policies to [reduce the risk of conflicts](/docs/reference/networking/virtual-ips/#avoiding-collisions).
|
||||
In Kubernetes 1.27, as an alpha feature, you can adopt a similar policy for `type: NodePort` Services.
|
||||
You can enable a new [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`ServiceNodePortStaticSubrange`. Turning this on allows you to use a different port allocation strategy
|
||||
for `type: NodePort` Services, and reduce the risk of collision.
|
||||
-->
|
||||
## 如何才能避免 NodePort Service 端口冲突?
|
||||
|
||||
Kubernetes 1.24 引入了针对 `type: ClusterIP` Service 的变更,将集群 IP 地址的 CIDR
|
||||
范围划分为使用不同分配策略的两块来[减少冲突的风险](/zh-cn/docs/reference/networking/virtual-ips/#avoiding-collisions)。
|
||||
在 Kubernetes 1.27 中,作为一个 Alpha 特性,你可以为 `type: NodePort` Service 采用类似的策略。
|
||||
你可以启用新的[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`ServiceNodePortStaticSubrange`。开启此门控将允许你为
|
||||
`type: NodePort` Service 使用不同的端口分配策略,减少冲突的风险。
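
具体的开启方式取决于集群的部署方式,下面只是一个假设性的示意,即在 kube-apiserver 的启动参数中打开该特性门控:

```shell
kube-apiserver --feature-gates=ServiceNodePortStaticSubrange=true \
  --service-node-port-range=30000-32767 \
  ...
```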
|
||||
|
||||
<!--
|
||||
The port range for `NodePort` will be divided, based on the formula `min(max(16, nodeport-size / 32), 128)`.
|
||||
The outcome of the formula will be a number between 16 and 128, with a step size that increases as the
|
||||
size of the nodeport range increases. The outcome of the formula determines the size of the static port
|
||||
range. When the port range is less than 16, the size of static port range will be set to 0,
|
||||
which means that all ports will be dynamically allocated.
|
||||
|
||||
Dynamic port assignment will use the upper band by default, once this has been exhausted it will use the lower range.
|
||||
This will allow users to use static allocations on the lower band with a low risk of collision.
|
||||
-->
|
||||
`NodePort` 的端口范围将基于公式 `min(max(16, 节点端口数 / 32), 128)` 进行划分。
|
||||
这个公式的结果将是一个介于 16 到 128 的数字,随着节点端口范围变大,步进值也会变大。
|
||||
此公式的结果决定了静态端口范围的大小。当端口范围小于 16 时,静态端口范围的大小将被设为 0,
|
||||
这意味着所有端口都将被动态分配。
|
||||
|
||||
动态端口分配默认使用数值较高的一段,一旦用完,它将使用较低范围。
|
||||
这将允许用户在冲突风险较低的较低端口段上使用静态分配。
|
||||
|
||||
<!--
|
||||
## Examples
|
||||
|
||||
### default range: 30000-32767
|
||||
| Range properties | Values |
|
||||
|-------------------------|-------------------------------------------------------|
|
||||
| service-node-port-range | 30000-32767 |
|
||||
| Band Offset |   `min(max(16, 2768/32), 128)` <br>= `min(max(16, 86), 128)` <br>= `min(86, 128)` <br>= 86 |
|
||||
| Static band start | 30000 |
|
||||
| Static band end | 30085 |
|
||||
| Dynamic band start | 30086 |
|
||||
| Dynamic band end | 32767 |
|
||||
-->
|
||||
## 示例
|
||||
|
||||
### 默认范围:30000-32767
|
||||
|
||||
| 范围属性 | 值 |
|
||||
| ----------------------- | ----------------------------------------------------------------------------------------------- |
|
||||
| service-node-port-range | 30000-32767 |
|
||||
| 分段偏移量 |   `min(max(16, 2768/32), 128)` <br>= `min(max(16, 86), 128)` <br>= `min(86, 128)` <br>= 86 |
|
||||
| 起始静态段 | 30000 |
|
||||
| 结束静态段 | 30085 |
|
||||
| 起始动态段 | 30086 |
|
||||
| 结束动态段 | 32767 |
|
||||
|
||||
{{< mermaid >}}
|
||||
pie showData
|
||||
title 30000-32767
|
||||
"Static" : 86
|
||||
"Dynamic" : 2682
|
||||
{{< /mermaid >}}
|
||||
|
||||
<!--
|
||||
### very small range: 30000-30015
|
||||
| Range properties | Values |
|
||||
|-------------------------|-------------------------------------------------------|
|
||||
| service-node-port-range | 30000-30015 |
|
||||
| Band Offset | 0 |
|
||||
| Static band start | - |
|
||||
| Static band end | - |
|
||||
| Dynamic band start | 30000 |
|
||||
| Dynamic band end | 30015 |
|
||||
-->
|
||||
### 超小范围:30000-30015
|
||||
|
||||
| 范围属性 | 值 |
|
||||
| ----------------------- | ----------- |
|
||||
| service-node-port-range | 30000-30015 |
|
||||
| 分段偏移量 | 0 |
|
||||
| 起始静态段 | - |
|
||||
| 结束静态段 | - |
|
||||
| 起始动态段 | 30000 |
|
||||
| 结束动态段 | 30015 |
|
||||
|
||||
{{< mermaid >}}
|
||||
pie showData
|
||||
title 30000-30015
|
||||
"Static" : 0
|
||||
"Dynamic" : 16
|
||||
{{< /mermaid >}}
|
||||
|
||||
<!--
|
||||
### small(lower boundary) range: 30000-30127
|
||||
| Range properties | Values |
|
||||
|-------------------------|-------------------------------------------------------|
|
||||
| service-node-port-range | 30000-30127 |
|
||||
| Band Offset |   `min(max(16, 128/32), 128)` <br>= `min(max(16, 4), 128)` <br>= `min(16, 128)` <br>= 16 |
|
||||
| Static band start | 30000 |
|
||||
| Static band end | 30015 |
|
||||
| Dynamic band start | 30016 |
|
||||
| Dynamic band end | 30127 |
|
||||
-->
|
||||
### 小(下边界)范围:30000-30127
|
||||
|
||||
| 范围属性 | 值 |
|
||||
| ---------------------- | --------------------------------------------------------------------------------------------- |
|
||||
| service-node-port-range | 30000-30127 |
|
||||
| 分段偏移量 |   `min(max(16, 128/32), 128)` <br>= `min(max(16, 4), 128)` <br>= `min(16, 128)` <br>= 16 |
|
||||
| 起始静态段 | 30000 |
|
||||
| 结束静态段 | 30015 |
|
||||
| 起始动态段 | 30016 |
|
||||
| 结束动态段 | 30127 |
|
||||
|
||||
{{< mermaid >}}
|
||||
pie showData
|
||||
title 30000-30127
|
||||
"Static" : 16
|
||||
"Dynamic" : 112
|
||||
{{< /mermaid >}}
|
||||
|
||||
<!--
|
||||
### large(upper boundary) range: 30000-34095
|
||||
| Range properties | Values |
|
||||
|-------------------------|-------------------------------------------------------|
|
||||
| service-node-port-range | 30000-34095 |
|
||||
| Band Offset |   `min(max(16, 4096/32), 128)` <br>= `min(max(16, 128), 128)` <br>= `min(128, 128)` <br>= 128 |
|
||||
| Static band start | 30000 |
|
||||
| Static band end | 30127 |
|
||||
| Dynamic band start | 30128 |
|
||||
| Dynamic band end | 34095 |
|
||||
-->
|
||||
### 大(上边界)范围:30000-34095
|
||||
|
||||
| 范围属性 | 值 |
|
||||
| -----------------------| -------------------------------------------------------------------------------------------------- |
|
||||
| service-node-port-range | 30000-34095 |
|
||||
| 分段偏移量 |   `min(max(16, 4096/32), 128)` <br>= `min(max(16, 128), 128)` <br>= `min(128, 128)` <br>= 128 |
|
||||
| 起始静态段 | 30000 |
|
||||
| 结束静态段 | 30127 |
|
||||
| 起始动态段 | 30128 |
|
||||
| 结束动态段 | 34095 |
|
||||
|
||||
{{< mermaid >}}
|
||||
pie showData
|
||||
title 30000-34095
|
||||
"Static" : 128
|
||||
"Dynamic" : 3968
|
||||
{{< /mermaid >}}
|
||||
|
||||
<!--
|
||||
### very large range: 30000-38191
|
||||
| Range properties | Values |
|
||||
|-------------------------|-------------------------------------------------------|
|
||||
| service-node-port-range | 30000-38191 |
|
||||
| Band Offset |   `min(max(16, 8192/32), 128)` <br>= `min(max(16, 256), 128)` <br>= `min(256, 128)` <br>= 128 |
|
||||
| Static band start | 30000 |
|
||||
| Static band end | 30127 |
|
||||
| Dynamic band start | 30128 |
|
||||
| Dynamic band end | 38191 |
|
||||
-->
|
||||
### 超大范围:30000-38191
|
||||
|
||||
| 范围属性 | 值 |
|
||||
| ---------------------- | -------------------------------------------------------------------------------------------------- |
|
||||
| service-node-port-range | 30000-38191 |
|
||||
| 分段偏移量 |   `min(max(16, 8192/32), 128)` <br>= `min(max(16, 256), 128)` <br>= `min(256, 128)` <br>= 128 |
|
||||
| 起始静态段 | 30000 |
|
||||
| 结束静态段 | 30127 |
|
||||
| 起始动态段 | 30128 |
|
||||
| 结束动态段 | 38191 |
|
||||
|
||||
{{< mermaid >}}
|
||||
pie showData
|
||||
title 30000-38191
|
||||
"Static" : 128
|
||||
"Dynamic" : 8064
|
||||
{{< /mermaid >}}
|
|
@ -0,0 +1,396 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.27: 原地调整 Pod 资源 (alpha)"
|
||||
date: 2023-05-12
|
||||
slug: in-place-pod-resize-alpha
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Kubernetes 1.27: In-place Resource Resize for Kubernetes Pods (alpha)"
|
||||
date: 2023-05-12
|
||||
slug: in-place-pod-resize-alpha
|
||||
-->
|
||||
|
||||
**作者:** Vinay Kulkarni (Kubescaler Labs)
|
||||
<!--
|
||||
**Author:** [Vinay Kulkarni](https://github.com/vinaykul) (Kubescaler Labs)
|
||||
-->
|
||||
|
||||
**译者**:[Paco Xu](https://github.com/pacoxu) (Daocloud)
|
||||
|
||||
<!--
|
||||
If you have deployed Kubernetes pods with CPU and/or memory resources
|
||||
specified, you may have noticed that changing the resource values involves
|
||||
restarting the pod. This has been a disruptive operation for running
|
||||
workloads... until now.
|
||||
-->
|
||||
如果你部署的 Pod 设置了 CPU 或内存资源,你就可能已经注意到更改资源值会导致 Pod 重新启动。
|
||||
到目前为止,这对于正在运行的负载来说一直是一个破坏性的操作。
|
||||
|
||||
<!--
|
||||
In Kubernetes v1.27, we have added a new alpha feature that allows users
|
||||
to resize CPU/memory resources allocated to pods without restarting the
|
||||
containers. To facilitate this, the `resources` field in a pod's containers
|
||||
now allow mutation for `cpu` and `memory` resources. They can be changed
|
||||
simply by patching the running pod spec.
|
||||
-->
|
||||
在 Kubernetes v1.27 中,我们添加了一个新的 alpha 特性,允许用户调整分配给 Pod 的
|
||||
CPU 和内存资源大小,而无需重新启动容器。 首先,API 层面现在允许修改 Pod 容器中的
|
||||
`resources` 字段下的 `cpu` 和 `memory` 资源。资源修改只需 patch 正在运行的 pod
|
||||
规约即可。
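
例如,下面是一个假设性的 `kubectl patch` 调用,把名为 `my-pod` 的 Pod 中容器 `app` 的 CPU 请求与限制调整为 `800m`(Pod 名称和容器名称只是示意):

```shell
kubectl patch pod my-pod --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'
```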
|
||||
|
||||
<!--
|
||||
This also means that `resources` field in the pod spec can no longer be
|
||||
relied upon as an indicator of the pod's actual resources. Monitoring tools
|
||||
and other such applications must now look at new fields in the pod's status.
|
||||
Kubernetes queries the actual CPU and memory requests and limits enforced on
|
||||
the running containers via a CRI (Container Runtime Interface) API call to the
|
||||
runtime, such as containerd, which is responsible for running the containers.
|
||||
The response from container runtime is reflected in the pod's status.
|
||||
-->
|
||||
这也意味着 Pod 定义中的 `resources` 字段不能再被视为 Pod 实际资源的指标。监控程序必须
|
||||
查看 Pod 状态中的新字段来获取实际资源状况。Kubernetes 通过 CRI(Container Runtime
|
||||
Interface,容器运行时接口)API 调用运行时(例如 containerd)来查询实际的 CPU 和内存
|
||||
的请求和限制。容器运行时的响应会反映在 Pod 的状态中。
|
||||
|
||||
<!--
|
||||
In addition, a new `restartPolicy` for resize has been added. It gives users
|
||||
control over how their containers are handled when resources are resized.
|
||||
-->
|
||||
此外,Pod 中还添加了对应于资源调整的新字段 `restartPolicy`。这个字段使用户可以控制在资
|
||||
源调整时容器的行为。
|
||||
|
||||
<!--
|
||||
## What's new in v1.27?
|
||||
-->
|
||||
## 1.27 版本有什么新内容?
|
||||
|
||||
<!--
|
||||
Besides the addition of resize policy in the pod's spec, a new field named
|
||||
`allocatedResources` has been added to `containerStatuses` in the pod's status.
|
||||
This field reflects the node resources allocated to the pod's containers.
|
||||
-->
|
||||
除了在 Pod 规范中添加调整策略之外,还在 Pod 状态中的 `containerStatuses` 中添加了一个名为
|
||||
`allocatedResources` 的新字段。该字段反映了分配给 Pod 容器的节点资源。
|
||||
|
||||
<!--
|
||||
In addition, a new field called `resources` has been added to the container's
|
||||
status. This field reflects the actual resource requests and limits configured
|
||||
on the running containers as reported by the container runtime.
|
||||
-->
|
||||
此外,容器状态中还添加了一个名为 `resources` 的新字段。该字段反映的是由容器运行时报告的、
|
||||
针对正运行的容器配置的实际资源 requests 和 limits。
|
||||
|
||||
<!--
|
||||
此处使用了 https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/resize-container-resources/ 内容:
|
||||
Lastly, a new field named `resize` has been added to the pod's status to show the
|
||||
status of the last requested resize. A value of `Proposed` is an acknowledgement
|
||||
of the requested resize and indicates that request was validated and recorded. A
|
||||
value of `InProgress` indicates that the node has accepted the resize request
|
||||
and is in the process of applying the resize request to the pod's containers.
|
||||
A value of `Deferred` means that the requested resize cannot be granted at this
|
||||
time, and the node will keep retrying. The resize may be granted when other pods
|
||||
leave and free up node resources. A value of `Infeasible` is a signal that the
|
||||
node cannot accommodate the requested resize. This can happen if the requested
|
||||
resize exceeds the maximum resources the node can ever allocate for a pod.
|
||||
-->
|
||||
最后,Pod 状态中添加了新字段 `resize`。`resize` 字段显示最近一次所请求的资源调整的状态。
|
||||
此字段可以具有以下值:
|
||||
|
||||
- Proposed:此值表示请求调整已被确认,并且请求已被验证和记录。
|
||||
- InProgress:此值表示节点已接受调整请求,并正在将其应用于 Pod 的容器。
|
||||
- Deferred:此值意味着在此时无法批准请求的调整,节点将继续重试。 当其他 Pod 退出并释放节点资源时,调整可能会被真正实施。
|
||||
- Infeasible:此值是一种信号,表示节点无法承接所请求的调整值。 如果所请求的调整超过节点可分配给 Pod 的最大资源,则可能会发生这种情况。
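
作为参考,下面是一个假设性的示例,演示如何用 kubectl 查看这些新增的状态字段(Pod 名称只是示意):

```shell
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].allocatedResources}'
kubectl get pod my-pod -o jsonpath='{.status.containerStatuses[0].resources}'
kubectl get pod my-pod -o jsonpath='{.status.resize}'
```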
|
||||
|
||||
<!--
|
||||
## When to use this feature
|
||||
-->
|
||||
## 何时使用此功能?
|
||||
|
||||
<!--
|
||||
Here are a few examples where this feature may be useful:
|
||||
|
||||
- Pod is running on node but with either too much or too little resources.
|
||||
- Pods are not being scheduled due to lack of sufficient CPU or memory in a
|
||||
cluster that is underutilized by running pods that were overprovisioned.
|
||||
- Evicting certain stateful pods that need more resources to schedule them
|
||||
on bigger nodes is an expensive or disruptive operation when other lower
|
||||
priority pods in the node can be resized down or moved.
|
||||
-->
|
||||
以下是此功能可能有价值的一些示例:
|
||||
|
||||
- 正在运行的 Pod 资源限制或者请求过多或过少。
|
||||
- 集群中运行着一些过度预配了资源的 Pod,导致集群整体资源利用率偏低,而新的 Pod 又因
|
||||
  CPU 或内存不足而无法被调度。
|
||||
- 驱逐某些需要较多资源的有状态 Pod 是一项成本较高或破坏性的操作。
|
||||
这种场景下,缩小节点中的其他优先级较低的 Pod 的资源,或者移走这些 Pod 的成本更低。
|
||||
|
||||
<!--
|
||||
## How to use this feature
|
||||
-->
|
||||
## 如何使用这个功能
|
||||
|
||||
<!--
|
||||
In order to use this feature in v1.27, the `InPlacePodVerticalScaling`
|
||||
feature gate must be enabled. A local cluster with this feature enabled
|
||||
can be started as shown below:
|
||||
-->
|
||||
在 v1.27 中使用此功能,必须启用 `InPlacePodVerticalScaling` 特性门控。
|
||||
可以如下所示启动一个启用了此特性的本地集群:
|
||||
|
||||
<!--
|
||||
```
|
||||
root@vbuild:~/go/src/k8s.io/kubernetes# FEATURE_GATES=InPlacePodVerticalScaling=true ./hack/local-up-cluster.sh
|
||||
go version go1.20.2 linux/arm64
|
||||
+++ [0320 13:52:02] Building go targets for linux/arm64
|
||||
k8s.io/kubernetes/cmd/kubectl (static)
|
||||
k8s.io/kubernetes/cmd/kube-apiserver (static)
|
||||
k8s.io/kubernetes/cmd/kube-controller-manager (static)
|
||||
k8s.io/kubernetes/cmd/cloud-controller-manager (non-static)
|
||||
k8s.io/kubernetes/cmd/kubelet (non-static)
|
||||
...
|
||||
...
|
||||
Logs:
|
||||
/tmp/etcd.log
|
||||
/tmp/kube-apiserver.log
|
||||
/tmp/kube-controller-manager.log
|
||||
|
||||
/tmp/kube-proxy.log
|
||||
/tmp/kube-scheduler.log
|
||||
/tmp/kubelet.log
|
||||
|
||||
To start using your cluster, you can open up another terminal/tab and run:
|
||||
|
||||
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
|
||||
cluster/kubectl.sh
|
||||
|
||||
Alternatively, you can write to the default kubeconfig:
|
||||
|
||||
export KUBERNETES_PROVIDER=local
|
||||
|
||||
cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
|
||||
cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
|
||||
cluster/kubectl.sh config set-context local --cluster=local --user=myself
|
||||
cluster/kubectl.sh config use-context local
|
||||
cluster/kubectl.sh
|
||||
|
||||
```
|
||||
-->
|
||||
```
|
||||
root@vbuild:~/go/src/k8s.io/kubernetes# FEATURE_GATES=InPlacePodVerticalScaling=true ./hack/local-up-cluster.sh
|
||||
go version go1.20.2 linux/arm64
|
||||
+++ [0320 13:52:02] Building go targets for linux/arm64
|
||||
k8s.io/kubernetes/cmd/kubectl (static)
|
||||
k8s.io/kubernetes/cmd/kube-apiserver (static)
|
||||
k8s.io/kubernetes/cmd/kube-controller-manager (static)
|
||||
k8s.io/kubernetes/cmd/cloud-controller-manager (non-static)
|
||||
k8s.io/kubernetes/cmd/kubelet (non-static)
|
||||
...
|
||||
...
|
||||
Logs:
|
||||
/tmp/etcd.log
|
||||
/tmp/kube-apiserver.log
|
||||
/tmp/kube-controller-manager.log
|
||||
|
||||
/tmp/kube-proxy.log
|
||||
/tmp/kube-scheduler.log
|
||||
/tmp/kubelet.log
|
||||
|
||||
To start using your cluster, you can open up another terminal/tab and run:
|
||||
|
||||
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
|
||||
cluster/kubectl.sh
|
||||
|
||||
# Alternatively, you can write to the default kubeconfig:
|
||||
|
||||
export KUBERNETES_PROVIDER=local
|
||||
|
||||
cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
|
||||
cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
|
||||
cluster/kubectl.sh config set-context local --cluster=local --user=myself
|
||||
cluster/kubectl.sh config use-context local
|
||||
cluster/kubectl.sh
|
||||
|
||||
```
|
||||
|
||||
<!--
|
||||
Once the local cluster is up and running, Kubernetes users can schedule pods
|
||||
with resources, and resize the pods via kubectl. An example of how to use this
|
||||
feature is illustrated in the following demo video.
|
||||
-->
|
||||
一旦本地集群启动并运行,Kubernetes 用户就可以调度带有资源配置的 pod,并通过 kubectl 调整 pod
|
||||
的资源。下面的演示视频展示了如何使用此功能。
|
||||
|
||||
<!--
|
||||
{{< youtube id="1m2FOuB6Bh0" title="In-place resize of pod CPU and memory resources">}}
|
||||
-->
|
||||
{{< youtube id="1m2FOuB6Bh0" title="原地调整 Pod CPU 或内存资源">}}
|
||||
|
||||
<!--
|
||||
## Example Use Cases
|
||||
-->
|
||||
## 示例用例
|
||||
|
||||
<!--
|
||||
### Cloud-based Development Environment
|
||||
-->
|
||||
### 云端开发环境
|
||||
|
||||
<!--
|
||||
In this scenario, developers or development teams write their code locally
|
||||
but build and test their code in Kubernetes pods with consistent configs
|
||||
that reflect production use. Such pods need minimal resources when the
|
||||
developers are writing code, but need significantly more CPU and memory
|
||||
when they build their code or run a battery of tests. This use case can
|
||||
leverage in-place pod resize feature (with a little help from eBPF) to
|
||||
quickly resize the pod's resources and avoid kernel OOM (out of memory)
|
||||
killer from terminating their processes.
|
||||
-->
|
||||
在这种场景下,开发人员或开发团队在本地编写代码,但在与生产环境配置一致的 Kubernetes Pod 中
|
||||
构建和测试代码。当开发人员编写代码时,此类 Pod 需要最少的资源,但在构建代码或运行一系列测试时需要
|
||||
更多的 CPU 和内存。 这个用例可以利用原地调整 pod 资源的功能(在 eBPF 的一点帮助下)快速调整 pod
|
||||
资源的大小,并避免内核 OOM(内存不足)Killer 终止其进程。
|
||||
|
||||
<!--
|
||||
This [KubeCon North America 2022 conference talk](https://www.youtube.com/watch?v=jjfa1cVJLwc)
|
||||
illustrates the use case.
|
||||
-->
|
||||
[KubeCon North America 2022 会议演讲](https://www.youtube.com/watch?v=jjfa1cVJLwc)中详细介绍了上述用例。
|
||||
|
||||
<!--
|
||||
### Java processes initialization CPU requirements
|
||||
-->
|
||||
### Java 进程初始化时的 CPU 要求
|
||||
|
||||
<!--
|
||||
Some Java applications may need significantly more CPU during initialization
|
||||
than what is needed during normal process operation time. If such applications
|
||||
specify CPU requests and limits suited for normal operation, they may suffer
|
||||
from very long startup times. Such pods can request higher CPU values at the
|
||||
time of pod creation, and can be resized down to normal running needs once the
|
||||
application has finished initializing.
|
||||
-->
|
||||
某些 Java 应用程序在初始化期间 CPU 资源使用量可能比正常进程操作期间所需的 CPU 资源多很多。
|
||||
如果此类应用程序指定适合正常操作的 CPU 请求和限制,会导致程序启动时间很长。这样的 pod
|
||||
可以在创建时请求较高的 CPU 值,在应用程序完成初始化后,再把资源调低到正常运行所需的水平,如下面的示例所示。
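
举例来说(Pod 名称 `java-app` 与容器名称 `app` 均为假设),初始化完成后可以把 CPU 调回正常水平:

```shell
kubectl patch pod java-app --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"1"},"limits":{"cpu":"1"}}}]}}'
```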
|
||||
|
||||
<!--
|
||||
## Known Issues
|
||||
-->
|
||||
## 已知问题
|
||||
|
||||
<!--
|
||||
This feature enters v1.27 at [alpha stage](/docs/reference/command-line-tools-reference/feature-gates/#feature-stages).
|
||||
Below are a few known issues users may encounter:
|
||||
-->
|
||||
该功能在 v1.27 中仍处于 [Alpha 阶段](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/#feature-stages)。
|
||||
以下是用户可能会遇到的一些已知问题:
|
||||
|
||||
<!--
|
||||
- containerd versions below v1.6.9 do not have the CRI support needed for full
|
||||
end-to-end operation of this feature. Attempts to resize pods will appear
|
||||
to be _stuck_ in the `InProgress` state, and `resources` field in the pod's
|
||||
status are never updated even though the new resources may have been enacted
|
||||
on the running containers.
|
||||
- Pod resize may encounter a race condition with other pod updates, causing
|
||||
delayed enactment of pod resize.
|
||||
- Reflecting the resized container resources in pod's status may take a while.
|
||||
- Static CPU management policy is not supported with this feature.
|
||||
-->
|
||||
- containerd v1.6.9 以下的版本不具备此功能所需的 CRI 支持,无法完成端到端的操作。
|
||||
尝试调整 Pod 大小将显示为卡在 `InProgress` 状态,并且 Pod 状态中的 `resources`
|
||||
字段永远不会更新,即使新资源配置可能已经在正在运行的容器上生效了。
|
||||
- Pod 资源调整可能会遇到与其他 Pod 更新的冲突,导致 pod 资源调整操作被推迟。
|
||||
- 可能需要一段时间才能在 Pod 的状态中反映出调整后的容器资源。
|
||||
- 此特性与静态 CPU 管理策略不兼容。
|
||||
|
||||
<!--
|
||||
## Credits
|
||||
-->
|
||||
## 致谢
|
||||
|
||||
<!--
|
||||
This feature is a result of the efforts of a very collaborative Kubernetes community.
|
||||
Here's a little shoutout to just a few of the many many people that contributed
|
||||
countless hours of their time and helped make this happen.
|
||||
-->
|
||||
此功能是 Kubernetes 社区高度协作的成果。在此向众多投入了大量时间、帮助此特性落地的贡献者中的一部分人致意。
|
||||
|
||||
<!--
|
||||
- [@thockin](https://github.com/thockin) for detail-oriented API design and air-tight code reviews.
|
||||
- [@derekwaynecarr](https://github.com/derekwaynecarr) for simplifying the design and thorough API and node reviews.
|
||||
- [@dchen1107](https://github.com/dchen1107) for bringing vast knowledge from Borg and helping us avoid pitfalls.
|
||||
- [@ruiwen-zhao](https://github.com/ruiwen-zhao) for adding containerd support that enabled full E2E implementation.
|
||||
- [@wangchen615](https://github.com/wangchen615) for implementing comprehensive E2E tests and driving scheduler fixes.
|
||||
- [@bobbypage](https://github.com/bobbypage) for invaluable help getting CI ready and quickly investigating issues, covering for me on my vacation.
|
||||
- [@Random-Liu](https://github.com/Random-Liu) for thorough kubelet reviews and identifying problematic race conditions.
|
||||
- [@Huang-Wei](https://github.com/Huang-Wei), [@ahg-g](https://github.com/ahg-g), [@alculquicondor](https://github.com/alculquicondor) for helping get scheduler changes done.
|
||||
- [@mikebrow](https://github.com/mikebrow) [@marosset](https://github.com/marosset) for reviews on short notice that helped CRI changes make it into v1.25.
|
||||
- [@endocrimes](https://github.com/endocrimes), [@ehashman](https://github.com/ehashman) for helping ensure that the oft-overlooked tests are in good shape.
|
||||
- [@mrunalp](https://github.com/mrunalp) for reviewing cgroupv2 changes and ensuring clean handling of v1 vs v2.
|
||||
- [@liggitt](https://github.com/liggitt), [@gjkim42](https://github.com/gjkim42) for tracking down, root-causing important missed issues post-merge.
|
||||
- [@SergeyKanzhelev](https://github.com/SergeyKanzhelev) for supporting and shepherding various issues during the home stretch.
|
||||
- [@pdgetrf](https://github.com/pdgetrf) for making the first prototype a reality.
|
||||
- [@dashpole](https://github.com/dashpole) for bringing me up to speed on 'the Kubernetes way' of doing things.
|
||||
- [@bsalamat](https://github.com/bsalamat), [@kgolab](https://github.com/kgolab) for very thoughtful insights and suggestions in the early stages.
|
||||
- [@sftim](https://github.com/sftim), [@tengqm](https://github.com/tengqm) for ensuring docs are easy to follow.
|
||||
- [@dims](https://github.com/dims) for being omnipresent and helping make merges happen at critical hours.
|
||||
- Release teams for ensuring that the project stayed healthy.
|
||||
-->
|
||||
- [@thockin](https://github.com/thockin) 如此细致的 API 设计和严密的代码审核。
|
||||
- [@derekwaynecarr](https://github.com/derekwaynecarr) 设计简化和 API & Node 代码审核。
|
||||
- [@dchen1107](https://github.com/dchen1107) 介绍了 Borg 的大量知识,帮助我们避免落入潜在的陷阱。
|
||||
- [@ruiwen-zhao](https://github.com/ruiwen-zhao) 增加 containerd 支持,使得 E2E 能够闭环。
|
||||
- [@wangchen615](https://github.com/wangchen615) 实现完整的 E2E 测试并推进调度问题修复。
|
||||
- [@bobbypage](https://github.com/bobbypage) 提供宝贵的帮助,让 CI 准备就绪并快速排查问题,尤其是在我休假时。
|
||||
- [@Random-Liu](https://github.com/Random-Liu) kubelet 代码审查以及定位竞态条件问题。
|
||||
- [@Huang-Wei](https://github.com/Huang-Wei), [@ahg-g](https://github.com/ahg-g), [@alculquicondor](https://github.com/alculquicondor) 帮助完成调度部分的修改。
|
||||
- [@mikebrow](https://github.com/mikebrow) [@marosset](https://github.com/marosset) 帮助我在 v1.25 代码审查并最终合并 CRI 部分的修改。
|
||||
- [@endocrimes](https://github.com/endocrimes), [@ehashman](https://github.com/ehashman) 帮助确保经常被忽视的测试处于良好状态。
|
||||
- [@mrunalp](https://github.com/mrunalp) cgroupv2 部分的代码审查并保证了 v1 和 v2 的清晰处理。
|
||||
- [@liggitt](https://github.com/liggitt), [@gjkim42](https://github.com/gjkim42) 在合并代码后,帮助追踪遗漏的重要问题的根因。
|
||||
- [@SergeyKanzhelev](https://github.com/SergeyKanzhelev) 在冲刺阶段支持和解决各种问题。
|
||||
- [@pdgetrf](https://github.com/pdgetrf) 完成了第一个原型。
|
||||
- [@dashpole](https://github.com/dashpole) 让我快速了解 Kubernetes 的做事方式。
|
||||
- [@bsalamat](https://github.com/bsalamat), [@kgolab](https://github.com/kgolab) 在早期阶段提供非常周到的见解和建议。
|
||||
- [@sftim](https://github.com/sftim), [@tengqm](https://github.com/tengqm) 确保文档易于理解。
|
||||
- [@dims](https://github.com/dims) 无所不在并帮助在关键时刻进行合并。
|
||||
- 发布团队确保了项目保持健康。
|
||||
|
||||
<!--
|
||||
And a big thanks to my very supportive management [Dr. Xiaoning Ding](https://www.linkedin.com/in/xiaoningding/)
|
||||
and [Dr. Ying Xiong](https://www.linkedin.com/in/ying-xiong-59a2482/) for their patience and encouragement.
|
||||
-->
|
||||
非常感谢一直给予我支持的管理层 [Xiaoning Ding 博士](https://www.linkedin.com/in/xiaoningding/) 和
|
||||
[Ying Xiong 博士](https://www.linkedin.com/in/ying-xiong-59a2482/),感谢他们的耐心和鼓励。
|
||||
|
||||
<!--
|
||||
## References
|
||||
-->
|
||||
## 参考
|
||||
|
||||
<!--
|
||||
### For app developers
|
||||
-->
|
||||
### 应用程序开发者参考
|
||||
|
||||
<!--
|
||||
- [Resize CPU and Memory Resources assigned to Containers](/docs/tasks/configure-pod-container/resize-container-resources/)
|
||||
- [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
|
||||
- [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)
|
||||
-->
|
||||
- [调整分配给容器的 CPU 和内存资源](/zh-cn/docs/tasks/configure-pod-container/resize-container-resources/)
|
||||
- [为容器和 Pod 分配内存资源](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/)
|
||||
- [为容器和 Pod 分配 CPU 资源](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/)
|
||||
|
||||
<!--
|
||||
### For cluster administrators
|
||||
-->
|
||||
### 集群管理员参考
|
||||
|
||||
<!--
|
||||
- [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
|
||||
- [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
|
||||
-->
|
||||
- [为命名空间配置默认的内存请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
|
||||
- [为命名空间配置默认的 CPU 请求和限制](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
|
|
@ -162,11 +162,11 @@ You should avoid using the `:latest` tag when deploying containers in production
|
|||
it is harder to track which version of the image is running and more difficult to
|
||||
roll back properly.
|
||||
|
||||
Instead, specify a meaningful tag such as `v1.42.0`.
|
||||
Instead, specify a meaningful tag such as `v1.42.0` and/or a digest.
|
||||
-->
|
||||
在生产环境中部署容器时,你应该避免使用 `:latest` 标签,因为这使得正在运行的镜像的版本难以追踪,并且难以正确地回滚。
|
||||
|
||||
相反,应指定一个有意义的标签,如 `v1.42.0`。
|
||||
相反,应指定一个有意义的标签,如 `v1.42.0`,和/或者一个摘要。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -204,11 +204,17 @@ running the same code no matter what tag changes happen at the registry.
|
|||
|
||||
When you (or a controller) submit a new Pod to the API server, your cluster sets the
|
||||
`imagePullPolicy` field when specific conditions are met:
|
||||
|
||||
- if you omit the `imagePullPolicy` field, and you specify the digest for the
|
||||
container image, the `imagePullPolicy` is automatically set to `IfNotPresent`.
|
||||
-->
|
||||
#### 默认镜像拉取策略 {#imagepullpolicy-defaulting}
|
||||
|
||||
当你(或控制器)向 API 服务器提交一个新的 Pod 时,你的集群会在满足特定条件时设置 `imagePullPolicy` 字段:
|
||||
|
||||
- 如果你省略了 `imagePullPolicy` 字段,并且你为容器镜像指定了摘要,
|
||||
那么 `imagePullPolicy` 会自动设置为 `IfNotPresent`。
|
||||
|
||||
<!--
|
||||
- if you omit the `imagePullPolicy` field, and the tag for the container image is
|
||||
`:latest`, `imagePullPolicy` is automatically set to `Always`;
|
||||
|
@ -228,14 +234,15 @@ When you (or a controller) submit a new Pod to the API server, your cluster sets
|
|||
{{< note >}}
|
||||
<!--
|
||||
The value of `imagePullPolicy` of the container is always set when the object is
|
||||
first _created_, and is not updated if the image's tag later changes.
|
||||
first _created_, and is not updated if the image's tag or digest later changes.
|
||||
|
||||
For example, if you create a Deployment with an image whose tag is _not_
|
||||
`:latest`, and later update that Deployment's image to a `:latest` tag, the
|
||||
`imagePullPolicy` field will _not_ change to `Always`. You must manually change
|
||||
the pull policy of any object after its initial creation.
|
||||
-->
|
||||
容器的 `imagePullPolicy` 的值总是在对象初次 _创建_ 时设置的,如果后来镜像的标签发生变化,则不会更新。
|
||||
容器的 `imagePullPolicy` 的值总是在对象初次 _创建_ 时设置的,
|
||||
如果后来镜像的标签或摘要发生变化,则不会更新。
|
||||
|
||||
例如,如果你用一个 **非** `:latest` 的镜像标签创建一个 Deployment,
|
||||
并在随后更新该 Deployment 的镜像标签为 `:latest`,则 `imagePullPolicy` 字段 **不会** 变成 `Always`。
|
||||
|
|
|
@ -735,7 +735,7 @@ ensure your kubelet services are started with the following flags:
|
|||
-->
|
||||
## 设备插件与拓扑管理器的集成 {#device-plugin-integration-with-the-topology-manager}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
|
||||
<!--
|
||||
The Topology Manager is a Kubelet component that allows resources to be co-ordinated in a Topology
|
||||
|
|
|
@ -273,13 +273,11 @@ kubectl api-resources --namespaced=false
|
|||
|
||||
<!--
|
||||
The Kubernetes control plane sets an immutable {{< glossary_tooltip text="label" term_id="label" >}}
|
||||
`kubernetes.io/metadata.name` on all namespaces, provided that the `NamespaceDefaultLabelName`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
|
||||
`kubernetes.io/metadata.name` on all namespaces.
|
||||
The value of the label is the namespace name.
|
||||
-->
|
||||
Kubernetes 控制面会为所有名字空间设置一个不可变更的{{< glossary_tooltip text="标签" term_id="label" >}}
|
||||
`kubernetes.io/metadata.name`,只要 `NamespaceDefaultLabelName`
|
||||
这一[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)被启用。
|
||||
`kubernetes.io/metadata.name`。
|
||||
标签的值是名字空间的名称。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -388,11 +388,11 @@ are true. The following taints are built in:
|
|||
的一个控制器初始化这个节点后,kubelet 将删除这个污点。
|
||||
|
||||
<!--
|
||||
In case a node is to be evicted, the node controller or the kubelet adds relevant taints
|
||||
In case a node is to be drained, the node controller or the kubelet adds relevant taints
|
||||
with `NoExecute` effect. If the fault condition returns to normal the kubelet or node
|
||||
controller can remove the relevant taint(s).
|
||||
-->
|
||||
在节点被驱逐时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效果的相关污点。
|
||||
在节点被排空时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效果的相关污点。
|
||||
如果异常状态恢复正常,kubelet 或节点控制器能够移除相关的污点。
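As an illustrative sketch (the Pod name is a placeholder), a Pod can control how long it stays bound after such a `NoExecute` taint appears by declaring tolerations with `tolerationSeconds` for the built-in not-ready and unreachable taints:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-eviction-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
  tolerations:
  # Stay bound for 10 minutes after the node becomes not ready or
  # unreachable, instead of being evicted after the usual 300 seconds.
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 600
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 600
```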
<!--
@ -407,7 +407,7 @@ the pods that are scheduled for deletion may continue to run on the partitioned
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The control plane limits the rate of adding node new taints to nodes. This rate limiting
|
||||
The control plane limits the rate of adding new taints to nodes. This rate limiting
|
||||
manages the number of evictions that are triggered when many nodes become unreachable at
|
||||
once (for example: if there is a network disruption).
|
||||
-->
@ -743,12 +743,12 @@ server will return a 422 HTTP status code to indicate that there's a problem.
|
|||
如果 IP 地址不合法,API 服务器会返回 HTTP 状态码 422,表示值不合法。
|
||||
|
||||
<!--
|
||||
Read [avoiding collisions](#avoiding-collisions)
|
||||
Read [avoiding collisions](/docs/reference/networking/virtual-ips/#avoiding-collisions)
|
||||
to learn how Kubernetes helps reduce the risk and impact of two different Services
|
||||
both trying to use the same IP address.
|
||||
-->
|
||||
阅读[避免冲突](#avoiding-collisions),了解 Kubernetes
|
||||
如何协助降低两种不同服务试图使用相同 IP 地址的风险和影响。
|
||||
阅读[避免冲突](/zh-cn/docs/reference/networking/virtual-ips/#avoiding-collisions),
|
||||
了解 Kubernetes 如何协助降低两种不同服务试图使用相同 IP 地址的风险和影响。
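For illustration only (the name, port, and address are placeholders; the address must fall inside the cluster's service CIDR), a Service that asks for a specific cluster IP looks like this, and a malformed or out-of-range address is rejected with HTTP 422 as noted above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database                  # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: my-database
  ports:
  - port: 5432
    targetPort: 5432
  # Requested virtual IP; it has to be a valid address within the
  # configured service IP range, otherwise the API server returns 422.
  clusterIP: 10.96.0.50
```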
<!--
|
||||
### `type: NodePort` {#type-nodeport}
@ -1219,461 +1219,6 @@ metadata:
|
|||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
#### TLS support on AWS {#ssl-support-on-aws}
|
||||
|
||||
For partial TLS / SSL support on clusters running on AWS, you can add three
|
||||
annotations to a `LoadBalancer` service:
|
||||
-->
|
||||
### AWS TLS 支持 {#ssl-support-on-aws}
|
||||
|
||||
为了对在 AWS 上运行的集群提供部分的 TLS/SSL 支持,你可以向 `LoadBalancer`
服务添加三个注解:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
|
||||
```
|
||||
|
||||
<!--
|
||||
The first specifies the ARN of the certificate to use. It can be either a
|
||||
certificate from a third party issuer that was uploaded to IAM or one created
|
||||
within AWS Certificate Manager.
|
||||
-->
|
||||
第一个注解指定要使用的证书的 ARN。它可以是已上传到 IAM 的第三方颁发者的证书,
也可以是在 AWS Certificate Manager 中创建的证书。
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: (https|http|ssl|tcp)
|
||||
```
|
||||
|
||||
<!--
|
||||
The second annotation specifies which protocol a Pod speaks. For HTTPS and
|
||||
SSL, the ELB expects the Pod to authenticate itself over the encrypted
|
||||
connection, using a certificate.
|
||||
|
||||
HTTP and HTTPS selects layer 7 proxying: the ELB terminates
|
||||
the connection with the user, parses headers, and injects the `X-Forwarded-For`
|
||||
header with the user's IP address (Pods only see the IP address of the
|
||||
ELB at the other end of its connection) when forwarding requests.
|
||||
|
||||
TCP and SSL selects layer 4 proxying: the ELB forwards traffic without
|
||||
modifying the headers.
|
||||
|
||||
In a mixed-use environment where some ports are secured and others are left unencrypted,
|
||||
you can use the following annotations:
|
||||
-->
|
||||
第二个注解指定 Pod 使用哪种协议。对于 HTTPS 和 SSL,ELB 希望 Pod
|
||||
使用证书通过加密连接对自己进行身份验证。
|
||||
|
||||
HTTP 和 HTTPS 选择第 7 层代理:ELB 终止与用户的连接,解析标头,并在转发请求时向
|
||||
`X-Forwarded-For` 标头注入用户的 IP 地址(Pod 仅在连接的另一端看到 ELB 的 IP 地址)。
|
||||
|
||||
TCP 和 SSL 选择第 4 层代理:ELB 转发流量而不修改报头。
|
||||
|
||||
在某些端口处于安全状态而其他端口未加密的混合使用环境中,可以使用以下注解:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
|
||||
```
|
||||
|
||||
<!--
|
||||
In the above example, if the Service contained three ports, `80`, `443`, and
|
||||
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
|
||||
|
||||
From Kubernetes v1.9 onwards you can use
|
||||
[predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
|
||||
with HTTPS or SSL listeners for your Services.
|
||||
To see which policies are available for use, you can use the `aws` command line tool:
|
||||
-->
|
||||
在上例中,如果服务包含 `80`、`443` 和 `8443` 三个端口, 那么 `443` 和 `8443` 将使用 SSL 证书,
|
||||
而 `80` 端口将转发 HTTP 数据包。
|
||||
|
||||
从 Kubernetes v1.9 起可以使用
|
||||
[预定义的 AWS SSL 策略](https://docs.aws.amazon.com/zh_cn/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
|
||||
为你的服务使用 HTTPS 或 SSL 侦听器。
|
||||
要查看可以使用哪些策略,可以使用 `aws` 命令行工具:
|
||||
|
||||
```bash
|
||||
aws elb describe-load-balancer-policies --query 'PolicyDescriptions[].PolicyName'
|
||||
```
|
||||
|
||||
<!--
|
||||
You can then specify any one of those policies using the
|
||||
"`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`"
|
||||
annotation; for example:
|
||||
-->
|
||||
然后,你可以使用 "`service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`"
|
||||
注解;例如:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
|
||||
```
|
||||
|
||||
<!--
|
||||
#### PROXY protocol support on AWS
|
||||
|
||||
To enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)
|
||||
support for clusters running on AWS, you can use the following service
|
||||
annotation:
|
||||
-->
|
||||
#### AWS 上的 PROXY 协议支持
|
||||
|
||||
要为在 AWS 上运行的集群启用
[PROXY 协议](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)支持,
你可以使用以下服务注解:
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
|
||||
```
|
||||
|
||||
<!--
|
||||
Since version 1.3.0, the use of this annotation applies to all ports proxied by the ELB
|
||||
and cannot be configured otherwise.
|
||||
-->
|
||||
从 1.3.0 版开始,此注解的使用适用于 ELB 代理的所有端口,并且不能进行其他配置。
|
||||
|
||||
<!--
|
||||
#### ELB Access Logs on AWS
|
||||
|
||||
There are several annotations to manage access logs for ELB Services on AWS.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`
|
||||
controls whether access logs are enabled.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`
|
||||
controls the interval in minutes for publishing the access logs. You can specify
|
||||
an interval of either 5 or 60 minutes.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`
|
||||
controls the name of the Amazon S3 bucket where load balancer access logs are
|
||||
stored.
|
||||
|
||||
The annotation `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`
|
||||
specifies the logical hierarchy you created for your Amazon S3 bucket.
|
||||
-->
|
||||
#### AWS 上的 ELB 访问日志
|
||||
|
||||
有几个注解可用于管理 AWS 上 ELB 服务的访问日志。
|
||||
|
||||
注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled` 控制是否启用访问日志。
|
||||
|
||||
注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval`
|
||||
控制发布访问日志的时间间隔(以分钟为单位)。你可以指定 5 分钟或 60 分钟的间隔。
|
||||
|
||||
注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`
|
||||
控制存储负载均衡器访问日志的 Amazon S3 存储桶的名称。
|
||||
|
||||
注解 `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`
|
||||
指定为 Amazon S3 存储桶创建的逻辑层次结构。
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
# Specifies whether access logs are enabled for the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
|
||||
|
||||
# The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
|
||||
|
||||
# The name of the Amazon S3 bucket where the access logs are stored
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
|
||||
|
||||
# The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
# 指定是否为负载均衡器启用访问日志
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
|
||||
# 发布访问日志的时间间隔。你可以将其设置为 5 分钟或 60 分钟。
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
|
||||
# 用来存放访问日志的 Amazon S3 Bucket 名称
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
|
||||
# 你为 Amazon S3 Bucket 所创建的逻辑层次结构,例如 `my-bucket-prefix/prod`
|
||||
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Connection Draining on AWS
|
||||
|
||||
Connection draining for Classic ELBs can be managed with the annotation
|
||||
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled` set
|
||||
to the value of `"true"`. The annotation
|
||||
`service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout` can
|
||||
also be used to set maximum time, in seconds, to keep the existing connections open before
|
||||
deregistering the instances.
|
||||
-->
|
||||
#### AWS 上的连接排空
|
||||
|
||||
可以将注解 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`
设置为 `"true"` 来管理经典 ELB 的连接排空。
|
||||
注解 `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`
|
||||
也可以用于设置最大时间(以秒为单位),以保持现有连接在注销实例之前保持打开状态。
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Other ELB annotations
|
||||
|
||||
There are other annotations to manage Classic Elastic Load Balancers that are described below.
|
||||
-->
|
||||
#### 其他 ELB 注解
|
||||
|
||||
还有其他一些注解,用于管理经典弹性负载均衡器,如下所述。
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
# The time, in seconds, that the connection is allowed to be idle (no data has been sent
|
||||
# over the connection) before it is closed by the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
|
||||
|
||||
# Specifies whether cross-zone load balancing is enabled for the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
|
||||
|
||||
# A comma-separated list of key-value pairs which will be recorded as
|
||||
# additional tags in the ELB.
|
||||
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
|
||||
|
||||
# The number of successive successful health checks required for a backend to
|
||||
# be considered healthy for traffic. Defaults to 2, must be between 2 and 10
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
|
||||
|
||||
# The number of unsuccessful health checks required for a backend to be
|
||||
# considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
|
||||
|
||||
# The approximate interval, in seconds, between health checks of an
|
||||
# individual instance. Defaults to 10, must be between 5 and 300
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
|
||||
|
||||
# The amount of time, in seconds, during which no response means a failed
|
||||
# health check. This value must be less than the service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
|
||||
# value. Defaults to 5, must be between 2 and 60
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
|
||||
|
||||
# A list of existing security groups to be configured on the ELB created. Unlike the annotation
|
||||
# service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other
|
||||
# security groups previously assigned to the ELB and also overrides the creation
|
||||
# of a uniquely generated security group for this ELB.
|
||||
# The first security group ID on this list is used as a source to permit incoming traffic to
|
||||
# target worker nodes (service traffic and health checks).
|
||||
# If multiple ELBs are configured with the same security group ID, only a single permit line
|
||||
# will be added to the worker node security groups, that means if you delete any
|
||||
# of those ELBs it will remove the single permit line and block access for all ELBs that shared the same security group ID.
|
||||
# This can cause a cross-service outage if not used properly
|
||||
service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
|
||||
|
||||
# A list of additional security groups to be added to the created ELB, this leaves the uniquely
|
||||
# generated security group in place, this ensures that every ELB
|
||||
# has a unique security group ID and a matching permit line to allow traffic to the target worker nodes
|
||||
# (service traffic and health checks).
|
||||
# Security groups defined here can be shared between services.
|
||||
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
|
||||
|
||||
# A comma separated list of key-value pairs which are used
|
||||
# to select the target nodes for the load balancer
|
||||
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
# 按秒计的时间,表示负载均衡器关闭连接之前连接可以保持空闲
|
||||
# (连接上无数据传输)的时间长度
|
||||
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
|
||||
|
||||
# 指定该负载均衡器上是否启用跨区的负载均衡能力
|
||||
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
|
||||
|
||||
# 逗号分隔列表值,每一项都是一个键-值耦对,会作为额外的标签记录于 ELB 中
|
||||
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
|
||||
|
||||
# 将某后端视为健康、可接收请求之前需要达到的连续成功健康检查次数。
|
||||
# 默认为 2,必须介于 2 和 10 之间
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
|
||||
|
||||
# 将某后端视为不健康、不可接收请求之前需要达到的连续不成功健康检查次数。
|
||||
# 默认为 6,必须介于 2 和 10 之间
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
|
||||
|
||||
# 对每个实例进行健康检查时,连续两次检查之间的大致间隔秒数
|
||||
# 默认为 10,必须介于 5 和 300 之间
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
|
||||
|
||||
# 时长秒数,在此期间没有响应意味着健康检查失败
|
||||
# 此值必须小于 service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval
|
||||
# 默认值为 5,必须介于 2 和 60 之间
|
||||
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
|
||||
|
||||
# 由已有的安全组所构成的列表,可以配置到所创建的 ELB 之上。
|
||||
# 与注解 service.beta.kubernetes.io/aws-load-balancer-extra-security-groups 不同,
|
||||
# 这一设置会替代掉之前指定给该 ELB 的所有其他安全组,也会覆盖掉为此
|
||||
# ELB 所唯一创建的安全组。
|
||||
# 此列表中的第一个安全组 ID 被用来作为决策源,以允许入站流量流入目标工作节点
|
||||
# (包括服务流量和健康检查)。
|
||||
# 如果多个 ELB 配置了相同的安全组 ID,为工作节点安全组添加的允许规则行只有一个,
|
||||
# 这意味着如果你删除了这些 ELB 中的任何一个,都会导致该规则记录被删除,
|
||||
# 以至于所有共享该安全组 ID 的其他 ELB 都无法访问该节点。
|
||||
# 此注解如果使用不当,会导致跨服务的不可用状况。
|
||||
service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"
|
||||
|
||||
# 额外的安全组列表,将被添加到所创建的 ELB 之上。
|
||||
# 添加时,会保留为 ELB 所专门创建的安全组。
|
||||
# 这样会确保每个 ELB 都有一个唯一的安全组 ID 和与之对应的允许规则记录,
|
||||
# 允许请求(服务流量和健康检查)发送到目标工作节点。
|
||||
# 这里定义的安全组可以在多个服务之间共享。
|
||||
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
|
||||
|
||||
# 用逗号分隔的一个键-值偶对列表,用来为负载均衡器选择目标节点
|
||||
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Network Load Balancer support on AWS {#aws-nlb-support}
|
||||
-->
|
||||
#### AWS 上网络负载均衡器支持 {#aws-nlb-support}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.15" state="beta" >}}
|
||||
|
||||
<!--
|
||||
To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernetes.io/aws-load-balancer-type` with the value set to `nlb`.
|
||||
-->
|
||||
要在 AWS 上使用网络负载均衡器,可以使用注解
|
||||
`service.beta.kubernetes.io/aws-load-balancer-type`,将其取值设为 `nlb`。
|
||||
|
||||
```yaml
|
||||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
NLB only works with certain instance classes; see the
|
||||
[AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
|
||||
on Elastic Load Balancing for a list of supported instance types.
|
||||
-->
|
||||
NLB 仅适用于某些实例类。有关所支持的实例类型的列表,
请参阅 Elastic Load Balancing 的
[AWS 文档](https://docs.aws.amazon.com/zh_cn/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Unlike Classic Elastic Load Balancers, Network Load Balancers (NLBs) forward the
|
||||
client's IP address through to the node. If a Service's `.spec.externalTrafficPolicy`
|
||||
is set to `Cluster`, the client's IP address is not propagated to the end
|
||||
Pods.
|
||||
|
||||
By setting `.spec.externalTrafficPolicy` to `Local`, the client IP addresses is
|
||||
propagated to the end Pods, but this could result in uneven distribution of
|
||||
traffic. Nodes without any Pods for a particular LoadBalancer Service will fail
|
||||
the NLB Target Group's health check on the auto-assigned
|
||||
`.spec.healthCheckNodePort` and not receive any traffic.
|
||||
-->
|
||||
与经典弹性负载均衡器不同,网络负载均衡器(NLB)将客户端的 IP 地址转发到该节点。
|
||||
如果服务的 `.spec.externalTrafficPolicy` 设置为 `Cluster` ,则客户端的 IP 地址不会传达到最终的 Pod。
|
||||
|
||||
通过将 `.spec.externalTrafficPolicy` 设置为 `Local`,客户端 IP 地址将传播到最终的 Pod,
|
||||
但这可能导致流量分配不均。
|
||||
没有针对特定 LoadBalancer 服务的任何 Pod 的节点将无法通过自动分配的
|
||||
`.spec.healthCheckNodePort` 进行 NLB 目标组的运行状况检查,并且不会收到任何流量。
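As a sketch (the Service name, selector, and ports are placeholders), combining the NLB annotation with `externalTrafficPolicy: Local` preserves the client source IP at the Pods, with the uneven traffic spreading trade-off described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend                  # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  # Local keeps the client IP, but only nodes that actually run a
  # backend Pod pass the NLB health check on healthCheckNodePort.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: my-frontend
  ports:
  - port: 80
    targetPort: 8080
```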
<!--
|
||||
In order to achieve even traffic, either use a DaemonSet or specify a
|
||||
[pod anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
|
||||
to not locate on the same node.
|
||||
|
||||
You can also use NLB Services with the [internal load balancer](/docs/concepts/services-networking/service/#internal-load-balancer)
|
||||
annotation.
|
||||
|
||||
In order for client traffic to reach instances behind an NLB, the Node security
|
||||
groups are modified with the following IP rules:
|
||||
-->
|
||||
为了获得均衡流量,请使用 DaemonSet 或指定
|
||||
[Pod 反亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)
|
||||
使其不在同一节点上。
|
||||
|
||||
你还可以将 NLB 服务与[内部负载均衡器](/zh-cn/docs/concepts/services-networking/service/#internal-load-balancer)
|
||||
注解一起使用。
|
||||
|
||||
为了使客户端流量能够到达 NLB 后面的实例,使用以下 IP 规则修改了节点安全组:
|
||||
|
||||
<!--
|
||||
| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
|
||||
|------|----------|---------|------------|---------------------|
|
||||
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
|
||||
| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
|
||||
| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
|
||||
-->
|
||||
| 规则 | 协议 | 端口 | IpRange(s) | IpRange 描述 |
|
||||
|------|----------|---------|------------|---------------------|
|
||||
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
|
||||
| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (默认值为 `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
|
||||
| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (默认值为 `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
|
||||
|
||||
<!--
|
||||
In order to limit which client IP's can access the Network Load Balancer,
|
||||
specify `loadBalancerSourceRanges`.
|
||||
-->
|
||||
为了限制哪些客户端 IP 可以访问网络负载均衡器,请指定 `loadBalancerSourceRanges`。
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
loadBalancerSourceRanges:
|
||||
- "143.231.0.0/16"
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
If `.spec.loadBalancerSourceRanges` is not set, Kubernetes
|
||||
allows traffic from `0.0.0.0/0` to the Node Security Group(s). If nodes have
|
||||
public IP addresses, be aware that non-NLB traffic can also reach all instances
|
||||
in those modified security groups.
|
||||
-->
|
||||
如果未设置 `.spec.loadBalancerSourceRanges` ,则 Kubernetes 允许从 `0.0.0.0/0` 到节点安全组的流量。
|
||||
如果节点具有公共 IP 地址,请注意,非 NLB 流量也可以到达那些修改后的安全组中的所有实例。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Further documentation on annotations for Elastic IPs and other common use-cases may be found
|
||||
in the [AWS Load Balancer Controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/).
|
||||
-->
|
||||
有关弹性 IP 注解和其他常见用例的更多文档,
请参阅 [AWS 负载均衡控制器文档](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)。
|
||||
|
||||
<!--
|
||||
### `type: ExternalName` {#externalname}
|
@ -1802,6 +1347,9 @@ either:
|
|||
for all Service types other than `ExternalName`.
|
||||
* For IPv4 endpoints, the DNS system creates A records.
|
||||
* For IPv6 endpoints, the DNS system creates AAAA records.
|
||||
|
||||
When you define a headless Service without a selector, the `port` must
|
||||
match the `targetPort`.
|
||||
-->
|
||||
### 无选择算符的服务 {#without-selectors}
|
@ -1813,6 +1361,8 @@ either:
|
|||
* 对于 IPv4 端点,DNS 系统创建 A 条记录。
|
||||
* 对于 IPv6 端点,DNS 系统创建 AAAA 条记录。
|
||||
|
||||
当你定义无选择算符的无头服务时,`port` 必须与 `targetPort` 匹配。
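A minimal sketch of such a headless, selector-less Service follows (the name and port are placeholders); note that `port` and `targetPort` carry the same value:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db                  # hypothetical name
spec:
  clusterIP: None                    # headless: no virtual IP is allocated
  # No selector: endpoints are managed manually (for example through a
  # custom EndpointSlice) rather than by Kubernetes.
  ports:
  - port: 5432
    targetPort: 5432                 # must match port for this kind of Service
```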
<!--
|
||||
## Discovering services
@ -865,7 +865,7 @@ For example:
|
|||
1. Preheat oven to 350˚F
|
||||
|
||||
1. Prepare the batter, and pour into springform pan.
|
||||
`{{</* note */>}}Grease the pan for best results.{{</* /note */>}}`
|
||||
{{</* note */>}}Grease the pan for best results.{{</* /note */>}}
|
||||
|
||||
1. Bake for 20-25 minutes or until set.
@ -0,0 +1,455 @@
|
|||
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
|
||||
<svg
|
||||
viewBox="0 0 1058.2677 595.27557"
|
||||
height="595.27557"
|
||||
width="1058.2677"
|
||||
xml:space="preserve"
|
||||
id="svg893"
|
||||
version="1.1"
|
||||
sodipodi:docname="sourceip-externaltrafficpolicy.svg"
|
||||
inkscape:version="1.2.2 (b0a8486, 2022-12-01)"
|
||||
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
|
||||
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
xmlns:svg="http://www.w3.org/2000/svg"
|
||||
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
|
||||
xmlns:cc="http://creativecommons.org/ns#"
|
||||
xmlns:dc="http://purl.org/dc/elements/1.1/"><sodipodi:namedview
|
||||
id="namedview481"
|
||||
pagecolor="#ffffff"
|
||||
bordercolor="#000000"
|
||||
borderopacity="0.25"
|
||||
inkscape:showpageshadow="2"
|
||||
inkscape:pageopacity="0.0"
|
||||
inkscape:pagecheckerboard="0"
|
||||
inkscape:deskcolor="#d1d1d1"
|
||||
showgrid="false"
|
||||
inkscape:zoom="0.79291007"
|
||||
inkscape:cx="527.8026"
|
||||
inkscape:cy="351.8684"
|
||||
inkscape:window-width="1544"
|
||||
inkscape:window-height="818"
|
||||
inkscape:window-x="134"
|
||||
inkscape:window-y="25"
|
||||
inkscape:window-maximized="0"
|
||||
inkscape:current-layer="g901" /><metadata
|
||||
id="metadata899"><rdf:RDF><cc:Work
|
||||
rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
|
||||
rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title>Source IP with externalTrafficPolicy</dc:title></cc:Work></rdf:RDF></metadata><defs
|
||||
id="defs897"><clipPath
|
||||
id="clipPath909"
|
||||
clipPathUnits="userSpaceOnUse"><path
|
||||
id="path907"
|
||||
clip-rule="evenodd"
|
||||
d="M 0,0.028 H 793.672 V 446.456 H 0 Z" /></clipPath></defs><g
|
||||
transform="matrix(1.3333333,0,0,-1.3333333,0,595.27559)"
|
||||
id="g901"><text
|
||||
transform="scale(1,-1)"
|
||||
style="font-variant:normal;font-weight:normal;font-size:18px;font-family:sans-serif;writing-mode:lr-tb;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
|
||||
id="text945"
|
||||
x="625.09601"
|
||||
y="-225.94901"><tspan
|
||||
x="625.09601 637.08398 647.09198 652.99597 662.086 665.992 674.992"
|
||||
y="-225.94901"
|
||||
id="tspan943">Service</tspan></text><path
|
||||
id="path947"
|
||||
style="fill:#eeeeee;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
d="m 316.261,408.84 c -2.097,11.934 7.711,23.159 17.547,23.159 3.09,0.085 6.293,-1.049 8.929,-2.806 2.494,5.414 7.172,8.702 12.444,8.702 3.487,-0.17 7.172,-1.785 9.808,-4.677 1.899,4.762 5.868,7.569 10.176,7.569 3.629,0 6.945,-2.041 9.043,-5.159 2.409,3.203 6.038,5.159 9.808,5.159 6.151,0 11.338,-5.018 12.444,-12.076 5.953,-1.956 10.233,-8.362 10.233,-15.675 0,-2.24 -0.312,-4.366 -1.191,-6.435 2.523,-3.572 4.026,-7.965 4.026,-12.473 0,-10.233 -6.747,-18.85 -15.478,-20.324 0,-9.779 -6.718,-17.461 -15.108,-17.461 -2.892,0 -5.641,0.963 -8.051,2.749 -2.211,-8.617 -9.184,-14.712 -17.008,-14.712 -5.839,0 -11.338,3.629 -14.655,9.355 -3.089,-2.211 -1.474,-3.487 -10.346,-3.487 -7.285,0 -14.003,4.507 -17.518,11.821 -8.419,0.17 -12.728,5.981 -12.728,13.294 0,3.402 1.106,6.491 3.118,9.014 -3.628,2.155 -5.612,6.378 -5.612,11.254 0,6.831 4.393,12.444 10.119,13.209 z" /><path
|
||||
d="m 316.261,408.84 c -2.097,11.934 7.711,23.159 17.547,23.159 3.09,0.085 6.293,-1.049 8.929,-2.806 2.494,5.414 7.172,8.702 12.444,8.702 3.487,-0.17 7.172,-1.785 9.808,-4.677 1.899,4.762 5.868,7.569 10.176,7.569 3.629,0 6.945,-2.041 9.043,-5.159 2.409,3.203 6.038,5.159 9.808,5.159 6.151,0 11.338,-5.018 12.444,-12.076 5.953,-1.956 10.233,-8.362 10.233,-15.675 0,-2.24 -0.312,-4.366 -1.191,-6.435 2.523,-3.572 4.026,-7.965 4.026,-12.473 0,-10.233 -6.747,-18.85 -15.478,-20.324 0,-9.779 -6.718,-17.461 -15.108,-17.461 -2.892,0 -5.641,0.963 -8.051,2.749 -2.211,-8.617 -9.184,-14.712 -17.008,-14.712 -5.839,0 -11.338,3.629 -14.655,9.355 -3.089,-2.211 -1.474,-3.487 -10.346,-3.487 -7.285,0 -14.003,4.507 -17.518,11.821 -8.419,0.17 -12.728,5.981 -12.728,13.294 0,3.402 1.106,6.491 3.118,9.014 -3.628,2.155 -5.612,6.378 -5.612,11.254 0,6.831 4.393,12.444 10.119,13.209 z"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path951" /><path
|
||||
d="m 316.261,408.84 c 0.114,-1.105 0.567,-2.353 0.851,-3.401"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path955" /><path
|
||||
d="m 342.737,429.193 c 1.191,-0.85 2.665,-2.013 3.657,-3.175"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path959" /><path
|
||||
d="m 364.989,433.218 c -0.425,-0.935 -0.68,-2.069 -0.935,-3.118"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path963" /><path
|
||||
d="m 384.208,435.628 c -0.794,-1.078 -1.219,-2.495 -1.729,-3.799"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path967" /><path
|
||||
d="m 406.46,428.711 c 0.113,-0.822 0.652,-2.608 0.425,-3.005"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path971" /><path
|
||||
d="m 415.502,406.601 c -0.907,-2.268 -2.097,-4.28 -3.77,-5.925"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path975" /><path
|
||||
d="m 404.107,373.804 c 0.425,3.628 -1.984,12.558 -8.731,15.902"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path979" /><path
|
||||
d="m 380.891,359.092 c 0.426,1.446 0.596,2.807 0.681,4.224"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path983" /><path
|
||||
d="m 349.285,353.735 c -0.85,1.162 -1.361,2.494 -1.899,3.883"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path987" /><path
|
||||
d="m 321.364,362.069 c 0.992,0.17 1.984,0.453 2.919,0.85"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path991" /><path
|
||||
d="m 311.754,384.377 c 1.729,-1.162 3.714,-2.182 6.605,-1.786"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path995" /><path
|
||||
id="path997"
|
||||
style="fill:#666666;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
d="m 342.992,270.708 h -65.197 v 34.016 h 130.394 v -34.016 z" /><path
|
||||
d="m 342.992,270.708 h -65.197 v 34.016 h 130.394 v -34.016 z"
|
||||
style="fill:none;stroke:#666666;stroke-width:0.75;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1001" /><path
|
||||
d="M 362.835,344.409 V 324.538 H 342.992 V 304.724"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1011" /><g
|
||||
id="g2532"><path
|
||||
d="m 591.08,257.754 -16.837,-8.051 c -0.057,-0.028 -0.142,-0.056 -0.199,-0.113 -0.482,-0.284 -0.907,-0.68 -1.19,-1.162 -0.142,-0.284 -0.284,-0.567 -0.341,-0.879 l -4.167,-18.028 c -0.056,-0.256 -0.085,-0.482 -0.085,-0.709 0,-0.567 0.17,-1.134 0.454,-1.616 0.028,-0.028 0.057,-0.085 0.085,-0.142 0.057,-0.085 0.113,-0.17 0.17,-0.255 l 11.65,-14.485 c 0.256,-0.312 0.567,-0.567 0.908,-0.765 0.481,-0.284 1.048,-0.425 1.615,-0.425 v 0 h 18.652 v 0 c 0.567,0 1.134,0.141 1.616,0.425 0.34,0.198 0.652,0.482 0.907,0.794 l 11.651,14.456 c 0.085,0.142 0.17,0.284 0.255,0.397 0.283,0.51 0.425,1.049 0.425,1.616 0,0.227 0,0.482 -0.057,0.709 l -4.167,18.028 c -0.085,0.312 -0.17,0.595 -0.34,0.879 -0.255,0.482 -0.709,0.878 -1.191,1.162 -0.056,0.057 -0.141,0.085 -0.226,0.113 l -16.81,8.051 c -0.453,0.198 -0.935,0.311 -1.389,0.311 -0.056,0 -0.113,0 -0.141,0 -0.454,-0.028 -0.851,-0.141 -1.248,-0.311 z"
|
||||
style="fill:#326ce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path917" /><path
|
||||
d="m 590.995,259.029 -17.773,-8.475 c -0.057,-0.029 -0.142,-0.085 -0.198,-0.114 -0.539,-0.312 -0.964,-0.737 -1.248,-1.247 -0.17,-0.283 -0.311,-0.595 -0.368,-0.935 l -4.394,-19.021 c -0.057,-0.255 -0.085,-0.51 -0.085,-0.765 0,-0.596 0.17,-1.162 0.454,-1.673 0.028,-0.056 0.056,-0.113 0.085,-0.17 0.056,-0.085 0.141,-0.17 0.198,-0.255 l 12.303,-15.279 c 0.255,-0.34 0.595,-0.623 0.963,-0.822 0.511,-0.311 1.106,-0.453 1.701,-0.453 v 0 h 19.673 c 0,0 0,0 0.028,0 0.595,0 1.162,0.17 1.672,0.453 0.369,0.199 0.709,0.482 0.964,0.822 l 12.302,15.279 c 0.114,0.142 0.199,0.284 0.284,0.425 0.283,0.511 0.453,1.077 0.453,1.673 0,0.255 -0.028,0.51 -0.085,0.765 l -4.393,19.021 c -0.085,0.34 -0.199,0.652 -0.369,0.935 -0.283,0.51 -0.737,0.935 -1.247,1.247 -0.057,0.029 -0.142,0.085 -0.227,0.114 l -17.773,8.475 c -0.454,0.227 -0.964,0.34 -1.446,0.34 -0.056,0 -0.113,0 -0.17,0 -0.453,-0.028 -0.879,-0.141 -1.304,-0.34 z m 1.333,-0.964 c 0.028,0 0.085,0 0.141,0 0.454,0 0.936,-0.113 1.389,-0.311 l 16.81,-8.051 c 0.085,-0.028 0.17,-0.056 0.226,-0.113 0.482,-0.284 0.936,-0.68 1.191,-1.162 0.17,-0.284 0.255,-0.567 0.34,-0.879 l 4.167,-18.028 c 0.057,-0.227 0.057,-0.482 0.057,-0.709 0,-0.567 -0.142,-1.134 -0.425,-1.616 -0.085,-0.113 -0.17,-0.255 -0.255,-0.397 l -11.651,-14.456 c -0.255,-0.312 -0.567,-0.596 -0.907,-0.794 -0.482,-0.284 -1.049,-0.425 -1.616,-0.425 v 0 h -18.652 v 0 c -0.567,0 -1.134,0.141 -1.615,0.425 -0.341,0.198 -0.652,0.453 -0.908,0.765 l -11.65,14.485 c -0.057,0.085 -0.113,0.17 -0.17,0.255 -0.028,0.057 -0.057,0.114 -0.085,0.142 -0.284,0.482 -0.454,1.049 -0.454,1.588 0,0.255 0.029,0.481 0.085,0.737 l 4.167,18.028 c 0.057,0.312 0.199,0.595 0.341,0.879 0.283,0.482 0.68,0.878 1.19,1.162 0.057,0.057 0.142,0.085 0.199,0.113 l 16.837,8.051 c 0.397,0.17 0.794,0.283 1.248,0.311 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path919" /><path
|
||||
d="m 577.106,228.925 h 8.248 v -5.782 h -8.248 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path921" /><path
|
||||
d="m 588.359,228.925 h 8.221 v -5.782 h -8.221 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path923" /><path
|
||||
d="m 599.584,228.925 h 8.249 v -5.782 h -8.249 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path925" /><path
|
||||
d="m 585.95,246.047 h 13.039 v -5.783 H 585.95 Z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path927" /><path
|
||||
id="path935"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 592.469,240.264 v -5.669 h 11.225 v -5.67" /><path
|
||||
id="path939"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 592.469,240.264 v -5.669 h 0.029 v -5.67" /><path
|
||||
id="path931"
|
||||
style="fill:none;stroke:#ffffff;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 592.469,240.264 v -5.669 h -11.225 v -5.67" /><path
|
||||
id="path1015"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
d="m 592.469,259.369 v 2.948" /></g><path
|
||||
d="m 592.469,264.245 v 2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1019" /><path
|
||||
d="m 592.469,269.092 v 2.977"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1023" /><path
|
||||
d="m 592.469,273.968 v 2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1027" /><path
|
||||
d="m 592.469,278.843 v 2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1031" /><path
|
||||
d="m 592.469,283.719 v 2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1035" /><path
|
||||
d="m 591.619,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1039" /><path
|
||||
d="m 586.743,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1043" /><path
|
||||
d="M 581.868,287.716 H 578.92"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1047" /><path
|
||||
d="m 577.02,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1051" /><path
|
||||
d="m 572.145,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1055" /><path
|
||||
d="m 567.269,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1059" /><path
|
||||
d="m 562.394,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1063" /><path
|
||||
d="M 557.546,287.716 H 554.57"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1067" /><path
|
||||
d="m 552.671,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1071" /><path
|
||||
d="m 547.795,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1075" /><path
|
||||
d="m 542.92,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1079" /><path
|
||||
d="m 538.072,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1083" /><path
|
||||
d="m 533.197,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1087" /><path
|
||||
d="m 528.321,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1091" /><path
|
||||
d="m 523.474,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1095" /><path
|
||||
d="m 518.598,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1099" /><path
|
||||
d="m 513.723,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1103" /><path
|
||||
d="m 508.847,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1107" /><path
|
||||
d="m 504,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1111" /><path
|
||||
d="m 499.124,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1115" /><path
|
||||
d="m 494.249,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1119" /><path
|
||||
d="m 489.402,287.716 h -2.977"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1123" /><path
|
||||
d="M 484.526,287.716 H 481.55"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1127" /><path
|
||||
d="m 479.65,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1131" /><path
|
||||
d="m 474.775,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1135" /><path
|
||||
d="m 469.928,287.716 h -2.977"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1139" /><path
|
||||
d="m 465.052,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1143" /><path
|
||||
d="m 460.176,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1147" /><path
|
||||
d="m 455.329,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1151" /><path
|
||||
d="m 450.454,287.716 h -2.977"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1155" /><path
|
||||
d="M 445.578,287.716 H 442.63"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1159" /><path
|
||||
d="m 440.702,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1163" /><path
|
||||
d="m 435.855,287.716 h -2.976"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1167" /><path
|
||||
d="m 430.98,287.716 h -2.949"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1171" /><path
|
||||
d="m 426.104,287.716 h -2.948"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1175" /><path
|
||||
d="m 421.228,287.716 h -1.7"
|
||||
style="fill:none;stroke:#326ce5;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1179" /><path
|
||||
id="path1181"
|
||||
style="fill:#326ce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
d="m 408.189,287.716 11.877,-3.969 v 7.909 z" /><text
|
||||
transform="scale(1,-1)"
|
||||
style="font-variant:normal;font-weight:normal;font-size:14.995px;font-family:sans-serif;writing-mode:lr-tb;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
|
||||
id="text1187"
|
||||
x="492.77063"
|
||||
y="-296.13135">配置</text><text
|
||||
transform="scale(1,-1)"
|
||||
style="font-variant:normal;font-weight:normal;font-size:18px;font-family:sans-serif;writing-mode:lr-tb;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
|
||||
id="text1199"
|
||||
x="625.09601"
|
||||
y="-225.864"><tspan
|
||||
x="625.09601 637.08398 647.09198 652.99597 662.086 665.992 674.992"
|
||||
y="-225.864"
|
||||
id="tspan1197">Service</tspan></text><text
|
||||
transform="scale(1,-1)"
|
||||
style="font-variant:normal;font-weight:normal;font-size:14.995px;font-family:sans-serif;writing-mode:lr-tb;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
|
||||
id="text1205"
|
||||
x="426.61401"
|
||||
y="-69.798935"><tspan
|
||||
x="426.61401 437.4104 445.7926 454.1748 462.55701 466.74063"
|
||||
y="-69.798935"
|
||||
id="tspan1203">Node 2</tspan></text><g
|
||||
id="g2479"><path
|
||||
d="m 234.283,132.632 -16.837,-8.022 c -0.057,-0.028 -0.142,-0.085 -0.199,-0.113 -0.482,-0.284 -0.907,-0.68 -1.19,-1.162 -0.142,-0.284 -0.255,-0.567 -0.34,-0.879 l -4.139,-18.057 c -0.057,-0.255 -0.085,-0.482 -0.085,-0.708 0,-0.567 0.142,-1.134 0.425,-1.616 0.028,-0.028 0.057,-0.085 0.085,-0.142 0.057,-0.085 0.114,-0.17 0.17,-0.255 l 11.651,-14.457 c 0.255,-0.311 0.567,-0.595 0.907,-0.793 0.51,-0.255 1.049,-0.426 1.615,-0.426 0,0 0,0 0,0 h 18.681 c 0,0 0,0 0,0 0.567,0 1.134,0.171 1.616,0.454 0.34,0.198 0.651,0.454 0.907,0.765 l 11.622,14.485 c 0.113,0.114 0.198,0.256 0.283,0.369 0.284,0.51 0.425,1.049 0.425,1.616 0,0.226 -0.028,0.482 -0.085,0.708 l -4.138,18.057 c -0.085,0.312 -0.199,0.595 -0.369,0.879 -0.283,0.482 -0.68,0.878 -1.162,1.162 -0.085,0.057 -0.142,0.085 -0.227,0.113 l -16.809,8.022 c -0.454,0.227 -0.936,0.341 -1.418,0.341 -0.056,0 -0.113,0 -0.141,0 -0.454,-0.029 -0.851,-0.142 -1.248,-0.341 z"
|
||||
style="fill:#326ce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path1207" /><path
|
||||
d="m 234.198,133.936 -17.744,-8.475 c -0.085,-0.057 -0.171,-0.085 -0.227,-0.114 -0.51,-0.311 -0.964,-0.737 -1.247,-1.247 -0.171,-0.283 -0.312,-0.595 -0.369,-0.935 l -4.394,-19.049 c -0.056,-0.255 -0.085,-0.51 -0.085,-0.766 0,-0.595 0.17,-1.162 0.454,-1.672 0.028,-0.057 0.057,-0.113 0.085,-0.142 0.085,-0.113 0.142,-0.198 0.198,-0.283 l 12.303,-15.279 c 0.255,-0.34 0.595,-0.595 0.963,-0.822 0.511,-0.283 1.106,-0.453 1.701,-0.453 0,0 0,0 0,0 h 19.701 c 0,0 0,0 0,0 0.595,0 1.191,0.17 1.701,0.453 0.368,0.227 0.708,0.482 0.964,0.822 l 12.302,15.279 c 0.085,0.142 0.198,0.283 0.283,0.425 0.284,0.51 0.454,1.077 0.454,1.672 0,0.256 -0.028,0.511 -0.085,0.766 l -4.394,19.049 c -0.085,0.34 -0.198,0.652 -0.368,0.935 -0.312,0.51 -0.737,0.936 -1.248,1.247 -0.085,0.029 -0.141,0.057 -0.226,0.114 l -17.774,8.475 c -0.453,0.227 -0.963,0.34 -1.474,0.34 -0.056,0 -0.113,0 -0.17,0 -0.453,-0.028 -0.878,-0.141 -1.304,-0.34 z m 1.333,-0.992 c 0.028,0.029 0.085,0.029 0.141,0.029 0.482,0 0.964,-0.114 1.418,-0.341 l 16.809,-8.022 c 0.085,-0.028 0.142,-0.056 0.227,-0.113 0.482,-0.284 0.879,-0.68 1.162,-1.162 0.17,-0.284 0.284,-0.567 0.369,-0.879 l 4.138,-18.057 c 0.057,-0.226 0.085,-0.482 0.085,-0.708 0,-0.567 -0.141,-1.106 -0.425,-1.616 -0.085,-0.113 -0.17,-0.255 -0.283,-0.369 L 247.55,87.221 c -0.256,-0.311 -0.567,-0.567 -0.907,-0.765 -0.482,-0.283 -1.049,-0.454 -1.616,-0.454 0,0 0,0 0,0 h -18.681 c 0,0 0,0 0,0 -0.566,0 -1.105,0.142 -1.615,0.426 -0.34,0.198 -0.652,0.482 -0.907,0.793 l -11.651,14.457 c -0.056,0.085 -0.113,0.17 -0.17,0.255 -0.028,0.057 -0.057,0.114 -0.085,0.142 -0.283,0.482 -0.425,1.049 -0.425,1.616 0,0.226 0.028,0.453 0.085,0.708 l 4.139,18.057 c 0.085,0.312 0.198,0.595 0.34,0.879 0.283,0.482 0.708,0.878 1.19,1.162 0.057,0.028 0.142,0.085 0.199,0.113 l 16.837,8.022 c 0.397,0.199 0.794,0.312 1.248,0.312 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path1209" /><path
|
||||
d="m 235.644,123.42 c -0.397,0 -11.338,-5.386 -11.48,-5.641 -0.34,-0.68 -2.807,-12.132 -2.665,-12.388 0.085,-0.141 1.871,-2.409 3.969,-5.045 l 3.826,-4.762 h 6.293 l 6.293,-0.029 3.997,5.018 4.025,4.989 -1.417,6.151 c -0.765,3.373 -1.446,6.208 -1.531,6.264 -0.198,0.199 -11.111,5.414 -11.31,5.443 z m 0.539,-2.693 2.579,-0.765 -2.579,-0.737 -2.58,0.737 z m -2.58,-1.049 2.438,-0.709 -0.028,-3.344 -2.41,1.332 z m 5.159,0 v -2.721 l -2.381,-1.332 -0.028,3.344 z m -6.208,-4.564 2.552,-0.737 -2.552,-0.765 -2.579,0.765 z m 6.237,0 2.579,-0.737 -2.579,-0.765 -2.58,0.765 z m -8.816,-1.02 2.409,-0.737 -0.028,-3.345 -2.381,1.332 z m 5.131,0 v -2.75 l -2.382,-1.332 -0.028,3.345 z m 1.105,0 2.409,-0.737 v -3.345 l -2.409,1.332 z m 5.159,0 v -2.75 l -2.409,-1.332 v 3.345 z m -9.779,-4.309 c 0.765,-0.028 0.17,-0.794 1.048,-1.19 0.936,-0.454 1.134,0.652 1.701,-0.199 0.595,-0.85 -0.51,-0.623 -0.425,-1.644 0.085,-1.02 1.134,-0.652 0.68,-1.559 -0.425,-0.935 -0.793,0.113 -1.644,-0.454 -0.85,-0.595 0.029,-1.303 -1.02,-1.389 -1.021,-0.085 -0.284,0.766 -1.191,1.191 -0.935,0.454 -1.134,-0.652 -1.729,0.198 -0.567,0.851 0.539,0.624 0.454,1.645 -0.085,1.02 -1.134,0.652 -0.709,1.559 0.453,0.935 0.822,-0.114 1.672,0.453 0.851,0.595 -0.028,1.304 0.992,1.389 0.086,0 0.142,0 0.171,0 z m 7.908,-2.891 c 1.247,-0.567 1.332,1.275 2.268,0.283 0.935,-0.992 -0.907,-0.964 -0.397,-2.239 0.482,-1.276 1.843,-0.028 1.786,-1.417 -0.028,-1.361 -1.304,-0.029 -1.871,-1.276 -0.567,-1.247 1.276,-1.332 0.284,-2.268 -0.993,-0.935 -0.964,0.907 -2.24,0.425 -1.275,-0.481 -0.028,-1.842 -1.389,-1.814 -1.389,0.029 -0.057,1.333 -1.304,1.871 -1.247,0.567 -1.332,-1.275 -2.267,-0.283 -0.936,0.992 0.907,0.963 0.425,2.239 -0.482,1.276 -1.843,0.028 -1.814,1.417 0.056,1.361 1.332,0.029 1.899,1.276 0.538,1.247 -1.304,1.332 -0.312,2.268 0.992,0.935 0.964,-0.907 2.268,-0.426 1.275,0.482 0,1.843 1.389,1.815 1.36,-0.029 0.028,-1.333 1.275,-1.871 z m -7.852,1.162 c -0.822,0 -1.502,-0.68 -1.502,-1.502 0,-0.851 0.68,-1.531 1.502,-1.531 0.851,0 1.531,0.68 1.531,1.531 0,0.822 -0.68,1.502 -1.531,1.502 z m 6.435,-1.361 c -1.673,0 -3.005,-1.36 -3.005,-3.033 0,-1.644 1.332,-3.004 3.005,-3.004 1.644,0 3.005,1.36 3.005,3.004 0,1.673 -1.361,3.033 -3.005,3.033 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path1211" /></g><text
|
||||
transform="scale(1,-1)"
|
||||
style="font-variant:normal;font-weight:normal;font-size:14.995px;font-family:sans-serif;writing-mode:lr-tb;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
|
||||
id="text1217"
|
||||
x="213.07295"
|
||||
y="-69.798935"><tspan
|
||||
x="213.07295 223.86935 232.16159 240.64876 249.03096 253.1246"
|
||||
y="-69.798935"
|
||||
id="tspan1215">Node 1</tspan></text><path
|
||||
d="m 342.992,270.708 v -68.06 h -107.32 v -68.003"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1221" /><path
|
||||
d="m 342.992,270.708 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1225" /><path
|
||||
d="m 342.992,261.694 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1229" /><path
|
||||
d="m 342.992,252.68 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1233" /><path
|
||||
d="m 342.992,243.665 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1237" /><path
|
||||
d="m 342.992,234.651 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1241" /><path
|
||||
d="m 342.992,225.637 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1245" /><path
|
||||
d="m 342.992,216.623 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1249" /><path
|
||||
d="m 342.992,207.609 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1253" /><path
|
||||
d="m 347.046,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1257" /><path
|
||||
d="m 356.06,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1261" /><path
|
||||
d="m 365.074,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1265" /><path
|
||||
d="m 374.088,202.648 h 1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1269" /><path
|
||||
d="m 383.102,202.648 h 1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1273" /><path
|
||||
d="m 392.117,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1277" /><path
|
||||
d="m 401.131,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1281" /><path
|
||||
d="m 410.145,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1285" /><path
|
||||
d="m 419.159,202.648 h 1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1289" /><path
|
||||
d="m 428.173,202.648 h 1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1293" /><path
|
||||
d="m 437.187,202.648 h 1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1297" /><path
|
||||
d="m 444.614,201.061 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1301" /><path
|
||||
d="m 444.614,192.047 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1305" /><path
|
||||
d="M 444.614,183.032 V 181.53"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1309" /><path
|
||||
d="m 444.614,174.018 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1313" /><path
|
||||
d="m 444.614,165.004 v -1.502"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1317" /><path
|
||||
d="m 444.614,155.99 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1321" /><g
|
||||
id="g2496"><path
|
||||
d="m 443.225,132.632 -16.838,-8.022 c -0.056,-0.028 -0.141,-0.085 -0.198,-0.113 -0.482,-0.284 -0.907,-0.68 -1.191,-1.162 -0.141,-0.284 -0.255,-0.567 -0.34,-0.879 l -4.138,-18.057 c -0.057,-0.255 -0.085,-0.482 -0.085,-0.708 0,-0.567 0.141,-1.134 0.425,-1.616 0.028,-0.028 0.057,-0.085 0.085,-0.142 0.057,-0.085 0.113,-0.17 0.17,-0.255 l 11.65,-14.457 c 0.255,-0.311 0.567,-0.595 0.907,-0.793 0.511,-0.255 1.049,-0.426 1.616,-0.426 0,0 0,0 0,0 h 18.681 c 0,0 0,0 0,0 0.566,0 1.133,0.171 1.615,0.454 0.34,0.198 0.652,0.454 0.907,0.765 l 11.622,14.485 c 0.114,0.114 0.199,0.256 0.284,0.369 0.283,0.51 0.425,1.049 0.425,1.616 0,0.226 -0.028,0.482 -0.085,0.708 l -4.139,18.057 c -0.085,0.312 -0.198,0.595 -0.368,0.879 -0.284,0.482 -0.68,0.878 -1.162,1.162 -0.085,0.057 -0.142,0.085 -0.227,0.113 l -16.81,8.022 c -0.453,0.227 -0.935,0.341 -1.417,0.341 -0.057,0 -0.113,0 -0.142,0 -0.453,-0.029 -0.85,-0.142 -1.247,-0.341 z"
|
||||
style="fill:#326ce5;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path1189" /><path
|
||||
d="m 443.14,133.936 -17.745,-8.475 c -0.085,-0.057 -0.17,-0.085 -0.226,-0.114 -0.511,-0.311 -0.964,-0.737 -1.248,-1.247 -0.17,-0.283 -0.312,-0.595 -0.368,-0.935 l -4.394,-19.049 c -0.057,-0.255 -0.085,-0.51 -0.085,-0.766 0,-0.595 0.17,-1.162 0.454,-1.672 0.028,-0.057 0.056,-0.113 0.085,-0.142 0.085,-0.113 0.141,-0.198 0.198,-0.283 l 12.302,-15.279 c 0.256,-0.34 0.596,-0.595 0.964,-0.822 0.51,-0.283 1.106,-0.453 1.701,-0.453 0,0 0,0 0,0 h 19.701 c 0,0 0,0 0,0 0.595,0 1.19,0.17 1.701,0.453 0.368,0.227 0.708,0.482 0.963,0.822 l 12.303,15.279 c 0.085,0.142 0.198,0.283 0.283,0.425 0.284,0.51 0.454,1.077 0.454,1.672 0,0.256 -0.029,0.511 -0.085,0.766 l -4.394,19.049 c -0.085,0.34 -0.198,0.652 -0.369,0.935 -0.311,0.51 -0.737,0.936 -1.247,1.247 -0.085,0.029 -0.142,0.057 -0.227,0.114 l -17.773,8.475 c -0.453,0.227 -0.964,0.34 -1.474,0.34 -0.057,0 -0.113,0 -0.17,0 -0.453,-0.028 -0.879,-0.141 -1.304,-0.34 z m 1.332,-0.992 c 0.029,0.029 0.085,0.029 0.142,0.029 0.482,0 0.964,-0.114 1.417,-0.341 l 16.81,-8.022 c 0.085,-0.028 0.142,-0.056 0.227,-0.113 0.482,-0.284 0.878,-0.68 1.162,-1.162 0.17,-0.284 0.283,-0.567 0.368,-0.879 l 4.139,-18.057 c 0.057,-0.226 0.085,-0.482 0.085,-0.708 0,-0.567 -0.142,-1.106 -0.425,-1.616 -0.085,-0.113 -0.17,-0.255 -0.284,-0.369 L 456.491,87.221 c -0.255,-0.311 -0.567,-0.567 -0.907,-0.765 -0.482,-0.283 -1.049,-0.454 -1.615,-0.454 0,0 0,0 0,0 h -18.681 c 0,0 0,0 0,0 -0.567,0 -1.105,0.142 -1.616,0.426 -0.34,0.198 -0.652,0.482 -0.907,0.793 l -11.65,14.457 c -0.057,0.085 -0.113,0.17 -0.17,0.255 -0.028,0.057 -0.057,0.114 -0.085,0.142 -0.284,0.482 -0.425,1.049 -0.425,1.616 0,0.226 0.028,0.453 0.085,0.708 l 4.138,18.057 c 0.085,0.312 0.199,0.595 0.34,0.879 0.284,0.482 0.709,0.878 1.191,1.162 0.057,0.028 0.142,0.085 0.198,0.113 l 16.838,8.022 c 0.397,0.199 0.794,0.312 1.247,0.312 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path1191" /><path
|
||||
d="m 444.586,123.42 c -0.397,0 -11.339,-5.386 -11.48,-5.641 -0.341,-0.68 -2.807,-12.132 -2.665,-12.388 0.085,-0.141 1.871,-2.409 3.968,-5.045 l 3.827,-4.762 h 6.293 l 6.293,-0.029 3.997,5.018 4.025,4.989 -1.417,6.151 c -0.766,3.373 -1.446,6.208 -1.531,6.264 -0.198,0.199 -11.112,5.414 -11.31,5.443 z m 0.538,-2.693 2.58,-0.765 -2.58,-0.737 -2.579,0.737 z m -2.579,-1.049 2.438,-0.709 -0.029,-3.344 -2.409,1.332 z m 5.159,0 v -2.721 l -2.381,-1.332 -0.029,3.344 z m -6.208,-4.564 2.551,-0.737 -2.551,-0.765 -2.579,0.765 z m 6.236,0 2.58,-0.737 -2.58,-0.765 -2.579,0.765 z m -8.815,-1.02 2.409,-0.737 -0.028,-3.345 -2.381,1.332 z m 5.13,0 v -2.75 l -2.381,-1.332 -0.028,3.345 z m 1.106,0 2.409,-0.737 v -3.345 l -2.409,1.332 z m 5.159,0 v -2.75 l -2.41,-1.332 v 3.345 z m -9.78,-4.309 c 0.766,-0.028 0.17,-0.794 1.049,-1.19 0.936,-0.454 1.134,0.652 1.701,-0.199 0.595,-0.85 -0.51,-0.623 -0.425,-1.644 0.085,-1.02 1.134,-0.652 0.68,-1.559 -0.425,-0.935 -0.794,0.113 -1.644,-0.454 -0.85,-0.595 0.028,-1.303 -1.021,-1.389 -1.02,-0.085 -0.283,0.766 -1.19,1.191 -0.936,0.454 -1.134,-0.652 -1.729,0.198 -0.567,0.851 0.538,0.624 0.453,1.645 -0.085,1.02 -1.134,0.652 -0.708,1.559 0.453,0.935 0.822,-0.114 1.672,0.453 0.85,0.595 -0.028,1.304 0.992,1.389 0.085,0 0.142,0 0.17,0 z m 7.909,-2.891 c 1.247,-0.567 1.332,1.275 2.268,0.283 0.935,-0.992 -0.907,-0.964 -0.397,-2.239 0.482,-1.276 1.842,-0.028 1.786,-1.417 -0.029,-1.361 -1.304,-0.029 -1.871,-1.276 -0.567,-1.247 1.275,-1.332 0.283,-2.268 -0.992,-0.935 -0.964,0.907 -2.239,0.425 -1.276,-0.481 -0.028,-1.842 -1.389,-1.814 -1.389,0.029 -0.057,1.333 -1.304,1.871 -1.247,0.567 -1.332,-1.275 -2.268,-0.283 -0.935,0.992 0.907,0.963 0.425,2.239 -0.481,1.276 -1.842,0.028 -1.814,1.417 0.057,1.361 1.333,0.029 1.899,1.276 0.539,1.247 -1.303,1.332 -0.311,2.268 0.992,0.935 0.963,-0.907 2.267,-0.426 1.276,0.482 0,1.843 1.389,1.815 1.361,-0.029 0.029,-1.333 1.276,-1.871 z m -7.852,1.162 c -0.822,0 -1.502,-0.68 -1.502,-1.502 0,-0.851 0.68,-1.531 1.502,-1.531 0.85,0 1.531,0.68 1.531,1.531 0,0.822 -0.681,1.502 -1.531,1.502 z m 6.435,-1.361 c -1.673,0 -3.005,-1.36 -3.005,-3.033 0,-1.644 1.332,-3.004 3.005,-3.004 1.644,0 3.004,1.36 3.004,3.004 0,1.673 -1.36,3.033 -3.004,3.033 z"
|
||||
style="fill:#ffffff;fill-opacity:1;fill-rule:evenodd;stroke:none"
|
||||
id="path1193" /></g><path
|
||||
d="m 444.614,146.976 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1325" /><path
|
||||
d="m 444.614,137.962 v -1.503"
|
||||
style="fill:none;stroke:#780373;stroke-width:1.50233;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
|
||||
id="path1329" /><text
|
||||
transform="scale(1,-1)"
|
||||
style="font-variant:normal;font-weight:bold;font-size:18px;font-family:sans-serif;-inkscape-font-specification:LiberationSans-Bold;writing-mode:lr-tb;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none"
|
||||
id="text1395"
|
||||
x="158.797"
|
||||
y="-90.849998"><tspan
|
||||
y="-90.849998"
|
||||
x="158.797"
|
||||
id="tspan1393" /></text><text
|
||||
xml:space="preserve"
|
||||
style="font-size:18px;fill:#000000;stroke-width:0.75"
|
||||
x="298.89899"
|
||||
y="-282.81897"
|
||||
id="text695"
|
||||
transform="scale(1,-1)"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan693"
|
||||
style="font-size:18px;fill:#ffffff;stroke-width:0.75"
|
||||
x="298.89899"
|
||||
y="-282.81897">负载均衡器</tspan></text><text
|
||||
xml:space="preserve"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="157.30026"
|
||||
y="-183.20107"
|
||||
id="text695-8"
|
||||
transform="scale(1,-1)"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan693-6"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="157.30026"
|
||||
y="-183.20107">Node 1</tspan><tspan
|
||||
sodipodi:role="line"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="157.30026"
|
||||
y="-163.20107"
|
||||
id="tspan878">健康检查</tspan><tspan
|
||||
sodipodi:role="line"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="157.30026"
|
||||
y="-143.02231"
|
||||
id="tspan876">返回 200</tspan></text><text
|
||||
xml:space="preserve"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="457.12411"
|
||||
y="-183.75008"
|
||||
id="text695-8-2"
|
||||
transform="scale(1,-1)"><tspan
|
||||
sodipodi:role="line"
|
||||
id="tspan693-6-6"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="457.12411"
|
||||
y="-183.75008">Node 2</tspan><tspan
|
||||
sodipodi:role="line"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="457.12411"
|
||||
y="-163.75008"
|
||||
id="tspan878-6">健康检查</tspan><tspan
|
||||
sodipodi:role="line"
|
||||
style="font-size:16px;fill:#000000;stroke-width:0.75"
|
||||
x="457.12411"
|
||||
y="-143.57132"
|
||||
id="tspan876-7">返回 <tspan
|
||||
style="font-weight:bold"
|
||||
id="tspan917">500</tspan></tspan></text></g></svg>
|
After Width: | Height: | Size: 42 KiB |
|
@ -17,84 +17,98 @@ weight: 80
|
|||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together.
|
||||
Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted
|
||||
to users through the use of policies which combine attributes together.
|
||||
-->
|
||||
基于属性的访问控制(Attribute-based access control - ABAC)定义了访问控制范例,
|
||||
其中通过使用将属性组合在一起的策略来向用户授予访问权限。
|
||||
|
||||
|
||||
基于属性的访问控制(Attribute-based access control,ABAC)定义了访问控制范例,
|
||||
ABAC 通过使用将属性组合在一起的策略来向用户授予访问权限。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
## Policy File Format
|
||||
|
||||
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC` on startup.
|
||||
To enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC`
|
||||
on startup.
|
||||
|
||||
The file format is [one JSON object per line](https://jsonlines.org/). There
|
||||
The file format is [one JSON object per line](https://jsonlines.org/). There
|
||||
should be no enclosing list or map, only one map per line.
|
||||
|
||||
Each line is a "policy object", where each such object is a map with the following
|
||||
properties:
|
||||
|
||||
- Versioning properties:
|
||||
- `apiVersion`, type string; valid values are "abac.authorization.kubernetes.io/v1beta1". Allows versioning and conversion of the policy format.
|
||||
- `kind`, type string: valid values are "Policy". Allows versioning and conversion of the policy format.
|
||||
- `spec` property set to a map with the following properties:
|
||||
- Subject-matching properties:
|
||||
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the username of the authenticated user.
|
||||
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user. `system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all unauthenticated requests.
|
||||
- Resource-matching properties:
|
||||
- `apiGroup`, type string; an API group.
|
||||
- Ex: `apps`, `networking.k8s.io`
|
||||
- Wildcard: `*` matches all API groups.
|
||||
- `namespace`, type string; a namespace.
|
||||
- Ex: `kube-system`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- `resource`, type string; a resource type
|
||||
- Ex: `pods`, `deployments`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- Non-resource-matching properties:
|
||||
- `nonResourcePath`, type string; non-resource request paths.
|
||||
- Ex: `/version` or `/apis`
|
||||
- Wildcard:
|
||||
- `*` matches all non-resource requests.
|
||||
- `/foo/*` matches all subpaths of `/foo/`.
|
||||
- `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list, and watch operations, Non-resource-matching policy only applies to get operation.
|
||||
-->
|
||||
## 策略文件格式 {#policy-file-format}
|
||||
|
||||
基于 `ABAC` 模式,可以这样指定策略文件 `--authorization-policy-file=SOME_FILENAME`。
|
||||
要启用 `ABAC` 模式,可以在启动时指定 `--authorization-policy-file=SOME_FILENAME` 和 `--authorization-mode=ABAC`。
|
||||
|
||||
此文件格式是 [JSON Lines](https://jsonlines.org/),不应存在外层的列表或映射,每行应只有一个映射。
|
||||
此文件格式是[每行一个 JSON 对象](https://jsonlines.org/),不应存在外层的列表或映射,每行应只有一个映射。
|
||||
|
||||
每一行都是一个策略对象,策略对象是具有以下属性的映射:
|
||||
每一行都是一个“策略对象”,策略对象是具有以下属性的映射:
|
||||
|
||||
- 版本控制属性:
|
||||
- `apiVersion`,字符串类型:有效值为 `abac.authorization.kubernetes.io/v1beta1`,允许对策略格式进行版本控制和转换。
|
||||
- `kind`,字符串类型:有效值为 `Policy`,允许对策略格式进行版本控制和转换。
|
||||
- `spec` 配置为具有以下映射的属性:
|
||||
- 主体匹配属性:
|
||||
- `user`,字符串类型;来自 `--token-auth-file` 的用户字符串,如果你指定 `user`,它必须与验证用户的用户名匹配。
|
||||
- `group`,字符串类型;如果指定 `group`,它必须与经过身份验证的用户的一个组匹配,`system:authenticated` 匹配所有经过身份验证的请求。
|
||||
`system:unauthenticated` 匹配所有未经过身份验证的请求。
|
||||
<!--
|
||||
- Versioning properties:
|
||||
- `apiVersion`, type string; valid values are "abac.authorization.kubernetes.io/v1beta1". Allows versioning
|
||||
and conversion of the policy format.
|
||||
- `kind`, type string: valid values are "Policy". Allows versioning and conversion of the policy format.
|
||||
-->
|
||||
- 版本控制属性:
|
||||
- `apiVersion`,字符串类型:有效值为 `abac.authorization.kubernetes.io/v1beta1`,允许对策略格式进行版本控制和转换。
|
||||
- `kind`,字符串类型:有效值为 `Policy`,允许对策略格式进行版本控制和转换。
|
||||
<!--
|
||||
- `spec` property set to a map with the following properties:
|
||||
- Subject-matching properties:
|
||||
- `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the
|
||||
username of the authenticated user.
|
||||
- `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user.
|
||||
`system:authenticated` matches all authenticated requests. `system:unauthenticated` matches all
|
||||
unauthenticated requests.
|
||||
-->
|
||||
- `spec` 配置为具有以下映射的属性:
|
||||
- 主体匹配属性:
|
||||
- `user`,字符串类型;来自 `--token-auth-file` 的用户字符串,如果你指定 `user`,它必须与验证用户的用户名匹配。
|
||||
- `group`,字符串类型;如果指定 `group`,它必须与经过身份验证的用户的一个组匹配,
|
||||
`system:authenticated` 匹配所有经过身份验证的请求。
|
||||
`system:unauthenticated` 匹配所有未经过身份验证的请求。
|
||||
<!--
|
||||
- Resource-matching properties:
|
||||
- `apiGroup`, type string; an API group.
|
||||
- Ex: `apps`, `networking.k8s.io`
|
||||
- Wildcard: `*` matches all API groups.
|
||||
- `namespace`, type string; a namespace.
|
||||
- Ex: `kube-system`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
- `resource`, type string; a resource type
|
||||
- Ex: `pods`, `deployments`
|
||||
- Wildcard: `*` matches all resource requests.
|
||||
-->
|
||||
- 资源匹配属性:
|
||||
- `apiGroup`,字符串类型;一个 API 组。
|
||||
- 例如:`apps`,`networking.k8s.io`
|
||||
- 例如:`apps`、`networking.k8s.io`
|
||||
- 通配符:`*`匹配所有 API 组。
|
||||
- `namespace`,字符串类型;一个命名空间。
|
||||
- 例如:`kube-system`
|
||||
- 通配符:`*`匹配所有资源请求。
|
||||
- `resource`,字符串类型;资源类型。
|
||||
- 例如:`pods`,`deployments`
|
||||
- 例如:`pods`、`deployments`
|
||||
- 通配符:`*`匹配所有资源请求。
|
||||
<!--
|
||||
- Non-resource-matching properties:
|
||||
- `nonResourcePath`, type string; non-resource request paths.
|
||||
- Ex: `/version` or `/apis`
|
||||
- Wildcard:
|
||||
- `*` matches all non-resource requests.
|
||||
- `/foo/*` matches all subpaths of `/foo/`.
|
||||
- `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list,
|
||||
and watch operations, Non-resource-matching policy only applies to get operation.
|
||||
-->
|
||||
- 非资源匹配属性:
|
||||
- `nonResourcePath`,字符串类型;非资源请求路径。
|
||||
- 例如:`/version` 或 `/apis`
|
||||
- 通配符:
|
||||
- `*` 匹配所有非资源请求。
|
||||
- `/foo/*` 匹配 `/foo/` 的所有子路径。
|
||||
- `readonly`,键入布尔值,如果为 true,则表示该策略仅适用于 get、list 和 watch 操作。
|
||||
- `readonly`,布尔值类型。如果为 true,则表示该策略仅适用于 get、list 和 watch 操作。
|
||||
对于非资源匹配的策略,则仅适用于 get 操作。
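A minimal sketch that ties several of the properties above together (the exact combination below is illustrative, not taken from this page): it lets every authenticated user get, list, and watch Pods in the `kube-system` namespace, and nothing else.

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "namespace": "kube-system", "resource": "pods", "readonly": true}}
```

Because `readonly` is true and no `nonResourcePath` is set, the line grants only the read verbs on that single resource type.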
|
||||
|
||||
{{< note >}}
|
||||
|
||||
|
@ -106,10 +120,10 @@ readability.
|
|||
In the future, policies may be expressed in a JSON format, and managed via a
|
||||
REST interface.
|
||||
-->
|
||||
属性未设置等效于属性被设置为对应类型的零值( 例如空字符串、0、false),然而,出于可读性考虑,应尽量选择不设置这类属性。
|
||||
属性未设置等效于属性被设置为对应类型的零值(例如空字符串、0、false)。
|
||||
然而,出于可读性考虑,应尽量选择不设置这类属性。
|
||||
|
||||
在将来,策略可能以 JSON 格式表示,并通过 REST 界面进行管理。
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -117,11 +131,20 @@ REST interface.
|
|||
|
||||
A request has attributes which correspond to the properties of a policy object.
|
||||
|
||||
When a request is received, the attributes are determined. Unknown attributes
|
||||
When a request is received, the attributes are determined. Unknown attributes
|
||||
are set to the zero value of its type (e.g. empty string, 0, false).
|
||||
|
||||
A property set to `"*"` will match any value of the corresponding attribute.
|
||||
-->
|
||||
## 鉴权算法 {#authorization-algorithm}
|
||||
|
||||
请求具有与策略对象的属性对应的属性。
|
||||
|
||||
当接收到请求时,属性是确定的。未知属性设置为其类型的零值(例如:空字符串、0、false)。
|
||||
|
||||
设置为 `"*"` 的属性将匹配相应属性的任何值。
|
||||
|
||||
<!--
|
||||
The tuple of attributes is checked for a match against every policy in the
|
||||
policy file. If at least one line matches the request attributes, then the
|
||||
request is authorized (but may fail later validation).
|
||||
|
@ -135,22 +158,15 @@ group property set to `"system:unauthenticated"`.
|
|||
To permit a user to do anything, write a policy with the apiGroup, namespace,
|
||||
resource, and nonResourcePath properties set to `"*"`.
|
||||
-->
|
||||
|
||||
## 鉴权算法 {#authorization-algorithm}
|
||||
|
||||
请求具有与策略对象的属性对应的属性。
|
||||
|
||||
当接收到请求时,确定属性。未知属性设置为其类型的零值(例如:空字符串,0,false)。
|
||||
|
||||
设置为 `"*"` 的属性将匹配相应属性的任何值。
|
||||
|
||||
检查属性的元组,以匹配策略文件中的每个策略。如果至少有一行匹配请求属性,则请求被鉴权(但仍可能无法通过稍后的合法性检查)。
|
||||
检查属性的元组,以匹配策略文件中的每个策略。如果至少有一行匹配请求属性,
|
||||
则请求被鉴权(但仍可能无法通过稍后的合法性检查)。
|
||||
|
||||
要允许任何经过身份验证的用户执行某些操作,请将策略组属性设置为 `"system:authenticated"`。
|
||||
|
||||
要允许任何未经身份验证的用户执行某些操作,请将策略组属性设置为 `"system:unauthenticated"`。
|
||||
|
||||
要允许用户执行任何操作,请使用设置为 `"*"` 的 apiGroup,namespace,resource 和 nonResourcePath 属性编写策略。
|
||||
要允许用户执行任何操作,请使用设置为 `"*"` 的 apiGroup、namespace、resource 和
|
||||
nonResourcePath 属性编写策略。
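As a sketch, such an unrestricted policy line for a hypothetical user named `admin` (the username is an assumption, not something defined on this page) could look like:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "admin", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
```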
|
||||
|
||||
<!--
|
||||
## Kubectl
|
||||
|
@ -161,7 +177,16 @@ operations using schema information located at `/openapi/v2`.
|
|||
|
||||
When using ABAC authorization, those special resources have to be explicitly
|
||||
exposed via the `nonResourcePath` property in a policy (see [examples](#examples) below):
|
||||
-->
|
||||
## kubectl
|
||||
|
||||
kubectl 使用 apiserver 的 `/api` 和 `/apis` 端点来发现服务资源类型,
|
||||
并使用位于 `/openapi/v2` 的模式信息来验证通过创建/更新操作发送到 API 的对象。
|
||||
|
||||
当使用 ABAC 鉴权时,这些特殊资源必须显式地通过策略中的 `nonResourcePath` 属性暴露出来
|
||||
(参见下面的 [示例](#examples)):
|
||||
|
||||
<!--
|
||||
* `/api`, `/api/*`, `/apis`, and `/apis/*` for API version negotiation.
|
||||
* `/version` for retrieving the server version via `kubectl version`.
|
||||
* `/swaggerapi/*` for create/update operations.
|
||||
|
@ -171,100 +196,79 @@ up the verbosity:
|
|||
|
||||
kubectl --v=8 version
|
||||
-->
|
||||
|
||||
## kubectl
|
||||
|
||||
kubectl 使用 apiserver 的 `/api` 和 `/apis` 端点来发现服务资源类型,
|
||||
并使用位于 `/openapi/v2` 的模式信息来验证通过创建/更新操作发送到 API 的对象。
|
||||
|
||||
当使用 ABAC 鉴权时,这些特殊资源必须显式地通过策略中的 `nonResourcePath` 属性暴露出来(参见下面的 [示例](#examples)):
|
||||
|
||||
* `/api`,`/api/*`,`/apis` 和 `/apis/*` 用于 API 版本协商。
|
||||
* `/version` 通过 `kubectl version` 检索服务器版本。
|
||||
* `/swaggerapi/*` 用于创建/更新操作。
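As a sketch of what that exposure can look like (reusing the user `bob` from the examples below; substitute your own users or groups), each discovery path gets its own read-only policy line:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "readonly": true, "nonResourcePath": "/api"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "readonly": true, "nonResourcePath": "/api/*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "readonly": true, "nonResourcePath": "/apis"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "readonly": true, "nonResourcePath": "/apis/*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "readonly": true, "nonResourcePath": "/version"}}
```

`/swaggerapi/*` follows the same pattern; alternatively, a single line with `"nonResourcePath": "*"` (as in example 5 below) covers all of these paths at once.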
|
||||
|
||||
要检查涉及到特定 kubectl 操作的 HTTP 调用,你可以调整详细程度:
|
||||
kubectl --v=8 version
|
||||
|
||||
```shell
|
||||
kubectl --v=8 version
|
||||
```
|
||||
|
||||
<!--
|
||||
## Examples
|
||||
|
||||
1. Alice can do anything to all resources:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
|
||||
```
|
||||
2. The Kubelet can read any pods:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
3. The Kubelet can read and write events:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}
|
||||
```
|
||||
-->
|
||||
1. Alice can do anything to all resources:
|
||||
-->
|
||||
## 例子 {#examples}
|
||||
|
||||
1. Alice 可以对所有资源做任何事情:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
|
||||
```
|
||||
2. kubelet 可以读取任何 pod:
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
|
||||
```
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
<!--
|
||||
2. The Kubelet can read any pods:
|
||||
-->
|
||||
2. kubelet 可以读取所有 Pod:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
|
||||
<!--
|
||||
3. The Kubelet can read and write events:
|
||||
-->
|
||||
3. kubelet 可以读写事件:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}
|
||||
```
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}
|
||||
```
|
||||
|
||||
<!--
|
||||
4. Bob can just read pods in namespace "projectCaribou":
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
5. Anyone can make read-only requests to all non-resource paths:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
```
|
||||
<!--
|
||||
4. Bob can just read pods in namespace "projectCaribou":
|
||||
-->
|
||||
4. Bob 可以在命名空间 `projectCaribou` 中读取 pod:
|
||||
4. Bob 可以在命名空间 `projectCaribou` 中读取 Pod:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
|
||||
```
|
||||
|
||||
<!--
|
||||
5. Anyone can make read-only requests to all non-resource paths:
|
||||
-->
|
||||
5. 任何人都可以对所有非资源路径进行只读请求:
|
||||
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
```
|
||||
```json
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}
|
||||
```
|
||||
|
||||
<!--
|
||||
[Complete file example](https://releases.k8s.io/v{{< skew currentPatchVersion >}}/pkg/auth/authorizer/abac/example_policy_file.jsonl)
|
||||
|
||||
## A quick note on service accounts
|
||||
|
||||
Every service account has a corresponding ABAC username, and that service account's username is generated according to the naming convention:
|
||||
|
||||
```shell
|
||||
system:serviceaccount:<namespace>:<serviceaccountname>
|
||||
```
|
||||
|
||||
Every service account has a corresponding ABAC username, and that service account's username is generated
|
||||
according to the naming convention:
|
||||
-->
|
||||
[完整文件示例](https://releases.k8s.io/v{{< skew currentPatchVersion >}}/pkg/auth/authorizer/abac/example_policy_file.jsonl)
|
||||
|
||||
## 服务帐户的快速说明 {#a-quick-note-on-service-accounts}
|
||||
## 服务账号的快速说明 {#a-quick-note-on-service-accounts}
|
||||
|
||||
服务帐户自动生成用户。用户名是根据命名约定生成的:
|
||||
每个服务账号都有对应的 ABAC 用户名,服务账号的用户名是根据命名约定生成的:
|
||||
|
||||
```shell
|
||||
system:serviceaccount:<namespace>:<serviceaccountname>
|
||||
|
@ -272,31 +276,25 @@ system:serviceaccount:<namespace>:<serviceaccountname>
|
|||
|
||||
<!--
|
||||
Creating a new namespace leads to the creation of a new service account in the following format:
|
||||
-->
|
||||
创建新的命名空间也会导致创建一个新的服务账号:
|
||||
|
||||
```shell
|
||||
system:serviceaccount:<namespace>:default
|
||||
```
|
||||
|
||||
For example, if you wanted to grant the default service account (in the `kube-system` namespace) full
|
||||
<!--
|
||||
For example, if you wanted to grant the default service account (in the `kube-system` namespace) full
|
||||
privilege to the API using ABAC, you would add this line to your policy file:
|
||||
-->
|
||||
例如,如果你要使用 ABAC 将(`kube-system` 命名空间中)的默认服务账号完整权限授予 API,
|
||||
则可以将此行添加到策略文件中:
|
||||
|
||||
```json
|
||||
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}}
|
||||
```
|
||||
|
||||
<!--
|
||||
The apiserver will need to be restarted to pick up the new policy lines.
|
||||
-->
|
||||
|
||||
创建新的命名空间也会导致创建一个新的服务帐户:
|
||||
|
||||
```shell
|
||||
system:serviceaccount:<namespace>:default
|
||||
```
|
||||
|
||||
例如,如果要将 API 的 kube-system 完整权限中的默认服务帐户授予,则可以将此行添加到策略文件中:
|
||||
|
||||
```json
|
||||
{"apiVersion":"abac.authorization.kubernetes.io/v1beta1","kind":"Policy","spec":{"user":"system:serviceaccount:kube-system:default","namespace":"*","resource":"*","apiGroup":"*"}}
|
||||
```
|
||||
|
||||
需要重新启动 apiserver 以获取新的策略行。
|
||||
API 服务器将需要被重新启动以获取新的策略行。
|
||||
|
|
|
@ -4,7 +4,6 @@ linkTitle: 准入控制器
|
|||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- lavalamp
|
||||
|
@ -342,7 +341,8 @@ This admission controller ignores any `PersistentVolumeClaim` updates; it acts o
|
|||
See [persistent volume](/docs/concepts/storage/persistent-volumes/) documentation about persistent volume claims and
|
||||
storage classes and how to mark a storage class as default.
|
||||
-->
|
||||
关于持久卷申领和存储类,以及如何将存储类标记为默认,请参见[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)页面。
|
||||
关于持久卷申领和存储类,以及如何将存储类标记为默认,
|
||||
请参见[持久卷](/zh-cn/docs/concepts/storage/persistent-volumes/)页面。
|
||||
|
||||
### DefaultTolerationSeconds {#defaulttolerationseconds}
|
||||
|
||||
|
@ -505,6 +505,20 @@ This file may be json or yaml and has the following format:
|
|||
ImagePolicyWebhook 使用配置文件来为后端行为设置选项。该文件可以是 JSON 或 YAML,
|
||||
并具有以下格式:
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
imagePolicy:
|
||||
kubeConfigFile: /path/to/kubeconfig/for/backend
|
||||
# time in s to cache approval
|
||||
allowTTL: 50
|
||||
# time in s to cache denial
|
||||
denyTTL: 50
|
||||
# time in ms to wait between retries
|
||||
retryBackoff: 500
|
||||
# determines behavior if the webhook backend fails
|
||||
defaultAllow: true
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
imagePolicy:
|
||||
kubeConfigFile: /path/to/kubeconfig/for/backend
|
||||
|
@ -635,15 +649,14 @@ group (`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`).
|
|||
-->
|
||||
注意,Webhook API 对象与其他 Kubernetes API 对象一样受制于相同的版本控制兼容性规则。
|
||||
实现者应该知道对 alpha 对象兼容性是相对宽松的,并检查请求的 "apiVersion" 字段,
|
||||
以确保正确的反序列化。
|
||||
此外,API 服务器必须启用 `imagepolicy.k8s.io/v1alpha1` API 扩展组
|
||||
以确保正确的反序列化。此外,API 服务器必须启用 `imagepolicy.k8s.io/v1alpha1` API 扩展组
|
||||
(`--runtime-config=imagepolicy.k8s.io/v1alpha1=true`)。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
An example request body:
|
||||
-->
|
||||
请求载荷示例:
|
||||
请求体示例:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -808,7 +821,7 @@ group/version via the `--runtime-config` flag, both are on by default.
|
|||
<!--
|
||||
#### Use caution when authoring and installing mutating webhooks
|
||||
-->
|
||||
#### 谨慎编写和安装变更 webhook {#use-caution-when-authoring-and-installing-mutating-webhooks}
|
||||
#### 谨慎编写和安装变更 Webhook {#use-caution-when-authoring-and-installing-mutating-webhooks}
|
||||
|
||||
<!--
|
||||
* Users may be confused when the objects they try to create are different from
|
||||
|
@ -860,8 +873,7 @@ This admission controller also prevents deletion of three system reserved namesp
|
|||
`kube-system`, `kube-public`.
|
||||
-->
|
||||
该准入控制器禁止在一个正在被终止的 `Namespace` 中创建新对象,并确保针对不存在的
|
||||
`Namespace` 的请求被拒绝。
|
||||
该准入控制器还会禁止删除三个系统保留的名字空间,即 `default`、
|
||||
`Namespace` 的请求被拒绝。该准入控制器还会禁止删除三个系统保留的名字空间,即 `default`、
|
||||
`kube-system` 和 `kube-public`。
|
||||
|
||||
<!--
|
||||
|
@ -1006,8 +1018,7 @@ This admission controller is disabled by default.
|
|||
此准入控制器会自动将由云提供商(如 Azure 或 GCP)定义的区(region)或区域(zone)
|
||||
标签附加到 PersistentVolume 上。这有助于确保 Pod 和 PersistentVolume 位于相同的区或区域。
|
||||
如果准入控制器不支持为 PersistentVolumes 自动添加标签,那你可能需要手动添加标签,
|
||||
以防止 Pod 挂载其他区域的卷。
|
||||
PersistentVolumeLabel **已被弃用**,
|
||||
以防止 Pod 挂载其他区域的卷。PersistentVolumeLabel **已被弃用**,
|
||||
为持久卷添加标签的操作已由{{< glossary_tooltip text="云管理控制器" term_id="cloud-controller-manager" >}}接管。
|
||||
|
||||
此准入控制器默认被禁用。
|
||||
|
@ -1129,7 +1140,7 @@ admitted, determines if it should be admitted based on the requested security co
|
|||
for the namespace that the Pod would be in.
|
||||
-->
|
||||
PodSecurity 准入控制器在新 Pod 被准入之前对其进行检查,
|
||||
根据请求的安全上下文和 Pod 所在命名空间允许的
|
||||
根据请求的安全上下文和 Pod 所在名字空间允许的
|
||||
[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards/)的限制来确定新 Pod
|
||||
是否应该被准入。
|
||||
|
||||
|
@ -1253,21 +1264,34 @@ for more information.
|
|||
|
||||
### SecurityContextDeny {#securitycontextdeny}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.0" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="deprecated" >}}
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
This admission controller plugin is **outdated** and **incomplete**, it may be
|
||||
unusable or not do what you would expect. It was originally designed to prevent
|
||||
the use of some, but not all, security-sensitive fields. Indeed, fields like
|
||||
`privileged`, were not filtered at creation and the plugin was not updated with
|
||||
the most recent fields, and new APIs like the `ephemeralContainers` field for a
|
||||
Pod.
|
||||
The Kubernetes project recommends that you **do not use** the
|
||||
`SecurityContextDeny` admission controller.
|
||||
|
||||
The `SecurityContextDeny` admission controller plugin is deprecated and disabled
|
||||
by default. It will be removed in a future version. If you choose to enable the
|
||||
`SecurityContextDeny` admission controller plugin, you must enable the
|
||||
`SecurityContextDeny` feature gate as well.
|
||||
-->
|
||||
这个准入控制器插件是**过时的**且**不完整的**,它可能无法使用或无法达到你的预期。
|
||||
它最初旨在防止使用某些(但不是全部)安全敏感字段。
|
||||
事实上,像 `privileged` 这样的字段在创建时并没有被过滤,
|
||||
而且该插件没有根据最新的字段和新的 API(例如 Pod 的 `ephemeralContainers` 字段)来更新。
|
||||
Kubernetes 项目建议你**不要使用** `SecurityContextDeny` 准入控制器。
|
||||
|
||||
`SecurityContextDeny` 准入控制器插件已被弃用,并且默认处于禁用状态。
|
||||
此插件将在后续的版本中被移除。如果你选择启用 `SecurityContextDeny` 准入控制器插件,
|
||||
也必须同时启用 `SecurityContextDeny` 特性门控。
|
||||
|
||||
<!--
|
||||
The `SecurityContextDeny` admission plugin is deprecated because it is outdated
|
||||
and incomplete; it may be unusable or not do what you would expect. As
|
||||
implemented, this plugin is unable to restrict all security-sensitive attributes
|
||||
of the Pod API. For example, the `privileged` and `ephemeralContainers` fields
|
||||
were never restricted by this plugin.
|
||||
-->
|
||||
`SecurityContextDeny` 准入插件已被弃用,因为它已经过时且不完整;
|
||||
它可能无法使用或无法达到你的预期。该插件实现之时,就无法限制 Pod API 的所有与安全相关的属性。
|
||||
例如,`privileged` 和 `ephemeralContainers` 字段就从未受过此插件的限制。
|
||||
|
||||
<!--
|
||||
The [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
|
||||
|
@ -1333,8 +1357,8 @@ Refer to the
|
|||
for more detailed information.
|
||||
-->
|
||||
`StorageObjectInUseProtection` 插件将 `kubernetes.io/pvc-protection` 或
|
||||
`kubernetes.io/pv-protection` finalizers 添加到新创建的持久卷申领(PVC)
|
||||
或持久卷(PV)中。如果用户尝试删除 PVC/PV,除非 PVC/PV 的保护控制器移除终结器(finalizers),
|
||||
`kubernetes.io/pv-protection` 终结器(finalizers)添加到新创建的持久卷申领(PVC)
|
||||
或持久卷(PV)中。如果用户尝试删除 PVC/PV,除非 PVC/PV 的保护控制器移除终结器,
|
||||
否则 PVC/PV 不会被删除。有关更多详细信息,
|
||||
请参考[保护使用中的存储对象](/zh-cn/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection)。
|
||||
|
||||
|
@ -1406,4 +1430,3 @@ You can enable additional admission controllers beyond the default set using the
|
|||
(请查看[这里](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/#options))。
|
||||
因此,你无需显式指定它们。
|
||||
你可以使用 `--enable-admission-plugins` 标志( **顺序不重要** )来启用默认设置以外的其他准入控制器。
|
||||
|
||||
|
|
|
@ -360,7 +360,7 @@ Kubernetes 提供了内置的签名者,每个签名者都有一个众所周知
|
|||
1. Permitted subjects - organizations are exactly `["system:nodes"]`, common name starts with "`system:node:`".
|
||||
1. Permitted x509 extensions - honors key usage and DNSName/IPAddress subjectAltName extensions, forbids EmailAddress and
|
||||
URI subjectAltName extensions, drops other extensions. At least one DNS or IP subjectAltName must be present.
|
||||
1. Permitted key usages - `["key encipherment", "digital signature", "client auth"]` or `["digital signature", "client auth"]`.
|
||||
1. Permitted key usages - `["key encipherment", "digital signature", "server auth"]` or `["digital signature", "server auth"]`.
|
||||
1. Expiration/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum
|
||||
of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.
|
||||
1. CA bit allowed/disallowed - not allowed.
|
||||
|
@ -372,8 +372,8 @@ Kubernetes 提供了内置的签名者,每个签名者都有一个众所周知
|
|||
1. 许可的 x509 扩展:允许 key usage、DNSName/IPAddress subjectAltName 等扩展,
|
||||
禁止 EmailAddress、URI subjectAltName 等扩展,并丢弃其他扩展。
|
||||
至少有一个 DNS 或 IP 的 SubjectAltName 存在。
|
||||
1. 许可的密钥用途:`["key encipherment", "digital signature", "client auth"]`
|
||||
或 `["digital signature", "client auth"]`。
|
||||
1. 许可的密钥用途:`["key encipherment", "digital signature", "server auth"]`
|
||||
或 `["digital signature", "server auth"]`。
|
||||
1. 过期时间/证书有效期:对于 kube-controller-manager 实现的签名者,
|
||||
设置为 `--cluster-signing-duration` 选项和 CSR 对象的 `spec.expirationSeconds` 字段(如有设置该字段)中的最小值。
|
||||
1. 允许/不允许 CA 位:不允许。
|
||||
|
|
|
@ -409,6 +409,7 @@ That manifest snippet defines a projected volume that combines information from
|
|||
1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
|
||||
The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
|
||||
either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
|
||||
The kubelet also refreshes that token before the token expires.
|
||||
The token is bound to the specific Pod and has the kube-apiserver as its audience.
|
||||
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
|
||||
certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox
|
||||
|
@ -420,8 +421,8 @@ That manifest snippet defines a projected volume that combines information from
|
|||
|
||||
1. `serviceAccountToken` 数据源,包含 kubelet 从 kube-apiserver 获取的令牌。
|
||||
kubelet 使用 TokenRequest API 获取有时间限制的令牌。为 TokenRequest 服务的这个令牌会在
|
||||
Pod 被删除或定义的生命周期(默认为 1 小时)结束之后过期。该令牌绑定到特定的 Pod,
|
||||
并将其 audience(受众)设置为与 `kube-apiserver` 的 audience 相匹配。
|
||||
Pod 被删除或定义的生命周期(默认为 1 小时)结束之后过期。在令牌过期之前,kubelet 还会刷新该令牌。
|
||||
该令牌绑定到特定的 Pod,并将其 audience(受众)设置为与 `kube-apiserver` 的 audience 相匹配。
|
||||
1. `configMap` 数据源。ConfigMap 包含一组证书颁发机构数据。
|
||||
Pod 可以使用这些证书来确保自己连接到集群的 kube-apiserver(而不是连接到中间件或意外配置错误的对等点上)。
|
||||
1. `downwardAPI` 数据源。这个 `downwardAPI` 卷获得包含 Pod 的名字空间的名称,
|
||||
|
|
|
@ -1269,12 +1269,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
ClusterIP range is subdivided. Dynamic allocated ClusterIP addresses will be allocated preferently
|
||||
from the upper range allowing users to assign static ClusterIPs from the lower range with a low
|
||||
risk of collision. See
|
||||
[Avoiding collisions](/docs/concepts/services-networking/service/#avoiding-collisions)
|
||||
[Avoiding collisions](/docs/reference/networking/virtual-ips/#avoiding-collisions)
|
||||
for more details.
|
||||
-->
|
||||
- `ServiceIPStaticSubrange`:启用服务 ClusterIP 分配策略,从而细分 ClusterIP 范围。
|
||||
动态分配的 ClusterIP 地址将优先从较高范围分配,以低冲突风险允许用户从较低范围分配静态 ClusterIP。
|
||||
更多详细信息请参阅[避免冲突](/zh-cn/docs/concepts/services-networking/service/#avoiding-collisions)
|
||||
更多详细信息请参阅[避免冲突](/zh-cn/docs/reference/networking/virtual-ips/#avoiding-collisions)
|
||||
<!--
|
||||
- `SizeMemoryBackedVolumes`: Enable kubelets to determine the size limit for
|
||||
memory-backed volumes (mainly `emptyDir` volumes).
|
||||
|
|
|
@ -228,7 +228,7 @@ Alternatively, you can use the `skipPhases` field under `InitConfiguration`.
|
|||
<!--
|
||||
The config file is still considered beta and may change in future versions.
|
||||
-->
|
||||
配置文件的功能仍然处于 alpha 状态并且在将来的版本中可能会改变。
|
||||
配置文件的功能仍然处于 beta 状态并且在将来的版本中可能会改变。
|
||||
{{< /caution >}}
|
||||
|
||||
<!--
|
||||
|
|
|
@ -1766,15 +1766,15 @@ Continue Token, Exact
|
|||
{{< note >}}
|
||||
<!--
|
||||
When you **list** resources and receive a collection response, the response includes the
|
||||
[metadata](/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta) of the collection as
|
||||
well as [object metadata](/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)
|
||||
[list metadata](/docs/reference/generated/kubernetes-api/v{{<skew currentVersion >}}/#listmeta-v1-meta) of the collection as
|
||||
well as [object metadata](/docs/reference/generated/kubernetes-api/v{{<skew currentVersion >}}/#objectmeta-v1-meta)
|
||||
for each item in that collection. For individual objects found within a collection response,
|
||||
`.metadata.resourceVersion` tracks when that object was last updated, and not how up-to-date
|
||||
the object is when served.
|
||||
-->
|
||||
当你 **list** 资源并收到集合响应时,
|
||||
响应包括集合的[元数据](/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)
|
||||
以及该集合中每个项目的[对象元数据](/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)。
|
||||
响应包括集合的[列表元数据](/zh-cn/docs/reference/generated/kubernetes-api/v{{<skew currentVersion >}}/#listmeta-v1-meta),
|
||||
以及该集合中每个项目的[对象元数据](/zh-cn/docs/reference/generated/kubernetes-api/v{{<skew currentVersion >}}/#objectmeta-v1-meta)。
|
||||
对于在集合响应中找到的单个对象,`.metadata.resourceVersion` 跟踪该对象的最后更新时间,
|
||||
而不是对象在服务时的最新程度。
|
||||
{{< /note >}}
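A trimmed sketch of such a collection response (the names and resource versions below are made up for illustration) shows where the two kinds of `resourceVersion` appear:

```json
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "10245"
  },
  "items": [
    {
      "metadata": {
        "name": "example-pod",
        "namespace": "default",
        "resourceVersion": "9876"
      }
    }
  ]
}
```

The list-level `metadata.resourceVersion` reflects how up to date the collection is, while the per-item `.metadata.resourceVersion` only records when that Pod was last modified.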
|
||||
|
|
|
@ -3,7 +3,6 @@ title: 已弃用 API 的迁移指南
|
|||
weight: 45
|
||||
content_type: reference
|
||||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- liggitt
|
||||
|
@ -62,7 +61,7 @@ The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and Prior
|
|||
* 所有的已保存的对象都可以通过新的 API 来访问;
|
||||
* **flowcontrol.apiserver.k8s.io/v1beta3** 中需要额外注意的变更:
|
||||
* PriorityLevelConfiguration 的 `spec.limited.assuredConcurrencyShares`
|
||||
字段已被更名为 `spec.limited.nominalConcurrencyShares`
|
||||
字段已被更名为 `spec.limited.nominalConcurrencyShares`
|
||||
|
||||
### v1.27
|
||||
|
||||
|
@ -158,9 +157,9 @@ The **discovery.k8s.io/v1beta1** API version of EndpointSlice is no longer serve
|
|||
* Migrate manifests and API clients to use the **discovery.k8s.io/v1** API version, available since v1.21.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes in **discovery.k8s.io/v1**:
|
||||
* use per Endpoint `nodeName` field instead of deprecated `topology["kubernetes.io/hostname"]` field
|
||||
* use per Endpoint `zone` field instead of deprecated `topology["topology.kubernetes.io/zone"]` field
|
||||
* `topology` is replaced with the `deprecatedTopology` field which is not writable in v1
|
||||
* use per Endpoint `nodeName` field instead of deprecated `topology["kubernetes.io/hostname"]` field
|
||||
* use per Endpoint `zone` field instead of deprecated `topology["topology.kubernetes.io/zone"]` field
|
||||
* `topology` is replaced with the `deprecatedTopology` field which is not writable in v1
|
||||
-->
|
||||
从 v1.25 版本开始不再提供 **discovery.k8s.io/v1beta1** API 版本的 EndpointSlice。
|
||||
|
||||
|
@ -188,14 +187,20 @@ The **events.k8s.io/v1beta1** API version of Event is no longer served as of v1.
|
|||
|
||||
<!--
|
||||
* Notable changes in **events.k8s.io/v1**:
|
||||
* `type` is limited to `Normal` and `Warning`
|
||||
* `involvedObject` is renamed to `regarding`
|
||||
* `action`, `reason`, `reportingController`, and `reportingInstance` are required when creating new **events.k8s.io/v1** Events
|
||||
* use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.count` instead of the deprecated `count` field (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingController` instead of the deprecated `source.component` field (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingInstance` instead of the deprecated `source.host` field (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* `type` is limited to `Normal` and `Warning`
|
||||
* `involvedObject` is renamed to `regarding`
|
||||
* `action`, `reason`, `reportingController`, and `reportingInstance` are required
|
||||
when creating new **events.k8s.io/v1** Events
|
||||
* use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed
|
||||
to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field
|
||||
(which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `series.count` instead of the deprecated `count` field
|
||||
(which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingController` instead of the deprecated `source.component` field
|
||||
(which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io/v1** Events)
|
||||
* use `reportingInstance` instead of the deprecated `source.host` field
|
||||
(which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io/v1** Events)
|
||||
-->
|
||||
* **events.k8s.io/v1** 中值得注意的变更有:
|
||||
* `type` 字段只能设置为 `Normal` 和 `Warning` 之一;
|
||||
|
@ -210,7 +215,7 @@ The **events.k8s.io/v1beta1** API version of Event is no longer served as of v1.
|
|||
(该字段已被更名为 `deprecatedCount`,且不允许出现在新的 **events.k8s.io/v1** Event 对象中);
|
||||
* 使用 `reportingController` 而不是已被弃用的 `source.component` 字段
|
||||
(该字段已被更名为 `deprecatedSource.component`,且不允许出现在新的 **events.k8s.io/v1** Event 对象中);
|
||||
* 使用 `reportingInstance` 而不是已被弃用的 `source.host` 字段
|
||||
* 使用 `reportingInstance` 而不是已被弃用的 `source.host` 字段
|
||||
(该字段已被更名为 `deprecatedSource.host`,且不允许出现在新的 **events.k8s.io/v1** Event 对象中)。
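Pulling those renames together, a minimal sketch of an Event written against **events.k8s.io/v1** (every value below is illustrative only) might look like:

```json
{
  "apiVersion": "events.k8s.io/v1",
  "kind": "Event",
  "metadata": {"name": "example-pod.17a8c3b1", "namespace": "default"},
  "eventTime": "2023-01-01T00:00:00.000000Z",
  "type": "Normal",
  "reason": "Started",
  "action": "Starting",
  "regarding": {"kind": "Pod", "name": "example-pod", "namespace": "default"},
  "reportingController": "example.com/sample-controller",
  "reportingInstance": "sample-controller-node-1",
  "note": "example note text"
}
```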
|
||||
|
||||
#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v125}
|
||||
|
@ -235,7 +240,9 @@ The **policy/v1beta1** API version of PodDisruptionBudget is no longer served as
|
|||
* Migrate manifests and API clients to use the **policy/v1** API version, available since v1.21.
|
||||
* All existing persisted objects are accessible via the new API
|
||||
* Notable changes in **policy/v1**:
|
||||
* an empty `spec.selector` (`{}`) written to a `policy/v1` PodDisruptionBudget selects all pods in the namespace (in `policy/v1beta1` an empty `spec.selector` selected no pods). An unset `spec.selector` selects no pods in either API version.
|
||||
* an empty `spec.selector` (`{}`) written to a `policy/v1` PodDisruptionBudget selects all
|
||||
pods in the namespace (in `policy/v1beta1` an empty `spec.selector` selected no pods).
|
||||
An unset `spec.selector` selects no pods in either API version.
|
||||
-->
|
||||
从 v1.25 版本开始不再提供 **policy/v1beta1** API 版本的 PodDisruptionBudget。
|
||||
|
||||
|
@ -250,7 +257,8 @@ The **policy/v1beta1** API version of PodDisruptionBudget is no longer served as
|
|||
#### PodSecurityPolicy {#psp-v125}
|
||||
|
||||
<!--
|
||||
PodSecurityPolicy in the **policy/v1beta1** API version is no longer served as of v1.25, and the PodSecurityPolicy admission controller will be removed.
|
||||
PodSecurityPolicy in the **policy/v1beta1** API version is no longer served as of v1.25,
|
||||
and the PodSecurityPolicy admission controller will be removed.
|
||||
|
||||
Migrate to [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
|
||||
or a [3rd party admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/).
|
||||
|
@ -260,7 +268,7 @@ For more information on the deprecation, see [PodSecurityPolicy Deprecation: Pas
|
|||
从 v1.25 版本开始不再提供 **policy/v1beta1** API 版本中的 PodSecurityPolicy,
|
||||
并且 PodSecurityPolicy 准入控制器也会被删除。
|
||||
|
||||
迁移到 [Pod 安全准入](/zh-cn/docs/concepts/security/pod-security-admission/)或[第三方准入 webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/)。
|
||||
迁移到 [Pod 安全准入](/zh-cn/docs/concepts/security/pod-security-admission/)或[第三方准入 Webhook](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/)。
|
||||
有关迁移指南,请参阅[从 PodSecurityPolicy 迁移到内置 PodSecurity 准入控制器](/zh-cn/docs/tasks/configure-pod-container/migrate-from-psp/)。
|
||||
有关弃用的更多信息,请参阅 [PodSecurityPolicy 弃用:过去、现在和未来](/zh-cn/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/)。
|
||||
|
||||
|
@ -292,7 +300,8 @@ The **v1.22** release stopped serving the following deprecated API versions:
|
|||
#### Webhook 资源 {#webhook-resources-v122}
|
||||
|
||||
<!--
|
||||
The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration and ValidatingWebhookConfiguration is no longer served as of v1.22.
|
||||
The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfiguration
|
||||
and ValidatingWebhookConfiguration is no longer served as of v1.22.
|
||||
-->
|
||||
**admissionregistration.k8s.io/v1beta1** API 版本的 MutatingWebhookConfiguration
|
||||
和 ValidatingWebhookConfiguration 不在 v1.22 版本中继续提供。
|
||||
|
@ -307,23 +316,25 @@ The **admissionregistration.k8s.io/v1beta1** API version of MutatingWebhookConfi
|
|||
|
||||
<!--
|
||||
* Notable changes:
|
||||
* `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
|
||||
* `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
|
||||
* `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
|
||||
* `webhooks[*].sideEffects` default value is removed, and the field made required, and only `None` and `NoneOnDryRun` are permitted for v1
|
||||
* `webhooks[*].admissionReviewVersions` default value is removed and the field made required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
|
||||
* `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
|
||||
* `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1
|
||||
* `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1
|
||||
* `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1
|
||||
* `webhooks[*].sideEffects` default value is removed, and the field made required,
|
||||
and only `None` and `NoneOnDryRun` are permitted for v1
|
||||
* `webhooks[*].admissionReviewVersions` default value is removed and the field made
|
||||
required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)
|
||||
* `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io/v1`
|
||||
-->
|
||||
* 值得注意的变更:
|
||||
* `webhooks[*].failurePolicy` 在 v1 版本中默认值从 `Ignore` 改为 `Fail`
|
||||
* `webhooks[*].matchPolicy` 在 v1 版本中默认值从 `Exact` 改为 `Equivalent`
|
||||
* `webhooks[*].timeoutSeconds` 在 v1 版本中默认值从 `30s` 改为 `10s`
|
||||
* `webhooks[*].sideEffects` 的默认值被删除,并且该字段变为必须指定;
|
||||
在 v1 版本中可选的值只能是 `None` 和 `NoneOnDryRun` 之一
|
||||
* `webhooks[*].admissionReviewVersions` 的默认值被删除,在 v1
|
||||
版本中此字段变为必须指定(AdmissionReview 的被支持版本包括 `v1` 和 `v1beta1`)
|
||||
* `webhooks[*].name` 必须在通过 `admissionregistration.k8s.io/v1`
|
||||
创建的对象列表中唯一
|
||||
* `webhooks[*].failurePolicy` 在 v1 版本中默认值从 `Ignore` 改为 `Fail`
|
||||
* `webhooks[*].matchPolicy` 在 v1 版本中默认值从 `Exact` 改为 `Equivalent`
|
||||
* `webhooks[*].timeoutSeconds` 在 v1 版本中默认值从 `30s` 改为 `10s`
|
||||
* `webhooks[*].sideEffects` 的默认值被删除,并且该字段变为必须指定;
|
||||
在 v1 版本中可选的值只能是 `None` 和 `NoneOnDryRun` 之一
|
||||
* `webhooks[*].admissionReviewVersions` 的默认值被删除,在 v1
|
||||
版本中此字段变为必须指定(AdmissionReview 的被支持版本包括 `v1` 和 `v1beta1`)
|
||||
* `webhooks[*].name` 必须在通过 `admissionregistration.k8s.io/v1`
|
||||
创建的对象列表中唯一
|
||||
|
||||
#### CustomResourceDefinition {#customresourcedefinition-v122}
|
||||
|
||||
|
@ -340,38 +351,43 @@ The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition is
|
|||
* 所有的已保存的对象都可以通过新的 API 来访问;
|
||||
<!--
|
||||
* Notable changes:
|
||||
* `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
|
||||
* `spec.version` is removed in v1; use `spec.versions` instead
|
||||
* `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
|
||||
* `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
|
||||
* `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
|
||||
* `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
|
||||
* `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified
|
||||
* `spec.version` is removed in v1; use `spec.versions` instead
|
||||
* `spec.validation` is removed in v1; use `spec.versions[*].schema` instead
|
||||
* `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead
|
||||
* `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead
|
||||
* `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1
|
||||
-->
|
||||
* 值得注意的变更:
|
||||
* `spec.scope` 的默认值不再是 `Namespaced`,该字段必须显式指定
|
||||
* `spec.version` 在 v1 版本中被删除;应改用 `spec.versions`
|
||||
* `spec.validation` 在 v1 版本中被删除;应改用 `spec.versions[*].schema`
|
||||
* `spec.subresources` 在 v1 版本中被删除;应改用 `spec.versions[*].subresources`
|
||||
* `spec.additionalPrinterColumns` 在 v1 版本中被删除;应改用
|
||||
`spec.versions[*].additionalPrinterColumns`
|
||||
* `spec.conversion.webhookClientConfig` 在 v1 版本中被移动到
|
||||
`spec.conversion.webhook.clientConfig` 中
|
||||
<!--
|
||||
* `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
|
||||
* `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects, and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
|
||||
* `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects; it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
|
||||
* In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1 (fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
|
||||
-->
|
||||
* `spec.conversion.conversionReviewVersions` 在 v1 版本中被移动到
|
||||
`spec.conversion.webhook.conversionReviewVersions`
|
||||
* `spec.versions[*].schema.openAPIV3Schema` 在创建 v1 版本的
|
||||
CustomResourceDefinition 对象时变成必需字段,并且其取值必须是一个
|
||||
[结构化的 Schema](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
|
||||
* `spec.preserveUnknownFields: true` 在创建 v1 版本的 CustomResourceDefinition
|
||||
对象时不允许指定;该配置必须在 Schema 定义中使用
|
||||
`x-kubernetes-preserve-unknown-fields: true` 来设置
|
||||
* 在 v1 版本中,`additionalPrinterColumns` 的条目中的 `JSONPath` 字段被更名为
|
||||
`jsonPath`(补丁 [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
|
||||
* `spec.scope` 的默认值不再是 `Namespaced`,该字段必须显式指定
|
||||
* `spec.version` 在 v1 版本中被删除;应改用 `spec.versions`
|
||||
* `spec.validation` 在 v1 版本中被删除;应改用 `spec.versions[*].schema`
|
||||
* `spec.subresources` 在 v1 版本中被删除;应改用 `spec.versions[*].subresources`
|
||||
* `spec.additionalPrinterColumns` 在 v1 版本中被删除;应改用
|
||||
`spec.versions[*].additionalPrinterColumns`
|
||||
* `spec.conversion.webhookClientConfig` 在 v1 版本中被移动到
|
||||
`spec.conversion.webhook.clientConfig` 中
|
||||
|
||||
<!--
|
||||
* `spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1
|
||||
* `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects,
|
||||
and must be a [structural schema](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
|
||||
* `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects;
|
||||
it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`
|
||||
* In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1
|
||||
(fixes [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
|
||||
-->
|
||||
|
||||
* `spec.conversion.conversionReviewVersions` 在 v1 版本中被移动到
|
||||
`spec.conversion.webhook.conversionReviewVersions`
|
||||
* `spec.versions[*].schema.openAPIV3Schema` 在创建 v1 版本的
|
||||
CustomResourceDefinition 对象时变成必需字段,并且其取值必须是一个
|
||||
[结构化的 Schema](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema)
|
||||
* `spec.preserveUnknownFields: true` 在创建 v1 版本的 CustomResourceDefinition
|
||||
对象时不允许指定;该配置必须在 Schema 定义中使用
|
||||
`x-kubernetes-preserve-unknown-fields: true` 来设置
|
||||
* 在 v1 版本中,`additionalPrinterColumns` 的条目中的 `JSONPath` 字段被更名为
|
||||
`jsonPath`(补丁 [#66531](https://github.com/kubernetes/kubernetes/issues/66531))
|
||||
|
||||
#### APIService {#apiservice-v122}

@ -407,11 +423,12 @@ The **authentication.k8s.io/v1beta1** API version of TokenReview is no longer se

#### SubjectAccessReview resources {#subjectaccessreview-resources-v122}

<!--
The **authorization.k8s.io/v1beta1** API version of LocalSubjectAccessReview,
SelfSubjectAccessReview, SubjectAccessReview, and SelfSubjectRulesReview is no longer served as of v1.22.

* Migrate manifests and API clients to use the **authorization.k8s.io/v1** API version, available since v1.6.
* Notable changes:
  * `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https://github.com/kubernetes/kubernetes/issues/32709))
-->
**authorization.k8s.io/v1beta1** API 版本的 LocalSubjectAccessReview、
SelfSubjectAccessReview、SubjectAccessReview、SelfSubjectRulesReview 不在 v1.22 版本中继续提供。
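As a rough illustration of the renamed field, a v1 SubjectAccessReview could look
like the following; the user, group, and resource values are made up for the example:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com
  groups:                      # spec.group in v1beta1, spec.groups in v1
    - developers
  resourceAttributes:
    namespace: dev
    verb: get
    group: apps
    resource: deployments
```
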
@ -440,13 +457,15 @@ v1.22 版本中继续提供。

<!--
* Notable changes in `certificates.k8s.io/v1`:
  * For API clients requesting certificates:
    * `spec.signerName` is now required
      (see [known Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)),
      and requests for `kubernetes.io/legacy-unknown` are not allowed to be created via the `certificates.k8s.io/v1` API
    * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages
  * For API clients approving or signing certificates:
    * `status.conditions` may not contain duplicate types
    * `status.conditions[*].status` is now required
    * `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks
-->
* `certificates.k8s.io/v1` 中需要额外注意的变更:
  * 对于请求证书的 API 客户端而言:

@ -455,7 +474,7 @@ v1.22 版本中继续提供。

      并且通过 `certificates.k8s.io/v1` API 不可以创建签署者为
      `kubernetes.io/legacy-unknown` 的请求
    * `spec.usages` 现在变成必需字段,其中不可以包含重复的字符串值,
      并且只能包含已知的用法字符串
  * 对于要批准或者签署证书的 API 客户端而言:
    * `status.conditions` 中不可以包含重复的类型
    * `status.conditions[*].status` 字段现在变为必需字段
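For reference, a v1 CertificateSigningRequest that satisfies the required fields
might look roughly like this; the object name and the base64 CSR payload are
placeholders:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-app-client-cert
spec:
  request: <base64-encoded PKCS#10 CSR>              # placeholder, not a real CSR
  signerName: kubernetes.io/kube-apiserver-client    # required in v1
  usages:                                            # required in v1, known usages only
    - client auth
```
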
@ -494,11 +513,12 @@ The **extensions/v1beta1** and **networking.k8s.io/v1beta1** API versions of Ing

* 所有的已保存的对象都可以通过新的 API 来访问;

<!--
* Notable changes:
  * `spec.backend` is renamed to `spec.defaultBackend`
  * The backend `serviceName` field is renamed to `service.name`
  * Numeric backend `servicePort` fields are renamed to `service.port.number`
  * String backend `servicePort` fields are renamed to `service.port.name`
  * `pathType` is now required for each specified path. Options are `Prefix`,
    `Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.
-->
* 值得注意的变更:
  * `spec.backend` 字段被更名为 `spec.defaultBackend`
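To make the renames concrete, here is a sketch of a `networking.k8s.io/v1` Ingress
using the new field names; the host, service names, and ports are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  defaultBackend:              # was spec.backend
    service:
      name: fallback-svc       # was serviceName
      port:
        number: 8080           # was a numeric servicePort
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix   # pathType is required in v1
            backend:
              service:
                name: app-svc
                port:
                  name: http   # was a string servicePort
```
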
@ -526,9 +546,10 @@ The **networking.k8s.io/v1beta1** API version of IngressClass is no longer serve

* 没有需要额外注意的变更。

<!--
#### RBAC resources {#rbac-resources-v122}

The **rbac.authorization.k8s.io/v1beta1** API version of ClusterRole, ClusterRoleBinding,
Role, and RoleBinding is no longer served as of v1.22.

* Migrate manifests and API clients to use the **rbac.authorization.k8s.io/v1** API version, available since v1.8.
* All existing persisted objects are accessible via the new APIs
@ -623,9 +644,11 @@ v1.16 版本中不再继续提供。

* 所有的已保存的对象都可以通过新的 API 来访问;

<!--
* Notable changes:
  * `spec.templateGeneration` is removed
  * `spec.selector` is now required and immutable after creation; use the existing
    template labels as the selector for seamless upgrades
  * `spec.updateStrategy.type` now defaults to `RollingUpdate`
    (the default in `extensions/v1beta1` was `OnDelete`)
-->
* 值得注意的变更:
  * `spec.templateGeneration` 字段被删除
@ -649,11 +672,15 @@ Deployment 在 v1.16 版本中不再继续提供。

* 所有的已保存的对象都可以通过新的 API 来访问;

<!--
* Notable changes:
  * `spec.rollbackTo` is removed
  * `spec.selector` is now required and immutable after creation; use the existing
    template labels as the selector for seamless upgrades
  * `spec.progressDeadlineSeconds` now defaults to `600` seconds
    (the default in `extensions/v1beta1` was no deadline)
  * `spec.revisionHistoryLimit` now defaults to `10`
    (the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
  * `maxSurge` and `maxUnavailable` now default to `25%`
    (the default in `extensions/v1beta1` was `1`)
-->
* 值得注意的变更:
  * `spec.rollbackTo` 字段被删除
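The selector requirement trips up many migrations, so a minimal `apps/v1`
Deployment is sketched below; the app name and image are placeholders. The point
is that `spec.selector.matchLabels` must be set and must match the template labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:                 # required in apps/v1, immutable after creation
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web            # must match spec.selector
    spec:
      containers:
        - name: web
          image: nginx:1.25
```
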
@ -681,8 +708,10 @@ The **apps/v1beta1** and **apps/v1beta2** API versions of StatefulSet are no lon

* 所有的已保存的对象都可以通过新的 API 来访问;

<!--
* Notable changes:
  * `spec.selector` is now required and immutable after creation;
    use the existing template labels as the selector for seamless upgrades
  * `spec.updateStrategy.type` now defaults to `RollingUpdate`
    (the default in `apps/v1beta1` was `OnDelete`)
-->
* 值得注意的变更:
  * `spec.selector` 字段现在变为必需字段,并且在 StatefulSet 创建之后不可变更;
@ -705,7 +734,7 @@ ReplicaSet 在 v1.16 版本中不再继续提供。

* 所有的已保存的对象都可以通过新的 API 来访问;

<!--
* Notable changes:
  * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
-->
* 值得注意的变更:
  * `spec.selector` 现在变成必需字段,并且在对象创建之后不可变更;

@ -377,7 +377,7 @@ Install CNI plugins (required for most pod network):

安装 CNI 插件(大多数 Pod 网络都需要):

```bash
CNI_PLUGINS_VERSION="v1.2.0"
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"

@ -409,7 +409,7 @@ Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)

安装 crictl(kubeadm/kubelet 容器运行时接口(CRI)所需)

```bash
CRICTL_VERSION="v1.26.0"
CRICTL_VERSION="v1.27.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
```

@ -426,7 +426,7 @@ cd $DOWNLOAD_DIR

sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}

RELEASE_VERSION="v0.4.0"
RELEASE_VERSION="v0.15.1"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

@ -179,5 +179,4 @@ CRI-compatible adapter like [`cri-dockerd`](https://github.com/Mirantis/cri-dock

[迁移到不同的运行时](/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/)
找到更多信息。或者,如果你想在 Kubernetes v1.24 及以后的版本仍使用 Docker Engine,
可以安装 CRI 兼容的适配器实现,如 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)。
[`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)。

@ -13,17 +13,17 @@ weight: 140
|
|||
<!--
|
||||
This page shows how to configure liveness, readiness and startup probes for containers.
|
||||
|
||||
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) uses liveness probes to know when to
|
||||
restart a container. For example, liveness probes could catch a deadlock,
|
||||
where an application is running, but unable to make progress. Restarting a
|
||||
container in such a state can help to make the application more available
|
||||
despite bugs.
|
||||
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) uses
|
||||
liveness probes to know when to restart a container. For example, liveness
|
||||
probes could catch a deadlock, where an application is running, but unable to
|
||||
make progress. Restarting a container in such a state can help to make the
|
||||
application more available despite bugs.
|
||||
-->
|
||||
这篇文章介绍如何给容器配置存活(Liveness)、就绪(Readiness)和启动(Startup)探针。
|
||||
|
||||
[kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/)
|
||||
使用存活探针来确定什么时候要重启容器。
|
||||
例如,存活探针可以探测到应用死锁(应用程序在运行,但是无法继续执行后面的步骤)情况。
|
||||
例如,存活探针可以探测到应用死锁(应用在运行,但是无法继续执行后面的步骤)情况。
|
||||
重启这种状态下的容器有助于提高应用的可用性,即使其中存在缺陷。
|
||||
|
||||
<!--
|
||||
|
@ -41,7 +41,7 @@ One use of this signal is to control which Pods are used as backends for Service
|
|||
When a Pod is not ready, it is removed from Service load balancers.
|
||||
|
||||
The kubelet uses startup probes to know when a container application has started.
|
||||
If such a probe is configured, it disables liveness and readiness checks until
|
||||
If such a probe is configured, liveness and readiness probes do not start until
|
||||
it succeeds, making sure those probes don't interfere with the application startup.
|
||||
This can be used to adopt liveness checks on slow starting containers, avoiding them
|
||||
getting killed by the kubelet before they are up and running.
|
||||
|
@ -52,8 +52,7 @@ kubelet 使用就绪探针可以知道容器何时准备好接受请求流量,
|
|||
若 Pod 尚未就绪,会被从 Service 的负载均衡器中剔除。
|
||||
|
||||
kubelet 使用启动探针来了解应用容器何时启动。
|
||||
如果配置了这类探针,你就可以控制容器在启动成功后再进行存活性和就绪态检查,
|
||||
确保这些存活、就绪探针不会影响应用的启动。
|
||||
如果配置了这类探针,存活探针和就绪探针成功之前不会重启,确保这些探针不会影响应用的启动。
|
||||
启动探针可以用于对慢启动容器进行存活性检测,避免它们在启动运行之前就被杀掉。
|
||||
|
||||
{{< caution >}}
|
||||
|
@ -74,9 +73,9 @@ scalable; and increased workload on remaining pods due to some failed pods.
|
|||
Understand the difference between readiness and liveness probes and when to apply them for your app.
|
||||
-->
|
||||
错误的存活探针可能会导致级联故障。
|
||||
这会导致在高负载下容器重启;例如由于应用程序无法扩展,导致客户端请求失败;以及由于某些
|
||||
这会导致在高负载下容器重启;例如由于应用无法扩展,导致客户端请求失败;以及由于某些
|
||||
Pod 失败而导致剩余 Pod 的工作负载增加。了解就绪探针和存活探针之间的区别,
|
||||
以及何时为应用程序配置使用它们非常重要。
|
||||
以及何时为应用配置使用它们非常重要。
|
||||
{{< /note >}}
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
@ -161,14 +160,14 @@ The output indicates that no liveness probes have failed yet:
|
|||
-->
|
||||
输出结果表明还没有存活探针失败:
|
||||
|
||||
```
|
||||
```none
|
||||
Type Reason Age From Message
|
||||
---- ------ ---- ---- -------
|
||||
Normal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01
|
||||
Normal Pulling 9s kubelet, node01 Pulling image "registry.k8s.io/busybox"
|
||||
Normal Pulled 7s kubelet, node01 Successfully pulled image "registry.k8s.io/busybox"
|
||||
Normal Created 7s kubelet, node01 Created container liveness
|
||||
Normal Started 7s kubelet, node01 Started container liveness
|
||||
---- ------ ---- ---- -------
|
||||
Normal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01
|
||||
Normal Pulling 9s kubelet, node01 Pulling image "registry.k8s.io/busybox"
|
||||
Normal Pulled 7s kubelet, node01 Successfully pulled image "registry.k8s.io/busybox"
|
||||
Normal Created 7s kubelet, node01 Created container liveness
|
||||
Normal Started 7s kubelet, node01 Started container liveness
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -186,16 +185,16 @@ probes have failed, and the failed containers have been killed and recreated.
|
|||
-->
|
||||
在输出结果的最下面,有信息显示存活探针失败了,这个失败的容器被杀死并且被重建了。
|
||||
|
||||
```
|
||||
Type Reason Age From Message
|
||||
---- ------ ---- ---- -------
|
||||
Normal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01
|
||||
Normal Pulling 55s kubelet, node01 Pulling image "registry.k8s.io/busybox"
|
||||
Normal Pulled 53s kubelet, node01 Successfully pulled image "registry.k8s.io/busybox"
|
||||
Normal Created 53s kubelet, node01 Created container liveness
|
||||
Normal Started 53s kubelet, node01 Started container liveness
|
||||
Warning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
|
||||
Normal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted
|
||||
```none
|
||||
Type Reason Age From Message
|
||||
---- ------ ---- ---- -------
|
||||
Normal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01
|
||||
Normal Pulling 55s kubelet, node01 Pulling image "registry.k8s.io/busybox"
|
||||
Normal Pulled 53s kubelet, node01 Successfully pulled image "registry.k8s.io/busybox"
|
||||
Normal Created 53s kubelet, node01 Created container liveness
|
||||
Normal Started 53s kubelet, node01 Started container liveness
|
||||
Warning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
|
||||
Normal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -208,12 +207,13 @@ kubectl get pod liveness-exec
|
|||
```
|
||||
|
||||
<!--
|
||||
The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter increments as soon as a failed container comes back to the running state:
|
||||
The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter
|
||||
increments as soon as a failed container comes back to the running state:
|
||||
-->
|
||||
输出结果显示 `RESTARTS` 的值增加了 1。
|
||||
请注意,一旦失败的容器恢复为运行状态,`RESTARTS` 计数器就会增加 1:
|
||||
|
||||
```
|
||||
```none
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
liveness-exec 1/1 Running 1 1m
|
||||
```
|
||||
|
@ -222,8 +222,7 @@ liveness-exec 1/1 Running 1 1m
|
|||
## Define a liveness HTTP request
|
||||
|
||||
Another kind of liveness probe uses an HTTP GET request. Here is the configuration
|
||||
file for a Pod that runs a container based on the `registry.k8s.io/liveness`
|
||||
image.
|
||||
file for a Pod that runs a container based on the `registry.k8s.io/liveness` image.
|
||||
-->
|
||||
## 定义一个存活态 HTTP 请求接口 {#define-a-liveness-HTTP-request}
|
||||
|
||||
|
@ -307,14 +306,9 @@ kubectl describe pod liveness-http
|
|||
```
|
||||
|
||||
<!--
|
||||
In releases prior to v1.13 (including v1.13), if the environment variable
|
||||
`http_proxy` (or `HTTP_PROXY`) is set on the node where a Pod is running,
|
||||
the HTTP liveness probe uses that proxy.
|
||||
In releases after v1.13, local HTTP proxy environment variable settings do not
|
||||
affect the HTTP liveness probe.
|
||||
-->
|
||||
在 1.13 之前(包括 1.13)的版本中,如果在 Pod 运行的节点上设置了环境变量
|
||||
`http_proxy`(或者 `HTTP_PROXY`),HTTP 的存活探测会使用这个代理。
|
||||
在 1.13 之后的版本中,设置本地的 HTTP 代理环境变量不会影响 HTTP 的存活探测。
|
||||
|
||||
<!--
|
||||
|
@ -380,7 +374,8 @@ kubectl describe pod goproxy
|
|||
{{< feature-state for_k8s_version="v1.24" state="beta" >}}
|
||||
|
||||
<!--
|
||||
If your application implements the [gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
|
||||
If your application implements the
|
||||
[gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
|
||||
this example shows how to configure Kubernetes to use it for application liveness checks.
|
||||
Similarly you can configure readiness and startup probes.
|
||||
|
||||
|
@ -393,7 +388,7 @@ kubelet 可以配置为使用该协议来执行应用存活性检查。
|
|||
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
才能配置依赖于 gRPC 的检查机制。
|
||||
|
||||
这个例子展示了如何配置 Kubernetes 以将其用于应用程序的存活性检查。
|
||||
这个例子展示了如何配置 Kubernetes 以将其用于应用的存活性检查。
|
||||
类似地,你可以配置就绪探针和启动探针。
|
||||
|
||||
下面是一个示例清单:
|
||||
|
@ -404,9 +399,9 @@ kubelet 可以配置为使用该协议来执行应用存活性检查。
|
|||
To use a gRPC probe, `port` must be configured. If you want to distinguish probes of different types
|
||||
and probes for different features you can use the `service` field.
|
||||
You can set `service` to the value `liveness` and make your gRPC Health Checking endpoint
|
||||
respond to this request differently then when you set `service` set to `readiness`.
|
||||
respond to this request differently than when you set `service` set to `readiness`.
|
||||
This lets you use the same endpoint for different kinds of container health check
|
||||
(rather than needing to listen on two different ports).
|
||||
rather than listening on two different ports.
|
||||
If you want to specify your own custom service name and also specify a probe type,
|
||||
the Kubernetes project recommends that you use a name that concatenates
|
||||
those. For example: `myservice-liveness` (using `-` as a separator).
|
||||
|
@ -415,14 +410,14 @@ those. For example: `myservice-liveness` (using `-` as a separator).

如果要区分不同类型的探针和不同功能的探针,可以使用 `service` 字段。
你可以将 `service` 设置为 `liveness`,并使你的 gRPC
健康检查端点对该请求的响应与将 `service` 设置为 `readiness` 时不同。
这使你可以使用相同的端点进行不同类型的容器健康检查(而不需要在两个不同的端口上侦听)。
这使你可以使用相同的端点进行不同类型的容器健康检查而不是监听两个不同的端口。
如果你想指定自己的自定义服务名称并指定探测类型,Kubernetes
项目建议你使用一个可以关联服务和探测类型的名称来命名。
例如:`myservice-liveness`(使用 `-` 作为分隔符)。

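A rough sketch of the pattern described above, reusing the same gRPC port for two
probe types; the port number and service names are illustrative:

```yaml
livenessProbe:
  grpc:
    port: 2379
    service: liveness      # the health endpoint can branch on this value
  initialDelaySeconds: 10

readinessProbe:
  grpc:
    port: 2379
    service: readiness
  periodSeconds: 5
```
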
{{< note >}}
|
||||
<!--
|
||||
Unlike HTTP and TCP probes, you cannot specify the healthcheck port by name, and you
|
||||
Unlike HTTP and TCP probes, you cannot specify the health check port by name, and you
|
||||
cannot configure a custom hostname.
|
||||
-->
|
||||
与 HTTP 和 TCP 探针不同,gRPC 探测不能使用按名称指定端口,
|
||||
|
@ -430,13 +425,13 @@ cannot configure a custom hostname.
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Configuration problems (for example: incorrect port and service, unimplemented health checking protocol)
|
||||
Configuration problems (for example: incorrect port or service, unimplemented health checking protocol)
|
||||
are considered a probe failure, similar to HTTP and TCP probes.
|
||||
|
||||
To try the gRPC liveness check, create a Pod using the command below.
|
||||
In the example below, the etcd pod is configured to use gRPC liveness probe.
|
||||
-->
|
||||
配置问题(例如:错误的 `port` 和 `service`、未实现健康检查协议)
|
||||
配置问题(例如:错误的 `port` 或 `service`、未实现健康检查协议)
|
||||
都被认作是探测失败,这一点与 HTTP 和 TCP 探针类似。
|
||||
|
||||
```shell
|
||||
|
@ -453,24 +448,26 @@ kubectl describe pod etcd-with-grpc
|
|||
```
|
||||
|
||||
<!--
|
||||
Before Kubernetes 1.23, gRPC health probes were often implemented using [grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/),
|
||||
as described in the blog post [Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/).
|
||||
The built-in gRPC probes behavior is similar to one implemented by grpc-health-probe.
|
||||
Before Kubernetes 1.23, gRPC health probes were often implemented using
|
||||
[grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/),
|
||||
as described in the blog post
|
||||
[Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/).
|
||||
The built-in gRPC probe's behavior is similar to the one implemented by grpc-health-probe.
|
||||
When migrating from grpc-health-probe to built-in probes, remember the following differences:
|
||||
-->
|
||||
在 Kubernetes 1.23 之前,gRPC 健康探测通常使用
|
||||
[grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/)
|
||||
来实现,如博客 [Health checking gRPC servers on Kubernetes(对 Kubernetes 上的 gRPC 服务器执行健康检查)](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/)所描述。
|
||||
内置的 gRPC 探针行为与 `grpc-health-probe` 所实现的行为类似。
|
||||
内置的 gRPC 探针的行为与 `grpc-health-probe` 所实现的行为类似。
|
||||
从 `grpc-health-probe` 迁移到内置探针时,请注意以下差异:
|
||||
|
||||
<!--
|
||||
- Built-in probes run against the pod IP address, unlike grpc-health-probe that often runs against `127.0.0.1`.
|
||||
Be sure to configure your gRPC endpoint to listen on the Pod's IP address.
|
||||
- Built-in probes run against the pod IP address, unlike grpc-health-probe that often runs against
|
||||
`127.0.0.1`. Be sure to configure your gRPC endpoint to listen on the Pod's IP address.
|
||||
- Built-in probes do not support any authentication parameters (like `-tls`).
|
||||
- There are no error codes for built-in probes. All errors are considered as probe failures.
|
||||
- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does **not** respect the `timeoutSeconds` setting (which defaults to 1s),
|
||||
while built-in probe would fail on timeout.
|
||||
- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does **not**
|
||||
respect the `timeoutSeconds` setting (which defaults to 1s), while built-in probe would fail on timeout.
|
||||
-->
|
||||
- 内置探针运行时针对的是 Pod 的 IP 地址,不像 `grpc-health-probe`
|
||||
那样通常针对 `127.0.0.1` 执行探测;
|
||||
|
@ -484,8 +481,7 @@ When migrating from grpc-health-probe to built-in probes, remember the following
|
|||
<!--
|
||||
## Use a named port
|
||||
|
||||
You can use a named
|
||||
[`port`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#ports)
|
||||
You can use a named [`port`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#ports)
|
||||
for HTTP and TCP probes. (gRPC probes do not support named ports).
|
||||
|
||||
For example:
|
||||
|
@ -560,7 +556,7 @@ provide a fast response to container deadlocks.

If the startup probe never succeeds, the container is killed after 300s and
subject to the pod's `restartPolicy`.
-->
幸亏有启动探测,应用程序将会有最多 5 分钟(30 * 10 = 300s)的时间来完成其启动过程。
幸亏有启动探测,应用将会有最多 5 分钟(30 * 10 = 300s)的时间来完成其启动过程。
一旦启动探测成功一次,存活探测任务就会接管对容器的探测,对容器死锁作出快速响应。
如果启动探测一直没有成功,容器会在 300 秒后被杀死,并且根据 `restartPolicy`
来执行进一步处置。

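A sketch of the probe combination described above; the `/healthz` path and the port
are placeholders for whatever endpoint your application exposes:

```yaml
ports:
  - name: liveness-port
    containerPort: 8080

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30    # 30 * 10s = up to 300s for the application to start
  periodSeconds: 10
```
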
@ -594,7 +590,9 @@ Readiness probes runs on the container during its whole lifecycle.
|
|||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Liveness probes *do not* wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.
|
||||
Liveness probes *do not* wait for readiness probes to succeed.
|
||||
If you want to wait before executing a liveness probe you should use
|
||||
`initialDelaySeconds` or a `startupProbe`.
|
||||
-->
|
||||
存活探针**不等待**就绪性探针成功。
|
||||
如果要在执行存活探针之前等待,应该使用 `initialDelaySeconds` 或 `startupProbe`。
|
||||
|
@ -635,33 +633,31 @@ HTTP 和 TCP 的就绪探针配置也和存活探针的配置完全相同。
|
|||
-->
|
||||
## 配置探针 {#configure-probes}
|
||||
|
||||
<!--
|
||||
Eventually, some of this section could be moved to a concept topic.
|
||||
-->
|
||||
{{< comment >}}
|
||||
最后,本节的一些内容可以放到某个概念主题里。
|
||||
{{< /comment >}}
|
||||
<!--Eventually, some of this section could be moved to a concept topic.-->
|
||||
|
||||
<!--
|
||||
[Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core) have a number of fields that
|
||||
you can use to more precisely control the behavior of startup, liveness and readiness
|
||||
checks:
|
||||
[Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
|
||||
have a number of fields that you can use to more precisely control the behavior of startup,
|
||||
liveness and readiness checks:
|
||||
-->
|
||||
[Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
|
||||
有很多配置字段,可以使用这些字段精确地控制启动、存活和就绪检测的行为:
|
||||
|
||||
<!--
|
||||
* `initialDelaySeconds`: Number of seconds after the container has started
|
||||
before startup, liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
|
||||
* `periodSeconds`: How often (in seconds) to perform the probe. Default to 10
|
||||
seconds. Minimum value is 1.
|
||||
* `timeoutSeconds`: Number of seconds after which the probe times out. Defaults
|
||||
to 1 second. Minimum value is 1.
|
||||
* `successThreshold`: Minimum consecutive successes for the probe to be
|
||||
considered successful after having failed. Defaults to 1. Must be 1 for liveness
|
||||
and startup Probes. Minimum value is 1.
|
||||
* `initialDelaySeconds`: Number of seconds after the container has started before startup,
|
||||
liveness or readiness probes are initiated. If a startup probe is defined, liveness and
|
||||
readiness probe delays do not begin until the startup probe has succeeded.
|
||||
Defaults to 0 seconds. Minimum value is 0.
|
||||
* `periodSeconds`: How often (in seconds) to perform the probe. Default to 10 seconds.
|
||||
The minimum value is 1.
|
||||
* `timeoutSeconds`: Number of seconds after which the probe times out.
|
||||
Defaults to 1 second. Minimum value is 1.
|
||||
* `successThreshold`: Minimum consecutive successes for the probe to be considered successful
|
||||
after having failed. Defaults to 1. Must be 1 for liveness and startup Probes.
|
||||
Minimum value is 1.
|
||||
-->
|
||||
* `initialDelaySeconds`:容器启动后要等待多少秒后才启动启动、存活和就绪探针,
|
||||
* `initialDelaySeconds`:容器启动后要等待多少秒后才启动启动、存活和就绪探针。
|
||||
如果定义了启动探针,则存活探针和就绪探针的延迟将在启动探针已成功之后才开始计算。
|
||||
默认是 0 秒,最小值是 0。
|
||||
* `periodSeconds`:执行探测的时间间隔(单位是秒)。默认是 10 秒。最小值是 1。
|
||||
* `timeoutSeconds`:探测的超时后等待多少秒。默认值是 1 秒。最小值是 1。
|
||||
|
@ -669,12 +665,11 @@ and startup Probes. Minimum value is 1.
|
|||
存活和启动探测的这个值必须是 1。最小值是 1。
|
||||
<!--
|
||||
* `failureThreshold`: After a probe fails `failureThreshold` times in a row, Kubernetes
|
||||
considers that the overall check has failed: the container is _not_ ready / healthy /
|
||||
live.
|
||||
considers that the overall check has failed: the container is _not_ ready/healthy/live.
|
||||
For the case of a startup or liveness probe, if at least `failureThreshold` probes have
|
||||
failed, Kubernetes treats the container as unhealthy and triggers a restart for that
|
||||
specific container. The kubelet takes the setting of `terminationGracePeriodSeconds`
|
||||
for that container into account.
|
||||
specific container. The kubelet honors the setting of `terminationGracePeriodSeconds`
|
||||
for that container.
|
||||
For a failed readiness probe, the kubelet continues running the container that failed
|
||||
checks, and also continues to run more probes; because the check failed, the kubelet
|
||||
sets the `Ready` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)
|
||||
|
@ -684,14 +679,14 @@ and startup Probes. Minimum value is 1.
|
|||
Kubernetes 认为总体上检查已失败:容器状态未就绪、不健康、不活跃。
|
||||
对于启动探针或存活探针而言,如果至少有 `failureThreshold` 个探针已失败,
|
||||
Kubernetes 会将容器视为不健康并为这个特定的容器触发重启操作。
|
||||
kubelet 会考虑该容器的 `terminationGracePeriodSeconds` 设置。
|
||||
kubelet 遵循该容器的 `terminationGracePeriodSeconds` 设置。
|
||||
对于失败的就绪探针,kubelet 继续运行检查失败的容器,并继续运行更多探针;
|
||||
因为检查失败,kubelet 将 Pod 的 `Ready`
|
||||
[状况](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)设置为 `false`。
|
||||
<!--
|
||||
* `terminationGracePeriodSeconds`: configure a grace period for the kubelet to wait
|
||||
between triggering a shut down of the failed container, and then forcing the
|
||||
container runtime to stop that container.
|
||||
* `terminationGracePeriodSeconds`: configure a grace period for the kubelet to wait between
|
||||
triggering a shut down of the failed container, and then forcing the container runtime to stop
|
||||
that container.
|
||||
The default is to inherit the Pod-level value for `terminationGracePeriodSeconds`
|
||||
(30 seconds if not specified), and the minimum value is 1.
|
||||
See [probe-level `terminationGracePeriodSeconds`](#probe-level-terminationgraceperiodseconds)
|
||||
|
@ -714,12 +709,11 @@ until a result was returned.
|
|||
<!--
|
||||
This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior,
|
||||
even without realizing it, as the default timeout is 1 second.
|
||||
As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ExecProbeTimeout` (set it to `false`)
|
||||
on each kubelet to restore the behavior from older versions, then remove that override
|
||||
once all the exec probes in the cluster have a `timeoutSeconds` value set.
|
||||
If you have pods that are impacted from the default 1 second timeout,
|
||||
you should update their probe timeout so that you're ready for the
|
||||
eventual removal of that feature gate.
|
||||
As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`ExecProbeTimeout` (set it to `false`) on each kubelet to restore the behavior from older versions,
|
||||
then remove that override once all the exec probes in the cluster have a `timeoutSeconds` value set.
|
||||
If you have pods that are impacted from the default 1 second timeout, you should update their
|
||||
probe timeout so that you're ready for the eventual removal of that feature gate.
|
||||
-->
|
||||
这一缺陷在 Kubernetes v1.20 版本中得到修复。你可能一直依赖于之前错误的探测行为,
|
||||
甚至都没有觉察到这一问题的存在,因为默认的超时值是 1 秒钟。
|
||||
|
@ -755,12 +749,12 @@ of processes in the container, and resource starvation if this is left unchecked
|
|||
have additional fields that can be set on `httpGet`:
|
||||
|
||||
* `host`: Host name to connect to, defaults to the pod IP. You probably want to
|
||||
set "Host" in httpHeaders instead.
|
||||
* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
|
||||
* `path`: Path to access on the HTTP server. Defaults to /.
|
||||
set "Host" in httpHeaders instead.
|
||||
* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to "HTTP".
|
||||
* `path`: Path to access on the HTTP server. Defaults to "/".
|
||||
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
|
||||
* `port`: Name or number of the port to access on the container. Number must be
|
||||
in the range 1 to 65535.
|
||||
in the range 1 to 65535.
|
||||
-->
|
||||
### HTTP 探测 {#http-probes}
|
||||
|
||||
|
@ -774,8 +768,8 @@ in the range 1 to 65535.
|
|||
* `port`:访问容器的端口号或者端口名。如果数字必须在 1~65535 之间。
|
||||
|
||||
<!--
|
||||
For an HTTP probe, the kubelet sends an HTTP request to the specified path and
|
||||
port to perform the check. The kubelet sends the probe to the pod's IP address,
|
||||
For an HTTP probe, the kubelet sends an HTTP request to the specified port and
|
||||
path to perform the check. The kubelet sends the probe to the pod's IP address,
|
||||
unless the address is overridden by the optional `host` field in `httpGet`. If
|
||||
`scheme` field is set to `HTTPS`, the kubelet sends an HTTPS request skipping the
|
||||
certificate verification. In most scenarios, you do not want to set the `host` field.
|
||||
|
@ -784,7 +778,7 @@ and the Pod's `hostNetwork` field is true. Then `host`, under `httpGet`, should
|
|||
to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more common
|
||||
case, you should not use `host`, but rather set the `Host` header in `httpHeaders`.
|
||||
-->
|
||||
对于 HTTP 探测,kubelet 发送一个 HTTP 请求到指定的路径和端口来执行检测。
|
||||
对于 HTTP 探测,kubelet 发送一个 HTTP 请求到指定的端口和路径来执行检测。
|
||||
除非 `httpGet` 中的 `host` 字段设置了,否则 kubelet 默认是给 Pod 的 IP 地址发送探测。
|
||||
如果 `scheme` 字段设置为了 `HTTPS`,kubelet 会跳过证书验证发送 HTTPS 请求。
|
||||
大多数情况下,不需要设置 `host` 字段。
|
||||
|
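Pulling the fields above together, a liveness check against an HTTP endpoint might
be declared like this; the path and port are placeholders for your own service:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # defaults to "/" if omitted
    port: 8080
    scheme: HTTP            # defaults to "HTTP"
  initialDelaySeconds: 3
  periodSeconds: 3
```
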
@ -795,16 +789,18 @@ case, you should not use `host`, but rather set the `Host` header in `httpHeader
|
|||
|
||||
<!--
|
||||
For an HTTP probe, the kubelet sends two request headers in addition to the mandatory `Host` header:
|
||||
`User-Agent`, and `Accept`. The default values for these headers are `kube-probe/{{< skew currentVersion >}}`
|
||||
(where `{{< skew currentVersion >}}` is the version of the kubelet ), and `*/*` respectively.
|
||||
- `User-Agent`: The default value is `kube-probe/{{< skew currentVersion >}}`,
|
||||
where `{{< skew currentVersion >}}` is the version of the kubelet.
|
||||
- `Accept`: The default value is `*/*`.
|
||||
|
||||
You can override the default headers by defining `.httpHeaders` for the probe; for example
|
||||
You can override the default headers by defining `httpHeaders` for the probe.
|
||||
For example
|
||||
-->
|
||||
针对 HTTP 探针,kubelet 除了必需的 `Host` 头部之外还发送两个请求头部字段:
`User-Agent` 和 `Accept`。这些头部的默认值分别是 `kube-probe/{{< skew currentVersion >}}`
(其中 `{{< skew currentVersion >}}` 是 kubelet 的版本号)和 `*/*`。
- `User-Agent`:默认值是 `kube-probe/{{< skew currentVersion >}}`,其中 `{{< skew currentVersion >}}` 是 kubelet 的版本号。
- `Accept`:默认值 `*/*`。

你可以通过为探测设置 `.httpHeaders` 来重载默认的头部字段值;例如:
你可以通过为探测设置 `httpHeaders` 来重载默认的头部字段值。例如:

```yaml
|
||||
livenessProbe:
|
||||
|
@ -842,7 +838,7 @@ startupProbe:
|
|||
<!--
|
||||
### TCP probes
|
||||
|
||||
For a TCP probe, the kubelet makes the probe connection at the node, not in the pod, which
|
||||
For a TCP probe, the kubelet makes the probe connection at the node, not in the Pod, which
|
||||
means that you can not use a service name in the `host` parameter since the kubelet is unable
|
||||
to resolve it.
|
||||
-->
|
||||
|
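For instance, a TCP-based readiness and liveness pair could be sketched as follows;
the port is a placeholder and, per the note above, `host` is left unset so the
kubelet probes the Pod's IP:

```yaml
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10

livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```
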
@ -859,10 +855,10 @@ to resolve it.
|
|||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
|
||||
<!--
|
||||
Prior to release 1.21, the pod-level `terminationGracePeriodSeconds` was used
|
||||
Prior to release 1.21, the Pod-level `terminationGracePeriodSeconds` was used
|
||||
for terminating a container that failed its liveness or startup probe. This
|
||||
coupling was unintended and may have resulted in failed containers taking an
|
||||
unusually long time to restart when a pod-level `terminationGracePeriodSeconds`
|
||||
unusually long time to restart when a Pod-level `terminationGracePeriodSeconds`
|
||||
was set.
|
||||
-->
|
||||
在 1.21 发行版之前,Pod 层面的 `terminationGracePeriodSeconds`
|
||||
|
@ -871,11 +867,11 @@ was set.
|
|||
时容器要花非常长的时间才能重新启动。
|
||||
|
||||
<!--
|
||||
In 1.25 and beyond, users can specify a probe-level `terminationGracePeriodSeconds`
|
||||
In 1.25 and above, users can specify a probe-level `terminationGracePeriodSeconds`
|
||||
as part of the probe specification. When both a pod- and probe-level
|
||||
`terminationGracePeriodSeconds` are set, the kubelet will use the probe-level value.
|
||||
-->
|
||||
在 1.21 及更高版本中,用户可以指定一个探针层面的 `terminationGracePeriodSeconds`
|
||||
在 1.25 及以上版本中,用户可以指定一个探针层面的 `terminationGracePeriodSeconds`
|
||||
作为探针规约的一部分。
|
||||
当 Pod 层面和探针层面的 `terminationGracePeriodSeconds`
|
||||
都已设置,kubelet 将使用探针层面设置的值。
|
||||
|
@ -885,8 +881,8 @@ Beginning in Kubernetes 1.25, the `ProbeTerminationGracePeriod` feature is enabl
|
|||
by default. For users choosing to disable this feature, please note the following:
|
||||
|
||||
* The `ProbeTerminationGracePeriod` feature gate is only available on the API Server.
|
||||
The kubelet always honors the probe-level `terminationGracePeriodSeconds` field if
|
||||
it is present on a Pod.
|
||||
The kubelet always honors the probe-level `terminationGracePeriodSeconds` field if
|
||||
it is present on a Pod.
|
||||
-->
|
||||
{{< note >}}
|
||||
从 Kubernetes 1.25 开始,默认启用 `ProbeTerminationGracePeriod` 特性。
|
||||
|
@ -898,23 +894,26 @@ it is present on a Pod.
|
|||
|
||||
<!--
|
||||
* If you have existing Pods where the `terminationGracePeriodSeconds` field is set and
|
||||
you no longer wish to use per-probe termination grace periods, you must delete
|
||||
those existing Pods.
|
||||
you no longer wish to use per-probe termination grace periods, you must delete
|
||||
those existing Pods.
|
||||
-->
|
||||
* 如果你已经为现有 Pod 设置了 `terminationGracePeriodSeconds`
|
||||
字段并且不再希望使用针对每个探针的终止宽限期,则必须删除现有的这类 Pod。
|
||||
|
||||
<!--
|
||||
* When you (or the control plane, or some other component) create replacement
|
||||
Pods, and the feature gate `ProbeTerminationGracePeriod` is disabled, then the
|
||||
API server ignores the Probe-level `terminationGracePeriodSeconds` field, even if
|
||||
a Pod or pod template specifies it.
|
||||
Pods, and the feature gate `ProbeTerminationGracePeriod` is disabled, then the
|
||||
API server ignores the Probe-level `terminationGracePeriodSeconds` field, even if
|
||||
a Pod or pod template specifies it.
|
||||
-->
|
||||
* 当你(或控制平面或某些其他组件)创建替换 Pod,并且特性门控 `ProbeTerminationGracePeriod`
|
||||
被禁用时,即使 Pod 或 Pod 模板指定了 `terminationGracePeriodSeconds` 字段,
|
||||
API 服务器也会忽略探针级别的 `terminationGracePeriodSeconds` 字段设置。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
For example:
|
||||
-->
|
||||
例如:
|
||||
|
||||
```yaml
|
||||
|
@ -966,4 +965,3 @@ You can also read the API references for:
|
|||
* [Pod](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/),尤其是:
|
||||
* [container](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
|
||||
* [probe](/zh-cn/docs/reference/kubernetes-api/workload-resources/pod-v1/#Probe)
|
||||
|
||||
|
|
|
@ -19,7 +19,7 @@ by applications that use the Kubernetes API, and by the control plane itself.

Auditing allows cluster administrators to answer the following questions:
-->
Kubernetes**审计(Auditing)**功能提供了与安全相关的、按时间顺序排列的记录集,
Kubernetes **审计(Auditing)** 功能提供了与安全相关的、按时间顺序排列的记录集,
记录每个用户、使用 Kubernetes API 的应用以及控制面自身引发的活动。

审计功能使得集群管理员能够回答以下问题:

@ -837,13 +837,13 @@ for scaling down which allows a 100% of the currently running replicas to be rem

means the scaling target can be scaled down to the minimum allowed replicas.
For scaling up there is no stabilization window. When the metrics indicate that the target should be
scaled up the target is scaled up immediately. There are 2 policies where 4 pods or a 100% of the currently
running replicas will be added every 15 seconds till the HPA reaches its steady state.
running replicas may at most be added every 15 seconds till the HPA reaches its steady state.
-->
用于缩小稳定窗口的时间为 **300** 秒(或是 `--horizontal-pod-autoscaler-downscale-stabilization`
参数设定值)。
只有一种缩容的策略,允许 100% 删除当前运行的副本,这意味着扩缩目标可以缩小到允许的最小副本数。
对于扩容,没有稳定窗口。当指标显示目标应该扩容时,目标会立即扩容。
这里有两种策略,每 15 秒添加 4 个 Pod 或 100% 当前运行的副本数,直到 HPA 达到稳定状态。
这里有两种策略,每 15 秒最多添加 4 个 Pod 或 100% 当前运行的副本数,直到 HPA 达到稳定状态。

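As a sketch, the defaults described above correspond roughly to the following
`behavior` stanza on an `autoscaling/v2` HorizontalPodAutoscaler:

```yaml
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # downscale stabilization window
    policies:
      - type: Percent
        value: 100                    # may remove 100% of current replicas
        periodSeconds: 15
  scaleUp:
    stabilizationWindowSeconds: 0     # no stabilization window for scaling up
    policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 4
        periodSeconds: 15
    selectPolicy: Max                 # whichever policy allows the larger change
```
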
<!--
### Example: change downscale stabilization window