" $prepend $first (slicestr $firstPara 1) -}}
- {{- replace . $firstPara $prepended | safeHTML -}}
+ {{- replace .Content $firstPara $prepended | safeHTML -}}
{{- else -}}
- {{- . -}}
+ {{- .Content -}}
{{- end -}}
{{- end -}}
+{{- else -}}
+ {{- errorf "[%s] %q: %q is not a valid glossary term_id, see ./docs/reference/glossary/* for a full list" site.Language.Lang .Page.Path $id -}}
{{- end -}}
From 607d440d36be295118db0ca2e453533b4938cf37 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 15 Oct 2024 18:24:15 +0100
Subject: [PATCH 02/37] Drop unused partial
This layout is unused; drop it.
---
layouts/partials/docs/side-menu.html | 24 ------------------------
1 file changed, 24 deletions(-)
delete mode 100644 layouts/partials/docs/side-menu.html
diff --git a/layouts/partials/docs/side-menu.html b/layouts/partials/docs/side-menu.html
deleted file mode 100644
index fb5e949645..0000000000
--- a/layouts/partials/docs/side-menu.html
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
- {{/* This can be any page in the docs tree. Need to find the correct ancestor.
- In a roundabout way. This will improve when Go templates gets assignment and break support (both in Go 1.11).
- */}}
- {{ $p := . }}
- {{ .Scratch.Set "section" .CurrentSection }}
- {{ .Scratch.Set "sectionFound" false }}
- {{ $docs := site.GetPage "section" "docs" }}
- {{ if ne .CurrentSection $docs }}
- {{ range $docs.Sections }}
- {{ if not ($.Scratch.Get "sectionFound") }}
- {{ if $p.IsDescendant . }}
- {{ $.Scratch.Set "section" . }}
- {{ $.Scratch.Set "sectionFound" true }}
- {{ end }}
- {{ end }}
- {{ end }}
- {{ end }}
- {{ $section := (.Scratch.Get "section") }}
- {{ partialCached "tree.html" $section $section.RelPermalink }}
-
From 0666011822b3ebbec179d6b3a2e6832977daa3bc Mon Sep 17 00:00:00 2001
From: Peter Hunt
Date: Mon, 6 May 2024 10:52:44 -0400
Subject: [PATCH 05/37] clarify CPU and memory limit enforcement differences
Clarify that the container runtime and kubelet do not enforce memory limits; the kernel does.
Also clarify that workloads won't always be OOM killed, and more clearly spell out the differences
between memory and CPU limits.
Signed-off-by: Peter Hunt
---
.../manage-resources-containers.md | 38 +++++++++++++------
1 file changed, 26 insertions(+), 12 deletions(-)
diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md
index c84a64eef5..6acddaf30f 100644
--- a/content/en/docs/concepts/configuration/manage-resources-containers.md
+++ b/content/en/docs/concepts/configuration/manage-resources-containers.md
@@ -28,22 +28,37 @@ that system resource specifically for that container to use.
If the node where a Pod is running has enough of a resource available, it's possible (and
allowed) for a container to use more resource than its `request` for that resource specifies.
-However, a container is not allowed to use more than its resource `limit`.
For example, if you set a `memory` request of 256 MiB for a container, and that container is in
a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use
more RAM.
-If you set a `memory` limit of 4GiB for that container, the kubelet (and
-{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}) enforce the limit.
-The runtime prevents the container from using more than the configured resource limit. For example:
-when a process in the container tries to consume more than the allowed amount of memory,
-the system kernel terminates the process that attempted the allocation, with an out of memory
-(OOM) error.
+Limits are a different story. Both `cpu` and `memory` limits are applied by the kubelet (and
+{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}),
+and are ultimately enforced by the kernel. On Linux nodes, the Linux kernel
+enforces limits with
+{{< glossary_tooltip text="cgroups" term_id="cgroup" >}}.
+The behavior of `cpu` and `memory` limit enforcement is slightly different.
-Limits can be implemented either reactively (the system intervenes once it sees a violation)
-or by enforcement (the system prevents the container from ever exceeding the limit). Different
-runtimes can have different ways to implement the same restrictions.
+`cpu` limits are enforced by CPU throttling. When a container approaches
+its `cpu` limit, the kernel throttles the container's access to the CPU so that it stays
+within the limit. Thus, a `cpu` limit is a hard limit that the kernel enforces.
+Containers may not use more CPU than is specified in their `cpu` limit.
+
+`memory` limits are enforced by the kernel with out of memory (OOM) kills. When
+a container uses more than its `memory` limit, the kernel may terminate it. However,
+terminations only happen when the kernel detects memory pressure. Thus, a
+container that over-allocates memory may not be immediately killed. This means
+`memory` limits are enforced reactively. A container may use more memory than
+its `memory` limit, but if it does, it may get killed.
+
+{{< note >}}
+There is an alpha feature `MemoryQoS` which attempts to add more preemptive
+limit enforcement for memory (as opposed to reactive enforcement by the OOM
+killer). However, this effort is
+[stalled](https://github.com/kubernetes/enhancements/tree/a47155b340/keps/sig-node/2570-memory-qos#latest-update-stalled)
+due to a potential livelock situation that a memory-hungry workload can cause.
+{{< /note >}}
{{< note >}}
If you specify a limit for a resource, but do not specify any request, and no admission-time
@@ -883,5 +898,4 @@ memory limit (and possibly request) for that container.
and its [resource requirements](/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources)
* Read about [project quotas](https://www.linux.org/docs/man8/xfs_quota.html) in XFS
* Read more about the [kube-scheduler configuration reference (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
-* Read more about [Quality of Service classes for Pods](/docs/concepts/workloads/pods/pod-qos/)
-
+* Read more about [Quality of Service classes for Pods](/docs/concepts/workloads/pods/pod-qos/)
\ No newline at end of file
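The enforcement difference described in the patch above can be illustrated with an ordinary Pod manifest that sets both kinds of limits. The sketch below uses illustrative names and values (they are not taken from the patch); the `cpu` limit acts as a hard cap that the kernel throttles against, while the `memory` limit is only acted on when the kernel detects memory pressure:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-limits-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "250m"        # used by the scheduler when placing the Pod
        memory: "256Mi"
      limits:
        cpu: "500m"        # hard cap: the kernel throttles CPU usage above this
        memory: "512Mi"    # reactive cap: exceeding this may lead to an OOM kill
```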
From cd0b9c3a0c410527345d8041a10fa415d216a2ee Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Tue, 12 Nov 2024 17:08:40 +0800
Subject: [PATCH 06/37] Add reference to authentication-mechanisms.md
---
.../authentication-mechanisms.md | 155 +++++++++---------
1 file changed, 81 insertions(+), 74 deletions(-)
diff --git a/content/en/docs/concepts/security/hardening-guide/authentication-mechanisms.md b/content/en/docs/concepts/security/hardening-guide/authentication-mechanisms.md
index 1ac867fc28..acd4726d1a 100644
--- a/content/en/docs/concepts/security/hardening-guide/authentication-mechanisms.md
+++ b/content/en/docs/concepts/security/hardening-guide/authentication-mechanisms.md
@@ -9,135 +9,142 @@ weight: 90
Selecting the appropriate authentication mechanism(s) is a crucial aspect of securing your cluster.
-Kubernetes provides several built-in mechanisms, each with its own strengths and weaknesses that
+Kubernetes provides several built-in mechanisms, each with its own strengths and weaknesses that
should be carefully considered when choosing the best authentication mechanism for your cluster.
-In general, it is recommended to enable as few authentication mechanisms as possible to simplify
+In general, it is recommended to enable as few authentication mechanisms as possible to simplify
user management and prevent cases where users retain access to a cluster that is no longer required.
-It is important to note that Kubernetes does not have an in-built user database within the cluster.
-Instead, it takes user information from the configured authentication system and uses that to make
-authorization decisions. Therefore, to audit user access, you need to review credentials from every
+It is important to note that Kubernetes does not have an in-built user database within the cluster.
+Instead, it takes user information from the configured authentication system and uses that to make
+authorization decisions. Therefore, to audit user access, you need to review credentials from every
configured authentication source.
-For production clusters with multiple users directly accessing the Kubernetes API, it is
-recommended to use external authentication sources such as OIDC. The internal authentication
-mechanisms, such as client certificates and service account tokens, described below, are not
-suitable for this use-case.
+For production clusters with multiple users directly accessing the Kubernetes API, it is
+recommended to use external authentication sources such as OIDC. The internal authentication
+mechanisms, such as client certificates and service account tokens, described below, are not
+suitable for this use case.
## X.509 client certificate authentication {#x509-client-certificate-authentication}
-Kubernetes leverages [X.509 client certificate](/docs/reference/access-authn-authz/authentication/#x509-client-certificates)
-authentication for system components, such as when the Kubelet authenticates to the API Server.
-While this mechanism can also be used for user authentication, it might not be suitable for
+Kubernetes leverages [X.509 client certificate](/docs/reference/access-authn-authz/authentication/#x509-client-certificates)
+authentication for system components, such as when the kubelet authenticates to the API Server.
+While this mechanism can also be used for user authentication, it might not be suitable for
production use due to several restrictions:
-- Client certificates cannot be individually revoked. Once compromised, a certificate can be used
- by an attacker until it expires. To mitigate this risk, it is recommended to configure short
+- Client certificates cannot be individually revoked. Once compromised, a certificate can be used
+ by an attacker until it expires. To mitigate this risk, it is recommended to configure short
lifetimes for user authentication credentials created using client certificates.
-- If a certificate needs to be invalidated, the certificate authority must be re-keyed, which
-can introduce availability risks to the cluster.
-- There is no permanent record of client certificates created in the cluster. Therefore, all
-issued certificates must be recorded if you need to keep track of them.
-- Private keys used for client certificate authentication cannot be password-protected. Anyone
-who can read the file containing the key will be able to make use of it.
-- Using client certificate authentication requires a direct connection from the client to the
-API server with no intervening TLS termination points, which can complicate network architectures.
-- Group data is embedded in the `O` value of the client certificate, which means the user's group
-memberships cannot be changed for the lifetime of the certificate.
+- If a certificate needs to be invalidated, the certificate authority must be re-keyed, which
+ can introduce availability risks to the cluster.
+- There is no permanent record of client certificates created in the cluster. Therefore, all
+ issued certificates must be recorded if you need to keep track of them.
+- Private keys used for client certificate authentication cannot be password-protected. Anyone
+ who can read the file containing the key will be able to make use of it.
+- Using client certificate authentication requires a direct connection from the client to the
+ API server without any intervening TLS termination points, which can complicate network architectures.
+- Group data is embedded in the `O` value of the client certificate, which means the user's group
+ memberships cannot be changed for the lifetime of the certificate.
## Static token file {#static-token-file}
-Although Kubernetes allows you to load credentials from a
-[static token file](/docs/reference/access-authn-authz/authentication/#static-token-file) located
-on the control plane node disks, this approach is not recommended for production servers due to
+Although Kubernetes allows you to load credentials from a
+[static token file](/docs/reference/access-authn-authz/authentication/#static-token-file) located
+on the control plane node disks, this approach is not recommended for production servers due to
several reasons:
- Credentials are stored in clear text on control plane node disks, which can be a security risk.
-- Changing any credential requires a restart of the API server process to take effect, which can
-impact availability.
-- There is no mechanism available to allow users to rotate their credentials. To rotate a
-credential, a cluster administrator must modify the token on disk and distribute it to the users.
+- Changing any credential requires a restart of the API server process to take effect, which can
+ impact availability.
+- There is no mechanism available to allow users to rotate their credentials. To rotate a
+ credential, a cluster administrator must modify the token on disk and distribute it to the users.
- There is no lockout mechanism available to prevent brute-force attacks.
## Bootstrap tokens {#bootstrap-tokens}
-[Bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) are used for joining
+[Bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) are used for joining
nodes to clusters and are not recommended for user authentication due to several reasons:
-- They have hard-coded group memberships that are not suitable for general use, making them
-unsuitable for authentication purposes.
-- Manually generating bootstrap tokens can lead to weak tokens that can be guessed by an attacker,
-which can be a security risk.
-- There is no lockout mechanism available to prevent brute-force attacks, making it easier for
-attackers to guess or crack the token.
+- They have hard-coded group memberships that are not suitable for general use, making them
+ unsuitable for authentication purposes.
+- Manually generating bootstrap tokens can lead to weak tokens that can be guessed by an attacker,
+ which can be a security risk.
+- There is no lockout mechanism available to prevent brute-force attacks, making it easier for
+ attackers to guess or crack the token.
## ServiceAccount secret tokens {#serviceaccount-secret-tokens}
-[Service account secrets](/docs/reference/access-authn-authz/service-accounts-admin/#manual-secret-management-for-serviceaccounts)
-are available as an option to allow workloads running in the cluster to authenticate to the
-API server. In Kubernetes < 1.23, these were the default option, however, they are being replaced
-with TokenRequest API tokens. While these secrets could be used for user authentication, they are
+[Service account secrets](/docs/reference/access-authn-authz/service-accounts-admin/#manual-secret-management-for-serviceaccounts)
+are available as an option to allow workloads running in the cluster to authenticate to the
+API server. In Kubernetes < 1.23, these were the default option, however, they are being replaced
+with TokenRequest API tokens. While these secrets could be used for user authentication, they are
generally unsuitable for a number of reasons:
- They cannot be set with an expiry and will remain valid until the associated service account is deleted.
-- The authentication tokens are visible to any cluster user who can read secrets in the namespace
-that they are defined in.
+- The authentication tokens are visible to any cluster user who can read secrets in the namespace
+ that they are defined in.
- Service accounts cannot be added to arbitrary groups complicating RBAC management where they are used.
## TokenRequest API tokens {#tokenrequest-api-tokens}
-The TokenRequest API is a useful tool for generating short-lived credentials for service
-authentication to the API server or third-party systems. However, it is not generally recommended
-for user authentication as there is no revocation method available, and distributing credentials
+The TokenRequest API is a useful tool for generating short-lived credentials for service
+authentication to the API server or third-party systems. However, it is not generally recommended
+for user authentication as there is no revocation method available, and distributing credentials
to users in a secure manner can be challenging.
-When using TokenRequest tokens for service authentication, it is recommended to implement a short
+When using TokenRequest tokens for service authentication, it is recommended to implement a short
lifespan to reduce the impact of compromised tokens.
## OpenID Connect token authentication {#openid-connect-token-authentication}
-Kubernetes supports integrating external authentication services with the Kubernetes API using
-[OpenID Connect (OIDC)](/docs/reference/access-authn-authz/authentication/#openid-connect-tokens).
-There is a wide variety of software that can be used to integrate Kubernetes with an identity
-provider. However, when using OIDC authentication for Kubernetes, it is important to consider the
+Kubernetes supports integrating external authentication services with the Kubernetes API using
+[OpenID Connect (OIDC)](/docs/reference/access-authn-authz/authentication/#openid-connect-tokens).
+There is a wide variety of software that can be used to integrate Kubernetes with an identity
+provider. However, when using OIDC authentication in Kubernetes, it is important to consider the
following hardening measures:
-- The software installed in the cluster to support OIDC authentication should be isolated from
-general workloads as it will run with high privileges.
+- The software installed in the cluster to support OIDC authentication should be isolated from
+ general workloads as it will run with high privileges.
- Some Kubernetes managed services are limited in the OIDC providers that can be used.
-- As with TokenRequest tokens, OIDC tokens should have a short lifespan to reduce the impact of
-compromised tokens.
+- As with TokenRequest tokens, OIDC tokens should have a short lifespan to reduce the impact of
+ compromised tokens.
## Webhook token authentication {#webhook-token-authentication}
-[Webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
-is another option for integrating external authentication providers into Kubernetes. This mechanism
-allows for an authentication service, either running inside the cluster or externally, to be
-contacted for an authentication decision over a webhook. It is important to note that the suitability
-of this mechanism will likely depend on the software used for the authentication service, and there
+[Webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
+is another option for integrating external authentication providers into Kubernetes. This mechanism
+allows for an authentication service, either running inside the cluster or externally, to be
+contacted for an authentication decision over a webhook. It is important to note that the suitability
+of this mechanism will likely depend on the software used for the authentication service, and there
are some Kubernetes-specific considerations to take into account.
-To configure Webhook authentication, access to control plane server filesystems is required. This
-means that it will not be possible with Managed Kubernetes unless the provider specifically makes it
-available. Additionally, any software installed in the cluster to support this access should be
+To configure Webhook authentication, access to control plane server filesystems is required. This
+means that it will not be possible with Managed Kubernetes unless the provider specifically makes it
+available. Additionally, any software installed in the cluster to support this access should be
isolated from general workloads, as it will run with high privileges.
## Authenticating proxy {#authenticating-proxy}
-Another option for integrating external authentication systems into Kubernetes is to use an
-[authenticating proxy](/docs/reference/access-authn-authz/authentication/#authenticating-proxy).
-With this mechanism, Kubernetes expects to receive requests from the proxy with specific header
-values set, indicating the username and group memberships to assign for authorization purposes.
-It is important to note that there are specific considerations to take into account when using
+Another option for integrating external authentication systems into Kubernetes is to use an
+[authenticating proxy](/docs/reference/access-authn-authz/authentication/#authenticating-proxy).
+With this mechanism, Kubernetes expects to receive requests from the proxy with specific header
+values set, indicating the username and group memberships to assign for authorization purposes.
+It is important to note that there are specific considerations to take into account when using
this mechanism.
-Firstly, securely configured TLS must be used between the proxy and Kubernetes API server to
-mitigate the risk of traffic interception or sniffing attacks. This ensures that the communication
+Firstly, securely configured TLS must be used between the proxy and Kubernetes API server to
+mitigate the risk of traffic interception or sniffing attacks. This ensures that the communication
between the proxy and Kubernetes API server is secure.
-Secondly, it is important to be aware that an attacker who is able to modify the headers of the
-request may be able to gain unauthorized access to Kubernetes resources. As such, it is important
-to ensure that the headers are properly secured and cannot be tampered with.
\ No newline at end of file
+Secondly, it is important to be aware that an attacker who is able to modify the headers of the
+request may be able to gain unauthorized access to Kubernetes resources. As such, it is important
+to ensure that the headers are properly secured and cannot be tampered with.
+
+## {{% heading "whatsnext" %}}
+
+- [User Authentication](/docs/reference/access-authn-authz/authentication/)
+- [Authenticating with Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/)
+- [kubelet Authentication](/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication)
+- [Authenticating with Service Account Tokens](/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-tokens)
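One way to act on the short-lifetime recommendation for client certificates in the hardening guide above is to request certificates with an explicit expiry. The CertificateSigningRequest sketch below is illustrative: the object name is made up and the `request` value is a placeholder for a base64-encoded PKCS#10 CSR that you would generate yourself:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-user                # illustrative name
spec:
  request: <base64-encoded-CSR>     # placeholder; generate and base64-encode a real CSR
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400          # ask for a certificate that is valid for one day
  usages:
  - client auth
```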
From 9d1d3323cd79989ffc7485370c9edbf728b594e5 Mon Sep 17 00:00:00 2001
From: Patrice Chalin
Date: Sat, 16 Nov 2024 08:11:29 -0500
Subject: [PATCH 07/37] Container build with NPM deps and fixed build cmd
---
.dockerignore | 2 ++
Dockerfile | 5 +++--
Makefile | 10 ++++++++--
3 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/.dockerignore b/.dockerignore
index 1d085cacc9..be2f2c26c4 100644
--- a/.dockerignore
+++ b/.dockerignore
@@ -1 +1,3 @@
**
+!package.json
+!package-lock.json
diff --git a/Dockerfile b/Dockerfile
index 1c95bbed12..14f7d4dad8 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -31,8 +31,7 @@ RUN apk add --no-cache \
git \
openssh-client \
rsync \
- npm && \
- npm install -D autoprefixer postcss-cli
+ npm
RUN mkdir -p /var/hugo && \
addgroup -Sg 1000 hugo && \
@@ -43,6 +42,8 @@ RUN mkdir -p /var/hugo && \
COPY --from=0 /go/bin/hugo /usr/local/bin/hugo
WORKDIR /src
+COPY package.json package-lock.json ./
+RUN npm ci
USER hugo:hugo
diff --git a/Makefile b/Makefile
index cff50172cb..b97e9a45b3 100644
--- a/Makefile
+++ b/Makefile
@@ -97,11 +97,17 @@ docker-push: ## Build a multi-architecture image and push that into the registry
rm Dockerfile.cross
container-build: module-check
- $(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify --environment development"
+ mkdir -p public
+ $(CONTAINER_RUN) --read-only \
+ --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 \
+ --mount type=bind,source=$(CURDIR)/public,target=/src/public $(CONTAINER_IMAGE) \
+ hugo --cleanDestinationDir --buildDrafts --buildFuture --environment preview --noBuildLock
# no build lock to allow for read-only mounts
container-serve: module-check ## Boot the development server using container.
- $(CONTAINER_RUN) --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir --noBuildLock
+ $(CONTAINER_RUN) --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only \
+ --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) \
+ hugo server --buildFuture --environment development --bind 0.0.0.0 --destination /tmp/public --cleanDestinationDir --noBuildLock
test-examples:
scripts/test_examples.sh install
From 1ff8d96245ce1c15eaa0f55e953e1d33953ce901 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Thu, 31 Oct 2024 08:52:00 +0000
Subject: [PATCH 08/37] Bump Docsy to v0.3.0
---
.gitmodules | 2 +-
themes/docsy | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.gitmodules b/.gitmodules
index 771ed3fc6b..7028b79fbe 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,7 +1,7 @@
[submodule "themes/docsy"]
path = themes/docsy
url = https://github.com/google/docsy.git
- branch = v0.2.0
+ branch = v0.3.0
[submodule "api-ref-generator"]
path = api-ref-generator
url = https://github.com/kubernetes-sigs/reference-docs
diff --git a/themes/docsy b/themes/docsy
index 1c77bb2448..9f55cf3480 160000
--- a/themes/docsy
+++ b/themes/docsy
@@ -1 +1 @@
-Subproject commit 1c77bb24483946f11c13f882f836a940b55ad019
+Subproject commit 9f55cf34808d720bcfff9398c9f9bb7fd8fce4ec
From 4d016545dbd73c42470261b2144c88ae5db0cac9 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Sat, 2 Nov 2024 17:58:12 +0000
Subject: [PATCH 09/37] Adapt sidebar navigation for newer Docsy
---
hugo.toml | 2 +
layouts/partials/sidebar-tree.html | 76 ++++++++++++------------------
2 files changed, 31 insertions(+), 47 deletions(-)
diff --git a/hugo.toml b/hugo.toml
index 13ba477cc9..b0fde0b773 100644
--- a/hugo.toml
+++ b/hugo.toml
@@ -218,6 +218,8 @@ url = "https://v1-27.docs.kubernetes.io"
[params.ui]
# Enable to show the side bar menu in its compact state.
sidebar_menu_compact = false
+# Show this many levels in compact mode
+ul_show = 3
# Show expand/collapse icon for sidebar sections.
sidebar_menu_foldable = true
# https://github.com/gohugoio/hugo/issues/8918#issuecomment-903314696
diff --git a/layouts/partials/sidebar-tree.html b/layouts/partials/sidebar-tree.html
index f8ce9b468e..ae6f622df6 100644
--- a/layouts/partials/sidebar-tree.html
+++ b/layouts/partials/sidebar-tree.html
@@ -1,36 +1,35 @@
-{{/* We cache this partial for bigger sites and set the active class client side. */}}
-{{ $sidebarCacheLimit := cond (isset .Site.Params.ui "sidebar_cache_limit") .Site.Params.ui.sidebar_cache_limit 2000 -}}
-{{ $shouldDelayActive := ge (len .Site.Pages) $sidebarCacheLimit -}}
+{{/* Always cache this partial; set the active class client side. */}}
+{{ $shouldDelayActive := true }}
{{ if not .Site.Params.ui.sidebar_search_disable -}}
{{ else -}}
-{{- end }}
+{{- end }}
\ No newline at end of file
From e7ac3287d1261c541eb1dc6a4ed5ffcb54a5f501 Mon Sep 17 00:00:00 2001
From: Mauren Berti
Date: Mon, 18 Nov 2024 18:20:31 -0500
Subject: [PATCH 10/37] [pt-br] Update docs/tasks/tools/install-kubectl-macos
Small adjustments for consistency and anchor fixes.
---
.../optional-kubectl-configs-bash-mac.md | 37 ++++++++++---------
.../docs/tasks/tools/install-kubectl-macos.md | 12 +++---
2 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/content/pt-br/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md b/content/pt-br/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
index 33497ebdd1..77a53ab611 100644
--- a/content/pt-br/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
+++ b/content/pt-br/docs/tasks/tools/included/optional-kubectl-configs-bash-mac.md
@@ -1,6 +1,6 @@
---
-title: "Autocompletar no bash macOS"
-description: "Configurações opcionais do auto-completar do bash no macOS."
+title: "Autocompletar do bash no macOS"
+description: "Configurações opcionais para habilitar o autocompletar do bash no macOS."
headless: true
_build:
list: never
@@ -14,17 +14,18 @@ O script de autocompletar do kubectl para Bash pode ser gerado com o comando `ku
O script permite habilitar o autocompletar do kubectl no seu shell.
No entanto, o script autocompletar depende do
-[**bash-completar**](https://github.com/scop/bash-completion), o que significa que você precisa instalar este software primeiro (executando `type _init_completion` você pode testar se tem o bash-completion instalado).
+[**bash-completion**](https://github.com/scop/bash-completion), o que significa
+que você precisa instalar este software primeiro.
{{< warning>}}
-Existem duas versões de autocompletar do Bash, v1 e v2. V1 é para Bash 3.2
-(que é padrão no macOS), e v2 que é para Bash 4.1+. O script de autocompletar
-do kubectl **não funciona** corretamente com o autocompletar do bash v1 e o
+Existem duas versões do bash-completion, v1 e v2. V1 é para Bash 3.2
+(que é padrão no macOS), e v2 é para Bash 4.1+. O script de autocompletar
+do kubectl **não funciona** corretamente com o bash-completion v1 e o
Bash 3.2. Ele requer **bash-completion v2** e **Bash 4.1+**. Por isso, para
executarmos o autocompletar do kubectl no macOS de forma correta, você precisa
-instalar e usar o Bash 4.1+([*guia*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)).
+instalar e usar o Bash 4.1+ ([*guia*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)).
As instruções a seguir, levam em conta que você utilize o Bash 4.1+.
-(Isso quer dizer, nenhuma versão do Bash 4.1 ou mais recente).
+(ou seja, a versão 4.1 do Bash ou qualquer outra mais recente).
{{< /warning >}}
### Atualizando Bash
@@ -35,13 +36,13 @@ As instruções abaixo sugerem que você esteja utilizando o Bash 4.1+. Você po
echo $BASH_VERSION
```
-Se a versão do Bash for antiga, você pode instalar ou atualizar utilizando o Homebrew:
+Se a versão do Bash for muito antiga, você pode instalar ou atualizar utilizando o Homebrew:
```bash
brew install bash
```
-Recarregue seu shell e verifique se a versão desejada foi instalada ou se está em uso:
+Recarregue seu shell e verifique se a versão desejada foi instalada e está em uso:
```bash
echo $BASH_VERSION $SHELL
@@ -52,12 +53,12 @@ O Homebrew normalmente instala os pacotes em `/usr/local/bin/bash`.
### Instalar bash-completar
{{< note >}}
-Como mencionado anteriormente, essas instruções levam em consideração que você esteja utilizando o Bash 4.1+, dessa forma você
-vai instalar o bash-completion v2 (diferentemente do Bash 3.2 e do bash-completion v1,
-nesses casos, o completar do kubectl não irá funcionar).
+Como mencionado anteriormente, essas instruções assumem que você esteja utilizando
+o Bash 4.1+. Por isso, você irá instalar o bash-completion v2 (em contraste ao
+Bash 3.2 e bash-completion v1, caso em que o autocompletar do kubectl não irá funcionar).
{{< /note >}}
-Você pode testar se você tiver o bash-completion v2 instalado, utilizando `type _init_completion`.
+Você pode testar se o bash-completion v2 está instalado, utilizando `type _init_completion`.
Se não, você pode instalar utilizando o Homebrew:
```bash
@@ -70,7 +71,7 @@ Como indicado na saída deste comando, adicione a seguinte linha em seu arquivo
brew_etc="$(brew --prefix)/etc" && [[ -r "${brew_etc}/profile.d/bash_completion.sh" ]] && . "${brew_etc}/profile.d/bash_completion.sh"
```
-Recarregue seu shell e verifique que o bash-completion v2 está instalado corretamente, utilizando `type _init_completion`.
+Recarregue seu shell e verifique que o bash-completion v2 está instalado corretamente utilizando `type _init_completion`.
### Habilitar autocompletar do kubectl
@@ -97,13 +98,13 @@ as suas sessões de shell. Existem várias maneiras de fazer isso:
```
- Se você tiver instalado o kubectl com o Homebrew (conforme explicado
- [aqui](/docs/tasks/tools/install-kubectl-macos/#install-with-homebrew-on-macos)),
+ [aqui](/docs/tasks/tools/install-kubectl-macos/#instalando-o-kubectl-no-macos)),
então o script de autocompletar do kubectl deverá estar pronto em `/usr/local/etc/bash_completion.d/kubectl`.
Neste caso, você não precisa fazer mais nada.
{{< note >}}
A instalação do bash-completion v2 via Homebrew carrega todos os arquivos no diretório
- `BASH_COMPLETION_COMPAT_DIR`, é por isso que os dois últimos métodos funcionam..
+ `BASH_COMPLETION_COMPAT_DIR`, é por isso que os dois últimos métodos funcionam.
{{< /note >}}
-De qualquer forma, após recarregar seu shell, o auto-completar do kubectl deve estar funcionando.
+Em todos os casos, após recarregar seu shell, o autocompletar do kubectl deve estar funcionando.
diff --git a/content/pt-br/docs/tasks/tools/install-kubectl-macos.md b/content/pt-br/docs/tasks/tools/install-kubectl-macos.md
index 38325753b0..8fb5909af2 100644
--- a/content/pt-br/docs/tasks/tools/install-kubectl-macos.md
+++ b/content/pt-br/docs/tasks/tools/install-kubectl-macos.md
@@ -7,21 +7,21 @@ weight: 10
## {{% heading "prerequisites" %}}
Você deve usar uma versão do kubectl que esteja próxima da versão do seu cluster.
-Por exemplo, um cliente v{{< skew currentVersion >}} pode se comunicar
-com control planes nas versões v{{< skew currentVersionAddMinor -1 >}}, v{{< skew currentVersionAddMinor 0 >}},
-e v{{< skew currentVersionAddMinor 1 >}}.
-Usar a versão compatível e mais recente do kubectl pode evitar imprevistos ou problemas.
+Por exemplo, um cliente v{{< skew currentVersion >}} pode se comunicar com as
+versões v{{< skew currentVersionAddMinor -1 >}}, v{{< skew currentVersionAddMinor 0 >}}
+e v{{< skew currentVersionAddMinor 1 >}} da camada de gerenciamento. Usar a
+versão compatível mais recente do kubectl ajuda a evitar problemas inesperados.
## Instalando o kubectl no macOS
Existem os seguintes métodos para instalar o kubectl no macOS:
-- [Instalar kubectl no macOS](#instalar-kubectl-no-macos)
+- [Instalando o kubectl no macOS](#instalando-o-kubectl-no-macos)
- [Instalar o kubectl com curl no macOS](#instalar-o-kubectl-com-o-curl-no-macos)
- [Instalar com Homebrew no macOS](#instalar-com-homebrew-no-macos)
- [Instalar com Macports no macOS](#instalar-com-macports-no-macos)
- [Verificar a configuração do kubectl](#verificar-a-configuração-do-kubectl)
-- [Plugins e ajustes opcionais do kubectl](#plugins-e-ajustes-opcionais-do-kubectl)
+- [Configurações e plugins opcionais do kubectl](#configurações-e-plugins-opcionais-do-kubectl)
- [Habilitar o autocompletar no shell](#ative-o-autocompletar-no-shell)
- [Instalar o plugin `kubectl convert`](#instalar-kubectl-convert-plugin)
From 6a73d0e087b4979b87e4604a185b624933e218d1 Mon Sep 17 00:00:00 2001
From: Adam Gerard
Date: Mon, 18 Nov 2024 18:41:23 -0600
Subject: [PATCH 11/37] docs: factor out typos into new pr
---
.../services/connect-applications-service.md | 2 +-
.../services/pods-and-endpoint-termination-flow.md | 12 ++++++------
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md
index d41275f736..64215919ce 100644
--- a/content/en/docs/tutorials/services/connect-applications-service.md
+++ b/content/en/docs/tutorials/services/connect-applications-service.md
@@ -90,7 +90,7 @@ kubectl expose deployment/my-nginx
service/my-nginx exposed
```
-This is equivalent to `kubectl apply -f` the following yaml:
+This is equivalent to `kubectl apply -f` with the following YAML:
{{% code_sample file="service/networking/nginx-svc.yaml" %}}
diff --git a/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md b/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md
index 9f89c606e3..e75bb38de1 100644
--- a/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md
+++ b/content/en/docs/tutorials/services/pods-and-endpoint-termination-flow.md
@@ -17,7 +17,7 @@ graceful connection draining.
## Termination process for Pods and their endpoints
-There are often cases when you need to terminate a Pod - be it for upgrade or scale down.
+There are often cases when you need to terminate a Pod - be it to upgrade or scale down.
In order to improve application availability, it may be important to implement
proper draining of active connections.
@@ -29,12 +29,12 @@ a simple nginx web server to demonstrate the concept.
## Example flow with endpoint termination
-The following is the example of the flow described in the
+The following is the example flow described in the
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
document.
-Let's say you have a Deployment containing of a single `nginx` replica
-(just for demonstration purposes) and a Service:
+Let's say you have a Deployment containing a single `nginx` replica
+(just for demonstration purposes) and a Service:
{{% code_sample file="service/pod-with-graceful-termination.yaml" %}}
@@ -158,10 +158,10 @@ The output is similar to this:
```
This allows applications to communicate their state during termination
-and clients (such as load balancers) to implement a connections draining functionality.
+and clients (such as load balancers) to implement connection draining functionality.
These clients may detect terminating endpoints and implement a special logic for them.
-In Kubernetes, endpoints that are terminating always have their `ready` status set as as `false`.
+In Kubernetes, endpoints that are terminating always have their `ready` status set as `false`.
This needs to happen for backward
compatibility, so existing load balancers will not use it for regular traffic.
If traffic draining on terminating pod is needed, the actual readiness can be
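The connection draining discussed above is typically paired with a generous termination grace period and a `preStop` hook on the serving container. The fragment below is a sketch with illustrative values and is not the contents of `service/pod-with-graceful-termination.yaml`:

```yaml
spec:
  terminationGracePeriodSeconds: 120   # give clients time to drain their connections
  containers:
  - name: nginx
    image: nginx:1.25
    lifecycle:
      preStop:
        exec:
          # keep serving briefly while the endpoint is marked as terminating
          command: ["/bin/sh", "-c", "sleep 15"]
```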
From d34ee98252817ba2e8bd604ac57fa11770366901 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Mon, 18 Nov 2024 17:22:12 +0000
Subject: [PATCH 12/37] Tweak node capacity overprovisioning task
---
.../node-overprovisioning.md | 47 ++++++++++++++-----
.../deployment-with-capacity-reservation.yaml | 1 +
.../priorityclass/low-priority-class.yaml | 2 +-
3 files changed, 36 insertions(+), 14 deletions(-)
diff --git a/content/en/docs/tasks/administer-cluster/node-overprovisioning.md b/content/en/docs/tasks/administer-cluster/node-overprovisioning.md
index 3685c1342a..bbfc431846 100644
--- a/content/en/docs/tasks/administer-cluster/node-overprovisioning.md
+++ b/content/en/docs/tasks/administer-cluster/node-overprovisioning.md
@@ -1,5 +1,5 @@
---
-title: Overprovision Node Capacity For A Cluster
+title: Overprovision Node Capacity For A Cluster
content_type: task
weight: 10
---
@@ -7,24 +7,29 @@ weight: 10
-This page guides you through configuring {{< glossary_tooltip text="Node" term_id="node" >}} overprovisioning in your Kubernetes cluster. Node overprovisioning is a strategy that proactively reserves a portion of your cluster's compute resources. This reservation helps reduce the time required to schedule new pods during scaling events, enhancing your cluster's responsiveness to sudden spikes in traffic or workload demands.
+This page guides you through configuring {{< glossary_tooltip text="Node" term_id="node" >}}
+overprovisioning in your Kubernetes cluster. Node overprovisioning is a strategy that proactively
+reserves a portion of your cluster's compute resources. This reservation helps reduce the time
+required to schedule new pods during scaling events, enhancing your cluster's responsiveness
+to sudden spikes in traffic or workload demands.
-By maintaining some unused capacity, you ensure that resources are immediately available when new pods are created, preventing them from entering a pending state while the cluster scales up.
+By maintaining some unused capacity, you ensure that resources are immediately available when
+new pods are created, preventing them from entering a pending state while the cluster scales up.
## {{% heading "prerequisites" %}}
-- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with
+- You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with
your cluster.
- You should already have a basic understanding of
[Deployments](/docs/concepts/workloads/controllers/deployment/),
- Pod {{}},
- and [PriorityClasses](/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).
+ Pod {{< glossary_tooltip text="priority" term_id="pod-priority" >}},
+ and {{< glossary_tooltip text="PriorityClasses" term_id="priority-class" >}}.
- Your cluster must be set up with an [autoscaler](/docs/concepts/cluster-administration/cluster-autoscaling/)
that manages nodes based on demand.
-## Create a placeholder Deployment
+## Create a PriorityClass
Begin by defining a PriorityClass for the placeholder Pods. First, create a PriorityClass with a
negative priority value, that you will shortly assign to the placeholder pods.
@@ -43,14 +48,24 @@ When you add this to your cluster, Kubernetes runs those placeholder pods to res
is a capacity shortage, the control plane will pick one of these placeholder pods as the first candidate to
{{< glossary_tooltip text="preempt" term_id="preemption" >}}.
+## Run Pods that request node capacity
+
Review the sample manifest:
{{% code_sample language="yaml" file="deployments/deployment-with-capacity-reservation.yaml" %}}
+### Pick a namespace for the placeholder pods
+
+You should select, or create, a {{< glossary_tooltip term_id="namespace" text="namespace">}}
+that the placeholder Pods will go into.
+
+### Create the placeholder deployment
+
Create a Deployment based on that manifest:
```shell
-kubectl apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
+# Change the namespace name "example" to the one you selected
+kubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
```
## Adjust placeholder resource requests
@@ -61,7 +76,13 @@ To edit the Deployment, modify the `resources` section in the Deployment manifes
to set appropriate requests and limits. You can download that file locally and then edit it
with whichever text editor you prefer.
-For example, to reserve 500m CPU and 1Gi memory across 5 placeholder pods,
+You can also edit the Deployment using kubectl:
+
+```shell
+kubectl edit deployment capacity-reservation
+```
+
+For example, to reserve a total of 0.5 CPU and 1GiB of memory across 5 placeholder pods,
define the resource requests and limits for a single placeholder pod as follows:
```yaml
@@ -77,10 +98,10 @@ define the resource requests and limits for a single placeholder pod as follows:
### Calculate the total reserved resources
-For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:
-
-Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity `500m`)
-Total Memory reserved: 5 × 200MiB = 1GiB (in the Pod specification, you'll write `1 Gi`)
+
+For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:
+Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity `500m`)
+Total memory reserved: 5 × 200MiB = 1GiB (in the Pod specification, you'll write `1Gi`)
To scale the Deployment, adjust the number of replicas based on your cluster's size and expected workload:
diff --git a/content/en/examples/deployments/deployment-with-capacity-reservation.yaml b/content/en/examples/deployments/deployment-with-capacity-reservation.yaml
index a34a4f8d91..f70bed0111 100644
--- a/content/en/examples/deployments/deployment-with-capacity-reservation.yaml
+++ b/content/en/examples/deployments/deployment-with-capacity-reservation.yaml
@@ -2,6 +2,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: capacity-reservation
+ # You should decide what namespace to deploy this into
spec:
replicas: 1
selector:
diff --git a/content/en/examples/priorityclass/low-priority-class.yaml b/content/en/examples/priorityclass/low-priority-class.yaml
index a8e4f3834d..a24d77a3ef 100644
--- a/content/en/examples/priorityclass/low-priority-class.yaml
+++ b/content/en/examples/priorityclass/low-priority-class.yaml
@@ -1,7 +1,7 @@
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
- name: placeholder
+ name: placeholder # these Pods represent placeholder capacity
value: -1000
globalDefault: false
description: "Negative priority for placeholder pods to enable overprovisioning."
\ No newline at end of file
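To tie the arithmetic in the task above to a manifest, the `spec.template.spec.containers` portion of the placeholder Deployment could look like the sketch below, where each of the 5 replicas reserves 100m CPU and 200Mi of memory (roughly 500m CPU and 1Gi of memory in total); the values are illustrative:

```yaml
containers:
- name: pause
  image: registry.k8s.io/pause:3.6
  resources:
    requests:
      cpu: "100m"      # 5 replicas × 100m ≈ 500m CPU reserved in total
      memory: "200Mi"  # 5 replicas × 200Mi ≈ 1Gi of memory reserved in total
    limits:
      cpu: "100m"
      memory: "200Mi"
```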
From de0985ee7c4d0fa94ca91ed4d106d9dc567cdaed Mon Sep 17 00:00:00 2001
From: Josh Soref <2119212+jsoref@users.noreply.github.com>
Date: Tue, 19 Nov 2024 09:46:24 -0500
Subject: [PATCH 13/37] Fix type parameter
---
.../blog/_posts/2023-12-15-volume-attributes-class/index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/en/blog/_posts/2023-12-15-volume-attributes-class/index.md b/content/en/blog/_posts/2023-12-15-volume-attributes-class/index.md
index 08085d1d26..7d0b1306da 100644
--- a/content/en/blog/_posts/2023-12-15-volume-attributes-class/index.md
+++ b/content/en/blog/_posts/2023-12-15-volume-attributes-class/index.md
@@ -68,7 +68,7 @@ If you would like to see the feature in action and verify it works fine in your
name: csi-sc-example
provisioner: pd.csi.storage.gke.io
parameters:
- disk-type: "hyperdisk-balanced"
+ type: "hyperdisk-balanced"
volumeBindingMode: WaitForFirstConsumer
```
@@ -174,4 +174,4 @@ Special thanks to all the contributors that provided great reviews, shared valua
* Jordan Liggitt (liggitt)
* Matthew Cary (mattcary)
* Michelle Au (msau42)
-* Xing Yang (xing-yang)
\ No newline at end of file
+* Xing Yang (xing-yang)
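Since the blog post is about tuning the attributes of an existing volume, the corrected StorageClass above is normally used alongside a VolumeAttributesClass. The sketch below is illustrative only: the API was alpha (`storage.k8s.io/v1alpha1`) when the post was written, and the parameter names are entirely driver-specific placeholders:

```yaml
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: silver                      # illustrative name
driverName: pd.csi.storage.gke.io
parameters:
  iops: "3000"                      # driver-specific; treat these keys and values as placeholders
  throughput: "50"
```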
From fa3e39c451eed064df7c17ddeaa9fb0cdc09591e Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Tue, 19 Nov 2024 09:33:21 +0800
Subject: [PATCH 14/37] [zh] Add node-overprovisioning.md
---
.../node-overprovisioning.md | 205 ++++++++++++++++++
.../deployment-with-capacity-reservation.yaml | 33 +++
.../priorityclass/low-priority-class.yaml | 7 +
3 files changed, 245 insertions(+)
create mode 100644 content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md
create mode 100644 content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml
create mode 100644 content/zh-cn/examples/priorityclass/low-priority-class.yaml
diff --git a/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md b/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md
new file mode 100644
index 0000000000..e91ae3ba21
--- /dev/null
+++ b/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md
@@ -0,0 +1,205 @@
+---
+title: 为集群超配节点容量
+content_type: task
+weight: 10
+---
+
+
+
+
+
+本页指导你在 Kubernetes 集群中配置{{< glossary_tooltip text="节点" term_id="node" >}}超配。
+节点超配是一种主动预留部分集群计算资源的策略。这种预留有助于减少在扩缩容事件期间调度新 Pod 所需的时间,
+从而增强集群对突发流量或突发工作负载需求的响应能力。
+
+通过保持一些未使用的容量,你确保在新 Pod 被创建时资源可以立即可用,防止 Pod 在集群扩缩容时进入 Pending 状态。
+
+## {{% heading "prerequisites" %}}
+
+
+- 你需要有一个 Kubernetes 集群,并且 kubectl 命令行工具必须被配置为与你的集群通信。
+- 你应该已经基本了解了 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)、Pod
+  {{< glossary_tooltip text="优先级" term_id="pod-priority" >}}和
+ [PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)。
+- 你的集群必须设置一个基于需求管理节点的
+ [Autoscaler](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)。
+
+
+
+
+## 创建占位 Deployment {#create-a-placeholder-deployment}
+
+首先为占位 Pod 定义一个 PriorityClass。
+先创建一个优先级值为负数的 PriorityClass,稍后将其分配给占位 Pod。
+接下来,你将部署使用此 PriorityClass 的 Deployment。
+
+{{% code_sample language="yaml" file="priorityclass/low-priority-class.yaml" %}}
+
+
+然后创建 PriorityClass:
+
+```shell
+kubectl apply -f https://k8s.io/examples/priorityclass/low-priority-class.yaml
+```
+
+
+接下来,你将定义一个 Deployment,使用优先级值为负数的 PriorityClass 并运行最小容器。
+当你将此 Deployment 添加到集群中时,Kubernetes 会运行这些占位 Pod 以预留容量。
+每当出现容量短缺时,控制面将选择这些占位 Pod
+中的一个作为第一个候选者进行{{< glossary_tooltip text="抢占" term_id="preemption" >}}。
+
+查看样例清单:
+
+{{% code_sample language="yaml" file="deployments/deployment-with-capacity-reservation.yaml" %}}
+
+
+基于该清单创建 Deployment:
+
+```shell
+kubectl apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
+```
+
+
+## 调整占位资源请求 {#adjust-placeholder-resource-requests}
+
+为占位 Pod 配置资源请求和限制,以定义你希望保持的超配资源量。
+这种预留确保为新 Pod 保留可以使用的、特定量的 CPU 和内存。
+
+
+要编辑 Deployment,可以修改 Deployment 清单文件中的 `resources` 一节,
+设置合适的 `requests` 和 `limits`。
+你可以将该文件下载到本地,然后用自己喜欢的文本编辑器进行编辑。
+
+例如,要为 5 个占位 Pod 预留 500m CPU 和 1Gi 内存,请为单个占位 Pod 定义以下资源请求和限制:
+
+```yaml
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "200Mi"
+ limits:
+ cpu: "100m"
+```
+
+
+## 设置所需的副本数量 {#set-the-desired-replica-count}
+
+### 计算总预留资源 {#calculate-the-total-reserved-resources}
+
+例如,有 5 个副本,每个预留 0.1 CPU 和 200MiB 内存:
+
+CPU 预留总量:5 × 0.1 = 0.5(在 Pod 规约中,你将写入数量 `500m`)
+
+内存预留总量:5 × 200MiB = 1GiB(在 Pod 规约中,你将写入 `1 Gi`)
+
+要扩缩容 Deployment,请基于集群的大小和预期的工作负载调整副本数:
+
+```shell
+kubectl scale deployment capacity-reservation --replicas=5
+```
+
+
+验证扩缩容效果:
+
+```shell
+kubectl get deployment capacity-reservation
+```
+
+
+输出应反映出更新后的副本数:
+
+```none
+NAME READY UP-TO-DATE AVAILABLE AGE
+capacity-reservation 5/5 5 5 2m
+```
+
+{{< note >}}
+
+一些自动扩缩组件,特别是
+[Karpenter](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/#autoscaler-karpenter),
+在考虑节点扩缩容时将偏好的亲和性规则视为硬性规则。如果你使用 Karpenter
+或其他使用同样启发式的节点扩缩容组件,你在此处设置的副本数也就是你的集群的最少节点数。
+{{< /note >}}
+
+## {{% heading "whatsnext" %}}
+
+
+- 进一步了解 [PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)
+ 及其如何影响 Pod 调度。
+- 探索[节点自动扩缩容](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/),
+ 以基于工作负载需求动态调整集群的大小。
+- 了解 [Pod 抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/),
+ 这是 Kubernetes 处理资源竞争的关键机制。这篇文档还涵盖了**驱逐**,
+ 虽然与占位 Pod 方法相关性较小,但也是 Kubernetes 在资源竞争时做出反应的一种机制。
diff --git a/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml b/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml
new file mode 100644
index 0000000000..556c0b0f18
--- /dev/null
+++ b/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml
@@ -0,0 +1,33 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: capacity-reservation
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: capacity-placeholder
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: capacity-placeholder
+ annotations:
+ kubernetes.io/description: "Capacity reservation"
+ spec:
+ priorityClassName: placeholder
+ affinity: # 有可能的话,将这些 Pod 开销放到不同的节点
+ podAntiAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app: placeholder
+ topologyKey: "kubernetes.io/hostname"
+ containers:
+ - name: pause
+ image: registry.k8s.io/pause:3.6
+ resources:
+ requests:
+ cpu: "50m"
+ memory: "512Mi"
+ limits:
+ memory: "512Mi"
diff --git a/content/zh-cn/examples/priorityclass/low-priority-class.yaml b/content/zh-cn/examples/priorityclass/low-priority-class.yaml
new file mode 100644
index 0000000000..a8e4f3834d
--- /dev/null
+++ b/content/zh-cn/examples/priorityclass/low-priority-class.yaml
@@ -0,0 +1,7 @@
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: placeholder
+value: -1000
+globalDefault: false
+description: "Negative priority for placeholder pods to enable overprovisioning."
\ No newline at end of file
From 7e45783196b7ebc0061b12dca9a5b05150c21d20 Mon Sep 17 00:00:00 2001
From: Dmitry Shurupov
Date: Wed, 20 Nov 2024 16:15:45 +0700
Subject: [PATCH 15/37] Fix control plane translation in Russian
Signed-off-by: Dmitry Shurupov
---
.../ru/docs/concepts/architecture/garbage-collection.md | 8 ++++----
.../docs/reference/glossary/cloud-controller-manager.md | 2 +-
content/ru/docs/reference/glossary/controller.md | 6 +++---
content/ru/docs/reference/glossary/kube-scheduler.md | 4 ++--
4 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/content/ru/docs/concepts/architecture/garbage-collection.md b/content/ru/docs/concepts/architecture/garbage-collection.md
index b5152734fc..8a214c99f8 100644
--- a/content/ru/docs/concepts/architecture/garbage-collection.md
+++ b/content/ru/docs/concepts/architecture/garbage-collection.md
@@ -21,14 +21,14 @@ weight: 50
## Владельцы и зависимости {#owners-dependents}
Многие объекты в Kubernetes ссылаются друг на друга через [*ссылки владельцев*](/docs/concepts/overview/working-with-objects/owners-dependents/).
-Ссылки владельцев сообщают плоскости управления какие объекты зависят от других.
-Kubernetes использует ссылки владельцев, чтобы предоставить плоскости управления и другим API
-клиентам, возможность очистить связанные ресурсы перед удалением объекта. В большинстве случаев, Kubernetes автоматический управляет ссылками владельцев.
+Ссылки владельцев сообщают управляющему слою, какие объекты зависят от других.
+Kubernetes использует ссылки владельцев, чтобы предоставить управляющему слою и другим API
+клиентам, возможность очистить связанные ресурсы перед удалением объекта. В большинстве случаев, Kubernetes автоматически управляет ссылками владельцев.
Владелец отличается от [меток и селекторов](/docs/concepts/overview/working-with-objects/labels/)
которые также используют некоторые ресурсы. Например, рассмотрим
{{}} которая создает объект
-`EndpointSlice`. Служба использует *метки* чтобы позволить плоскости управления определить какие `EndpointSlice` объекты используются для этой службы. В дополнение
+`EndpointSlice`. Служба использует *метки* чтобы позволить управляющему слою определить, какие `EndpointSlice` объекты используются для этой службы. В дополнение
к меткам, каждый `EndpointSlice`, управляемый от имени службы, имеет
ссылку владельца. Ссылки владельцев помогают различным частям Kubernetes избегать
вмешательства в объекты, которые они не контролируют.
diff --git a/content/ru/docs/reference/glossary/cloud-controller-manager.md b/content/ru/docs/reference/glossary/cloud-controller-manager.md
index c12c8db992..6910f5551a 100644
--- a/content/ru/docs/reference/glossary/cloud-controller-manager.md
+++ b/content/ru/docs/reference/glossary/cloud-controller-manager.md
@@ -4,7 +4,7 @@ id: cloud-controller-manager
date: 2018-04-12
full_link: /docs/concepts/architecture/cloud-controller/
short_description: >
- Компонент плоскости управления, который интегрирует Kubernetes со сторонними облачными провайдерами.
+ Компонент управляющего слоя, который интегрирует Kubernetes со сторонними облачными провайдерами.
aka:
tags:
- core-object
diff --git a/content/ru/docs/reference/glossary/controller.md b/content/ru/docs/reference/glossary/controller.md
index c1efc41d1b..9d5f71cc0e 100644
--- a/content/ru/docs/reference/glossary/controller.md
+++ b/content/ru/docs/reference/glossary/controller.md
@@ -20,10 +20,10 @@ tags:
Контроллеры отслеживают общее состояние вашего кластера через
{{< glossary_tooltip text="API-сервер" term_id="kube-apiserver" >}} (часть
-{{< glossary_tooltip text="плоскости управления" term_id="control-plane" >}}).
+{{< glossary_tooltip text="управляющего слоя" term_id="control-plane" >}}).
-Некоторые контроллеры также работают внутри плоскости управления, обеспечивая
-управляющие циклы, которые являются ядром для операций Kubernetes. Например:
+Некоторые контроллеры также работают внутри управляющего слоя (control plane),
+обеспечивая управляющие циклы, которые являются ядром для операций Kubernetes. Например:
контроллер развертывания (deployment controller), контроллер daemonset (daemonset controller),
контроллер пространства имен (namespace controller) и контроллер постоянных томов (persistent volume
controller) (и другие) работают с {{< glossary_tooltip term_id="kube-controller-manager" >}}.
diff --git a/content/ru/docs/reference/glossary/kube-scheduler.md b/content/ru/docs/reference/glossary/kube-scheduler.md
index 4cf084ea0f..0f9c7c42e8 100644
--- a/content/ru/docs/reference/glossary/kube-scheduler.md
+++ b/content/ru/docs/reference/glossary/kube-scheduler.md
@@ -4,13 +4,13 @@ id: kube-scheduler
date: 2018-04-12
full_link: /docs/reference/generated/kube-scheduler/
short_description: >
- Компонент плоскости управления, который отслеживает созданные поды без привязанного узла и выбирает узел, на котором они должны работать.
+ Компонент управляющего слоя, который отслеживает созданные поды без привязанного узла и выбирает узел, на котором они должны работать.
aka:
tags:
- architecture
---
- Компонент плоскости управления, который отслеживает созданные поды без привязанного узла и выбирает узел, на котором они должны работать.
+ Компонент управляющего слоя (control plane), который отслеживает созданные поды без привязанного узла и выбирает узел, на котором они должны работать.
From ae0791b959e9b3d039e13adc037db24dc16ce4f0 Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Wed, 20 Nov 2024 21:59:46 +0800
Subject: [PATCH 16/37] [zh-cn]sync configure-java-microservice images
Signed-off-by: xin.li
---
.../zh-cn/docs/concepts/containers/images.md | 2 +-
.../configure-java-microservice/_index.md | 10 --
...nfigure-java-microservice-interactive.html | 37 ------
.../configure-java-microservice.md | 121 ------------------
4 files changed, 1 insertion(+), 169 deletions(-)
delete mode 100644 content/zh-cn/docs/tutorials/configuration/configure-java-microservice/_index.md
delete mode 100644 content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
delete mode 100644 content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md
diff --git a/content/zh-cn/docs/concepts/containers/images.md b/content/zh-cn/docs/concepts/containers/images.md
index bee6ad221b..ffc1ab2d0a 100644
--- a/content/zh-cn/docs/concepts/containers/images.md
+++ b/content/zh-cn/docs/concepts/containers/images.md
@@ -512,7 +512,7 @@ Credentials can be provided in several ways:
- all pods can use any images cached on a node
- requires root access to all nodes to set up
- Specifying ImagePullSecrets on a Pod
- - only pods which provide own keys can access the private registry
+ - only pods which provide their own keys can access the private registry
- Vendor-specific or local extensions
- if you're using a custom node configuration, you (or your cloud
provider) can implement your mechanism for authenticating the node
diff --git a/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/_index.md b/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/_index.md
deleted file mode 100644
index b94fbbb97f..0000000000
--- a/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "示例:配置 java 微服务"
-weight: 10
----
-
\ No newline at end of file
diff --git a/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html b/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
deleted file mode 100644
index 5a119fcd37..0000000000
--- a/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive.html
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "互动教程 - 配置 java 微服务"
-weight: 20
----
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- 如需要与终端交互,请使用台式机/平板电脑版
-
-
-
-
-
-
-
-
-
diff --git a/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md b/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md
deleted file mode 100644
index 22a7089d1e..0000000000
--- a/content/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: "使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置"
-content_type: tutorial
-weight: 10
----
-
-
-
-
-
-在本教程中,你会学到如何以及为什么要实现外部化微服务应用配置。
-具体来说,你将学习如何使用 Kubernetes ConfigMaps 和 Secrets 设置环境变量,
-然后在 MicroProfile config 中使用它们。
-
-## {{% heading "prerequisites" %}}
-
-
-### 创建 Kubernetes ConfigMaps 和 Secrets {#creating-kubernetes-configmaps-secrets}
-在 Kubernetes 中,为 docker 容器设置环境变量有几种不同的方式,比如:
-Dockerfile、kubernetes.yml、Kubernetes ConfigMaps、和 Kubernetes Secrets。
-在本教程中,你将学到怎么用后两个方式去设置你的环境变量,而环境变量的值将注入到你的微服务里。
-使用 ConfigMaps 和 Secrets 的一个好处是他们能在多个容器间复用,
-比如赋值给不同的容器中的不同环境变量。
-
-
-ConfigMaps 是存储非机密键值对的 API 对象。
-在互动教程中,你会学到如何用 ConfigMap 来保存应用名字。
-ConfigMap 的更多信息,你可以在[这里](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/)找到文档。
-
-Secrets 尽管也用来存储键值对,但区别于 ConfigMaps 的是:它针对机密/敏感数据,且存储格式为 Base64 编码。
-secrets 的这种特性使得它适合于存储证书、密钥、令牌,上述内容你将在交互教程中实现。
-Secrets 的更多信息,你可以在[这里](/zh-cn/docs/concepts/configuration/secret/)找到文档。
-
-
-
-### 从代码外部化配置
-外部化应用配置之所以有用处,是因为配置常常根据环境的不同而变化。
-为了实现此功能,我们用到了 Java 上下文和依赖注入(Contexts and Dependency Injection, CDI)、MicroProfile 配置。
-MicroProfile config 是 MicroProfile 的功能特性,
-是一组开放 Java 技术,用于开发、部署云原生微服务。
-
-
-CDI 提供一套标准的依赖注入能力,使得应用程序可以由相互协作的、松耦合的 beans 组装而成。
-MicroProfile Config 为 app 和微服务提供从各种来源,比如应用、运行时、环境,获取配置参数的标准方法。
-基于来源定义的优先级,属性可以自动的合并到单独一组应用可以通过 API 访问到的属性。
-CDI & MicroProfile 都会被用在互动教程中,
-用来从 Kubernetes ConfigMaps 和 Secrets 获得外部提供的属性,并注入应用程序代码中。
-
-很多开源框架、运行时支持 MicroProfile Config。
-对于整个互动教程,你都可以使用开放的库、灵活的开源 Java 运行时,去构建并运行云原生的 apps 和微服务。
-然而,任何 MicroProfile 兼容的运行时都可以用来做替代品。
-
-
-## {{% heading "objectives" %}}
-
-
-* 创建 Kubernetes ConfigMap 和 Secret
-* 使用 MicroProfile Config 注入微服务配置
-
-
-
-
-
-## 示例:使用 MicroProfile、ConfigMaps、Secrets 实现外部化应用配置
-
-[启动互动教程](/zh-cn/docs/tutorials/configuration/configure-java-microservice/configure-java-microservice-interactive/)
From d75627700ab7fc09d188140e50612397c53772ab Mon Sep 17 00:00:00 2001
From: Luis Oliveira
Date: Wed, 20 Nov 2024 20:30:54 -0300
Subject: [PATCH 17/37] [pt-br] Add tasks/job/automated-tasks-with-cron-jobs.md
(#48347)
* Add tasks/job/automated-tasks-with-cron-jobs.md PT-BR
* remocao linha reviewers
---
.../job/automated-tasks-with-cron-jobs.md | 112 ++++++++++++++++++
1 file changed, 112 insertions(+)
create mode 100644 content/pt-br/docs/tasks/job/automated-tasks-with-cron-jobs.md
diff --git a/content/pt-br/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/pt-br/docs/tasks/job/automated-tasks-with-cron-jobs.md
new file mode 100644
index 0000000000..058a9ee884
--- /dev/null
+++ b/content/pt-br/docs/tasks/job/automated-tasks-with-cron-jobs.md
@@ -0,0 +1,112 @@
+---
+title: Executando tarefas automatizadas com CronJob
+min-kubernetes-server-version: v1.21
+content_type: task
+weight: 10
+---
+
+
+
+Esta página mostra como executar tarefas automatizadas usando o objeto {{< glossary_tooltip text="CronJob" term_id="cronjob" >}} no Kubernetes.
+
+## {{% heading "prerequisites" %}}
+
+* {{< include "task-tutorial-prereqs.md" >}}
+
+
+
+## Criando um CronJob {#creating-a-cron-job}
+
+Cron jobs requerem um arquivo de configuração.
+Aqui está um manifesto para CronJob que executa uma tarefa de demonstração simples a cada minuto:
+
+{{% code_sample file="application/job/cronjob.yaml" %}}
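For readers following along without the examples tree, the referenced `application/job/cronjob.yaml` is roughly the following (a sketch from memory; the canonical file published at k8s.io/examples is authoritative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"          # run once every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```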
+
+Execute o exemplo de CronJob usando o seguinte comando:
+
+```shell
+kubectl create -f https://k8s.io/examples/application/job/cronjob.yaml
+```
+A saída é semelhante a esta:
+
+```
+cronjob.batch/hello created
+```
+
+Após criar o cron job, obtenha o status usando este comando:
+
+```shell
+kubectl get cronjob hello
+```
+
+A saída é semelhante a esta:
+
+```
+NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
+hello */1 * * * * False 0 10s
+```
+
+Como você pode ver pelos resultados do comando, o cron job ainda não agendou nem executou nenhuma tarefa.
+{{< glossary_tooltip text="Observe" term_id="watch" >}} que a tarefa será criada em cerca de um minuto:
+
+```shell
+kubectl get jobs --watch
+```
+A saída é semelhante a esta:
+
+```
+NAME COMPLETIONS DURATION AGE
+hello-4111706356 0/1 0s
+hello-4111706356 0/1 0s 0s
+hello-4111706356 1/1 5s 5s
+```
+
+Agora você viu uma tarefa em execução agendada pelo cron job "hello".
+Você pode parar de observá-lo e visualizar o cron job novamente para ver que ele agendou a tarefa:
+
+```shell
+kubectl get cronjob hello
+```
+
+A saída é semelhante a esta:
+
+```
+NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
+hello */1 * * * * False 0 50s 75s
+```
+
+Você deve ver que o cron job `hello` agendou uma tarefa com sucesso no tempo especificado em
+`LAST SCHEDULE`. Existem atualmente 0 tarefas ativas, o que significa que a tarefa foi concluída ou falhou.
+
+Agora, encontre os pods da última tarefa agendada criada e veja a saída padrão de um dos pods.
+
+{{< note >}}
+O nome da tarefa é diferente do nome do pod.
+{{< /note >}}
+
+```shell
+# Replace "hello-4111706356" with the job name in your system
+pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items[*].metadata.name})
+```
+Veja os logs do pod:
+
+```shell
+kubectl logs $pods
+```
+A saída é semelhante a esta:
+
+```
+Fri Feb 22 11:02:09 UTC 2019
+Hello from the Kubernetes cluster
+```
+
+## Deletando um CronJob {#deleting-a-cron-job}
+
+Quando você não precisar mais de um cron job, exclua-o com `kubectl delete cronjob <cronjob name>`:
+
+```shell
+kubectl delete cronjob hello
+```
+
+Excluir o cron job remove todas as tarefas e pods que ele criou e impede a criação de novas tarefas.
+Você pode ler mais sobre como remover tarefas em [garbage collection](/docs/concepts/architecture/garbage-collection/).
From cbb153d7bbc27c397e0e25eab96d880d2c7da940 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Thu, 21 Nov 2024 09:04:00 +0800
Subject: [PATCH 18/37] [zh] Resync node-overprovisioning.md
---
.../node-overprovisioning.md | 96 +++++++++++++------
.../deployment-with-capacity-reservation.yaml | 1 +
.../priorityclass/low-priority-class.yaml | 4 +-
3 files changed, 70 insertions(+), 31 deletions(-)
diff --git a/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md b/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md
index e91ae3ba21..baea5b516a 100644
--- a/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md
+++ b/content/zh-cn/docs/tasks/administer-cluster/node-overprovisioning.md
@@ -4,7 +4,7 @@ content_type: task
weight: 10
---
@@ -12,45 +12,49 @@ weight: 10
本页指导你在 Kubernetes 集群中配置{{< glossary_tooltip text="节点" term_id="node" >}}超配。
节点超配是一种主动预留部分集群计算资源的策略。这种预留有助于减少在扩缩容事件期间调度新 Pod 所需的时间,
从而增强集群对突发流量或突发工作负载需求的响应能力。
-通过保持一些未使用的容量,你确保在新 Pod 被创建时资源可以立即可用,防止 Pod 在集群扩缩容时进入 Pending 状态。
+通过保持一些未使用的容量,确保在新 Pod 被创建时资源可以立即可用,防止 Pod 在集群扩缩容时进入 Pending 状态。
## {{% heading "prerequisites" %}}
- 你需要有一个 Kubernetes 集群,并且 kubectl 命令行工具必须被配置为与你的集群通信。
- 你应该已经基本了解了 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)、Pod
{{}}和
- [PriorityClass](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass)。
-- 你的集群必须设置一个基于需求管理节点的
- [Autoscaler](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)。
+ {{< glossary_tooltip text="PriorityClass" term_id="priority-class" >}}。
+- 你的集群必须设置一个基于需求管理节点的[自动扩缩程序](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)。
-## 创建占位 Deployment {#create-a-placeholder-deployment}
+## 创建 PriorityClass {#create-a-priorityclass}
首先为占位 Pod 定义一个 PriorityClass。
先创建一个优先级值为负数的 PriorityClass,稍后将其分配给占位 Pod。
@@ -72,25 +76,50 @@ You will next define a Deployment that uses the negative-priority PriorityClass
When you add this to your cluster, Kubernetes runs those placeholder pods to reserve capacity. Any time there
is a capacity shortage, the control plane will pick one these placeholder pods as the first candidate to
{{< glossary_tooltip text="preempt" term_id="preemption" >}}.
-
-Review the sample manifest:
-->
-接下来,你将定义一个 Deployment,使用优先级值为负数的 PriorityClass 并运行最小容器。
+接下来,你将定义一个 Deployment,使用优先级值为负数的 PriorityClass 并运行最小的容器。
当你将此 Deployment 添加到集群中时,Kubernetes 会运行这些占位 Pod 以预留容量。
每当出现容量短缺时,控制面将选择这些占位 Pod
中的一个作为第一个候选者进行{{< glossary_tooltip text="抢占" term_id="preemption" >}}。
+
+## 运行请求节点容量的 Pod {#run-pods-that-request-node-capacity}
+
查看样例清单:
{{% code_sample language="yaml" file="deployments/deployment-with-capacity-reservation.yaml" %}}
+### 为占位 Pod 挑选一个命名空间 {#pick-a-namespace-for-the-placeholder-pods}
+
+你应选择或创建占位 Pod 要进入的{{< glossary_tooltip term_id="namespace" text="命名空间">}}。
+
+
+### 创建占位 Deployment {#create-the-placeholder-deployment}
+
基于该清单创建 Deployment:
```shell
-kubectl apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
+# 你要更改命名空间名称 "example"
+kubectl --namespace example apply -f https://k8s.io/examples/deployments/deployment-with-capacity-reservation.yaml
```
要编辑 Deployment,可以修改 Deployment 清单文件中的 `resources` 一节,
设置合适的 `requests` 和 `limits`。
你可以将该文件下载到本地,然后用自己喜欢的文本编辑器进行编辑。
+你也可以使用 kubectl 来编辑 Deployment:
+
+```shell
+kubectl edit deployment capacity-reservation
+```
+
+
例如,要为 5 个占位 Pod 预留 500m CPU 和 1Gi 内存,请为单个占位 Pod 定义以下资源请求和限制:
```yaml
@@ -130,23 +168,23 @@ define the resource requests and limits for a single placeholder pod as follows:
## Set the desired replica count
### Calculate the total reserved resources
-
-For example, with 5 replicas each reserving 0.1 CPU and 200MiB of memory:
-
-Total CPU reserved: 5 × 0.1 = 0.5 (in the Pod specification, you'll write the quantity `500m`)
-Total Memory reserved: 5 × 200MiB = 1GiB (in the Pod specification, you'll write `1 Gi`)
-
-To scale the Deployment, adjust the number of replicas based on your cluster's size and expected workload:
-->
## 设置所需的副本数量 {#set-the-desired-replica-count}
### 计算总预留资源 {#calculate-the-total-reserved-resources}
-例如,有 5 个副本,每个预留 0.1 CPU 和 200MiB 内存:
+
-CPU 预留总量:5 × 0.1 = 0.5(在 Pod 规约中,你将写入数量 `500m`)
+
+例如,有 5 个副本,每个预留 0.1 CPU 和 200MiB 内存:
+CPU 预留总量:5 × 0.1 = 0.5(在 Pod 规约中,你将写入数量 `500m`)
+内存预留总量:5 × 200MiB = 1GiB(在 Pod 规约中,你将写入 `1 Gi`)
要扩缩容 Deployment,请基于集群的大小和预期的工作负载调整副本数:
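As a sketch of the arithmetic above: per-replica requests that add up to roughly 500m of CPU and 1Gi of memory across 5 placeholder Pods could look like this (values are illustrative; tune them to your cluster):

```yaml
# Container resources inside the placeholder Deployment's Pod template
resources:
  requests:
    cpu: "100m"      # 5 replicas x 100m  = 500m reserved CPU in total
    memory: "200Mi"  # 5 replicas x 200Mi = 1000Mi, roughly 1Gi in total
  limits:
    cpu: "100m"
    memory: "200Mi"
```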
diff --git a/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml b/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml
index 556c0b0f18..02ff770242 100644
--- a/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml
+++ b/content/zh-cn/examples/deployments/deployment-with-capacity-reservation.yaml
@@ -2,6 +2,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: capacity-reservation
+ # 你应决定要将此 Deployment 部署到哪个命名空间
spec:
replicas: 1
selector:
diff --git a/content/zh-cn/examples/priorityclass/low-priority-class.yaml b/content/zh-cn/examples/priorityclass/low-priority-class.yaml
index a8e4f3834d..9ee2492ec1 100644
--- a/content/zh-cn/examples/priorityclass/low-priority-class.yaml
+++ b/content/zh-cn/examples/priorityclass/low-priority-class.yaml
@@ -1,7 +1,7 @@
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
- name: placeholder
+ name: placeholder # 这些 Pod 表示占位容量
value: -1000
globalDefault: false
-description: "Negative priority for placeholder pods to enable overprovisioning."
\ No newline at end of file
+description: "Negative priority for placeholder pods to enable overprovisioning."
From 37b8ed8f4953daa55255b70863ffa53b9fa652be Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Thu, 21 Nov 2024 09:26:19 +0800
Subject: [PATCH 19/37] [zh-cn]sync connect-applications-service.md
Signed-off-by: xin.li
---
.../docs/tutorials/services/connect-applications-service.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/zh-cn/docs/tutorials/services/connect-applications-service.md b/content/zh-cn/docs/tutorials/services/connect-applications-service.md
index c093f4a090..773ae3677b 100644
--- a/content/zh-cn/docs/tutorials/services/connect-applications-service.md
+++ b/content/zh-cn/docs/tutorials/services/connect-applications-service.md
@@ -148,7 +148,7 @@ service/my-nginx exposed
```
这等价于使用 `kubectl create -f` 命令及如下的 yaml 文件创建:
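The manifest that this sentence refers to is roughly the following (a sketch; the tutorial's own example file is authoritative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```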
From 26d1dc7a8bc87af9f75998402fffe6c90f9c5d4c Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Wed, 20 Nov 2024 22:05:36 +0800
Subject: [PATCH 20/37] [zh-cn]sync pull-image-private-registry
Signed-off-by: xin.li
---
.../pull-image-private-registry.md | 21 ++++++++++++++++---
1 file changed, 18 insertions(+), 3 deletions(-)
diff --git a/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md
index 0131d6b09c..fe85bb76e0 100644
--- a/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/zh-cn/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -378,12 +378,9 @@ kubectl describe pod private-reg
如果你看到一个原因设为 `FailedToRetrieveImagePullSecret` 的事件,
那么 Kubernetes 找不到指定名称(此例中为 `regcred`)的 Secret。
-如果你指定 Pod 需要拉取镜像凭据,kubelet 在尝试拉取镜像之前会检查是否可以访问该 Secret。
+## 使用来自多个仓库的镜像
+
+一个 Pod 可以包含多个容器,每个容器的镜像可以来自不同的仓库。
+你可以在一个 Pod 中使用多个 `imagePullSecrets`,每个 `imagePullSecrets` 可以包含多个凭证。
+
+
+kubelet 将使用与仓库匹配的每个凭证尝试拉取镜像。
+如果没有凭证匹配仓库,则 kubelet 将尝试在没有授权的情况下拉取镜像,或者使用特定运行时的自定义配置。
+
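A minimal sketch of a Pod that pulls from two registries using two image pull Secrets (registry hosts and Secret names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-registry-demo            # illustrative name
spec:
  containers:
  - name: app
    image: registry-a.example.com/team/app:1.0       # matched against regcred-a
  - name: sidecar
    image: registry-b.example.com/team/sidecar:1.0   # matched against regcred-b
  imagePullSecrets:
  - name: regcred-a   # Secrets of type kubernetes.io/dockerconfigjson
  - name: regcred-b
```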
## {{% heading "whatsnext" %}}
选择合适的身份认证机制是确保集群安全的一个重要方面。
@@ -27,16 +27,16 @@ Kubernetes 提供了多种内置机制,
当为你的集群选择最好的身份认证机制时需要谨慎考虑每种机制的优缺点。
通常情况下,建议启用尽可能少的身份认证机制,
以简化用户管理,避免用户仍保有对其不再需要的集群的访问权限的情况。
值得注意的是 Kubernetes 集群中并没有内置的用户数据库。
@@ -44,10 +44,10 @@ configured authentication source.
因此,要审计用户访问,你需要检视来自每个已配置身份认证数据源的凭据。
对于有多个用户直接访问 Kubernetes API 的生产集群来说,
建议使用外部身份认证数据源,例如:OIDC。
@@ -61,9 +61,9 @@ suitable for this use-case.
## X.509 客户端证书身份认证 {#x509-client-certificate-authentication}
Kubernetes 采用 [X.509 客户端证书](/zh-cn/docs/reference/access-authn-authz/authentication/#x509-client-certificates)
@@ -71,54 +71,53 @@ Kubernetes 采用 [X.509 客户端证书](/zh-cn/docs/reference/access-authn-aut
例如 Kubelet 对 API 服务器进行身份认证时。
虽然这种机制也可以用于用户身份认证,但由于一些限制它可能不太适合在生产中使用:
-
- 客户端证书无法独立撤销。
证书一旦被泄露,攻击者就可以使用它,直到证书过期。
为了降低这种风险,建议为使用客户端证书创建的用户身份认证凭据配置较短的有效期。
- 如果证书需要被作废,必须重新为证书机构设置密钥,但这样做可能给集群带来可用性风险。
- 在集群中创建的客户端证书不会被永久记录。
因此,如果你要跟踪所有已签发的证书,就必须将它们记录下来。
- 用于对客户端证书进行身份认证的私钥不可以启用密码保护。
任何可以读取包含密钥文件的人都可以利用该密钥。
- 使用客户端证书身份认证需要客户端直连 API 服务器而不允许中间存在 TLS 终止节点,
这一约束可能会使网络架构变得复杂。
- 组数据包含在客户端证书的 `O` 值中,
这意味着在证书有效期内无法更改用户的组成员身份。
## 静态令牌文件 {#static-token-file}
尽管 Kubernetes 允许你从控制平面节点的磁盘中加载
@@ -130,13 +129,13 @@ several reasons:
-->
- 凭据以明文的方式存储在控制平面节点的磁盘中,这可能是一种安全风险。
- 修改任何凭据都需要重启 API 服务进程使其生效,这会影响可用性。
- 没有现成的机制让用户轮换其凭据数据。
要轮换凭据数据,集群管理员必须修改磁盘上的令牌并将其分发给用户。
@@ -151,27 +150,27 @@ credential, a cluster administrator must modify the token on disk and distribute
## 启动引导令牌 {#bootstrap-tokens}
[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)用于节点加入集群,
因为下列的一些原因,不建议用于用户身份认证:
- 启动引导令牌中包含有硬编码的组成员身份,不适合一般使用,
因此不适用于身份认证目的。
- 手动生成启动引导令牌有可能使较弱的令牌容易被攻击者猜到,
有可能成为安全隐患。
- 没有现成的锁定机制用来防止暴力破解,
这使得攻击者更容易猜测或破解令牌。
@@ -182,10 +181,10 @@ attackers to guess or crack the token.
## 服务账号令牌 {#serviceaccount-secret-tokens}
[服务账号令牌](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#manual-secret-management-for-serviceaccounts)
@@ -198,8 +197,8 @@ generally unsuitable for a number of reasons:
-->
- 服务账号令牌无法设置有效期,在相关的服务账号被删除前一直有效。
- 任何集群用户,只要能读取服务账号令牌定义所在的命名空间中的 Secret,就能看到身份认证令牌。
TokenRequest API 是一种可生成短期凭据的有用工具,所生成的凭据可
@@ -224,7 +223,7 @@ TokenRequest API 是一种可生成短期凭据的有用工具,所生成的凭
而且,如何以安全的方式向用户分发凭据信息也是挑战。
当使用 TokenRequest 令牌进行服务身份认证时,
@@ -236,10 +235,10 @@ lifespan to reduce the impact of compromised tokens.
## OpenID Connect 令牌身份认证 {#openid-connect-token-authentication}
Kubernetes 支持使用 [OpenID Connect (OIDC)](/zh-cn/docs/reference/access-authn-authz/authentication/#openid-connect-tokens)
@@ -249,8 +248,8 @@ Kubernetes 支持使用 [OpenID Connect (OIDC)](/zh-cn/docs/reference/access-aut
必须考虑以下加固措施:
- 安装在集群中用于支持 OIDC 身份认证的软件应该与普通的工作负载隔离,
因为它要以较高的特权来运行。
@@ -259,8 +258,8 @@ general workloads as it will run with high privileges.
-->
- 有些 Kubernetes 托管服务对可使用的 OIDC 服务组件有限制。
- 与 TokenRequest 令牌一样,OIDC 令牌的有效期也应较短,以减少被泄露的令牌所带来的影响。
@@ -270,11 +269,11 @@ compromised tokens.
## Webhook 令牌身份认证 {#webhook-token-authentication}
[Webhook 令牌身份认证](/zh-cn/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
@@ -285,9 +284,9 @@ are some Kubernetes-specific considerations to take into account.
而且还需要考虑一些特定于 Kubernetes 的因素。
要配置 Webhook 身份认证的前提是需要提供控制平面服务器文件系统的访问权限。
@@ -301,11 +300,11 @@ isolated from general workloads, as it will run with high privileges.
## 身份认证代理 {#authenticating-proxy}
将外部身份认证系统集成到 Kubernetes 的另一种方式是使用
@@ -315,8 +314,8 @@ this mechanism.
值得注意的是,在使用这种机制时有一些特定的注意事项。
首先,在代理和 Kubernetes API 服务器间必须以安全的方式配置 TLS 连接,
@@ -324,9 +323,22 @@ between the proxy and Kubernetes API server is secure.
TLS 连接可以确保代理和 Kubernetes API 服务器间的通信是安全的。
其次,需要注意的是,能够修改表头的攻击者可能会在未经授权的情况下访问 Kubernetes 资源。
-因此,确保标头得到妥善保护并且不会被篡改非常重要。
\ No newline at end of file
+因此,确保标头得到妥善保护并且不会被篡改非常重要。
+
+## {{% heading "whatsnext" %}}
+
+
+- [用户认证](/zh-cn/docs/reference/access-authn-authz/authentication/)
+- [使用 Bootstrap 令牌进行身份验证](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)
+- [kubelet 认证](/zh-cn/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication)
+- [使用服务帐户令牌进行身份验证](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-tokens)
From f6d31851b4d2fea2679fb7abe79586ccad2587a4 Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Thu, 7 Nov 2024 22:39:23 +0800
Subject: [PATCH 28/37] [zh-cn]Add blog:
2024-11-08-kubernetes-1.32-sneak-peek.md
Signed-off-by: xin.li
---
.../2024-11-08-kubernetes-1.32-sneak-peek.md | 289 ++++++++++++++++++
1 file changed, 289 insertions(+)
create mode 100644 content/zh-cn/blog/_posts/2024-11-08-kubernetes-1.32-sneak-peek.md
diff --git a/content/zh-cn/blog/_posts/2024-11-08-kubernetes-1.32-sneak-peek.md b/content/zh-cn/blog/_posts/2024-11-08-kubernetes-1.32-sneak-peek.md
new file mode 100644
index 0000000000..791f82a689
--- /dev/null
+++ b/content/zh-cn/blog/_posts/2024-11-08-kubernetes-1.32-sneak-peek.md
@@ -0,0 +1,289 @@
+---
+layout: blog
+title: 'Kubernetes v1.32 预览'
+date: 2024-11-08
+slug: kubernetes-1-32-upcoming-changes
+---
+
+
+
+随着 Kubernetes v1.32 发布日期的临近,Kubernetes 项目继续发展和成熟。
+在这个过程中,某些特性可能会被弃用、移除或被更好的特性取代,以确保项目的整体健康与发展。
+
+本文概述了 Kubernetes v1.32 发布的一些计划变更,发布团队认为你应该了解这些变更,
+以确保你的 Kubernetes 环境得以持续维护并跟上最新的变化。以下信息基于 v1.32
+发布的当前状态,实际发布日期前可能会有所变动。
+
+
+### Kubernetes API 的移除和弃用流程
+
+Kubernetes 项目对功能特性有一个文档完备的[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/)。
+该策略规定,只有当较新的、稳定的相同 API 可用时,原有的稳定 API 才可能被弃用,每个稳定级别的 API 都有一个最短的生命周期。
+弃用的 API 指的是已标记为将在后续发行某个 Kubernetes 版本时移除的 API;
+移除之前该 API 将继续发挥作用(从弃用起至少一年时间),但使用时会显示一条警告。
+移除的 API 将在当前版本中不再可用,此时你必须迁移以使用替换的 API。
+
+
+* 正式发布的(GA)或稳定的 API 版本可被标记为已弃用,但不得在 Kubernetes 主要版本未变时删除。
+
+* Beta 或预发布 API 版本,必须保持在被弃用后 3 个发布版本中仍然可用。
+
+* Alpha 或实验性 API 版本可以在任何版本中删除,不必提前通知;
+ 如果同一特性已有不同实施方案,则此过程可能会成为撤销。
+
+
+无论 API 是因为特性从 Beta 升级到稳定状态还是因为未能成功而被移除,
+所有移除操作都遵守此弃用策略。每当 API 被移除时,
+迁移选项都会在[弃用指南](/zh-cn/docs/reference/using-api/deprecation-guide/)中进行说明。
+
+
+## 关于撤回 DRA 的旧的实现的说明
+
+增强特性 [#3063](https://github.com/kubernetes/enhancements/issues/3063) 在 Kubernetes 1.26
+中引入了动态资源分配(DRA)。
+
+
+然而,在 Kubernetes v1.32 中,这种 DRA 的实现方法将发生重大变化。与原来实现相关的代码将被删除,
+只留下 KEP [#4381](https://github.com/kubernetes/enhancements/issues/4381) 作为"新"的基础特性。
+
+
+改变现有方法的决定源于其与集群自动伸缩的不兼容性,因为资源可用性是不透明的,
+这使得 Cluster Autoscaler 和控制器的决策变得复杂。
+新增的结构化参数模型替换了原有特性。
+
+
+这次移除将使 Kubernetes 能够更可预测地处理新的硬件需求和资源声明,
+避免了与 kube-apiserver 之间复杂的来回 API 调用。
+
+请参阅增强问题 [#3063](https://github.com/kubernetes/enhancements/issues/3063) 以了解更多信息。
+
+
+## API 移除
+
+在 [Kubernetes v1.32](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-32) 中,计划仅移除一个 API:
+
+
+* `flowcontrol.apiserver.k8s.io/v1beta3` 版本的 FlowSchema 和 PriorityLevelConfiguration 已被移除。
+ 为了对此做好准备,你可以编辑现有的清单文件并重写客户端软件,使用自 v1.29 起可用的 `flowcontrol.apiserver.k8s.io/v1` API 版本。
+ 所有现有的持久化对象都可以通过新 API 访问。`flowcontrol.apiserver.k8s.io/v1beta3` 中的重要变化包括:
+ 当未指定时,PriorityLevelConfiguration 的 `spec.limited.nominalConcurrencyShares`
+ 字段仅默认为 30,而显式设置的 0 值不会被更改为此默认值。
+
+ 有关更多信息,请参阅 [API 弃用指南](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-32)。
+
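As a sketch of what a migrated object can look like under the `flowcontrol.apiserver.k8s.io/v1` API (the name and values here are illustrative):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level
spec:
  type: Limited
  limited:
    # In v1, leaving this unset defaults it to 30, and an explicit 0 is kept as 0.
    nominalConcurrencyShares: 30
    limitResponse:
      type: Reject
```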
+
+## Kubernetes v1.32 的抢先预览
+
+以下增强特性有可能会被包含在 v1.32 发布版本中。请注意,这并不是最终承诺,发布内容可能会发生变化。
+
+
+### 更多 DRA 增强特性!
+
+在此次发布中,就像上一次一样,Kubernetes 项目继续提出多项对动态资源分配(DRA)的增强。
+DRA 是 Kubernetes 资源管理系统的关键组件,这些增强旨在提高对需要专用硬件(如 GPU、FPGA 和网络适配器)
+的工作负载进行资源分配的灵活性和效率。此次发布引入了多项改进,包括在 Pod 状态中添加资源健康状态,
+具体内容详见 KEP [#4680](https://github.com/kubernetes/enhancements/issues/4680)。
+
+
+#### 在 Pod 状态中添加资源健康状态
+
+当 Pod 使用的设备出现故障或暂时不健康时,很难及时发现。
+KEP [#4680](https://github.com/kubernetes/enhancements/issues/4680)
+提议通过 Pod 的 `status` 暴露设备健康状态,从而使 Pod 崩溃的故障排除更加容易。
+
+
+### Windows 工作继续
+
+KEP [#4802](https://github.com/kubernetes/enhancements/issues/4802) 为
+Kubernetes 集群中的 Windows 节点添加了体面关机支持。
+在此之前,Kubernetes 为 Linux 节点提供了体面关机特性,但缺乏对 Windows 节点的同等支持。
+这一增强特性使 Windows 节点上的 kubelet 能够正确处理系统关机事件,确保在 Windows 节点上运行的 Pod 能够体面终止,
+从而允许工作负载在不受干扰的情况下重新调度。这一改进提高了包含 Windows 节点的集群的可靠性和稳定性,
+特别是在计划维护或系统更新期间。
+
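A sketch of the kubelet configuration fields involved in graceful node shutdown, which this KEP extends to Windows nodes (the durations are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node waits for pods to terminate during a shutdown,
# and the portion of that time reserved for critical pods.
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```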
+
+### 允许环境变量中使用特殊字符
+
+随着这一[增强特性](https://github.com/kubernetes/enhancements/issues/4369)升级到 Beta 阶段,
+Kubernetes 现在允许几乎所有的可打印 ASCII 字符(不包括 `=`)作为环境变量名称。
+这一变化解决了此前对变量命名的限制,通过适应各种应用需求,促进了 Kubernetes 的更广泛采用。
+放宽的验证将通过 `RelaxedEnvironmentVariableValidation` 特性门控默认启用,
+确保用户可以轻松使用环境变量而不受严格限制,增强了开发者在处理需要特殊字符配置的应用(如 .NET Core)时的灵活性。
+
+
+### 使 Kubernetes 感知到 LoadBalancer 的行为
+
+KEP [#1860](https://github.com/kubernetes/enhancements/issues/1860) 升级到 GA 阶段,
+为 `type: LoadBalancer` 类型的 Service 引入了 `ipMode` 字段,该字段可以设置为 `"VIP"` 或 `"Proxy"`。
+这一增强旨在改善云提供商负载均衡器与 kube-proxy 的交互方式,对最终用户来说是透明的。
+使用 `"VIP"` 时,kube-proxy 会继续处理负载均衡,保持现有的行为。使用 `"Proxy"` 时,
+流量将直接发送到负载均衡器,提供云提供商对依赖 kube-proxy 的更大控制权;
+这意味着对于某些云提供商,你可能会看到负载均衡器性能的提升。
+
+
+### 为资源生成名称时重试
+
+这一[增强特性](https://github.com/kubernetes/enhancements/issues/4420)改进了使用
+`generateName` 字段创建 Kubernetes 资源时的名称冲突处理。此前,如果发生名称冲突,
+API 服务器会返回 409 HTTP 冲突错误,客户端需要手动重试请求。通过此次更新,
+API 服务器在发生冲突时会自动重试生成新名称,最多重试七次。这显著降低了冲突的可能性,
+确保生成多达 100 万个名称时冲突的概率低于 0.1%,为大规模工作负载提供了更高的弹性。
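A sketch of a manifest that relies on `generateName` (the prefix is illustrative); with this enhancement, the API server itself retries the random suffix on a conflict instead of returning 409 to the client:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  generateName: demo-config-   # the server appends a random suffix, e.g. demo-config-x7k2q
data:
  greeting: hello
```

Objects like this are created with `kubectl create -f` rather than `kubectl apply`, since apply requires a fixed name.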
+
+
+## 想了解更多?
+
+新特性和弃用特性也会在 Kubernetes 发布说明中宣布。我们将在此次发布的
+[Kubernetes v1.32](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.32.md)
+的 CHANGELOG 中正式宣布新内容。
+
+你可以在以下版本的发布说明中查看变更公告:
+
+* [Kubernetes v1.31](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md)
+
+* [Kubernetes v1.30](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md)
+
+* [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md)
+
+* [Kubernetes v1.28](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md)
From cdca65aa64d58f0c24f478488d84ecc502c7e520 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Fri, 22 Nov 2024 09:59:40 +0800
Subject: [PATCH 29/37] [zh] Resync using-api/server-side-apply.md
---
.../reference/using-api/server-side-apply.md | 105 +++++++++++++-----
1 file changed, 75 insertions(+), 30 deletions(-)
diff --git a/content/zh-cn/docs/reference/using-api/server-side-apply.md b/content/zh-cn/docs/reference/using-api/server-side-apply.md
index 5ebf63421d..e49a895ab9 100644
--- a/content/zh-cn/docs/reference/using-api/server-side-apply.md
+++ b/content/zh-cn/docs/reference/using-api/server-side-apply.md
@@ -18,7 +18,7 @@ weight: 25
{{< feature-state feature_gate_name="ServerSideApply" >}}
-
-服务器端应用场景中的 **patch** 请求要求客户端提供自身的标识作为
-[字段管理器(Field Manager)](#managers)。使用服务器端应用时,
-如果尝试变更由别的管理器来控制的字段,会导致请求被拒绝,除非客户端强制要求进行覆盖。
+服务器端应用场景中的 **patch** 请求要求客户端提供自身的标识作为[字段管理器(Field Manager)](#managers)。
+使用服务器端应用时,如果尝试变更由别的管理器来控制的字段,会导致请求被拒绝,除非客户端强制要求进行覆盖。
关于覆盖操作的细节,可参阅[冲突](#conflicts)节。
+`kubectl get` 默认省略 `managedFields`。
+当输出格式为 `json` 或 `yaml` 时,你可以添加 `--show-managed-fields` 参数以显示 `managedFields`。
+{{< /note >}}
+
+
+```yaml
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: test-cm
+ namespace: default
+ labels:
+ test-label: test
+ managedFields:
+ - manager: kubectl
+ operation: Apply # 注意大写:“Apply” (或者 “Update”)
apiVersion: v1
time: "2010-10-10T0:00:00Z"
fieldsType: FieldsV1
@@ -198,7 +232,7 @@ data:
key: some value
```
-
示例的 ConfigMap 对象在 `.metadata.managedFields` 中包含字段管理记录。
字段管理记录包括关于管理实体本身的基本信息,以及关于被管理的字段和相关操作(`Apply` 或 `Update`)的详细信息。
-如果最后更改该字段的请求是服务器端应用的**patch**操作,则 `operation` 的值为 `Apply`;否则为 `Update`。
+如果最后更改该字段的请求是服务器端应用的 **patch** 操作,则 `operation` 的值为 `Apply`;否则为 `Update`。
-
@@ -322,8 +356,7 @@ sets the manager identity to `"kubectl"` by default.
当你使用 `kubectl` 工具执行服务器端应用操作时,`kubectl` 默认情况下会将管理器标识设置为 `“kubectl”`。
-
-
## 序列化 {#serialization}
-在协议层面,Kubernetes 用 [YAML](https://yaml.org/) 来表示 Server-Side Apply 的消息体,
+在协议层面,Kubernetes 用 [YAML](https://yaml.org/) 来表示服务器端应用的消息体,
媒体类型为 `application/apply-patch+yaml`。
{{< note >}}
-
现在,用户希望从他们的配置中删除 `replicas`,从而避免与 HorizontalPodAutoscaler(HPA)及其控制器发生冲突。
-然而,这里存在一个竞态:
-在 HPA 需要调整 `.spec.replicas` 之前会有一个时间窗口,
+然而,这里存在一个竞态:在 HPA 需要调整 `.spec.replicas` 之前会有一个时间窗口,
如果在 HPA 写入字段并成为新的属主之前,用户删除了 `.spec.replicas`,
那 API 服务器就会把 `.spec.replicas` 的值设为 1(Deployment 的默认副本数)。
这不是用户希望发生的事情,即使是暂时的——它很可能会导致正在运行的工作负载降级。
@@ -754,6 +787,17 @@ First, the user defines a new manifest containing only the `replicas` field:
首先,用户新定义一个只包含 `replicas` 字段的新清单:
+
```yaml
# 将此文件另存为 'nginx-deployment-replicas-only.yaml'
apiVersion: apps/v1
@@ -848,7 +892,7 @@ field in an object also becomes available.
服务器端应用使用一种更具声明性的方法来跟踪对象的字段管理,而不是记录用户最后一次应用的状态。
这意味着,使用服务器端应用的副作用,就是字段管理器所管理的对象的每个字段的相关信息也会变得可用。
-
-
-{{< note >}}
-
-
-此页面被重定向到
-https://v1-24.docs.kubernetes.io/zh-cn/docs/reference/tools/map-crictl-dockercli/
-,原因是
-[dockershim 在 v1.24 中被从 crictl 中移除](https://github.com/kubernetes-sigs/cri-tools/issues/870)。
-根据我们的社区政策,弃用的文档超过三个版本后不再维护。
-弃用的原因在 [Dockershim-FAQ](/zh-cn/docs/blog/2020/12/02/dockershim-faq/) 中进行了说明。
-
-{{ note >}}
From 2e366d01e1fd6655b6c3882afb19e87991a03c71 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sun, 24 Nov 2024 04:04:32 +0200
Subject: [PATCH 32/37] [pt] update queue link
---
content/pt-br/_index.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/pt-br/_index.html b/content/pt-br/_index.html
index 1131ee38e7..200ac40474 100644
--- a/content/pt-br/_index.html
+++ b/content/pt-br/_index.html
@@ -10,7 +10,7 @@ cid: home
{{% blocks/feature image="flower" %}}
### Kubernetes (K8s) é um produto Open Source utilizado para automatizar a implantação, o dimensionamento e o gerenciamento de aplicativos em contêiner.
-Ele agrupa contêineres que compõem uma aplicação em unidades lógicas para facilitar o gerenciamento e a descoberta de serviço. O Kubernetes se baseia em [15 anos de experiência na execução de containers em produção no Google](http://queue.acm.org/detail.cfm?id=2898444), combinado com as melhores ideias e práticas da comunidade.
+Ele agrupa contêineres que compõem uma aplicação em unidades lógicas para facilitar o gerenciamento e a descoberta de serviço. O Kubernetes se baseia em [15 anos de experiência na execução de containers em produção no Google](https://queue.acm.org/detail.cfm?id=2898444), combinado com as melhores ideias e práticas da comunidade.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
From 97ad0bd441aa263ef3da74b3a28efc932f7b9731 Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Sun, 24 Nov 2024 14:55:29 +0800
Subject: [PATCH 33/37] [zh-cn]sync cluster-autoscaling.md
Signed-off-by: xin.li
---
.../docs/concepts/cluster-administration/cluster-autoscaling.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/content/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling.md b/content/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling.md
index cecb1d06df..25a9cc671e 100644
--- a/content/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling.md
+++ b/content/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling.md
@@ -221,5 +221,7 @@ Cluster Proportional Autoscaler 扩缩工作负载的副本数量,而 Cluster
- 参阅[工作负载级别自动扩缩容](/zh-cn/docs/concepts/workloads/autoscaling/)
+- 参阅[节点超分配](/zh-cn/docs/tasks/administer-cluster/node-overprovisioning/)
From 0b91965aa2e4ab5ac33ec4852f63ad2137217d07 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Mon, 25 Nov 2024 02:35:29 +0200
Subject: [PATCH 34/37] [es] update queue link
---
content/es/_index.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/_index.html b/content/es/_index.html
index 24acdd9cc9..e2f47c3bfc 100644
--- a/content/es/_index.html
+++ b/content/es/_index.html
@@ -10,7 +10,7 @@ cid: home
{{% blocks/feature image="flower" %}}
### Kubernetes (K8s) es una plataforma de código abierto para automatizar la implementación, el escalado y la administración de aplicaciones en contenedores.
-Kubernetes agrupa los contenedores que conforman una aplicación en unidades lógicas para una fácil administración y descubrimiento. Kubernetes se basa en [15 años de experiencia en la ejecución de cargas de trabajo de producción en Google](http://queue.acm.org/detail.cfm?id=2898444), combinada con las mejores ideas y prácticas de la comunidad.
+Kubernetes agrupa los contenedores que conforman una aplicación en unidades lógicas para una fácil administración y descubrimiento. Kubernetes se basa en [15 años de experiencia en la ejecución de cargas de trabajo de producción en Google](https://queue.acm.org/detail.cfm?id=2898444), combinada con las mejores ideas y prácticas de la comunidad.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
From 1f44f78b4b8d2b69ddcac66dfc7ef2a730b5a1ff Mon Sep 17 00:00:00 2001
From: xin gu <418294249@qq.com>
Date: Mon, 25 Nov 2024 10:48:27 +0800
Subject: [PATCH 35/37] sync pods-and-endpoint-termination-flow
---
.../services/pods-and-endpoint-termination-flow.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/content/zh-cn/docs/tutorials/services/pods-and-endpoint-termination-flow.md b/content/zh-cn/docs/tutorials/services/pods-and-endpoint-termination-flow.md
index 42bc10fce3..54daf32485 100644
--- a/content/zh-cn/docs/tutorials/services/pods-and-endpoint-termination-flow.md
+++ b/content/zh-cn/docs/tutorials/services/pods-and-endpoint-termination-flow.md
@@ -27,7 +27,7 @@ Service 连接到了你的应用,你就有了一个持续运行的多副本应
## 端点终止的示例流程 {#example-flow-with-endpoint-termination}
@@ -223,14 +223,14 @@ The output is similar to this:
这种设计使得应用可以在终止期间公布自己的状态,而客户端(如负载均衡器)则可以实现连接排空功能。
这些客户端可以检测到正在终止的端点,并为这些端点实现特殊的逻辑。
+
+kubeletは、クラスターのノード上のCPU、メモリ、ディスク容量、ファイルシステムのinodeなどのリソースを監視します。
+これらのリソースの1つ以上が特定の消費レベルに達すると、kubeletはノード上の1つ以上のPodを積極的に失敗させることでリソースを回収し、枯渇を防ぎます。
+
+ノード圧迫による退避は、[APIを起点とした退避](/ja/docs/concepts/scheduling-eviction/api-eviction/)とは異なります。
From 3f677e9bac9e13552f61aa1b33b5f3b9fa644522 Mon Sep 17 00:00:00 2001
From: Christian Schlotter
Date: Mon, 25 Nov 2024 16:35:19 +0100
Subject: [PATCH 37/37] SSA: fix table for merge-strategy markers
---
content/en/docs/reference/using-api/server-side-apply.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/using-api/server-side-apply.md b/content/en/docs/reference/using-api/server-side-apply.md
index 475509c920..b4f4046930 100644
--- a/content/en/docs/reference/using-api/server-side-apply.md
+++ b/content/en/docs/reference/using-api/server-side-apply.md
@@ -303,7 +303,7 @@ For a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResour
you can set these markers when you define the custom resource.
| Golang marker | OpenAPI extension | Possible values | Description |
-| --------------- | ---------------------------- | ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
+| --------------- | ---------------------------- | ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `set` applies to lists that include only scalar elements. These elements must be unique. `map` applies to lists of nested types only. The key values (see `listMapKey`) must be unique in the list. `atomic` can apply to any list. If configured as `atomic`, the entire list is replaced during merge. At any point in time, a single manager owns the list. If `set` or `map`, different managers can manage entries separately. |
| `//+listMapKey` | `x-kubernetes-list-map-keys` | List of field names, e.g. `["port", "protocol"]` | Only applicable when `+listType=map`. A list of field names whose values uniquely identify entries in the list. While there can be multiple keys, `listMapKey` is singular because keys need to be specified individually in the Go type. The key fields must be scalars. |
| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to maps. `atomic` means that the map can only be entirely replaced by a single manager. `granular` means that the map supports separate managers updating individual fields. |
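As a sketch of how the OpenAPI extensions from this table can appear in a CustomResourceDefinition's structural schema (the field names are illustrative):

```yaml
# Excerpt of an openAPIV3Schema for an illustrative custom resource
properties:
  ports:
    type: array
    x-kubernetes-list-type: map
    x-kubernetes-list-map-keys: ["port", "protocol"]
    items:
      type: object
      properties:
        port:
          type: integer
        protocol:
          type: string
  settings:
    type: object
    x-kubernetes-map-type: granular
    additionalProperties:
      type: string
```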