sync pid-limiting multi-tenancy job

pull/44007/head
xin gu 2023-11-20 19:15:38 +08:00
parent 58502a433b
commit 10a8a4abdf
3 changed files with 3 additions and 3 deletions

@@ -181,7 +181,7 @@ Eviction signal value is calculated periodically and does NOT enforce the limit.
PID limiting - per Pod and per Node sets the hard limit.
Once the limit is hit, the workload will start experiencing failures when trying to get a new PID.
It may or may not lead to rescheduling of a Pod,
-depending on how workload reacts on these failures and how liveleness and readiness
+depending on how workload reacts on these failures and how liveness and readiness
probes are configured for the Pod. However, if limits are set correctly,
you can guarantee that other Pods' workloads and system processes will not run out of PIDs
when one Pod is misbehaving.
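
For reference, the kubelet settings this hunk refers to look roughly like the sketch below; the values (and the choice of a percentage for the eviction threshold) are illustrative assumptions, not recommendations. `podPidsLimit` is the per-Pod hard limit, while `pid.available` under `evictionHard` is the periodically evaluated eviction signal mentioned in the hunk header.

```yaml
# Minimal KubeletConfiguration sketch; values are illustrative assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096          # hard limit on the number of PIDs per Pod
systemReserved:
  pid: "1000"               # PIDs reserved for system daemons (node-level budget)
evictionHard:
  pid.available: "10%"      # eviction signal, checked periodically, not a hard cap
```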

@@ -918,7 +918,7 @@ The two options are discussed in more detail in the following sections.
<!--
As previously mentioned, you should consider isolating each workload in its own namespace, even if
you are using dedicated clusters or virtualized control planes. This ensures that each workload
-only has access to its own resources, such as Config Maps and Secrets, and allows you to tailor
+only has access to its own resources, such as ConfigMaps and Secrets, and allows you to tailor
dedicated security policies for each workload. In addition, it is a best practice to give each
namespace names that are unique across your entire fleet (that is, even if they are in separate
clusters), as this gives you the flexibility to switch between dedicated and shared clusters in
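
As a rough illustration of the namespace-per-workload advice in this hunk, the sketch below creates a tenant namespace with a fleet-unique name and a Role scoped to the ConfigMaps and Secrets in that namespace only; the names `team-a-billing-prod` and `workload-config-access` are hypothetical.

```yaml
# Hypothetical tenant namespace with a fleet-unique name.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-billing-prod
---
# Role limiting access to ConfigMaps and Secrets inside that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workload-config-access
  namespace: team-a-billing-prod
rules:
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
```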

@@ -1613,7 +1613,7 @@ the Job status, allowing the Pod to be removed by other controllers or users.
{{< note >}}
<!--
-See [My pod stays terminating](/docs/tasks/debug-application/debug-pods) if you
+See [My pod stays terminating](/docs/tasks/debug/debug-application/debug-pods/) if you
observe that pods from a Job are stuck with the tracking finalizer.
-->
If you observe that some Pods from a Job cannot terminate normally because of the tracking finalizer,
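
For context, a Pod that stays terminating because of the Job tracking finalizer typically carries `batch.kubernetes.io/job-tracking` in `metadata.finalizers`, roughly as in the abridged sketch below; the Pod name and timestamp are made up. Once that finalizer is removed, the Pod can be deleted by other controllers or users.

```yaml
# Abridged view of a Pod stuck terminating; name and timestamp are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pi-5rwcg                          # Pod created by a Job (hypothetical)
  deletionTimestamp: "2023-11-20T11:15:38Z"
  finalizers:
  - batch.kubernetes.io/job-tracking      # the tracking finalizer discussed above
```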