Merge pull request #49004 from michellengnx/merged-main-dev-1.32

Merge main branch into dev-1.32
pull/48159/head
Kubernetes Prow Robot 2024-12-10 15:34:04 +00:00 committed by GitHub
commit b6a7ad7f78
41 changed files with 2628 additions and 253 deletions


@@ -12,7 +12,7 @@ sitemap:
{{% blocks/feature image="flower" id="feature-primary" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](https://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}


@@ -0,0 +1,505 @@
---
layout: blog
title: 'Kubernetes v1.32: {release-name}'
date: 2024-12-11
slug: kubernetes-v1-32-release
author: >
[Kubernetes v1.32 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.32/release-team.md)
draft: true
---
**Editors:** Matteo Bianchi, Edith Puclla, William Rizzo, Ryota Sawada, Rashan Smith
Announcing the release of Kubernetes v1.32: {release-name}!
In line with previous releases, the release of Kubernetes v1.32 introduces new stable, beta, and alpha features.
The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant
support from our community.
This release consists of 44 enhancements in total.
Of those enhancements, 13 have graduated to Stable, 12 are entering Beta, and 19 have entered Alpha.
## Release theme and logo
{{< figure src="/images/blog/2024-12-11-kubernetes-1.32-release/k8s-1.32.png" alt="Kubernetes v1.32 logo"
class="release-logo" >}}
<TODO upload image to static/images/blog/2024-12-11-kubernetes-1.32-release/k8s-1.32.png>
The Kubernetes v1.32 Release Theme is "{release-name}".
Kubernetes v1.32's {release-story}
## Updates to recent key features
### A note on DRA enhancements
In this release, like the previous one, the Kubernetes project continues to propose a number of enhancements to
Dynamic Resource Allocation (DRA), a key component of the Kubernetes resource management system. These enhancements aim
to improve the flexibility and efficiency of resource allocation for workloads that require specialized hardware, such
as GPUs, FPGAs and network adapters.
These features are particularly useful for use-cases such as machine learning or high-performance computing
applications. The core part of DRA, structured parameter support, [has been promoted to beta](#structured-parameter-support).
### Quality of life improvements on nodes and sidecar containers update
[SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) has the following highlights that go beyond
KEPs:
1. The systemd watchdog capability is now used to restart the kubelet when its health check fails, while also limiting
the maximum number of restarts within a given time period. This enhances the reliability of the kubelet. For more
details, see pull request [#127566](https://github.com/kubernetes/kubernetes/pull/127566).
2. In cases when an image pull back-off error is encountered, the message displayed in the Pod status has been improved
to be more human-friendly and to indicate details about why the Pod is in this condition.
When an image pull back-off occurs, the error is appended to the `status.containerStatuses[*].state.waiting.message`
field in the Pod status, with an `ImagePullBackOff` value in the `reason` field. This change provides you with
more context and helps you to identify the root cause of the issue. For more details, see pull request
[#127918](https://github.com/kubernetes/kubernetes/pull/127918).
3. The sidecar containers feature is targeting graduation to Stable in v1.33. To view the remaining work items and
feedback from users, see comments in the issue
[#753](https://github.com/kubernetes/enhancements/issues/753#issuecomment-2350136594).
## Highlights of features graduating to Stable
_This is a selection of some of the improvements that are now stable following the v1.32 release._
### Custom Resource field selectors
Custom resource field selectors allow developers to add field selectors to custom resources, mirroring the functionality
available for built-in Kubernetes objects. This allows for more efficient and precise filtering of custom resources,
promoting better API design practices.
This work was done as a part of [KEP #4358](https://github.com/kubernetes/enhancements/issues/4358), by [SIG API
Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
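As an illustration, a CustomResourceDefinition can mark which fields are selectable. The sketch below uses a hypothetical `Shirt` resource (group, kind, and schema invented for this example) that exposes `.spec.color`, so clients can filter with `kubectl get shirts --field-selector spec.color=blue`:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: shirts.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: shirts
    singular: shirt
    kind: Shirt
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              color:
                type: string
    # Declare .spec.color as usable in field selectors for this custom resource
    selectableFields:
    - jsonPath: .spec.color
```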
### Support to size memory backed volumes
This feature makes it possible to dynamically size memory-backed volumes based on Pod resource limits, improving the
workload's portability and overall node resource utilization.
This work was done as a part of [KEP #1967](https://github.com/kubernetes/enhancements/issues/1967), by [SIG
Node](https://github.com/kubernetes/community/tree/master/sig-node).
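A minimal sketch of a Pod that benefits from this: the memory-backed `emptyDir` volume below can now be sized based on the Pod's memory limits rather than the node's full memory (image and sizes are placeholders):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-backed-volume-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
    resources:
      limits:
        memory: 256Mi
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs volume; sizing can follow the Pod's memory limits
      sizeLimit: 128Mi
```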
### Bound service account token improvement
The inclusion of the node name in the service account token claims allows users to use such information during
authorization and admission (ValidatingAdmissionPolicy).
Furthermore, this improvement keeps service account credentials from being a privilege escalation path for nodes.
This work was done as part of [KEP #4193](https://github.com/kubernetes/enhancements/issues/4193) by [SIG
Auth](https://github.com/kubernetes/community/tree/master/sig-auth).
### Structured authorization configuration
Multiple authorizers can be configured in the API server to allow for structured authorization decisions,
with support for CEL match conditions in webhooks.
This work was done as part of [KEP #3221](https://github.com/kubernetes/enhancements/issues/3221) by [SIG
Auth](https://github.com/kubernetes/community/tree/master/sig-auth).
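As a hedged sketch, an authorization configuration file for the API server might chain a CEL-gated webhook with the built-in authorizers like this (the webhook name, kubeconfig path, and match condition are assumptions for this example):
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
  name: example-policy-agent   # assumed name
  webhook:
    timeout: 3s
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    failurePolicy: NoOpinion
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig   # assumed path
    matchConditions:
    # Only consult the webhook for resource requests (assumed example condition)
    - expression: has(request.resourceAttributes)
- type: Node
  name: node
- type: RBAC
  name: rbac
```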
### Auto remove PVCs created by StatefulSet
PersistentVolumeClaims (PVCs) created by StatefulSets get automatically deleted when no longer needed,
while ensuring data persistence during StatefulSet updates and node maintenance.
This feature simplifies storage management for StatefulSets and reduces the risk of orphaned PVCs.
This work was done as part of [KEP #1847](https://github.com/kubernetes/enhancements/issues/1847) by [SIG
Apps](https://github.com/kubernetes/community/tree/master/sig-apps).
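A sketch of a StatefulSet using the retention policy; the `whenDeleted` and `whenScaled` values below are just one possible combination, and the image is a placeholder:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # remove PVCs created from volumeClaimTemplates when the StatefulSet is deleted
    whenScaled: Retain    # keep PVCs for replicas removed by a scale-down
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.k8s.io/nginx-slim:0.8   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```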
## Highlights of features graduating to Beta
_This is a selection of some of the improvements that are now beta following the v1.32 release._
### Job API managed-by mechanism
The `managedBy` field for Jobs was promoted to beta in the v1.32 release. This feature enables external controllers
(like [Kueue](https://kueue.sigs.k8s.io/)) to manage Job synchronization, offering greater flexibility and integration
with advanced workload management systems.
This work was done as a part of [KEP #4368](https://github.com/kubernetes/enhancements/issues/4368), by [SIG
Apps](https://github.com/kubernetes/community/tree/master/sig-apps).
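A hedged sketch of a Job handed over to an external controller; the controller name is hypothetical (without the field, Jobs default to the built-in `kubernetes.io/job-controller`):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: managed-by-demo
spec:
  managedBy: example.com/custom-job-controller   # hypothetical external controller
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.k8s.io/pause:3.10   # placeholder image
```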
### Only allow anonymous auth for configured endpoints
This feature lets admins specify which endpoints are allowed for anonymous requests. For example, the admin
can choose to only allow anonymous access to health endpoints like `/healthz`, `/livez`, and `/readyz` while
preventing anonymous access to other cluster endpoints or resources, even if a user
misconfigures RBAC.
This work was done as a part of [KEP #4633](https://github.com/kubernetes/enhancements/issues/4633), by [SIG
Auth](https://github.com/kubernetes/community/tree/master/sig-auth).
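A hedged sketch of the corresponding authentication configuration, limiting anonymous requests to the health endpoints (the API version shown is an assumption for v1.32):
```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  # Anonymous requests are only allowed for these paths; everything else is rejected
  conditions:
  - path: /healthz
  - path: /livez
  - path: /readyz
```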
### Per-plugin callback functions for accurate requeueing in kube-scheduler enhancements
This feature improves scheduling throughput by making scheduling retry decisions more efficient through
per-plugin callback functions (QueueingHints). All plugins now provide QueueingHints.
This work was done as a part of [KEP #4247](https://github.com/kubernetes/enhancements/issues/4247), by [SIG
Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling).
### Recover from volume expansion failure
This feature lets users recover from volume expansion failure by retrying with a smaller size. This enhancement ensures
that volume expansion is more resilient and reliable, reducing the risk of data loss or corruption during the process.
This work was done as a part of [KEP #1790](https://github.com/kubernetes/enhancements/issues/1790), by [SIG
Storage](https://github.com/kubernetes/community/tree/master/sig-storage).
### Volume group snapshot
This feature introduces a VolumeGroupSnapshot API, which lets users take a snapshot of multiple volumes together, ensuring data consistency across the volumes.
This work was done as a part of [KEP #3476](https://github.com/kubernetes/enhancements/issues/3476), by [SIG
Storage](https://github.com/kubernetes/community/tree/master/sig-storage).
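A sketch of what such a group snapshot might look like; the API group/version, class name, and labels follow the CSI external-snapshotter CRDs and are assumptions for this example:
```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshot
metadata:
  name: my-group-snapshot
  namespace: demo
spec:
  volumeGroupSnapshotClassName: csi-hostpath-groupsnapclass   # assumed class name
  source:
    selector:
      matchLabels:
        app: my-database   # snapshot all PVCs carrying this label together
```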
### Structured parameter support
The core part of Dynamic Resource Allocation (DRA), the structured parameter support, got promoted to beta.
This allows the kube-scheduler and Cluster Autoscaler to simulate claim allocation directly, without needing a
third-party driver.
These components can now predict whether resource requests can be fulfilled based on the cluster's current state without actually
committing to the allocation. By eliminating the need for a third-party driver to validate or test allocations, this
feature improves planning and decision-making for resource distribution, making the scheduling and scaling processes
more efficient.
This work was done as a part of [KEP #4381](https://github.com/kubernetes/enhancements/issues/4381), by WG Device
Management (a cross-functional team containing [SIG Node](https://github.com/kubernetes/community/tree/master/sig-node),
[SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling) and [SIG
Autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling)).
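For illustration, a minimal ResourceClaim that the scheduler and Cluster Autoscaler can now reason about directly; the device class name is a placeholder for whatever the installed DRA driver publishes:
```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com   # placeholder DeviceClass
```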
### Label and field selector authorization
Label and field selectors can be used in authorization decisions. The node authorizer
automatically takes advantage of this to limit nodes to listing or watching only their own Pods.
Webhook authorizers can be updated to limit requests based on the label or field selector used.
This work was done as part of [KEP #4601](https://github.com/kubernetes/enhancements/issues/4601)
by [SIG Auth](https://github.com/kubernetes/community/tree/master/sig-auth).
## Highlights of new features in Alpha
_This is a selection of key improvements introduced as alpha features in the v1.32 release._
### Asynchronous preemption in the Kubernetes Scheduler
The Kubernetes scheduler has been enhanced with Asynchronous Preemption, a feature that improves scheduling throughput
by handling preemption operations asynchronously. Preemption ensures higher-priority pods get the resources they need by
evicting lower-priority ones, but this process previously involved heavy operations like API calls to delete pods,
slowing down the scheduler. With this enhancement, such tasks are now processed in parallel, allowing the scheduler to
continue scheduling other pods without delays.
This improvement is particularly beneficial in clusters with high Pod churn or frequent scheduling failures, ensuring a
more efficient and resilient scheduling process.
This work was done as a part of KEP [#4832](https://github.com/kubernetes/enhancements/issues/4832)
by [SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling).
### Mutating admission policies using CEL expressions
This feature leverages CEL's object instantiation and JSON Patch strategies, combined with Server Side Apply's merge
algorithms. It simplifies policy definition, reduces mutation conflicts, and enhances admission control performance
while laying a foundation for more robust, extensible policy frameworks in Kubernetes.
The Kubernetes API server now supports Common Expression Language (CEL)-based Mutating Admission Policies, providing a
lightweight, efficient alternative to mutating admission webhooks. With this enhancement, administrators can use CEL to
declare mutations like setting labels, defaulting fields, or injecting sidecars with simple, declarative expressions.
This approach reduces operational complexity, eliminates the need for webhooks, and integrates directly with the
kube-apiserver, offering faster and more reliable in-process mutation handling.
This work was done as a part of [KEP #3962](https://github.com/kubernetes/enhancements/issues/3962) by [SIG API
Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
### Pod-level resource specifications
This enhancement simplifies resource management in Kubernetes by introducing the ability to set resource requests and
limits at the Pod level, creating a shared pool that all containers in the Pod can dynamically use. This is particularly
valuable for workloads with containers that have fluctuating or bursty resource needs, as it minimizes over-provisioning
and improves overall resource efficiency.
By leveraging Linux cgroup settings at the Pod level, Kubernetes ensures that these resource limits are enforced while
enabling tightly coupled containers to collaborate more effectively without hitting artificial constraints. Importantly,
this feature maintains backward compatibility with existing container-level resource settings, allowing users to adopt
it incrementally without disrupting current workflows or existing configurations.
This marks a significant improvement for multi-container pods, as it reduces the operational complexity of managing
resource allocations across containers. It also provides a performance boost for tightly integrated applications, such
as sidecar architectures, where containers share workloads or depend on each other's availability to perform optimally.
This work was done as part of [KEP #2837](https://github.com/kubernetes/enhancements/issues/2837) by [SIG
Node](https://github.com/kubernetes/community/tree/master/sig-node).
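A hedged sketch of a Pod using the new pod-level pool; it assumes the alpha `PodLevelResources` feature gate is enabled, and the images are placeholders:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources-demo
spec:
  # Shared pool that all containers in the Pod can draw from
  resources:
    requests:
      cpu: "1"
      memory: 512Mi
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
  - name: sidecar
    image: registry.k8s.io/pause:3.10
```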
### Allow zero value for sleep action of PreStop hook
This enhancement introduces the ability to set a zero-second sleep duration for the PreStop lifecycle hook in
Kubernetes, offering a more flexible and no-op option for resource validation and customization. Previously, attempting
to define a zero value for the sleep action resulted in validation errors, restricting its use. With this update, users
can configure a zero-second duration as a valid sleep setting, enabling immediate execution and termination behaviors
where needed.
The enhancement is backward-compatible, introduced as an opt-in feature controlled by the
`PodLifecycleSleepActionAllowZero` feature gate. This change is particularly beneficial for scenarios requiring PreStop
hooks for validation or admission webhook processing without requiring an actual sleep duration. By aligning with the
capabilities of the `time.After` Go function, this update simplifies configuration and expands usability for Kubernetes
workloads.
This work was done as part of [KEP #4818](https://github.com/kubernetes/enhancements/issues/4818) by [SIG
Node](https://github.com/kubernetes/community/tree/master/sig-node).
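A minimal sketch of a no-op PreStop hook, assuming the `PodLifecycleSleepActionAllowZero` feature gate is enabled (the image is a placeholder):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zero-sleep-prestop-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
    lifecycle:
      preStop:
        sleep:
          seconds: 0   # accepted as a valid no-op instead of a validation error
```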
### DRA: Standardized network interface data for resource claim status
This enhancement adds a new field that allows drivers to report specific device status data for each allocated object
in a ResourceClaim. It also establishes a standardized way to represent networking device information.
This work was done as a part of
[KEP #4817](https://github.com/kubernetes/enhancements/issues/4817), by
[SIG Network](https://github.com/kubernetes/community/tree/master/sig-network).
### New statusz and flagz endpoints for core components
You can enable two new HTTP endpoints, `/statusz` and `/flagz`, for core components.
These enhance cluster debuggability by providing insight into the versions (for example, the Go version) that a
component is running, along with details about its uptime and the command-line flags it was started with, making
it easier to diagnose both runtime and configuration issues.
This work was done as part of
[KEP #4827](https://github.com/kubernetes/enhancements/issues/4827)
and [KEP #4828](https://github.com/kubernetes/enhancements/issues/4828) by
[SIG Instrumentation](https://github.com/kubernetes/community/tree/master/sig-instrumentation).
### Windows strikes back!
Support for graceful shutdowns of Windows nodes in Kubernetes clusters has been added.
Before this release, Kubernetes provided graceful node shutdown functionality for Linux nodes
but lacked equivalent support for Windows. This enhancement enables the kubelet on Windows nodes to handle system
shutdown events properly. This ensures that Pods running on Windows nodes are gracefully terminated,
allowing workloads to be rescheduled without disruption. This improvement enhances the reliability and stability
of clusters that include Windows nodes, especially during a planned maintenance or any system updates.
Moreover, CPU and memory affinity support has been added for Windows nodes, with improvements
to the CPU Manager, Memory Manager and Topology Manager.
This work was done respectively as part of [KEP #4802](https://github.com/kubernetes/enhancements/issues/4802)
and [KEP #4885](https://github.com/kubernetes/enhancements/issues/4885) by [SIG
Windows](https://github.com/kubernetes/community/tree/master/sig-windows).
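A hedged sketch of the kubelet configuration involved on a Windows node; the feature gate name is an assumption based on KEP #4802, and the grace periods are placeholders:
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  WindowsGracefulNodeShutdown: true   # assumed alpha gate name
shutdownGracePeriod: 30s              # total time allowed for Pod termination on shutdown
shutdownGracePeriodCriticalPods: 10s  # portion of that time reserved for critical Pods
```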
## Graduations, deprecations, and removals in 1.32
### Graduations to Stable
This lists all the features that graduated to stable (also known as _general availability_). For a full list of updates
including new features and graduations from alpha to beta, see the release notes.
This release includes a total of 13 enhancements promoted to Stable:
- [Structured Authorization Configuration](https://github.com/kubernetes/enhancements/issues/3221)
- [Bound service account token improvements](https://github.com/kubernetes/enhancements/issues/4193)
- [Custom Resource Field Selectors](https://github.com/kubernetes/enhancements/issues/4358)
- [Retry Generate Name](https://github.com/kubernetes/enhancements/issues/4420)
- [Make Kubernetes aware of the LoadBalancer behaviour](https://github.com/kubernetes/enhancements/issues/1860)
- [Field `status.hostIPs` added for Pod](https://github.com/kubernetes/enhancements/issues/2681)
- [Custom profile in kubectl debug](https://github.com/kubernetes/enhancements/issues/4292)
- [Memory Manager](https://github.com/kubernetes/enhancements/issues/1769)
- [Support to size memory backed volumes](https://github.com/kubernetes/enhancements/issues/1967)
- [Improved multi-numa alignment in Topology Manager](https://github.com/kubernetes/enhancements/issues/3545)
- [Add job creation timestamp to job annotations](https://github.com/kubernetes/enhancements/issues/4026)
- [Add Pod Index Label for StatefulSets and Indexed Jobs](https://github.com/kubernetes/enhancements/issues/4017)
- [Auto remove PVCs created by StatefulSet](https://github.com/kubernetes/enhancements/issues/1847)
### Deprecations and removals
As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones for the project's
overall health.
See the Kubernetes [deprecation and removal policy](/docs/reference/using-api/deprecation-policy/) for more details on
this process.
#### Withdrawal of the old DRA implementation
The enhancement [#3063](https://github.com/kubernetes/enhancements/issues/3063) introduced Dynamic Resource Allocation
(DRA) in Kubernetes 1.26.
However, in Kubernetes v1.32, this approach to DRA will be significantly changed. Code related to the original
implementation will be removed, leaving KEP [#4381](https://github.com/kubernetes/enhancements/issues/4381) as the "new"
base functionality.
The decision to change the existing approach originated from its incompatibility with cluster autoscaling as resource
availability was non-transparent, complicating decision-making for both Cluster Autoscaler and controllers.
The newly added Structured Parameter model substitutes the functionality.
This removal will allow Kubernetes to handle new hardware requirements and resource claims more predictably, bypassing
the complexities of back and forth API calls to the kube-apiserver.
See the enhancement issue [#3063](https://github.com/kubernetes/enhancements/issues/3063) to find out more.
#### Deprecation of gitRepo volume types
The [gitRepo](https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo) volume type is deprecated and will be
removed in a future release. The deprecation follows the security advisory for
[CVE-2024-10220](https://nvd.nist.gov/vuln/detail/CVE-2024-10220) (arbitrary command execution through the gitRepo volume),
which was reported publicly in [this issue](https://github.com/kubernetes/kubernetes/issues/128885).
#### API removals
There is one API removal in [Kubernetes v1.32](/docs/reference/using-api/deprecation-guide/#v1-32):
* The `flowcontrol.apiserver.k8s.io/v1beta3` API version of FlowSchema and PriorityLevelConfiguration has been removed.
To prepare for this, you can edit your existing manifests and rewrite client software to use the
`flowcontrol.apiserver.k8s.io/v1` API version, available since v1.29.
All existing persisted objects are accessible via the new API. Notable changes in `flowcontrol.apiserver.k8s.io/v1beta3`
include that the PriorityLevelConfiguration `spec.limited.nominalConcurrencyShares` field only defaults to 30 when
unspecified, and an explicit value of 0 is not changed to 30.
For more information, refer to the [API deprecation guide](/docs/reference/using-api/deprecation-guide/#v1-32).
### Release notes and upgrade actions required
Check out the full details of the Kubernetes v1.32 release in our [release
notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.32.md).
## Availability
Kubernetes v1.32 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.32.0) or
on the [Kubernetes download page](/releases/download/).
To get started with Kubernetes, check out these [interactive tutorials](/docs/tutorials/) or run local Kubernetes
clusters using [minikube](https://minikube.sigs.k8s.io/). You can also easily install v1.32 using
[kubeadm](/docs/setup/independent/create-cluster-kubeadm/).
## Release team
Kubernetes is only possible with the support, commitment, and hard work of its community.
Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up
the Kubernetes releases you rely on.
This requires the specialized skills of people from all corners of our community, from the code itself to its
documentation and project management.
We would like to thank the entire [release
team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.32/release-team.md) for the hours spent
hard at work to deliver the Kubernetes v1.32 release to our community.
The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several
release cycles.
A very special thanks goes out to our release lead, Frederico Muñoz, for leading the release team so gracefully and handling
any matter with the utmost care, making sure this release was executed smoothly and efficiently.
Last but not least, a big thanks goes to all the release members - leads and shadows alike - and to the following SIGs
for the terrific work and outcome achieved during these 14 weeks of release work:
- [SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) - for the fundamental support in docs and
blog reviews and continuous collaboration with release Comms and Docs;
- [SIG k8s Infra](https://github.com/kubernetes/community/tree/master/sig-k8s-infra) and [SIG
Testing](https://github.com/kubernetes/community/tree/master/sig-testing) - for the outstanding work in keeping the
testing framework in check, along with all the infra components necessary;
- [SIG Release](https://github.com/kubernetes/community/tree/master/sig-release) and
all the release managers - for the incredible support provided throughout the orchestration of the entire release,
addressing even the most challenging issues in a graceful and timely manner.
## Project velocity
The CNCF K8s [DevStats
project](https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&var-period=m&var-repogroup_name=All)
aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This
includes everything from individual contributions to the number of companies that are contributing and is an
illustration of the depth and breadth of effort that goes into evolving this ecosystem.
In the v1.32 release cycle, which ran for 14 weeks (September 9th to December 11th), we saw contributions to Kubernetes
from as many as 125 different companies and 559 individuals at the time of writing.
In the whole Cloud Native ecosystem, the figure goes up to 433 companies counting 2441 total contributors. This is a
7% increase in overall contributions compared to the [previous
release](https://kubernetes.io/blog/2024/08/13/kubernetes-v1-31-release/#project-velocity) cycle, along with a 14%
increase in the number of companies involved, showcasing the strong interest and community behind Cloud Native projects.
Source for this data:
- [Companies contributing to
Kubernetes](https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&from=1725832800000&to=1733961599000&var-period=d28&var-repogroup_name=Kubernetes&var-repo_name=kubernetes%2Fkubernetes)
- [Overall ecosystem
contributions](https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&from=1725832800000&to=1733961599000&var-period=d28&var-repogroup_name=All&var-repo_name=kubernetes%2Fkubernetes)
By contribution we mean when someone makes a commit, creates an issue or PR, reviews a PR
(including blogs and documentation), or comments on issues and PRs.
If you are interested in contributing visit [Getting Started](https://www.kubernetes.dev/docs/guide/#getting-started) on
our contributor website.
[Check out
DevStats](https://k8s.devstats.cncf.io/d/11/companies-contributing-in-repository-groups?orgId=1&var-period=m&var-repogroup_name=All)
to learn more about the overall velocity of the Kubernetes project and community.
## Event updates
Explore the upcoming Kubernetes and cloud-native events from March to June 2025, featuring KubeCon and KCDs. Stay informed
and engage with the Kubernetes community.
**March 2025**
- [**KCD - Kubernetes Community Days: Beijing, China**](https://www.cncf.io/kcds/): In March | Beijing, China
- [**KCD - Kubernetes Community Days: Guadalajara, Mexico**](https://www.cncf.io/kcds/): March 16, 2025 | Guadalajara,
Mexico
- [**KCD - Kubernetes Community Days: Rio de Janeiro, Brazil**](https://www.cncf.io/kcds/): March 22, 2025 | Rio de
Janeiro, Brazil
**April 2025**
- [**KubeCon + CloudNativeCon Europe 2025**](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe): April
1-4, 2025 | London, United Kingdom
- [**KCD - Kubernetes Community Days: Budapest, Hungary**](https://www.cncf.io/kcds/): April 23, 2025 | Budapest,
Hungary
- [**KCD - Kubernetes Community Days: Chennai, India**](https://www.cncf.io/kcds/): April 26, 2025 | Chennai, India
- [**KCD - Kubernetes Community Days: Auckland, New Zealand**](https://www.cncf.io/kcds/): April 28, 2025 | Auckland,
New Zealand
**May 2025**
- [**KCD - Kubernetes Community Days: Helsinki, Finland**](https://www.cncf.io/kcds/): May 6, 2025 | Helsinki, Finland
- [**KCD - Kubernetes Community Days: San Francisco, USA**](https://www.cncf.io/kcds/): May 8, 2025 | San Francisco, USA
- [**KCD - Kubernetes Community Days: Austin,
USA**](https://community.cncf.io/events/details/cncf-kcd-texas-presents-kcd-texas-austin-2025/): May 15, 2025 | Austin,
USA
- [**KCD - Kubernetes Community Days: Seoul, South Korea**](https://www.cncf.io/kcds/): May 22, 2025 | Seoul, South
Korea
- [**KCD - Kubernetes Community Days: Istanbul, Turkey**](https://www.cncf.io/kcds/): May 23, 2025 | Istanbul, Turkey
- [**KCD - Kubernetes Community Days: Heredia, Costa Rica**](https://www.cncf.io/kcds/): May 31, 2025 | Heredia, Costa
Rica
- [**KCD - Kubernetes Community Days: New York, USA**](https://www.cncf.io/kcds/): In May | New York, USA
**June 2025**
- [**KCD - Kubernetes Community Days: Bratislava, Slovakia**](https://www.cncf.io/kcds/): June 5, 2025 | Bratislava,
Slovakia
- [**KCD - Kubernetes Community Days: Bangalore, India**](https://www.cncf.io/kcds/): June 6, 2025 | Bangalore, India
- [**KubeCon + CloudNativeCon China 2025**](https://events.linuxfoundation.org/kubecon-cloudnativecon-china/): June
10-11, 2025 | Hong Kong
- [**KCD - Kubernetes Community Days: Antigua Guatemala, Guatemala**](https://www.cncf.io/kcds/): June 14, 2025 |
Antigua Guatemala, Guatemala
- [**KubeCon + CloudNativeCon Japan 2025**](https://events.linuxfoundation.org/kubecon-cloudnativecon-japan): June
16-17, 2025 | Tokyo, Japan
- [**KCD - Kubernetes Community Days: Nigeria, Africa**](https://www.cncf.io/kcds/): June 19, 2025 | Nigeria, Africa
## Upcoming release webinar
Join members of the Kubernetes v1.32 release team on **Thursday, January 9th 2025 at 5:00 PM (UTC)**, to learn about the
release highlights of this release, as well as deprecations and removals to help plan for upgrades.
For more information and registration, visit the [event
page](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kubernetes-132-release/)
on the CNCF Online Programs site.
## Get involved
The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest
Groups](https://www.kubernetes.dev/community/community-groups/#special-interest-groups) (SIGs) that align with your
interests.
Have something you'd like to broadcast to the Kubernetes community?
Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication),
and through the channels below.
Thank you for your continued feedback and support.
- Follow us on Bluesky [@Kubernetes.io](https://bsky.app/profile/did:plc:kyg4uikmq7lzpb76ugvxa6ul) for latest updates
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
- Join the community on [Slack](http://slack.k8s.io/)
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
- Share your Kubernetes
[story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
- Read more about what's happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
- Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)


@@ -0,0 +1,92 @@
---
layout: blog
title: 'Kubernetes v1.32 Adds A New CPU Manager Static Policy Option For Strict CPU Reservation'
draft: true
date: 2024-12-11
slug: cpumanager-strict-cpu-reservation
author: >
[Jing Zhang](https://github.com/jingczhang) (Nokia)
---
In Kubernetes v1.32, after years of community discussion, we are excited to introduce a
`strict-cpu-reservation` option for the [CPU Manager static policy](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options).
This feature is currently in alpha, with the associated policy hidden by default. You can only use the
policy if you explicitly enable the alpha behavior in your cluster.
## Understanding the feature
The CPU Manager static policy is used to reduce latency or improve performance. The `reservedSystemCPUs` option defines an explicit CPU set for OS system daemons and Kubernetes system daemons. This option is designed for Telco/NFV type use cases where uncontrolled interrupts or timers may impact workload performance. You can use this option to define the explicit cpuset for the system and Kubernetes daemons as well as for interrupts and timers, so that the remaining CPUs on the system can be used exclusively for workloads, with less impact from uncontrolled interrupts or timers. More details of this parameter can be found on the [Explicitly Reserved CPU List](/docs/tasks/administer-cluster/reserve-compute-resources/#explicitly-reserved-cpu-list) page.
If you want to protect your system daemons and interrupt processing, the obvious way is to use the `reservedSystemCPUs` option.
However, until the Kubernetes v1.32 release, this isolation was only implemented for guaranteed
pods that made requests for a whole number of CPUs. At pod admission time, the kubelet only
compares the CPU _requests_ against the allocatable CPUs. In Kubernetes, limits can be higher than
the requests; the previous implementation allowed burstable and best-effort pods to use up
the capacity of `reservedSystemCPUs`, which could then starve host OS services of CPU - and we
know that people saw this in real life deployments.
The existing behavior also made benchmarking (for both infrastructure and workloads) results inaccurate.
When this new `strict-cpu-reservation` policy option is enabled, the CPU Manager static policy will not allow any workload to use the reserved system CPU cores.
## Enabling the feature
To enable this feature, you need to turn on both the `CPUManagerPolicyAlphaOptions` feature gate and the `strict-cpu-reservation` policy option. You also need to remove the `/var/lib/kubelet/cpu_manager_state` file if it exists, and then restart the kubelet.
With the following kubelet configuration:
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  ...
  CPUManagerPolicyOptions: true
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  strict-cpu-reservation: "true"
reservedSystemCPUs: "0,32,1,33,16,48"
...
```
When `strict-cpu-reservation` is not set or set to false:
```console
# cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"0-63","checksum":1058907510}
```
When `strict-cpu-reservation` is set to true:
```console
# cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"2-15,17-31,34-47,49-63","checksum":4141502832}
```
## Monitoring the feature
You can monitor the feature impact by checking the following CPU Manager counters:
- `cpu_manager_shared_pool_size_millicores`: reports the shared pool size, in millicores (for example, 13500m)
- `cpu_manager_exclusive_cpu_allocation_count`: reports the number of exclusively allocated CPUs, counting full cores (for example, 16)
Your best-effort workloads may starve if the `cpu_manager_shared_pool_size_millicores` count is zero for a prolonged time.
We believe any pod that is required for operational purposes, such as a log forwarder, should not run as best-effort; however, you can review and adjust the amount of CPU cores reserved as needed.
## Conclusion
Strict CPU reservation is critical for Telco/NFV use cases. It is also a prerequisite for enabling the all-in-one type of deployments where workloads are placed on nodes serving combined control+worker+storage roles.
We encourage you to start using this feature, and we look forward to your feedback.
## Further reading
Please check out the [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/)
task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers.
## Getting involved
This feature is driven by [SIG Node](https://github.com/Kubernetes/community/blob/master/sig-node/README.md). If you are interested in helping develop this feature, sharing feedback, or participating in any other ongoing SIG Node projects, please attend the SIG Node meeting for more details.


@@ -0,0 +1,131 @@
---
layout: blog
title: "Kubernetes v1.32: QueueingHint Brings a New Possibility to Optimize Pod Scheduling"
date: 2024-12-12
slug: scheduler-queueinghint
draft: true
author: >
[Kensei Nakada](https://github.com/sanposhiho) (Tetrate.io)
---
The Kubernetes [scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/) is the core
component that selects the nodes on which new Pods run. The scheduler processes
these new Pods **one by one**. Therefore, the larger your clusters, the more important
the throughput of the scheduler becomes.
Over the years, the Kubernetes project (and SIG Scheduling in particular) has improved the throughput
of the scheduler in multiple enhancements. This blog post describes a major improvement to the
scheduler in Kubernetes v1.32: a
[scheduling context element](/docs/concepts/scheduling-eviction/scheduling-framework/#extension-points)
named _QueueingHint_. This page provides background knowledge of the scheduler and explains how
QueueingHint improves scheduling throughput.
## Scheduling queue
The scheduler stores all unscheduled Pods in an internal component called the _scheduling queue_.
The scheduling queue consists of the following data structures:
- **ActiveQ**: holds newly created Pods or Pods that are ready to be retried for scheduling.
- **BackoffQ**: holds Pods that are ready to be retried but are waiting for a backoff period to end. The
backoff period depends on the number of unsuccessful scheduling attempts performed by the scheduler on that Pod.
- **Unschedulable Pod Pool**: holds Pods that the scheduler won't attempt to schedule for one of the
following reasons:
- The scheduler previously attempted and was unable to schedule the Pods. Since that attempt, the cluster
hasn't changed in a way that could make those Pods schedulable.
- The Pods are blocked from entering the scheduling cycle by PreEnqueue plugins;
for example, they have a [scheduling gate](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/#configuring-pod-schedulinggates)
and are blocked by the scheduling gate plugin (see the sketch after this list).
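For example, a Pod with a scheduling gate stays out of scheduling cycles until the gate is removed; a minimal sketch (the gate name is hypothetical):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: example.com/wait-for-quota   # remove this gate to let scheduling proceed
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10   # placeholder image
```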
## Scheduling framework and plugins
The Kubernetes scheduler is implemented following the Kubernetes
[scheduling framework](/docs/concepts/scheduling-eviction/scheduling-framework/).
All scheduling features are implemented as plugins
(for example, [Pod affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
is implemented in the `InterPodAffinity` plugin).
The scheduler processes pending Pods in phases called _cycles_ as follows:
1. **Scheduling cycle**: the scheduler takes pending Pods from the activeQ component of the scheduling
queue _one by one_. For each Pod, the scheduler runs the filtering/scoring logic from every scheduling plugin. The
scheduler then decides on the best node for the Pod, or decides that the Pod can't be scheduled at that time.
If the scheduler decides that a Pod can't be scheduled, that Pod enters the Unschedulable Pod Pool
component of the scheduling queue. However, if the scheduler decides to place the Pod on a node,
the Pod goes to the binding cycle.
1. **Binding cycle**: the scheduler communicates the node placement decision to the Kubernetes API
server. This operation binds the Pod to the selected node.
Aside from some exceptions, most unscheduled Pods enter the unschedulable pod pool after each scheduling
cycle. The Unschedulable Pod Pool component is crucial because of how the scheduling cycle processes Pods one by one. If the scheduler had to constantly retry placing unschedulable Pods, instead of offloading those
Pods to the Unschedulable Pod Pool, multiple scheduling cycles would be wasted on those Pods.
## Improvements to retrying Pod scheduling with QueueingHints
Unschedulable Pods only move back into the ActiveQ or BackoffQ components of the scheduling
queue if changes in the cluster might allow the scheduler to place those Pods on nodes.
Prior to v1.32, each plugin registered, via `EnqueueExtensions` (`EventsToRegister`), which cluster changes could solve its
failures: an object creation, update, or deletion in the cluster (called _cluster events_). The scheduling queue then
retried a Pod when an event registered by a plugin that had rejected the Pod in a previous scheduling cycle occurred.
Additionally, there was an internal feature called `preCheck`, which helped filter events further for efficiency,
based on Kubernetes core scheduling constraints; for example, `preCheck` could filter out node-related events when the node status is `NotReady`.
However, these approaches had two issues:
- Requeueing based on registered events alone was too broad and could lead to scheduling retries for no reason.
A newly scheduled Pod _might_ solve an `InterPodAffinity` failure, but not all of them do:
for example, if a new Pod is created without a label matching the `InterPodAffinity` of the unschedulable Pod, that Pod still wouldn't be schedulable.
- `preCheck` relied on the logic of in-tree plugins and was not extensible to custom plugins,
as described in issue [#110175](https://github.com/kubernetes/kubernetes/issues/110175).
This is where QueueingHints come into play:
a QueueingHint subscribes to a particular kind of cluster event and decides whether each incoming event could make the Pod schedulable.
For example, consider a Pod named `pod-a` that has a required Pod affinity. `pod-a` was rejected in
the scheduling cycle by the `InterPodAffinity` plugin because no node had an existing Pod that matched
the Pod affinity specification for `pod-a`.
{{< figure src="queueinghint1.svg" alt="A diagram showing the scheduling queue and pod-a rejected by InterPodAffinity plugin" caption="A diagram showing the scheduling queue and pod-a rejected by InterPodAffinity plugin" >}}
`pod-a` moves into the Unschedulable Pod Pool. The scheduling queue records which plugin caused
the scheduling failure for the Pod. For `pod-a`, the scheduling queue records that the `InterPodAffinity`
plugin rejected the Pod.
`pod-a` will never be schedulable until the InterPodAffinity failure is resolved.
There are some scenarios in which the failure could be resolved; one example is when an existing running Pod gets a label update and starts matching the Pod affinity.
For this scenario, the `InterPodAffinity` plugin's `QueueingHint` callback function checks every Pod label update that occurs in the cluster.
Then, if a Pod gets a label update that matches the Pod affinity requirement of `pod-a`, the `InterPodAffinity`
plugin's `QueueingHint` prompts the scheduling queue to move `pod-a` back into the ActiveQ or
the BackoffQ component.
{{< figure src="queueinghint2.svg" alt="A diagram showing the scheduling queue and pod-a being moved by InterPodAffinity QueueingHint" caption="A diagram showing the scheduling queue and pod-a being moved by InterPodAffinity QueueingHint" >}}
## QueueingHint's history and what's new in v1.32
At SIG Scheduling, we have been working on the development of QueueingHint since
Kubernetes v1.28.
While QueueingHint isn't user-facing, we implemented the `SchedulerQueueingHints` feature gate as a
safety measure when we originally added this feature. In v1.28, we implemented QueueingHints with a
few in-tree plugins experimentally, and enabled the feature gate by default.
However, users reported a memory leak, and consequently we disabled the feature gate in a
patch release of v1.28. From v1.28 until v1.31, we kept working on the QueueingHint implementation
within the rest of the in-tree plugins and fixing bugs.
In v1.32, we enabled this feature by default again. We finished implementing QueueingHints
in all plugins and also identified the cause of the memory leak!
We thank all the contributors who participated in the development of this feature and those who reported and investigated the earlier issues.
## Getting involved
These features are managed by Kubernetes [SIG Scheduling](https://github.com/kubernetes/community/tree/master/sig-scheduling).
Please join us and share your feedback.
## How can I learn more?
- [KEP-4247: Per-plugin callback functions for efficient requeueing in the scheduling queue](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4247-queueinghint/README.md)


@@ -486,6 +486,7 @@ Here are some examples of device plugin implementations:
* [Akri](https://github.com/project-akri/akri), which lets you easily expose heterogeneous leaf devices (such as IP cameras and USB devices).
* The [AMD GPU device plugin](https://github.com/ROCm/k8s-device-plugin)
* The [generic device plugin](https://github.com/squat/generic-device-plugin) for generic Linux devices and USB devices
* [HAMi](https://github.com/Project-HAMi/HAMi), heterogeneous AI computing virtualization middleware (for example, NVIDIA, Cambricon, Hygon, Iluvatar, MThreads, Ascend, Metax)
* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for
Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices
* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for


@@ -19,13 +19,14 @@ Resource quotas are a tool for administrators to address this concern.
<!-- body -->
A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
aggregate resource consumption per namespace. It can limit the quantity of objects that can
be created in a namespace by type, as well as the total amount of compute resources that may
be consumed by resources in that namespace.
Resource quotas work like this:
- Different teams work in different namespaces. This can be enforced with
[RBAC](/docs/reference/access-authn-authz/rbac/).
- The administrator creates one ResourceQuota for each namespace.
@@ -36,22 +37,28 @@ Resource quotas work like this:
status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated.
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
for an example of how to avoid this problem.
{{< note >}}
- For `cpu` and `memory` resources, ResourceQuotas enforce that **every**
  (new) pod in that namespace sets a limit for that resource.
  If you enforce a resource quota in a namespace for either `cpu` or `memory`,
  you, and other clients, **must** specify either `requests` or `limits` for that resource,
  for every new Pod you submit. If you don't, the control plane may reject admission
  for that Pod.
- For other resources: ResourceQuota works and will ignore pods in the namespace without
  setting a limit or request for that resource. It means that you can create a new pod
  without limit/request for ephemeral storage if the resource quota limits the ephemeral
  storage of this namespace.
You can use a [LimitRange](/docs/concepts/policy/limit-range/) to automatically set
a default request for these resources.
{{< /note >}}
The name of a ResourceQuota object must be a valid
@@ -61,17 +68,17 @@ Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 GiB and 10 cores,
let B use 10GiB and 4 cores, and hold 2GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
use any amount.
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already created resources.
## Enabling Resource Quota
ResourceQuota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
`--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
@@ -88,7 +95,7 @@ that can be requested in a given namespace.
The following resource types are supported:
| Resource Name | Description |
| ------------- | ----------- |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
@@ -104,31 +111,31 @@ In addition to the resources mentioned above, in release 1.10, quota support for
As overcommit is not allowed for extended resources, it makes no sense to specify both `requests`
and `limits` for the same extended resource in a quota. So for extended resources, only quota items
with prefix `requests.` are allowed.
Take the GPU resource as an example: if the resource name is `nvidia.com/gpu` and you want to
limit the total number of GPUs requested in a namespace to 4, you can define a quota as follows:
* `requests.nvidia.com/gpu: 4`
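A minimal ResourceQuota manifest expressing this limit (the namespace name is an example):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: gpu-workloads   # example namespace
spec:
  hard:
    requests.nvidia.com/gpu: "4"
```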
See [Viewing and Setting Quotas](#viewing-and-setting-quotas) for more details.
## Storage Resource Quota
You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/)
that can be requested in a given namespace.
In addition, you can limit consumption of storage resources based on associated storage-class.
| Resource Name | Description |
| ------------- | ----------- |
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the `<storage-class-name>`, the sum of storage requests cannot exceed this value. |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the `<storage-class-name>`, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
For example, if you want to quota storage with `gold` StorageClass separate from
a `bronze` StorageClass, you can define a quota as follows:
* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
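A minimal ResourceQuota manifest combining both limits (object and namespace names are examples):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storageclass-quota
  namespace: example
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
```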
@@ -136,7 +143,7 @@ define a quota as follows:
In release 1.8, quota support for local ephemeral storage is added as an alpha feature:
| Resource Name | Description |
| ------------- | ----------- |
| `requests.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage requests cannot exceed this value. |
| `limits.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value. |
| `ephemeral-storage` | Same as `requests.ephemeral-storage`. |
@@ -169,15 +176,16 @@ Here is an example set of resources users may want to put under object count quo
* `count/cronjobs.batch`
If you define a quota this way, it applies to Kubernetes' APIs that are part of the API server, and
to any custom resources backed by a CustomResourceDefinition. If you use
[API aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) to
add additional, custom APIs that are not defined as CustomResourceDefinitions, the core Kubernetes
control plane does not enforce quota for the aggregated API. The extension API server is expected to
provide quota enforcement if that's appropriate for the custom API.
For example, to create a quota on a `widgets` custom resource in the `example.com` API group, use `count/widgets.example.com`.
When using such a resource quota (nearly for all object kinds), an object is charged
against the quota if the object kind exists (is defined) in the control plane.
These types of quotas are useful to protect against exhaustion of storage resources. For example, you may
want to limit the number of Secrets in a server given their large size. Too many Secrets in a cluster can
actually prevent servers and controllers from starting. You can set a quota for Jobs to protect against
a poorly configured CronJob. CronJobs that create too many Jobs in a namespace can lead to a denial of service.
@@ -185,17 +193,17 @@ a poorly configured CronJob. CronJobs that create too many Jobs in a namespace c
There is another syntax only to set the same type of quota for certain resources.
The following types are supported:
| Resource Name | Description |
| ------------- | ----------- |
| `configmaps` | The total number of ConfigMaps that can exist in the namespace. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `pods` | The total number of Pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `.status.phase in (Failed, Succeeded)` is true. |
| `replicationcontrollers` | The total number of ReplicationControllers that can exist in the namespace. |
| `resourcequotas` | The total number of ResourceQuotas that can exist in the namespace. |
| `services` | The total number of Services that can exist in the namespace. |
| `services.loadbalancers` | The total number of Services of type `LoadBalancer` that can exist in the namespace. |
| `services.nodeports` | The total number of `NodePorts` allocated to Services of type `NodePort` or `LoadBalancer` that can exist in the namespace. |
| `secrets` | The total number of Secrets that can exist in the namespace. |
For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace that are not terminal. You might want to set a `pods`
@ -248,7 +256,7 @@ The `scopeSelector` supports the following values in the `operator` field:
* `DoesNotExist`
When using one of the following values as the `scopeName` when defining the
`scopeSelector`, the `operator` must be `Exists`.
`scopeSelector`, the `operator` must be `Exists`.
* `Terminating`
* `NotTerminating`
@ -470,10 +478,10 @@ have pods with affinity terms that cross namespaces. Specifically, it controls w
to set `namespaces` or `namespaceSelector` fields in pod affinity terms.
Preventing users from using cross-namespace affinity terms might be desired since a pod
with anti-affinity constraints can block pods from all other namespaces
from getting scheduled in a failure domain.
with anti-affinity constraints can block pods from all other namespaces
from getting scheduled in a failure domain.
Using this scope operators can prevent certain namespaces (`foo-ns` in the example below)
Using this scope operators can prevent certain namespaces (`foo-ns` in the example below)
from having pods that use cross-namespace pod affinity by creating a resource quota object in
that namespace with `CrossNamespacePodAffinity` scope and hard limit of 0:
@ -492,9 +500,9 @@ spec:
operator: Exists
```
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
only allow it for specific namespaces, they could configure `CrossNamespacePodAffinity`
as a limited resource by setting the kube-apiserver flag --admission-control-config-file
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
only allow it for specific namespaces, they could configure `CrossNamespacePodAffinity`
as a limited resource by setting the kube-apiserver flag `--admission-control-config-file`
to the path of the following configuration file:
```yaml
@ -513,7 +521,7 @@ plugins:
```
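A minimal sketch of such a configuration file, limiting pods that use the `CrossNamespacePodAffinity` scope, could look like this (shown for illustration only):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: ResourceQuotaConfiguration
    limitedResources:
    - resource: pods
      matchScopes:
      - scopeName: CrossNamespacePodAffinity
        operator: Exists
```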
With the above configuration, pods can use `namespaces` and `namespaceSelector` in pod affinity only
if the namespace where they are created has a resource quota object with
if the namespace where they are created has a resource quota object with
`CrossNamespacePodAffinity` scope and a hard limit greater than or equal to the number of pods using those fields.
## Requests compared to Limits {#requests-vs-limits}
@ -522,12 +530,12 @@ When allocating compute resources, each container may specify a request and a li
The quota can be configured to quota either value.
If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
then it requires that every incoming container specifies an explicit limit for those resources.
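A brief sketch of a quota that forces both requests and limits to be declared (the name and quantities are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # illustrative name
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```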
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
kubectl supports creating, updating, and viewing quotas:
```shell
kubectl create namespace myspace
@ -619,7 +627,7 @@ services 0 10
services.loadbalancers 0 2
```
Kubectl also supports object count quota for all standard namespaced resources
kubectl also supports object count quota for all standard namespaced resources
using the syntax `count/<resource>.<group>`:
```shell
@ -652,7 +660,7 @@ count/secrets 1 4
## Quota and Cluster Capacity
ResourceQuotas are independent of the cluster capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
Sometimes more complex policies may be desired, such as:
@ -671,7 +679,7 @@ restrictions around nodes: pods from several namespaces may run on the same node
## Limit Priority Class consumption by default
It may be desired that pods at a particular priority, eg. "cluster-services",
It may be desired that pods at a particular priority, such as "cluster-services",
should be allowed in a namespace, if and only if, a matching quota object exists.
With this mechanism, operators are able to restrict usage of certain high
@ -711,9 +719,9 @@ resourcequota/pods-cluster-services created
In this case, a pod creation will be allowed if:
1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
in the `kube-system` namespace, and it has passed the resource quota check.
A Pod creation request is rejected if its `priorityClassName` is set to `cluster-services`
@ -721,7 +729,8 @@ and it is to be created in a namespace other than `kube-system`.
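For orientation, a quota scoped to the `cluster-services` priority class might look roughly like the following sketch (field values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-cluster-services
spec:
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["cluster-services"]
```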
## {{% heading "whatsnext" %}}
- See [ResourceQuota design doc](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md) for more information.
- See [ResourceQuota design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md)
for more information.
- See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
- Read [Quota support for priority class design doc](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
- Read [Quota support for priority class design document](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765).

View File

@ -6,41 +6,62 @@ reviewers:
- erictune
- janetkuo
- thockin
title: Admission Controllers Reference
linkTitle: Admission Controllers
title: Admission Control in Kubernetes
linkTitle: Admission Control
content_type: concept
weight: 40
---
<!-- overview -->
This page provides an overview of Admission Controllers.
This page provides an overview of _admission controllers_.
An admission controller is a piece of code that intercepts requests to the
Kubernetes API server prior to persistence of the resource, but after the request
is authenticated and authorized.
Several important features of Kubernetes require an admission controller to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission controllers is an incomplete server that will not
support all the features you expect.
<!-- body -->
## What are they?
An _admission controller_ is a piece of code that intercepts requests to the
Kubernetes API server prior to persistence of the object, but after the request
is authenticated and authorized.
Admission controllers are code within the Kubernetes
{{< glossary_tooltip term_id="kube-apiserver" text="API server" >}} that check the
data arriving in a request to modify a resource.
Admission controllers may be _validating_, _mutating_, or both. Mutating
controllers may modify objects related to the requests they admit; validating controllers may not.
Admission controllers apply to requests that create, delete, or modify objects.
Admission controllers can also block custom verbs, such as a request to connect to a
pod via an API server proxy. Admission controllers do _not_ (and cannot) block requests
to read (**get**, **watch** or **list**) objects, because reads bypass the admission
control layer.
Admission controllers limit requests to create, delete, modify objects. Admission
controllers can also block custom verbs, such as a request connect to a Pod via
an API server proxy. Admission controllers do _not_ (and cannot) block requests
to read (**get**, **watch** or **list**) objects.
Admission control mechanisms may be _validating_, _mutating_, or both. Mutating
controllers may modify the data for the resource being modified; validating controllers may not.
The admission controllers in Kubernetes {{< skew currentVersion >}} consist of the
[list](#what-does-each-admission-controller-do) below, are compiled into the
`kube-apiserver` binary, and may only be configured by the cluster
administrator. In that list, there are two special controllers:
MutatingAdmissionWebhook and ValidatingAdmissionWebhook. These execute the
mutating and validating (respectively)
[admission control webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
which are configured in the API.
administrator.
## Admission control phases
### Admission control extension points
Within the full [list](#what-does-each-admission-controller-do), there are three
special controllers:
[MutatingAdmissionWebhook](#mutatingadmissionwebhook),
[ValidatingAdmissionWebhook](#validatingadmissionwebhook), and
[ValidatingAdmissionPolicy](#validatingadmissionpolicy).
The two webhook controllers execute the mutating and validating (respectively)
[admission control webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
which are configured in the API. ValidatingAdmissionPolicy provides a way to embed
declarative validation code within the API, without relying on any external HTTP
callouts.
You can use these three admission controllers to customize cluster behavior at
admission time.
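As a brief, illustrative sketch (the resource rule and the CEL expression are made up for this example), a ValidatingAdmissionPolicy that caps Deployment replicas might look like:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit   # illustrative name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
    message: "replica count must not exceed 5"
```

A policy like this only takes effect once it is bound to matching resources through a ValidatingAdmissionPolicyBinding.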
### Admission control phases
The admission control process proceeds in two phases. In the first phase,
mutating admission controllers are run. In the second phase, validating
@ -58,12 +79,7 @@ corresponding reclamation or reconciliation process, as a given admission
controller does not know for sure that a given request will pass all of the
other admission controllers.
## Why do I need them?
Several important features of Kubernetes require an admission controller to be enabled in order
to properly support the feature. As a result, a Kubernetes API server that is not properly
configured with the right set of admission controllers is an incomplete server and will not
support all the features you expect.
## How do I turn on an admission controller?
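As a rough sketch (the exact plugin list depends on your cluster), admission plugins are switched on via the API server's `--enable-admission-plugins` flag:

```shell
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...
```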
@ -105,13 +121,6 @@ In Kubernetes {{< skew currentVersion >}}, the default ones are:
CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook
```
{{< note >}}
The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin is enabled
by default, but is only active if you enable the `ValidatingAdmissionPolicy`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and**
the `admissionregistration.k8s.io/v1alpha1` API.
{{< /note >}}
## What does each admission controller do?
### AlwaysAdmit {#alwaysadmit}

View File

@ -20,7 +20,7 @@ For example:
- [kubespray](https://kubespray.io/):
A composition of [Ansible](https://docs.ansible.com/) playbooks,
[inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md#inventory),
[inventory](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible/inventory.md),
provisioning tools, and domain knowledge for generic OS/Kubernetes clusters configuration
management tasks. You can reach out to the community on Slack channel
[#kubespray](https://kubernetes.slack.com/messages/kubespray/).

View File

@ -105,6 +105,7 @@ Leader Migration can be enabled without a configuration. Please see
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: kube-controller-manager
@ -123,6 +124,7 @@ between both parties of the migration.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: *
@ -156,6 +158,7 @@ which has the same effect.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: cloud-controller-manager
@ -235,6 +238,7 @@ controllers.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: *

View File

@ -132,7 +132,7 @@ discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
```shell
# This example assumes that node names match host names, and are reachable via SSH.
NODES=($(kubectl get nodes -o name))
NODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == "Hostname")].address}' ))
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>

View File

@ -9,7 +9,7 @@ cid: home
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### Kubernetes (K8s)
Kubernetes mengelompokkan kontainer yang membentuk suatu aplikasi dalam bentuk unit logis yang memudahkan proses manajemen dan <i>discovery</i>. Kubernetes dibuat berdasarkan [pengalaman operasional <i>workloads</i> skala produksi yang dilakukan oleh Google](http://queue.acm.org/detail.cfm?id=2898444), yang digabungkan dengan ide-ide terbaik dan <i>best practices</i> yang diberikan oleh komunitas.
Kubernetes mengelompokkan kontainer yang membentuk suatu aplikasi dalam bentuk unit logis yang memudahkan proses manajemen dan <i>discovery</i>. Kubernetes dibuat berdasarkan [pengalaman operasional <i>workloads</i> skala produksi yang dilakukan oleh Google](https://queue.acm.org/detail.cfm?id=2898444), yang digabungkan dengan ide-ide terbaik dan <i>best practices</i> yang diberikan oleh komunitas.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}

View File

@ -0,0 +1,116 @@
---
layout: blog
title: 'Client-Goへのフィーチャーゲートの導入: 柔軟性と管理性を強化するために'
date: 2024-08-12
slug: feature-gates-in-client-go
author: >
Ben Luddy (Red Hat),
Lukasz Szaszkiewicz (Red Hat)
translator: >
[Youmei Masada](https://github.com/youmeim)
---
Kubernetesコンポーネントは _フィーチャーゲート_ というオン/オフのスイッチを使うことで、新機能を追加する際のリスクを管理しています。
_フィーチャーゲート_ の仕組みは、Alpha、Beta、GAといった各ステージを通じて、新機能の継続的な品質認定を可能にします。
kube-controller-managerやkube-schedulerのようなKubernetesコンポーネントは、client-goライブラリを使ってAPIとやりとりします。
Kubernetesエコシステムは、このライブラリをコントローラーやツール、Webhookなどをビルドするために利用しています。
最新のclient-goにはそれ自体にフィーチャーゲート機構があり、開発者やクラスター管理者は新たなクライアントの機能を採用するかどうかを制御することができます。
Kubernetesにおけるフィーチャーゲートについて深く知るには、[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を参照してください。
## 動機
client-goのフィーチャーゲートが登場するまでは、それぞれの機能が独自のやり方で、 利用できる機能とその機能の有効化のための仕組みを区別していました。
client-goの新バージョンにアップデートすることで有効化できる機能もありました。
その他の機能については、利用するプログラムからいつでも設定できる状態にしておく必要がありました。
ごく一部の機能には環境変数を使って実行時に設定可能なものがありました。
kube-apiserverが提供するフィーチャーゲート機能を利用する場合、(設定や機能実装の時期が原因で)そうした機能をサポートしないクライアントサイドのフォールバック機構がしばしば必要になりました。
これらのフォールバック機構で明らかになった問題があれば、問題の影響を緩和するためにclient-goのバージョンを固定したり、ロールバックしたりする必要がありました。
これらのいずれのアプローチも、client-goを利用するいくつかのプログラムに対してのみデフォルトで機能を有効化する場合には、よい効果をもたらすものではありませんでした。
単一のコンポーネントに対して新機能を有効化するだけでも、標準設定の変更が直ちにすべてのKubernetesコンポーネントに伝搬し、影響範囲は甚大なものとなっていました。
## client-goにおけるフィーチャーゲート
こうした課題に対処するため、client-goの個別機能は新しいフィーチャーゲート機構を使うフェーズに移行します。
Kubernetesコンポーネントのフィーチャーゲート使用経験があるなら、開発者やユーザーは誰もが慣れ親しんだやり方で機能を有効化/無効化できるようになります。
client-goの最近のバージョンを使うだけで、client-goを用いてビルドしたソフトウェアを利用する方々にとってはいくつかの利益があります。
* アーリーアダプターはデフォルトでは無効化されているclient-goの機能について、プロセス単位で有効化できます。
* 挙動がおかしな機能については、新たなバイナリをビルドせずに無効化できます。
* client-goのすべての既知のフィーチャーゲートは状態が記録されており、ユーザーは機能の挙動を調査することができます。
client-goを用いてビルドするソフトウェアを開発している方々にとっては、次のような利益があります。
* 環境変数から client-goのフィーチャーゲートのオーバーライドを指定することができます。
client-goの機能にバグが見つかった場合は、新しいリリースを待たずに機能を無効化できます。
* プログラムのデフォルトの挙動を変更する目的で、開発者は環境変数ベースのオーバーライドを他のソースからの読み込みで置き換えたり、実行時のオーバーライドを完全に無効化したりすることができます。
このカスタマイズ可能な振る舞いは、Kubernetesコンポーネントの既存の`--feature-gates`コマンドラインフラグや機能有効化メトリクス、ロギングを統合するのに利用します。
## client-goのフィーチャーゲートをオーバーライドする
**補足**: ここではclient-goのフィーチャーゲートを実行時に上書きするデフォルトの方法について説明します。
client-goのフィーチャーゲートは、個々のプログラムの開発者がカスタマイズしたり、無効化したりすることができます。
Kubernetesコンポーネントではclient-goフィーチャーゲートの上書きを`--feature-gates`フラグで制御します。
client-goの機能は`KUBE_FEATURE`から始まる名前の環境変数を設定することによって、有効化したり無効化したりすることができます。
例えば、`MyFeature`という名前の機能を有効化するには、次のような環境変数を設定します。
```
KUBE_FEATURE_MyFeature=true
```
この機能を無効化したいときには、環境変数を`false`に設定します。
```
KUBE_FEATURE_MyFeature=false
```
**補足**: いくつかのオペレーティングシステムでは、環境変数は大文字小文字が区別されます。
したがって`KUBE_FEATURE_MyFeature`と`KUBE_FEATURE_MYFEATURE`は異なる2つの変数として認識される場合があります。
## client-goのフィーチャーゲートをカスタマイズする
標準のフィーチャーゲート上書き機能である環境変数ベースの仕組みは、Kubernetesエコシステムの多くのプログラムにとって十分なものと言え、特殊なインテグレーションが不要なやり方です。
異なる挙動を必要とするプログラムのために、この仕組みを独自のフィーチャーゲートプロバイダーで置き換えることもできます。
これにより、うまく動かないことが分かっている機能を強制的に無効化したり、フィーチャーゲートを直接外部の設定サービスから読み込んだり、コマンドラインオプションからフィーチャーゲートの上書きを指定したりすることができるようになります。
Kubernetesコンポーネントはclient-goの標準のフィーチャーゲートプロバイダーを、既存のKubernetesフィーチャーゲートプロバイダーに対する接ぎ木(shim)を使って置き換えます。
実用的な理由から、client-goのフィーチャーゲートは他のKubernetesのフィーチャーゲートと同様に取り扱われています。
(`--feature-gates`コマンドラインフラグに落とし込まれた上で、機能有効化メトリクスに登録され、プログラム開始時にログがなされます)。
標準のフィーチャーゲートプロバイダーを置き換えるには、Gatesインターフェースを実装し、パッケージ初期化の際にReplaceFeatureGatesを呼ぶ必要があります。
以下は簡単な例です。
```go
import (
	"k8s.io/client-go/features"
)

type AlwaysEnabledGates struct{}

func (AlwaysEnabledGates) Enabled(features.Feature) bool {
	return true
}

func init() {
	features.ReplaceFeatureGates(AlwaysEnabledGates{})
}
```
定義済みのclient-goの機能の完全な一覧が必要な場合は、Registryインターフェースを実装して`AddFeaturesToExistingFeatureGates`を呼ぶことで取得できます。
完全な例としては[Kubernetesにおける使用方法](https://github.com/kubernetes/kubernetes/blob/64ba17c605a41700f7f4c4e27dca3684b593b2b9/pkg/features/kube_features.go#L990-L997)を参考にしてください。
## まとめ
client-go v1.30のフィーチャーゲートの導入により、client-goの新機能のロールアウトを安全かつ簡単に実施できるようになりました。
ユーザーや開発者はclient-goの新機能を採用するペースを管理できます。
Kubernetes APIの両側にまたがる機能の品質認定に関する共通のメカニズムができたことによって、Kubernetesコントリビューターの作業は効率化されつつあります。

View File

@ -210,6 +210,6 @@ Kubernetesが共通のPod APIを他のリソース内(たとえば{{< glossary_t
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Marathon](https://github.com/d2iq-archive/marathon)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/)

View File

@ -0,0 +1,18 @@
---
title: JSON Web Token (JWT)
id: jwt
date: 2023-01-17
full_link: https://www.rfc-editor.org/rfc/rfc7519
short_description: >
2つの通信主体間で送受信されるクレームを表現する手段。
aka:
tags:
- security
- architecture
---
2つの通信主体間で送受信されるクレームを表現する手段。
<!--more-->
JWTはデジタル署名と暗号化をすることが可能です。Kubernetesはクラスター内で何らかの操作を実行したいエンティティの身元を確認するため、認証トークンとしてJWTを使用します。

View File

@ -0,0 +1,18 @@
---
title: サイドカーコンテナ
id: sidecar-container
date: 2018-04-12
full_link: /ja/docs/concepts/workloads/pods/sidecar-containers/
short_description: >
Podのライフサイクル全体を通して実行を続ける補助コンテナ。
tags:
- fundamental
---
サイドカーコンテナは1つ以上の{{< glossary_tooltip text="コンテナ" term_id="container" >}}で構成され、一般的にアプリケーションコンテナより先に起動されます。
<!--more-->
サイドカーコンテナは通常のアプリケーションコンテナと似ていますが、目的が違います: サイドカーコンテナは、同じPodで実行されるメインのアプリケーションコンテナに対してサービスを提供します。
{{< glossary_tooltip text="Initコンテナ" term_id="init-container" >}}とは異なり、サイドカーコンテナはPod起動後も実行を続けます。
詳細については、[サイドカーコンテナ](/ja/docs/concepts/workloads/pods/sidecar-containers/)をご参照ください。

View File

@ -0,0 +1,25 @@
---
title: ユーザー名前空間
id: userns
date: 2021-07-13
full_link: https://man7.org/linux/man-pages/man7/user_namespaces.7.html
short_description: >
非特権ユーザーに対して管理者権限をエミュレートするLinuxカーネルの機能。
aka:
- user namespace
tags:
- security
---
root権限をエミュレートするLinuxカーネルの機能。rootlessコンテナを実現するために使われます。
<!--more-->
ユーザー名前空間は、非rootユーザーが管理者(root)権限をエミュレートできるLinuxカーネルの機能です。例えば、コンテナの外部で管理者権限を持っていなくてもコンテナを実行することができます。
ユーザー名前空間は、コンテナブレークアウト攻撃による被害を軽減するのに効果的な対策です。
ここでのユーザー名前空間は、Linuxカーネルの機能を指し、Kubernetesの{{< glossary_tooltip text="Namespace" term_id="namespace" >}}とは異なります。
<!-- TODO: https://kinvolk.io/blog/2020/12/improving-kubernetes-and-container-security-with-user-namespaces/ -->

View File

@ -1,7 +1,7 @@
---
title: Casos de estudo
linkTitle: Casos de estudo
bigheader: Casos de estudo utilizando Kubernetes
title: Estudos de caso
linkTitle: Estudos de caso
bigheader: Estudos de caso utilizando Kubernetes
abstract: Alguns usuários que estão executando o Kubernetes em produção.
layout: basic
class: gridPage

View File

@ -1,6 +1,7 @@
---
title: Comunidade
layout: basic
body_class: community
cid: community
community_styles_migrated: true
menu:
@ -15,8 +16,8 @@ menu:
alt="foto de uma conferência do Kubernetes">
<div class="community-section" id="introduction">
<p>A comunidade do Kubernetes - usuários, contribuidores, e a cultura que nós
construímos juntos - é uma das maiores razões para a ascensão meteórica deste
<p>A comunidade do Kubernetes usuários, contribuidores, e a cultura que nós
construímos juntos é uma das maiores razões para a ascensão meteórica deste
projeto de código aberto. Nossa cultura e valores continuam crescendo e mudando
conforme o projeto cresce e muda. Todos nós trabalhamos juntos em direção a
constantes melhorias do projeto e da forma com que trabalhamos nele.</p>
@ -31,7 +32,7 @@ menu:
<div id="navigation-items">
<div class="community-nav-item external-link">
<a href="https://www.kubernetes.dev/">Comunidade dos contribuidores</a>
<a href="https://www.kubernetes.dev/">Comunidade de contribuidores</a>
</div>
<div class="community-nav-item">
<a href="#values">Valores da comunidade</a>

View File

@ -1,9 +1,8 @@
---
reviewers:
- raelga
linktitle: Documentação do Kubernetes
title: Documentação
weight: 10
content_type: concept
sitemap:
priority: 1.0
---
<!-- overview -->
@ -16,8 +15,8 @@ Como você pode ver, a maior parte da documentação ainda está disponível ape
<!-- body -->
Se você quiser participar, você pode entrar no canal Slack [#kubernetes-docs-pt](http://slack.kubernetes.io/) e fazer parte da equipe por trás da tradução.
Se você quiser participar, você pode entrar no canal do Slack [#kubernetes-docs-pt](http://slack.kubernetes.io/) e fazer parte da equipe por trás da tradução.
Você também pode acessar o canal para solicitar a tradução de uma página específica ou relatar qualquer erro que possa ter sido encontrado. Qualquer contribuição será bem recebida!
Para mais informações sobre como contribuir, dê uma olhada [github.com/kubernetes/website](https://github.com/kubernetes/website/).
Para mais informações sobre como contribuir, consulte [github.com/kubernetes/website](https://github.com/kubernetes/website/).

View File

@ -0,0 +1,27 @@
---
title: "Ative o autocompletar no PowerShell"
description: "Algumas configurações opcionais para ativar o autocompletar no PowerShell."
headless: true
_build:
list: never
render: never
publishResources: false
---
O script de autocompletar do kubectl para PowerShell, pode ser gerado com o comando `kubectl completion powershell`.
Para fazer isso em todas as suas sessões de shell, adicione a seguinte linha ao seu arquivo `$PROFILE`:
```powershell
kubectl completion powershell | Out-String | Invoke-Expression
```
Este comando irá regenerar o script de autocompletar toda vez que o PowerShell for iniciado. Você também pode adicionar o script gerado diretamente ao seu arquivo `$PROFILE`.
Para adicionar o script gerado ao seu arquivo `$PROFILE`, execute a seguinte linha no prompt do PowerShell:
```powershell
kubectl completion powershell >> $PROFILE
```
Após recarregar seu shell, o autocompletar do kubectl deve estar funcionando.

View File

@ -0,0 +1,213 @@
---
title: Instale e configure o kubectl no Windows
content_type: task
weight: 10
---
## {{% heading "prerequisites" %}}
Você deve usar uma versão do kubectl que esteja próxima da versão do seu cluster. Por exemplo, um cliente v{{< skew currentVersion >}} pode se comunicar com as versões v{{< skew currentVersionAddMinor -1 >}}, v{{< skew currentVersionAddMinor 0 >}} e v{{< skew currentVersionAddMinor 1 >}} da camada de gerenciamento. Usar a versão compatível mais recente do kubectl ajuda a evitar problemas inesperados.
## Instale o kubectl no Windows
Existem os seguintes métodos para instalar o kubectl no Windows:
- [Instale o binário kubectl no Windows (via download direto ou curl)](#install-kubectl-binary-on-windows-via-direct-download-or-curl)
- [Instale no Windows usando Chocolatey, Scoop ou winget](#install-nonstandard-package-tools)
### Instale o binário kubectl no Windows (via download direto ou curl) {#install-kubectl-binary-on-windows-via-direct-download-or-curl}
1. Você tem duas opções para instalar o kubectl em seu dispositivo Windows
- Download direto:
Baixe a última versão do patch {{< skew currentVersion >}} diretamente para sua arquitetura específica visitando a [página de lançamentos do Kubernetes](https://kubernetes.io/releases/download/#binaries). Certifique-se de selecionar o binário correto para a sua arquitetura (por exemplo, amd64, arm64, etc.).
- Usando curl:
Se você tiver o `curl` instalado, use este comando:
```powershell
curl.exe -LO "https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe"
```
{{< note >}}
Para descobrir a versão estável mais recente (por exemplo, para scripts), veja
[https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
{{< /note >}}
2. Validar o binário (opcional)
Baixe o arquivo de checksum do `kubectl`:
```powershell
curl.exe -LO "https://dl.k8s.io/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe.sha256"
```
Valide o binário do `kubectl` com o arquivo de checksum:
- Usando o Prompt de Comando para comparar manualmente a saída do `CertUtil` ao arquivo de checksum baixado:
```cmd
CertUtil -hashfile kubectl.exe SHA256
type kubectl.exe.sha256
```
- Usando PowerShell para automatizar a verificação com o operador `-eq` para obter
um resultado `True` ou `False`:
```powershell
$(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256)
```
3. Adicione (no início ou no final) o diretório do binário `kubectl` na variável de ambiente `PATH`.
4. Teste para garantir que a versão do `kubectl` seja a mesma que foi baixada:
```cmd
kubectl version --client
```
Ou use este comando para uma visão detalhada da versão:
```cmd
kubectl version --client --output=yaml
```
{{< note >}}
[Docker Desktop para Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
adiciona sua própria versão do `kubectl` ao `PATH`. Se você instalou o Docker Desktop anteriormente,
pode ser necessário colocar sua entrada no `PATH` antes da adicionada pelo instalador do Docker Desktop
ou remover o `kubectl` do Docker Desktop.
{{< /note >}}
### Instalar no Windows usando Chocolatey, Scoop, ou winget {#install-nonstandard-package-tools}
1. Para instalar o kubectl no Windows, você pode usar o gerenciador de pacotes [Chocolatey](https://chocolatey.org),
o instalador de linha de comando [Scoop](https://scoop.sh) ou o gerenciador de pacotes
[winget](https://learn.microsoft.com/en-us/windows/package-manager/winget/).
{{< tabs name="kubectl_win_install" >}}
{{% tab name="choco" %}}
```powershell
choco install kubernetes-cli
```
{{% /tab %}}
{{% tab name="scoop" %}}
```powershell
scoop install kubectl
```
{{% /tab %}}
{{% tab name="winget" %}}
```powershell
winget install -e --id Kubernetes.kubectl
```
{{% /tab %}}
{{< /tabs >}}
2. Teste para garantir que a versão que você instalou está atualizada:
```powershell
kubectl version --client
```
3. Navegue até seu diretório pessoal:
```powershell
# Se você estiver usando o cmd.exe, execute: cd %USERPROFILE%
cd ~
```
4. Crie o diretório `.kube`:
```powershell
mkdir .kube
```
5. Navegue para o diretório `.kube` que você acabou de criar:
```powershell
cd .kube
```
6. Configure o kubectl para usar um cluster Kubernetes remoto:
```powershell
New-Item config -type file
```
{{< note >}}
Edite o arquivo de configuração com um editor de texto de sua escolha, como o Notepad.
{{< /note >}}
## Verificar a configuração do kubectl
{{< include "included/verify-kubectl.md" >}}
## Configurações e plugins opcionais do kubectl
### Ativar autocompletar no shell
O kubectl oferece suporte ao autocompletar para Bash, Zsh, Fish e PowerShell,
o que pode economizar tempo de digitação.
Abaixo estão os procedimentos para configurar o autocompletar no PowerShell.
{{< include "included/optional-kubectl-configs-pwsh.md" >}}
### Instalar o plugin `kubectl convert`
{{< include "included/kubectl-convert-overview.md" >}}
1. Baixe a última versão com este comando:
```powershell
curl.exe -LO "https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl-convert.exe"
```
2. Validar o binário (opcional).
Baixe o arquivo de checksum do `kubectl-convert`:
```powershell
curl.exe -LO "https://dl.k8s.io/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl-convert.exe.sha256"
```
Valide o binário do `kubectl-convert` com o arquivo de checksum:
- Usando o Prompt de Comando para comparar manualmente a saída do `CertUtil` ao arquivo de checksum baixado:
```cmd
CertUtil -hashfile kubectl-convert.exe SHA256
type kubectl-convert.exe.sha256
```
- Usando PowerShell para automatizar a verificação com o operador `-eq` para obter
um resultado `True` ou `False`:
```powershell
$($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
```
3. Adicione (no início ou no final) o diretório do binário `kubectl-convert` na variável de ambiente `PATH`.
4. Verifique se o plugin foi instalado com sucesso.
```shell
kubectl convert --help
```
Se você não ver um erro, isso significa que o plugin foi instalado com sucesso.
5. Após instalar o plugin, limpe os arquivos de instalação:
```powershell
del kubectl-convert.exe
del kubectl-convert.exe.sha256
```
## {{% heading "whatsnext" %}}
{{< include "included/kubectl-whats-next.md" >}}

View File

@ -392,6 +392,6 @@ kubelet может вызывать различные действия:
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Marathon](https://github.com/d2iq-archive/marathon)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).

View File

@ -10,7 +10,7 @@ cid: home
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) là một hệ thống mã nguồn mở giúp tự động hóa việc triển khai, nhân rộng và quản lý các ứng dụng container.
Nó nhóm các container cấu thành lên một ứng dụng thành các đơn vị logic để dễ dàng quản lý và khám phá. Kubernetes xây dựng dựa trên [15 năm kinh nghiệm vận hành các hệ thống trong môi trường production tại Google](http://queue.acm.org/detail.cfm?id=2898444), kết hợp với các ý tưởng và thực tiễn tốt nhất từ cộng đồng.
Nó nhóm các container cấu thành lên một ứng dụng thành các đơn vị logic để dễ dàng quản lý và khám phá. Kubernetes xây dựng dựa trên [15 năm kinh nghiệm vận hành các hệ thống trong môi trường production tại Google](https://queue.acm.org/detail.cfm?id=2898444), kết hợp với các ý tưởng và thực tiễn tốt nhất từ cộng đồng.
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}

View File

@ -27,7 +27,7 @@ CRI 是一个插件接口,它使 kubelet 能够使用各种容器运行时,
这样 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 能启动
{{< glossary_tooltip text="Pod" term_id="pod" >}} 及其容器。
{{< glossary_definition prepend="容器运行时接口CRI是" term_id="container-runtime-interface" length="all" >}}
{{< glossary_definition prepend="容器运行时接口CRI是" term_id="cri" length="all" >}}
<!-- body -->
<!--
@ -44,7 +44,7 @@ runtime, which can be configured separately within the kubelet by using the
`--image-service-endpoint` [command line flags](/docs/reference/command-line-tools-reference/kubelet).
-->
当通过 gRPC 连接到容器运行时kubelet 将充当客户端。运行时和镜像服务端点必须在容器运行时中可用,
可以使用 `--image-service-endpoint`
可以使用 `--image-service-endpoint`
[命令行标志](/zh-cn/docs/reference/command-line-tools-reference/kubelet)在 kubelet 中单独配置。
<!--

View File

@ -367,7 +367,7 @@ downgrade `MaxPerPodContainer` to `1` and evict the oldest containers.
Additionally, containers owned by pods that have been deleted are removed once
they are older than `MinAge`.
-->
除以上变量之外kubelet 还会垃圾收集除无标识的以及已删除的容器,通常从最未使用的容器开始。
除以上变量之外kubelet 还会垃圾收集除无标识的以及已删除的容器,通常从最长时间未使用的容器开始。
当保持每个 Pod 的最大数量的容器(`MaxPerPodContainer`)会使得全局的已死亡容器个数超出上限
`MaxContainers`)时,`MaxPerPodContainer` 和 `MaxContainers` 之间可能会出现冲突。

View File

@ -900,6 +900,7 @@ Here are some examples of device plugin implementations:
* [Akri](https://github.com/project-akri/akri), which lets you easily expose heterogeneous leaf devices (such as IP cameras and USB devices).
* The [AMD GPU device plugin](https://github.com/ROCm/k8s-device-plugin)
* The [generic device plugin](https://github.com/squat/generic-device-plugin) for generic Linux devices and USB devices
* The [HAMi](https://github.com/Project-HAMi/HAMi) for heterogeneous AI computing virtualization middleware (for example, NVIDIA, Cambricon, Hygon, Iluvatar, MThreads, Ascend, Metax)
* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for
Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices
* The [KubeVirt device plugins](https://github.com/kubevirt/kubernetes-device-plugins) for
@ -918,6 +919,7 @@ Here are some examples of device plugin implementations:
* [Akri](https://github.com/project-akri/akri),它可以让你轻松公开异构叶子设备(例如 IP 摄像机和 USB 设备)。
* [AMD GPU 设备插件](https://github.com/ROCm/k8s-device-plugin)
* 适用于通用 Linux 设备和 USB 设备的[通用设备插件](https://github.com/squat/generic-device-plugin)
* 用于异构 AI 计算虚拟化中间件(例如 NVIDIA、Cambricon、Hygon、Iluvatar、MThreads、Ascend、Metax 设备)的 [HAMi](https://github.com/Project-HAMi/HAMi)
* [Intel 设备插件](https://github.com/intel/intel-device-plugins-for-kubernetes)支持
Intel GPU、FPGA、QAT、VPU、SGX、DSA、DLB 和 IAA 设备
* [KubeVirt 设备插件](https://github.com/kubevirt/kubernetes-device-plugins)用于硬件辅助的虚拟化

View File

@ -34,7 +34,7 @@ Resource quotas are a tool for administrators to address this concern.
<!--
A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
aggregate resource consumption per namespace. It can limit the quantity of objects that can
aggregate resource consumption per namespace. It can limit the quantity of objects that can
be created in a namespace by type, as well as the total amount of compute resources that may
be consumed by resources in that namespace.
-->
@ -47,14 +47,15 @@ Resource quotas work like this:
资源配额的工作方式如下:
<!--
- Different teams work in different namespaces. This can be enforced with [RBAC](/docs/reference/access-authn-authz/rbac/).
- Different teams work in different namespaces. This can be enforced with
[RBAC](/docs/reference/access-authn-authz/rbac/).
- The administrator creates one ResourceQuota for each namespace.
- Users create resources (pods, services, etc.) in the namespace, and the quota system
tracks usage to ensure it does not exceed hard resource limits defined in a ResourceQuota.
- If creating or updating a resource violates a quota constraint, the request will fail with HTTP
status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated.
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
@ -77,12 +78,16 @@ Resource quotas work like this:
{{< note >}}
<!--
- For `cpu` and `memory` resources, ResourceQuotas enforce that **every**
(new) pod in that namespace sets a limit for that resource.
If you enforce a resource quota in a namespace for either `cpu` or `memory`,
you, and other clients, **must** specify either `requests` or `limits` for that resource,
for every new Pod you submit. If you don't, the control plane may reject admission
for that Pod.
- For other resources: ResourceQuota works and will ignore pods in the namespace without setting a limit or request for that resource. It means that you can create a new pod without limit/request ephemeral storage if the resource quota limits the ephemeral storage of this namespace.
(new) pod in that namespace sets a limit for that resource.
If you enforce a resource quota in a namespace for either `cpu` or `memory`,
you, and other clients, **must** specify either `requests` or `limits` for that resource,
for every new Pod you submit. If you don't, the control plane may reject admission
for that Pod.
- For other resources: ResourceQuota works and will ignore pods in the namespace without
setting a limit or request for that resource. It means that you can create a new pod
without limit/request ephemeral storage if the resource quota limits the ephemeral
storage of this namespace.
You can use a [LimitRange](/docs/concepts/policy/limit-range/) to automatically set
a default request for these resources.
-->
@ -93,6 +98,7 @@ a default request for these resources.
- 对于其他资源ResourceQuota 可以工作,并且会忽略命名空间中的 Pod而无需为该资源设置限制或请求。
这意味着,如果资源配额限制了此命名空间的临时存储,则可以创建没有限制/请求临时存储的新 Pod。
你可以使用[限制范围](/zh-cn/docs/concepts/policy/limit-range/)自动设置对这些资源的默认请求。
{{< /note >}}
<!--
@ -110,7 +116,7 @@ Examples of policies that could be created using namespaces and quotas are:
<!--
- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 GiB and 10 cores,
let B use 10GiB and 4 cores, and hold 2GiB and 2 cores in reserve for future allocation.
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
- Limit the "testing" namespace to using 1 core and 1GiB RAM. Let the "production" namespace
use any amount.
-->
- 在具有 32 GiB 内存和 16 核 CPU 资源的集群中,允许 A 团队使用 20 GiB 内存 和 10 核的 CPU 资源,
@ -119,7 +125,7 @@ Examples of policies that could be created using namespaces and quotas are:
<!--
In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces,
there may be contention for resources. This is handled on a first-come-first-served basis.
there may be contention for resources. This is handled on a first-come-first-served basis.
Neither contention nor changes to quota will affect already created resources.
-->
@ -130,14 +136,14 @@ Neither contention nor changes to quota will affect already created resources.
<!--
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
ResourceQuota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
`--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
-->
## 启用资源配额 {#enabling-resource-quota}
资源配额的支持在很多 Kubernetes 版本中是默认启用的。
ResourceQuota 的支持在很多 Kubernetes 版本中是默认启用的。
当 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}}
的命令行标志 `--enable-admission-plugins=` 中包含 `ResourceQuota` 时,
资源配额会被启用。
@ -168,7 +174,7 @@ The following resource types are supported:
<!--
| Resource Name | Description |
| --------------------- | --------------------------------------------------------- |
| ------------- | ----------- |
| `limits.cpu` | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. |
| `limits.memory` | Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. |
| `requests.cpu` | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed this value. |
@ -178,7 +184,7 @@ The following resource types are supported:
| `memory` | Same as `requests.memory` |
-->
| 资源名称 | 描述 |
| --------------------- | --------------------------------------------- |
| ------------- | ----------- |
| `limits.cpu` | 所有非终止状态的 Pod其 CPU 限额总量不能超过该值。 |
| `limits.memory` | 所有非终止状态的 Pod其内存限额总量不能超过该值。 |
| `requests.cpu` | 所有非终止状态的 Pod其 CPU 需求总量不能超过该值。 |
@ -202,10 +208,10 @@ In addition to the resources mentioned above, in release 1.10, quota support for
<!--
As overcommit is not allowed for extended resources, it makes no sense to specify both `requests`
and `limits` for the same extended resource in a quota. So for extended resources, only quota items
with prefix `requests.` is allowed for now.
with prefix `requests.` are allowed.
-->
由于扩展资源不可超量分配,因此没有必要在配额中为同一扩展资源同时指定 `requests``limits`
对于扩展资源而言,目前仅允许使用前缀为 `requests.` 的配额项。
对于扩展资源而言,仅允许使用前缀为 `requests.` 的配额项。
<!--
Take the GPU resource as an example, if the resource name is `nvidia.com/gpu`, and you want to
@ -217,14 +223,15 @@ limit the total number of GPUs requested in a namespace to 4, you can define a q
* `requests.nvidia.com/gpu: 4`
<!--
See [Viewing and Setting Quotas](#viewing-and-setting-quotas) for more detail information.
See [Viewing and Setting Quotas](#viewing-and-setting-quotas) for more details.
-->
有关更多详细信息,请参阅[查看和设置配额](#viewing-and-setting-quotas)。
<!--
## Storage Resource Quota
You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested in a given namespace.
You can limit the total sum of [storage resources](/docs/concepts/storage/persistent-volumes/)
that can be requested in a given namespace.
In addition, you can limit consumption of storage resources based on associated storage-class.
-->
@ -237,25 +244,25 @@ In addition, you can limit consumption of storage resources based on associated
<!--
| Resource Name | Description |
| --------------------- | --------------------------------------------------------- |
| ------------- | ----------- |
| `requests.storage` | Across all persistent volume claims, the sum of storage requests cannot exceed this value. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | Across all persistent volume claims associated with the `<storage-class-name>`, the sum of storage requests cannot exceed this value. |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | Across all persistent volume claims associated with the `<storage-class-name>`, the total number of [persistent volume claims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
-->
| 资源名称 | 描述 |
| --------------------- | ----------------------------------------------------------- |
| ------------- | ----------- |
| `requests.storage` | 所有 PVC存储资源的需求总量不能超过该值。 |
| `persistentvolumeclaims` | 在该命名空间中所允许的 [PVC](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 总量。 |
| `<storage-class-name>.storageclass.storage.k8s.io/requests.storage` | 在所有与 `<storage-class-name>` 相关的持久卷申领中,存储请求的总和不能超过该值。 |
| `<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims` | 在与 storage-class-name 相关的所有持久卷申领中,命名空间中可以存在的[持久卷申领](/zh-cn/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)总数。 |
<!--
For example, if an operator wants to quota storage with `gold` storage class separate from `bronze` storage class, the operator can
define a quota as follows:
For example, if you want to quota storage with `gold` StorageClass separate from
a `bronze` StorageClass, you can define a quota as follows:
-->
例如,如果一个操作人员针对 `gold` 存储类型与 `bronze` 存储类型设置配额
操作人员可以定义如下配额:
例如,如果你想要将 `gold` StorageClass 与 `bronze` StorageClass 分开进行存储配额配置
则可以按如下方式定义配额:
* `gold.storageclass.storage.k8s.io/requests.storage: 500Gi`
* `bronze.storageclass.storage.k8s.io/requests.storage: 100Gi`
@ -267,13 +274,13 @@ In release 1.8, quota support for local ephemeral storage is added as an alpha f
<!--
| Resource Name | Description |
| ------------------------------- |----------------------------------------------------------- |
| ------------- | ----------- |
| `requests.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage requests cannot exceed this value. |
| `limits.ephemeral-storage` | Across all pods in the namespace, the sum of local ephemeral storage limits cannot exceed this value. |
| `ephemeral-storage` | Same as `requests.ephemeral-storage`. |
-->
| 资源名称 | 描述 |
| ------------------------------- |----------------------------------------------------------- |
| ------------- | ----------- |
| `requests.ephemeral-storage` | 在命名空间的所有 Pod 中,本地临时存储请求的总和不能超过此值。 |
| `limits.ephemeral-storage` | 在命名空间的所有 Pod 中,本地临时存储限制值的总和不能超过此值。 |
| `ephemeral-storage` | 与 `requests.ephemeral-storage` 相同。 |
@ -323,9 +330,10 @@ Here is an example set of resources users may want to put under object count quo
<!--
If you define a quota this way, it applies to Kubernetes' APIs that are part of the API server, and
to any custom resources backed by a CustomResourceDefinition. If you use [API aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) to
to any custom resources backed by a CustomResourceDefinition. If you use
[API aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) to
add additional, custom APIs that are not defined as CustomResourceDefinitions, the core Kubernetes
control plane does not enforce quota for the aggregated API. The extension API server is expected to
control plane does not enforce quota for the aggregated API. The extension API server is expected to
provide quota enforcement if that's appropriate for the custom API.
For example, to create a quota on a `widgets` custom resource in the `example.com` API group, use `count/widgets.example.com`.
-->
@ -340,7 +348,7 @@ For example, to create a quota on a `widgets` custom resource in the `example.co
<!--
When using such a resource quota (nearly for all object kinds), an object is charged
against the quota if the object kind exists (is defined) in the control plane.
These types of quotas are useful to protect against exhaustion of storage resources. For example, you may
These types of quotas are useful to protect against exhaustion of storage resources. For example, you may
want to limit the number of Secrets in a server given their large size. Too many Secrets in a cluster can
actually prevent servers and controllers from starting. You can set a quota for Jobs to protect against
a poorly configured CronJob. CronJobs that create too many Jobs in a namespace can lead to a denial of service.
@ -363,7 +371,7 @@ The following types are supported:
<!--
| Resource Name | Description |
| ----------------------------|--------------------------------------------- |
| ------------- | ----------- |
| `configmaps` | The total number of ConfigMaps that can exist in the namespace. |
| `persistentvolumeclaims` | The total number of [PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) that can exist in the namespace. |
| `pods` | The total number of Pods in a non-terminal state that can exist in the namespace. A pod is in a terminal state if `.status.phase in (Failed, Succeeded)` is true. |
@ -480,7 +488,7 @@ The `scopeSelector` supports the following values in the `operator` field:
<!--
When using one of the following values as the `scopeName` when defining the
`scopeSelector`, the `operator` must be `Exists`.
`scopeSelector`, the `operator` must be `Exists`.
-->
定义 `scopeSelector` 时,如果使用以下值之一作为 `scopeName` 的值,则对应的
`operator` 只能是 `Exists`
@ -762,8 +770,8 @@ to set `namespaces` or `namespaceSelector` fields in pod affinity terms.
<!--
Preventing users from using cross-namespace affinity terms might be desired since a pod
with anti-affinity constraints can block pods from all other namespaces
from getting scheduled in a failure domain.
with anti-affinity constraints can block pods from all other namespaces
from getting scheduled in a failure domain.
-->
禁止用户使用跨名字空间的亲和性规则可能是一种被需要的能力,
因为带有反亲和性约束的 Pod 可能会阻止所有其他名字空间的 Pod 被调度到某失效域中。
@ -795,7 +803,7 @@ spec:
<!--
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
only allow it for specific namespaces, they could configure `CrossNamespacePodAffinity`
as a limited resource by setting the kube-apiserver flag --admission-control-config-file
as a limited resource by setting the kube-apiserver flag `--admission-control-config-file`
to the path of the following configuration file:
-->
如果集群运维人员希望默认禁止使用 `namespaces``namespaceSelector`
@ -841,7 +849,7 @@ The quota can be configured to quota either value.
<!--
If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
container makes an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`,
then it requires that every incoming container specifies an explicit limit for those resources.
-->
如果配额中指定了 `requests.cpu``requests.memory` 的值,则它要求每个容器都显式给出对这些资源的请求。
@ -850,7 +858,7 @@ then it requires that every incoming container specifies an explicit limit for t
<!--
## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:
kubectl supports creating, updating, and viewing quotas:
-->
## 查看和设置配额 {#viewing-and-setting-quotas}
@ -947,7 +955,7 @@ services.loadbalancers 0 2
```
<!--
Kubectl also supports object count quota for all standard namespaced resources
kubectl also supports object count quota for all standard namespaced resources
using the syntax `count/<resource>.<group>`:
-->
kubectl 还使用语法 `count/<resource>.<group>` 支持所有标准的、命名空间域的资源的对象计数配额:
@ -983,7 +991,7 @@ count/secrets 1 4
## Quota and Cluster Capacity
ResourceQuotas are independent of the cluster capacity. They are
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
expressed in absolute units. So, if you add nodes to your cluster, this does *not*
automatically give each namespace the ability to consume more resources.
-->
## 配额和集群容量 {#quota-and-cluster-capacity}
@ -1022,7 +1030,7 @@ restrictions around nodes: pods from several namespaces may run on the same node
<!--
## Limit Priority Class consumption by default
It may be desired that pods at a particular priority, eg. "cluster-services",
It may be desired that pods at a particular priority, such as "cluster-services",
should be allowed in a namespace, if and only if, a matching quota object exists.
-->
## 默认情况下限制特定优先级的资源消耗 {#limit-priority-class-consumption-by-default}
@ -1079,9 +1087,9 @@ resourcequota/pods-cluster-services created
<!--
In this case, a pod creation will be allowed if:
1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
1. the Pod's `priorityClassName` is not specified.
1. the Pod's `priorityClassName` is specified to a value other than `cluster-services`.
1. the Pod's `priorityClassName` is set to `cluster-services`, it is to be created
in the `kube-system` namespace, and it has passed the resource quota check.
-->
在这里,当以下条件满足时可以创建 Pod
@ -1101,10 +1109,11 @@ and it is to be created in a namespace other than `kube-system`.
## {{% heading "whatsnext" %}}
<!--
- See [ResourceQuota design doc](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md) for more information.
- See [ResourceQuota design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md)
for more information.
- See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
- Read [Quota support for priority class design doc](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
- Read [Quota support for priority class design document](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765).
-->
- 参阅[资源配额设计文档](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md)。
- 参阅[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。

View File

@ -218,7 +218,7 @@ You can also verify that the owner reference of these pods is set to the fronten
To do this, get the yaml of one of the Pods running:
-->
你也可以查看 Pod 的属主引用被设置为前端的 ReplicaSet。
要实现这点,可取运行中的某个 Pod 的 YAML
要实现这点,可取运行中的某个 Pod 的 YAML
```shell
kubectl get pods frontend-gbgfx -o yaml
@ -296,7 +296,7 @@ Fetching the Pods:
新的 Pod 会被该 ReplicaSet 获取,并立即被 ReplicaSet 终止,
因为它们的存在会使得 ReplicaSet 中 Pod 个数超出其期望值。
Pod
取 Pod
```shell
@ -341,7 +341,7 @@ number of its new Pods and the original matches its desired count. As fetching t
-->
你会看到 ReplicaSet 已经获得了该 Pod并仅根据其规约创建新的 Pod
直到新的 Pod 和原来的 Pod 的总数达到其预期个数。
这时取 Pod 列表:
这时取 Pod 列表:
```shell
kubectl get pods

View File

@ -213,7 +213,7 @@ Value | Description
:-----|:-----------
`Pending`(悬决)| Pod 已被 Kubernetes 系统接受,但有一个或者多个容器尚未创建亦未运行。此阶段包括等待 Pod 被调度的时间和通过网络下载镜像的时间。
`Running`(运行中) | Pod 已经绑定到了某个节点Pod 中所有的容器都已被创建。至少有一个容器仍在运行,或者正处于启动或重启状态。
`Succeeded`(成功) | Pod 中的所有容器都已成功终止,并且不会再重启。
`Succeeded`(成功) | Pod 中的所有容器都已成功结束,并且不会再重启。
`Failed`(失败) | Pod 中的所有容器都已终止,并且至少有一个容器是因为失败终止。也就是说,容器以非 0 状态退出或者被系统终止,且未被设置为自动重启。
`Unknown`(未知) | 因为某些原因无法取得 Pod 的状态。这种情况通常是因为与 Pod 所在主机通信失败。

View File

@ -6,6 +6,11 @@ _build:
render: false
stages:
- stage: alpha
defaultValue: false
fromVersion: "1.29"
toVersion: "1.29"
- stage: beta
defaultValue: true
fromVersion: "1.30"

View File

@ -1,43 +0,0 @@
---
title: 容器运行时接口Container Runtime Interface
id: container-runtime-interface
date: 2021-11-24
full_link: /zh-cn/docs/concepts/architecture/cri
short_description: >
kubelet 和容器运行时之间通信的主要协议。
aka:
tags:
- cri
---
<!--
title: Container Runtime Interface
id: container-runtime-interface
date: 2021-11-24
full_link: /docs/concepts/architecture/cri
short_description: >
The main protocol for the communication between the kubelet and Container Runtime.
aka:
tags:
- cri
-->
<!--
The main protocol for the communication between the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and Container Runtime.
-->
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 和容器运行时之间通信的主要协议。
<!--more-->
<!--
The Kubernetes Container Runtime Interface (CRI) defines the main
[gRPC](https://grpc.io) protocol for the communication between the
[node components](/docs/concepts/architecture/#node-components)
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
-->
Kubernetes 容器运行时接口Container Runtime InterfaceCRI定义了主要 [gRPC](https://grpc.io) 协议,
用于[节点组件](/zh-cn/docs/concepts/architecture/#node-components)
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
和{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}之间的通信。

View File

@ -1,42 +1,43 @@
---
title: 容器运行时接口Container Runtime InterfaceCRI
id: cri
date: 2019-03-07
full_link: /zh-cn/docs/concepts/architecture/#container-runtime
date: 2021-11-24
full_link: /zh-cn/docs/concepts/architecture/cri
short_description: >
一组与 kubelet 集成的容器运行时 API
在 kubelet 和本地容器运行时之间通讯的协议
aka:
tags:
- fundamental
- fundamental
---
<!--
title: Container runtime interface (CRI)
title: Container Runtime Interface (CRI)
id: cri
date: 2019-03-07
full_link: /docs/concepts/architecture/#container-runtime
date: 2021-11-24
full_link: /docs/concepts/architecture/cri
short_description: >
An API for container runtimes to integrate with kubelet
Protocol for communication between the kubelet and the local container runtime.
aka:
tags:
- fundamental
- fundamental
-->
<!--
The container runtime interface (CRI) is an API for container runtimes
to integrate with {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} on a node.
The main protocol for the communication between the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and Container Runtime.
-->
容器运行时接口Container Runtime InterfaceCRI是一组让容器运行时与节点上
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 集成的 API。
在 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} 与容器运行时之间通信的主要协议。
<!--more-->
<!--
For more information, see the [CRI](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md) API and specifications.
The Kubernetes Container Runtime Interface (CRI) defines the main
[gRPC](https://grpc.io) protocol for the communication between the
[node components](/docs/concepts/architecture/#node-components)
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
-->
更多信息,请参考[容器运行时接口CRI](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
API 与规范。
Kubernetes 容器运行时接口CRI定义了在[节点组件](/zh-cn/docs/concepts/architecture/#node-components)
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
和{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}之间通信的主要
[gRPC](https://grpc.io) 协议。
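
For context (this is a hedged illustration, not part of the glossary file above), tools such as `crictl` speak this same CRI gRPC protocol directly to the runtime; the socket path below is an assumption that varies by runtime:

```shell
# Query the container runtime over its CRI socket (containerd path shown as an example).
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
```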

View File

@ -0,0 +1,54 @@
---
title: PodTemplate
id: pod-template
date: 2024-10-13
short_description: >
创建 Pod 所用的模板。
aka:
- Pod 模板
tags:
- core-object
---
<!--
title: PodTemplate
id: pod-template
date: 2024-10-13
short_description: >
A template for creating Pods.
aka:
- pod template
tags:
- core-object
-->
<!--
An API object that defines a template for creating {{< glossary_tooltip text="Pods" term_id="pod" >}}.
The PodTemplate API is also embedded in API definitions for workload management, such as
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} or
{{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}.
-->
这个 API 对象定义了创建 {{< glossary_tooltip text="Pod" term_id="pod" >}} 时所用的模板。
PodTemplate API 也被嵌入在工作负载管理的 API 定义中,例如
{{< glossary_tooltip text="Deployment" term_id="deployment" >}} 或
{{< glossary_tooltip text="StatefulSet" term_id="StatefulSet" >}}。
<!--more-->
<!--
Pod templates allow you to define common metadata (such as labels, or a template for the name of a
new Pod) as well as to specify a pod's desired state.
[Workload management](/docs/concepts/workloads/controllers/) controllers use Pod templates
(embedded into another object, such as a Deployment or StatefulSet)
to define and manage one or more {{< glossary_tooltip text="Pods" term_id="pod" >}}.
When there can be multiple Pods based on the same template, these are called
{{< glossary_tooltip term_id="replica" text="replicas" >}}.
Although you can create a PodTemplate object directly, you rarely need to do so.
-->
Pod 模板允许你定义常见的元数据(例如标签,或新 Pod 名称的模板)以及 Pod 的期望状态。
[工作负载管理](/zh-cn/docs/concepts/workloads/controllers/)控制器使用 Pod 模板
(嵌入到另一个对象中,例如 Deployment 或 StatefulSet
来定义和管理一个或多个 {{< glossary_tooltip text="Pod" term_id="pod" >}}。
当多个 Pod 基于同一个模板时,这些 Pod 称为{{< glossary_tooltip term_id="replica" text="副本" >}}。
尽管你可以直接创建 PodTemplate 对象,但通常不需要这样做。
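
As a rough sketch of how the embedded template looks in practice (the names and image tag below are placeholders, not taken from the glossary above), a Deployment carries the pod template under `.spec.template`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:                   # the embedded pod template
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.27     # hypothetical image tag
```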

View File

@ -20,17 +20,17 @@ description: >-
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) collects pod and
container metrics via [cAdvisor](https://github.com/google/cadvisor). As an alpha feature,
Kubernetes lets you configure the collection of pod and container
metrics via the {{< glossary_tooltip term_id="container-runtime-interface" text="Container Runtime Interface">}} (CRI). You
metrics via the {{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI). You
must enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and
use a compatible CRI implementation (containerd >= 1.6.0, CRI-O >= 1.23.0) to
use the CRI based collection mechanism.
-->
[kubelet](/zh-cn/docs/reference/command-line-tools-reference/kubelet/) 通过
[cAdvisor](https://github.com/google/cadvisor) 收集 Pod 和容器指标。作为一个 Alpha 特性,
Kubernetes 允许你通过{{< glossary_tooltip term_id="container-runtime-interface" text="容器运行时接口">}}CRI
Kubernetes 允许你通过{{< glossary_tooltip term_id="cri" text="容器运行时接口">}}CRI
配置收集 Pod 和容器指标。要使用基于 CRI 的收集机制,你必须启用 `PodAndContainerStatsFromCRI`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
并使用兼容的 CRI 实现containerd >= 1.6.0, CRI-O >= 1.23.0)。
并使用兼容的 CRI 实现containerd >= 1.6.0, CRI-O >= 1.23.0)。
<!-- body -->
@ -44,7 +44,7 @@ collection with cAdvisor include:
-->
## CRI Pod 和容器指标 {#cri-pod-container-metrics}
当启用 `PodAndContainerStatsFromCRI` 时,Kubelet 轮询底层容器运行时以获取
当启用 `PodAndContainerStatsFromCRI` 时,kubelet 轮询底层容器运行时以获取
Pod 和容器统计信息,而不是直接使用 cAdvisor 检查主机系统。同直接使用 cAdvisor
收集信息相比,依靠容器运行时获取这些信息的好处包括:
@ -54,14 +54,14 @@ Pod 和容器统计信息,而不是直接使用 cAdvisor 检查主机系统。
again by the kubelet.
-->
- 潜在的性能改善,如果容器运行时在正常操作中已经收集了这些信息。
在这种情况下,这些数据可以被重用,而不是由 Kubelet 再次进行聚合。
在这种情况下,这些数据可以被重用,而不是由 kubelet 再次进行聚合。
<!--
- It further decouples the kubelet and the container runtime allowing collection of metrics for
container runtimes that don't run processes directly on the host with kubelet where they are
observable by cAdvisor (for example: container runtimes that use virtualization).
-->
- 这种做法进一步解耦了 Kubelet 和容器运行时。
对于使用 Kubelet 来在主机上运行进程的容器运行时,其行为可用 cAdvisor 观测;
- 这种做法进一步解耦了 kubelet 和容器运行时。
对于使用 kubelet 来在主机上运行进程的容器运行时,其行为可用 cAdvisor 观测;
对于其他运行时(例如,使用虚拟化的容器运行时)而言,
这种做法提供了允许收集容器运行时指标的可能性。
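
A minimal sketch of turning this on, assuming the kubelet is managed through a `KubeletConfiguration` file, could look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodAndContainerStatsFromCRI: true
```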

View File

@ -69,7 +69,7 @@ By default, Kubernetes fetches node summary metrics data using an embedded
[cAdvisor](https://github.com/google/cadvisor) that runs within the kubelet. If you
enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
in your cluster, and you use a container runtime that supports statistics access via
{{< glossary_tooltip term_id="container-runtime-interface" text="Container Runtime Interface">}} (CRI), then
{{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI), then
the kubelet [fetches Pod- and container-level metric data using CRI](/docs/reference/instrumentation/cri-pod-container-metrics), and not via cAdvisor.
-->
## 概要指标 API 源 {#summary-api-source}
@ -77,7 +77,7 @@ the kubelet [fetches Pod- and container-level metric data using CRI](/docs/refer
默认情况下Kubernetes 使用 kubelet 内运行的嵌入式 [cAdvisor](https://github.com/google/cadvisor)
获取节点概要指标数据。如果你在自己的集群中启用 `PodAndContainerStatsFromCRI`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
且你通过{{< glossary_tooltip term_id="container-runtime-interface" text="容器运行时接口">}}CRI使用支持统计访问的容器运行时
且你通过{{< glossary_tooltip term_id="cri" text="容器运行时接口">}}CRI使用支持统计访问的容器运行时
则 kubelet [将使用 CRI 来获取 Pod 和容器级别的指标数据](/zh-cn/docs/reference/instrumentation/cri-pod-container-metrics)
而不是 cAdvisor 来获取。
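
To inspect the summary data yourself, one common approach is to proxy through the API server (the node name below is a placeholder):

```shell
# Fetch the kubelet Summary API for a node via the API server proxy.
kubectl get --raw "/api/v1/nodes/node-1/proxy/stats/summary"
```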

View File

@ -0,0 +1,996 @@
---
api_metadata:
apiVersion: "resource.k8s.io/v1alpha3"
import: "k8s.io/api/resource/v1alpha3"
kind: "ResourceSlice"
content_type: "api_reference"
description: "ResourceSlice 表示一个或多个资源,这些资源位于同一个驱动所管理的、彼此相似的资源构成的资源池。"
title: "ResourceSlice v1alpha3"
weight: 18
---
<!--
api_metadata:
apiVersion: "resource.k8s.io/v1alpha3"
import: "k8s.io/api/resource/v1alpha3"
kind: "ResourceSlice"
content_type: "api_reference"
description: "ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver."
title: "ResourceSlice v1alpha3"
weight: 18
auto_generated: true
-->
`apiVersion: resource.k8s.io/v1alpha3`
`import "k8s.io/api/resource/v1alpha3"`
<!--
## ResourceSlice {#ResourceSlice}
ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver. A pool may span more than one ResourceSlice, and exactly how many ResourceSlices comprise a pool is determined by the driver.
At the moment, the only supported resources are devices with attributes and capacities. Each device in a given pool, regardless of how many ResourceSlices, must have a unique name. The ResourceSlice in which a device gets published may change over time. The unique identifier for a device is the tuple \<driver name>, \<pool name>, \<device name>.
-->
## ResourceSlice {#ResourceSlice}
ResourceSlice 表示一个或多个资源,这些资源位于同一个驱动所管理的、彼此相似的资源构成的资源池。
一个池可以包含多个 ResourceSlice一个池包含多少个 ResourceSlice 由驱动确定。
目前,所支持的资源只能是具有属性和容量的设备。
给定池中的每个设备,无论有多少个 ResourceSlice必须具有唯一名称。
发布设备的 ResourceSlice 可能会随着时间的推移而变化。
设备的唯一标识符是元组 \<驱动名称\>、\<池名称\>、\<设备名称\>。
<!--
Whenever a driver needs to update a pool, it increments the pool.Spec.Pool.Generation number and updates all ResourceSlices with that new number and new resource definitions. A consumer must only use ResourceSlices with the highest generation number and ignore all others.
When allocating all resources in a pool matching certain criteria or when looking for the best solution among several different alternatives, a consumer should check the number of ResourceSlices in a pool (included in each ResourceSlice) to determine whether its view of a pool is complete and if not, should wait until the driver has completed updating the pool.
-->
每当驱动需要更新池时pool.spec.pool.generation 编号加一,
并用新的编号和新的资源定义来更新所有 ResourceSlice。
资源用户必须仅使用 generation 编号最大的 ResourceSlice并忽略所有其他 ResourceSlice。
从池中分配符合某些条件的所有资源或在多个不同分配方案间寻找最佳方案时,
资源用户应检查池中的 ResourceSlice 数量(包含在每个 ResourceSlice 中),
以确定其对池的视图是否完整,如果不完整,则应等到驱动完成对池的更新。
<!--
For resources that are not local to a node, the node name is not set. Instead, the driver may use a node selector to specify where the devices are available.
This is an alpha type and requires enabling the DynamicResourceAllocation feature gate.
-->
对于非某节点本地的资源,节点名称不会被设置。
驱动可以使用节点选择算符来给出设备的可用位置。
此特性为 Alpha 级别,需要启用 DynamicResourceAllocation 特性门控。
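
As a sketch of what enabling this alpha API typically involves (illustrative flags only; how you pass them depends on how your control plane is deployed), both the feature gate and the `resource.k8s.io/v1alpha3` API group need to be switched on:

```shell
kube-apiserver --feature-gates=DynamicResourceAllocation=true \
  --runtime-config=resource.k8s.io/v1alpha3=true
```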
<hr>
- **apiVersion**: resource.k8s.io/v1alpha3
- **kind**: ResourceSlice
<!--
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
Standard object metadata
- **spec** (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSliceSpec" >}}">ResourceSliceSpec</a>), required
Contains the information published by the driver.
Changing the spec automatically increments the metadata.generation number.
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
标准的对象元数据。
- **spec** (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSliceSpec" >}}">ResourceSliceSpec</a>),必需
包含驱动所发布的信息。
更改 spec 会自动让 metadata.generation 编号加一。
## ResourceSliceSpec {#ResourceSliceSpec}
<!--
ResourceSliceSpec contains the information published by the driver in one ResourceSlice.
-->
ResourceSliceSpec 包含驱动在一个 ResourceSlice 中所发布的信息。
<hr>
<!--
- **driver** (string), required
Driver identifies the DRA driver providing the capacity information. A field selector can be used to list only ResourceSlice objects with a certain driver name.
Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver. This field is immutable.
-->
- **driver** (string),必需
driver 标明提供容量信息的 DRA 驱动。可以使用字段选择算符仅列举具有特定驱动名称的 ResourceSlice 对象。
字段值必须是 DNS 子域名并且应以驱动供应商所拥有的 DNS 域结尾。此字段是不可变更的。
<!--
- **pool** (ResourcePool), required
Pool describes the pool that this ResourceSlice belongs to.
<a name="ResourcePool"></a>
*ResourcePool describes the pool that ResourceSlices belong to.*
-->
- **pool** (ResourcePool),必需
pool 描述 ResourceSlice 所属的池。
<a name="ResourcePool"></a>
**ResourcePool 描述 ResourceSlice 所属的池。**
<!--
- **pool.generation** (int64), required
Generation tracks the change in a pool over time. Whenever a driver changes something about one or more of the resources in a pool, it must change the generation in all ResourceSlices which are part of that pool. Consumers of ResourceSlices should only consider resources from the pool with the highest generation number. The generation may be reset by drivers, which should be fine for consumers, assuming that all ResourceSlices in a pool are updated to match or deleted.
Combined with ResourceSliceCount, this mechanism enables consumers to detect pools which are comprised of multiple ResourceSlices and are in an incomplete state.
-->
- **pool.generation** (int64),必需
generation 跟踪池中随时间发生的变化。每当驱动更改池中一个或多个资源的某些内容时,
它必须为所有属于该池的 ResourceSlice 更改 generation。
ResourceSlice 的使用者应仅考虑池中 generation 编号最大的资源。
generation 可以被驱动重置,这对于使用者来说应该没问题,
前提是池中的所有 ResourceSlice 都已被更新以匹配,或都已被删除。
结合 resourceSliceCount,此机制让使用者能够检测资源池包含多个
ResourceSlice 且处于不完整状态的情况。
<!--
- **pool.name** (string), required
Name is used to identify the pool. For node-local devices, this is often the node name, but this is not required.
It must not be longer than 253 characters and must consist of one or more DNS sub-domains separated by slashes. This field is immutable.
-->
- **pool.name** (string),必需
name 用作池的标识符。对于节点本地设备,字段值通常是节点名称,但这不是必须的。
此字段不得超过 253 个字符,并且必须由一个或多个用斜杠分隔的 DNS 子域组成。此字段是不可变更的。
<!--
- **pool.resourceSliceCount** (int64), required
ResourceSliceCount is the total number of ResourceSlices in the pool at this generation number. Must be greater than zero.
Consumers can use this to check whether they have seen all ResourceSlices belonging to the same pool.
-->
- **pool.resourceSliceCount** (int64),必需
resourceSliceCount 是池中带有对应 generation 编号的 ResourceSlice 的总数。必须大于零。
资源用户可以使用此字段检查他们是否能看到属于同一池的所有 ResourceSlice。
<!--
- **allNodes** (boolean)
AllNodes indicates that all nodes have access to the resources in the pool.
Exactly one of NodeName, NodeSelector and AllNodes must be set.
-->
- **allNodes** (boolean)
allNodes 表示所有节点都可以访问池中的资源。
nodeName、nodeSelector 和 allNodes 之一必须被设置。
<!--
- **devices** ([]Device)
*Atomic: will be replaced during a merge*
Devices lists some or all of the devices in this pool.
Must not have more than 128 entries.
<a name="Device"></a>
*Device represents one individual hardware instance that can be selected based on its attributes. Besides the name, exactly one field must be set.*
-->
- **devices** ([]Device)
**原子性:将在合并期间被替换**
devices 列举池中的部分或全部设备。
列表大小不得超过 128 个条目。
<a name="Device"></a>
**Device 表示可以基于其属性进行选择的一个单独硬件实例。除 name 之外,必须且只能设置一个字段。**
<!--
- **devices.name** (string), required
Name is unique identifier among all devices managed by the driver in the pool. It must be a DNS label.
- **devices.basic** (BasicDevice)
Basic defines one device instance.
<a name="BasicDevice"></a>
*BasicDevice defines one device instance.*
-->
- **devices.name** (string),必需
name 是池中由驱动所管理的设备的标识符,在所有设备中唯一。此字段值必须是 DNS 标签。
- **devices.basic** (BasicDevice)
basic 定义一个设备实例。
<a name="BasicDevice"></a>
**BasicDevice 定义一个设备实例。**
<!--
- **devices.basic.attributes** (map[string]DeviceAttribute)
Attributes defines the set of attributes for this device. The name of each attribute must be unique in that set.
The maximum number of attributes and capacities combined is 32.
<a name="DeviceAttribute"></a>
*DeviceAttribute must have exactly one field set.*
-->
- **devices.basic.attributes** (map[string]DeviceAttribute)
attributes 定义设备的属性集。在该集合中每个属性的名称必须唯一。
attributes 和 capacities 两个映射合起来,最多包含 32 个属性。
<a name="DeviceAttribute"></a>
**DeviceAttribute 必须且只能设置一个字段。**
<!--
- **devices.basic.attributes.bool** (boolean)
BoolValue is a true/false value.
- **devices.basic.attributes.int** (int64)
IntValue is a number.
- **devices.basic.attributes.string** (string)
StringValue is a string. Must not be longer than 64 characters.
- **devices.basic.attributes.version** (string)
VersionValue is a semantic version according to semver.org spec 2.0.0. Must not be longer than 64 characters.
-->
- **devices.basic.attributes.bool** (boolean)
bool 字段值是 true/false。
- **devices.basic.attributes.int** (int64)
int 字段值是一个整数。
- **devices.basic.attributes.string** (string)
string 字段值是一个字符串。不得超过 64 个字符。
- **devices.basic.attributes.version** (string)
version 字段值是符合 semver.org 2.0.0 规范的语义版本。不得超过 64 个字符。
<!--
- **devices.basic.capacity** (map[string]<a href="{{< ref "../common-definitions/quantity#Quantity" >}}">Quantity</a>)
Capacity defines the set of capacities for this device. The name of each capacity must be unique in that set.
The maximum number of attributes and capacities combined is 32.
-->
- **devices.basic.capacity** (map[string]<a href="{{< ref "../common-definitions/quantity#Quantity" >}}">Quantity</a>)
capacity 定义设备的容量集。在该集合中每个容量的名称必须唯一。
attributes 和 capacities 两个映射合起来,最多包含 32 个属性。
<!--
- **nodeName** (string)
NodeName identifies the node which provides the resources in this pool. A field selector can be used to list only ResourceSlice objects belonging to a certain node.
This field can be used to limit access from nodes to ResourceSlices with the same node name. It also indicates to autoscalers that adding new nodes of the same type as some old node might also make new resources available.
Exactly one of NodeName, NodeSelector and AllNodes must be set. This field is immutable.
-->
- **nodeName** (string)
nodeName 标明提供池中资源的某个节点。可以使用字段选择算符仅列举属于特定节点的 ResourceSlice 对象。
此字段可用于限制节点只能访问具有相同节点名称的 ResourceSlice。
此字段还向自动扩缩容程序指明,添加与某些旧节点相同类型的新节点也可能使新的资源可用。
nodeName、nodeSelector 和 allNodes 三个字段必须设置其一。此字段是不可变更的。
<!--
- **nodeSelector** (NodeSelector)
NodeSelector defines which nodes have access to the resources in the pool, when that pool is not limited to a single node.
Must use exactly one term.
Exactly one of NodeName, NodeSelector and AllNodes must be set.
<a name="NodeSelector"></a>
*A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.*
-->
- **nodeSelector** (NodeSelector)
nodeSelector 定义当池所涉及的节点不止一个时,在哪些节点上可以访问到资源。
此字段中只能设置一个判定条件。
nodeName、nodeSelector 和 allNodes 三个字段必须设置其一。
<a name="NodeSelector"></a>
**NodeSelector 表示在一组节点上一个或多个标签查询结果的并集;
也就是说,它表示由节点选择算符条件所表示的选择算符的逻辑或计算结果。**
<!--
- **nodeSelector.nodeSelectorTerms** ([]NodeSelectorTerm), required
*Atomic: will be replaced during a merge*
Required. A list of node selector terms. The terms are ORed.
<a name="NodeSelectorTerm"></a>
*A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*
-->
- **nodeSelector.nodeSelectorTerms** ([]NodeSelectorTerm),必需
**原子性:将在合并期间被替换**
必需。节点选择算符条件的列表。这些条件最终会被按照逻辑或操作组合起来。
<a name="NodeSelectorTerm"></a>
**未指定或空的节点选择算符条件不会匹配任何对象。这些条件中的各项要求按逻辑与的关系组合。
TopologySelectorTerm 类型实现了 NodeSelectorTerm 的一个子集。**
<!--
- **nodeSelector.nodeSelectorTerms.matchExpressions** ([]<a href="{{< ref "../common-definitions/node-selector-requirement#NodeSelectorRequirement" >}}">NodeSelectorRequirement</a>)
*Atomic: will be replaced during a merge*
A list of node selector requirements by node's labels.
- **nodeSelector.nodeSelectorTerms.matchFields** ([]<a href="{{< ref "../common-definitions/node-selector-requirement#NodeSelectorRequirement" >}}">NodeSelectorRequirement</a>)
*Atomic: will be replaced during a merge*
A list of node selector requirements by node's fields.
-->
- **nodeSelector.nodeSelectorTerms.matchExpressions** ([]<a href="{{< ref "../common-definitions/node-selector-requirement#NodeSelectorRequirement" >}}">NodeSelectorRequirement</a>)
**原子性:将在合并期间被替换**
基于节点标签的节点选择算符要求的列表。
- **nodeSelector.nodeSelectorTerms.matchFields** ([]<a href="{{< ref "../common-definitions/node-selector-requirement#NodeSelectorRequirement" >}}">NodeSelectorRequirement</a>)
**原子性:将在合并期间被替换**
基于节点字段的节点选择算符要求的列表。
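
Pulling the fields above together, a hedged example of a single node-local ResourceSlice might look like this (the driver, node, device names, and values are placeholders):

```yaml
apiVersion: resource.k8s.io/v1alpha3
kind: ResourceSlice
metadata:
  name: node-1-gpu.example.com      # hypothetical object name
spec:
  driver: gpu.example.com           # hypothetical DRA driver
  nodeName: node-1                  # node-local pool, so nodeName is set
  pool:
    name: node-1
    generation: 1
    resourceSliceCount: 1
  devices:
  - name: gpu-0
    basic:
      attributes:
        vendor:
          string: example
      capacity:
        memory: 16Gi
```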
## ResourceSliceList {#ResourceSliceList}
<!--
ResourceSliceList is a collection of ResourceSlices.
-->
ResourceSliceList 是 ResourceSlice 的集合。
<hr>
- **apiVersion**: resource.k8s.io/v1alpha3
- **kind**: ResourceSliceList
<!--
- **items** ([]<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>), required
Items is the list of resource ResourceSlices.
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
Standard list metadata
-->
- **items** ([]<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>),必需
items 是 ResourceSlice 资源的列表。
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
标准的列表元数据。
<!--
## Operations {#Operations}
<hr>
### `get` read the specified ResourceSlice
#### HTTP Request
-->
## 操作 {#Operations}
<hr>
### `get` 读取指定的 ResourceSlice
#### HTTP 请求
GET /apis/resource.k8s.io/v1alpha3/resourceslices/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the ResourceSlice
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
ResourceSlice 的名称。
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): OK
401: Unauthorized
<!--
### `list` list or watch objects of kind ResourceSlice
#### HTTP Request
-->
### `list` 列举或监视类别为 ResourceSlice 的对象
#### HTTP 请求
GET /apis/resource.k8s.io/v1alpha3/resourceslices
<!--
#### Parameters
- **allowWatchBookmarks** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
-->
#### 参数
- **allowWatchBookmarks** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSliceList" >}}">ResourceSliceList</a>): OK
401: Unauthorized
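
If you just want to inspect these objects from the command line, the path documented above maps onto, for example (assuming the alpha API is enabled in your cluster):

```shell
# List ResourceSlices via kubectl, or hit the raw path shown in this section.
kubectl get resourceslices
kubectl get --raw "/apis/resource.k8s.io/v1alpha3/resourceslices"
```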
<!--
### `create` create a ResourceSlice
#### HTTP Request
-->
### `create` 创建 ResourceSlice
#### HTTP 请求
POST /apis/resource.k8s.io/v1alpha3/resourceslices
<!--
#### Parameters
- **body**: <a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **body**: <a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): OK
201 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): Created
202 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): Accepted
401: Unauthorized
<!--
### `update` replace the specified ResourceSlice
#### HTTP Request
-->
### `update` 替换指定的 ResourceSlice
#### HTTP 请求
PUT /apis/resource.k8s.io/v1alpha3/resourceslices/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the ResourceSlice
- **body**: <a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
ResourceSlice 的名称。
- **body**: <a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): OK
201 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): Created
401: Unauthorized
<!--
### `patch` partially update the specified ResourceSlice
#### HTTP Request
-->
### `patch` 部分更新指定的 ResourceSlice
#### HTTP 请求
PATCH /apis/resource.k8s.io/v1alpha3/resourceslices/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the ResourceSlice
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
ResourceSlice 的名称。
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): OK
201 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): Created
401: Unauthorized
<!--
### `delete` delete a ResourceSlice
#### HTTP Request
-->
### `delete` 删除 ResourceSlice
#### HTTP 请求
DELETE /apis/resource.k8s.io/v1alpha3/resourceslices/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the ResourceSlice
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
-->
#### 参数
- **name** (**路径参数**): string必需
ResourceSlice 的名称。
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): OK
202 (<a href="{{< ref "../workload-resources/resource-slice-v1alpha3#ResourceSlice" >}}">ResourceSlice</a>): Accepted
401: Unauthorized
<!--
### `deletecollection` delete collection of ResourceSlice
#### HTTP Request
-->
### `deletecollection` 删除 ResourceSlice 的集合
#### HTTP 请求
DELETE /apis/resource.k8s.io/v1alpha3/resourceslices
<!--
#### Parameters
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
-->
#### 参数
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
401: Unauthorized

View File

@ -186,6 +186,7 @@ Leader Migration can be enabled without a configuration. Please see
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: kube-controller-manager
@ -208,6 +209,7 @@ between both parties of the migration.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: *
@ -260,6 +262,7 @@ which has the same effect.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: cloud-controller-manager
@ -400,6 +403,7 @@ controllers.
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
- name: route
component: *
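Since each hunk above shows only part of the file, here is a hedged, self-contained sketch of a full configuration with the new `resourceLock` field; the controller names other than `route` are illustrative:

```yaml
kind: LeaderMigrationConfiguration
apiVersion: controllermanager.config.k8s.io/v1
leaderName: cloud-provider-extraction-migration
resourceLock: leases
controllerLeaders:
  - name: route
    component: cloud-controller-manager
  - name: service
    component: cloud-controller-manager
  - name: cloud-node-lifecycle
    component: cloud-controller-manager
```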

View File

@ -233,7 +233,7 @@ discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
-->
```shell
# 此示例假设节点名称与主机名称匹配,并且可通过 SSH 访问。
NODES=($(kubectl get nodes -o name))
NODES=($( kubectl get node -o jsonpath='{.items[*].status.addresses[?(.type == "Hostname")].address}' ))
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>

View File

@ -1,27 +1,51 @@
# i18n strings for the Portuguese (main) site.
# NOTE: Please keep the entries in the same order as the English version (en.toml).
[api_reference_title]
other = "Referência da API"
[auto_generated_edit_notice]
other = "(página gerada automaticamente)"
[auto_generated_pageinfo]
other = """<p>Esta página foi gerada automaticamente.</p><p>Se você planeja relatar um problema nesta página, mencione que esta página é gerada automaticamente na descrição da sua <i>issue</i>. A correção pode necessitar ser efetuada em outra parte do projeto Kubernetes.</p>"""
[blog_post_show_more]
other = "Mostrar mais postagens"
[banner_acknowledgement]
other = "Ocultar este aviso"
[case_study_prefix]
other = "Estudo de caso:"
[caution]
other = "Cuidado:"
[cleanup_heading]
other = "Limpando"
[community_events_calendar]
[china_icp_license]
other = "Licença ICP:"
[community_advice_guidelines]
other = "Você pode ler também sobre nossas {{ .link }}."
[community_calendar_name]
other = "Calendário de Eventos"
[community_contributor_site_name]
other = "Site da pessoa contribuidora"
[community_forum_name]
other = "Fórum"
[community_github_name]
other = "GitHub"
[community_server_fault_name]
other = "Server Fault"
[community_slack_name]
other = "Slack"
@ -29,7 +53,10 @@ other = "Slack"
other = "Stack Overflow"
[community_twitter_name]
other = "Twitter"
other = "X (antigo Twitter)"
[community_x_name]
other = "X (antigo Twitter)"
[community_youtube_name]
other = "YouTube"
@ -37,6 +64,9 @@ other = "YouTube"
[conjunction_1]
other = "e"
[copy_sample_to_clipboard]
other = "Copiar {{ .filename }} para a área de transferência"
[cve_id]
other = "ID da CVE"
@ -64,6 +94,13 @@ other = " a documentação não é mais mantida ativamente. A versão que você
[deprecation_file_warning]
other = "Descontinuado"
[outdated_content_title]
other = "Este documento pode estar desatualizado"
[outdated_content_message]
other = "Este documento possui uma data de atualização mais antiga que o documento original. Portanto, este conteúdo pode estar desatualizado. Se você lê inglês, veja a versão em inglês para acessar a versão mais atualizada: "
[dockershim_message]
other = """O Dockershim foi removido do projeto Kubernetes na versão 1.24. Leia a <a href="/dockershim">seção de Perguntas Frequentes sobre a remoção do Dockershim</a> para mais detalhes."""
@ -79,6 +116,9 @@ other = "Eu sou..."
[docs_label_users]
other = "Usuários"
[docs_page_versions]
other = "Este(a) %s cobre a versão %s do Kubernetes. Se você deseja usar uma versão diferente do Kubernetes, consulte as seguintes páginas:"
[docs_version_current]
other = "(esta documentação)"
@ -100,9 +140,51 @@ other = "Talvez você estivesse procurando por:"
[examples_heading]
other = "Exemplos"
[feature_gate_enabled]
other = "(habilitado por padrão: {{ .enabled }})"
[feature_gate_stage_alpha]
other = "Alfa"
[feature_gate_stage_beta]
other = "Beta"
[feature_gate_stage_stable]
other = "GA"
[feature_gate_stage_deprecated]
other = "Descontinuado"
[feature_gate_table_header_default]
other = "Padrão"
[feature_gate_table_header_feature]
other = "Funcionalidade"
[feature_gate_table_header_from]
other = "De"
[feature_gate_table_header_since]
other = "Até"
[feature_gate_table_header_stage]
other = "Estágio"
[feature_gate_table_header_to]
other = "Para"
[feature_gate_table_header_until]
other = "Até"
[feature_state]
other = "ESTADO DA FUNCIONALIDADE:"
[feature_state_kubernetes_label]
other = "Kubernetes"
[feature_state_feature_gate_tooltip]
other = "Feature Gate:"
[feedback_heading]
other = "Comentários"
@ -139,8 +221,8 @@ do Katacoda pela O'Reilly Media em 2019.</p>
auxílio às pessoas dando seus primeiros passos nas suas jornadas de aprendizagem
do Kubernetes.</p>
<p>Os tutoriais encerrarão seu funcionamento no dia <b>31 de março de 2023</b>.
Para mais informações, leia
"<a href="/pt-br/blog/2023/02/14/kubernetes-katacoda-tutorials-stop-from-2023-03-31/">os tutoriais gratuitos do Katacoda estão sendo encerrados</a>."</p>"""
Você está vendo este aviso porque esta página em particular não foi atualizada ainda após
o encerramento dos tutoriais interativos.</p>"""
[latest_release]
other = "Versão mais recente:"
@ -155,7 +237,7 @@ other = "Próximo >>"
other = "<< Anterior"
[layouts_case_studies_list_tell]
other = "Conte seu caso"
other = "Conte sua história"
[layouts_docs_glossary_aka]
other = "Também conhecido como"
@ -266,7 +348,15 @@ other = "relatar um problema"
other = "Obrigado pelo feedback. Se você tiver uma pergunta específica sobre como utilizar o Kubernetes, faça em"
[layouts_docs_search_fetching]
other = "Buscando resultados.."
other = "Buscando resultados..."
[legacy_repos_message]
other = """Os repositórios legados de pacotes (`apt.kubernetes.io` e `yum.kubernetes.io`) foram
[descontinuados e congelados a partir de 13 de setembro de 2023](/blog/2023/08/31/legacy-package-repository-deprecation/).
**A utilização dos [novos repositórios de pacotes hospedados em `pkgs.k8s.io`](/blog/2023/08/15/pkgs-k8s-io-introduction/)
é fortemente recomendada e requerida para instalar versões do Kubernetes lançadas após 13 de setembro de 2023.**
Os repositórios legados descontinuados e seus conteúdos podem ser removidos a qualquer momento no futuro e sem
um período de aviso prévio. Os novos repositórios de pacotes fornecem downloads para versões do Kubernetes a partir da v1.24.0."""
[main_by]
other = "por"
@ -286,12 +376,6 @@ other = """The Linux Foundation &reg;. Todos os direitos reservados. A Linux Fou
[main_documentation_license]
other = """Os autores do Kubernetes | Documentação Distribuída sob <a href="https://git.k8s.io/website/LICENSE" class="light-text">CC BY 4.0</a>"""
[main_edit_this_page]
other = "Edite essa página"
[main_github_create_an_issue]
other = "Abra um bug"
[main_github_invite]
other = "Interessado em mergulhar na base de código do Kubernetes?"
@ -302,7 +386,7 @@ other = "Veja no Github"
other = "Recursos do Kubernetes"
[main_kubeweekly_baseline]
other = "Interessado em receber as últimas novidades sobre Kubernetes? Inscreva-se no KubeWeekly."
other = "Interessado em receber as últimas novidades sobre o Kubernetes? Inscreva-se no KubeWeekly."
[main_kubernetes_past_link]
other = "Veja boletins passados"
@ -311,7 +395,7 @@ other = "Veja boletins passados"
other = "Inscreva-se"
[main_page_history]
other ="História da página"
other ="Histórico da página"
[main_page_last_modified_on]
other = "Última modificação da página em"
@ -322,6 +406,9 @@ other = "Ler sobre"
[main_read_more]
other = "Consulte Mais informação"
[monthly_patch_release]
other = "Lançamento de Correção Mensal"
[not_applicable]
other = "Não se aplica"
@ -349,17 +436,76 @@ other = "Antes de você começar"
[previous_patches]
other = "Versões de correção:"
[print_printable_section]
other = "Essa é a versão completa de impressão dessa seção"
[release_binary_alternate_links]
other = """Você pode encontrar links para baixar os componentes do Kubernetes (e seus arquivos _checksum_) nos arquivos [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG).
Alternativamente, utilize o website [downloadkubernetes.com](https://www.downloadkubernetes.com/) para filtrar por versão e arquitetura."""
[print_click_to_print]
other = "Clique aqui para imprimir"
[release_binary_arch]
other = "Arquitetura"
[print_show_regular]
other = "Retornar à visualização normal"
[release_binary_arch_option]
other = "Arquiteturas"
[print_entire_section]
other = "Imprimir toda essa seção"
[release_binary_copy_link]
other = "Copiar Link"
[release_binary_copy_link_certifcate]
other = "certificado"
[release_binary_copy_link_checksum]
other = "arquivo checksum"
[release_binary_copy_link_signature]
other = "assinatura"
[release_binary_copy_link_tooltip]
other = "copiar para a área de transferência"
[release_binary_download]
other = "Baixar Binário"
[release_binary_download_tooltip]
other = "baixar o arquivo binário"
[release_binary_options]
other = "Opções de Download"
[release_binary_os]
other = "Sistema Operacional"
[release_binary_os_option]
other = "Sistemas Operacionais"
[release_binary_os_darwin]
other = "darwin"
[release_binary_os_linux]
other = "linux"
[release_binary_os_windows]
other = "windows"
[release_binary_table_caption]
other = "Baixe os binários dos componentes do Kubernetes"
# NOTA: <current-version> é um placeholder para a versão real preenchida pelo shortcode 'release-binaries'.
# Não modificar ou localizar o placeholder <current-version>.
[release_binary_section]
other = """Você pode encontrar os links para baixar os componentes do Kubernetes na versão <current-version> (junto com seus arquivos _checksum_) abaixo.
Para acessar downloads de versões mais antigas ainda suportadas, visite o link da documentação respectiva
para [versões mais antigas](https://kubernetes.io/docs/home/supported-doc-versions/#versions-older) ou utilize [downloadkubernetes.com](https://www.downloadkubernetes.com/)."""
# NOTA: <current-version> e <current-changelog-url> são placeholders preenchidos pelo shortcode 'release-binaries'.
# Não localize ou modifique os placeholders <current-version> e <current-changelog-url>.
[release_binary_section_note]
other = """Para baixar versões de correção mais antigas dos componentes do Kubernetes na versão <current-version> (e seus arquivos _checksum_),
por favor consulte o arquivo [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG/CHANGELOG-<current-changelog-url>.md)."""
[release_binary_version]
other = "Versão"
[release_binary_version_option]
other = "Versão Mais Recente"
[release_date_after]
other = ")"
@ -368,16 +514,33 @@ other = ")"
other = "(liberada em "
# See https://gohugo.io/functions/format/#gos-layout-string
# Use a suitable format for your locale
# Use a suitable format for your locale. Avoid using :date_short
[release_date_format]
# example: other = ":date_medium"
other = "2006-01-02"
# See https://gohugo.io/functions/format/#gos-layout-string
# Use a suitable format for your locale.
[release_date_format_month]
other = "January 2006"
[release_cherry_pick_deadline]
other = "Fim do prazo para Cherry-Pick"
[release_end_of_life_date]
other = "Data de fim da vida útil"
# Also see release_maintenance_and_end_of_life_details_past
[release_maintenance_and_end_of_life_details_current]
other = "O Kubernetes {{ .minor_version }} entrará em modo de manutenção em {{ .maintenance_mode_start_date }}; a data de fim da vida útil do Kubernetes {{ .minor_version }} é {{ .release_eol_date }}."
# Used if the maintenance mode date is in the past, otherwise
# see release_maintenance_and_end_of_life_details_current
[release_maintenance_and_end_of_life_details_past]
other = """ **O Kubernetes {{ .minor_version }} entrou em modo de manutenção em {{ .maintenance_mode_start_date }}**.
A data de fim da vida útil do Kubernetes {{ .minor_version }} é {{ .release_eol_date }}."""
[release_full_details_initial_text]
other = "Informações completas da versão"
@ -390,7 +553,10 @@ other = "Versão menor"
[release_info_next_patch]
other = "A próxima versão de correção é **%s**."
# Descontinuado. Utilizar release_maintenance_and_end_of_life_details_current
# e release_maintenance_and_end_of_life_details_past no lugar desta string de localização.
[release_info_eol]
# Marcar com vazio quando todas as referências na localização estiverem usando as novas strings.
other = "**%s** entra em modo de manutenção em **%s** e o fim da vida útil é em **%s**."
[release_note]
@ -415,17 +581,22 @@ other = "Inscrever-se"
other = "Sinopse"
[thirdparty_message]
other = """Esta seção tem links para projetos de terceiros que fornecem a funcionalidade exigida pelo Kubernetes. Os autores do projeto Kubernetes não são responsáveis por esses projetos. Esta página obedece as <a href="https://github.com/cncf/foundation/blob/master/website-guidelines.md" target="_blank">diretrizes de conteúdo do site CNCF</a>, listando os itens em ordem alfabética. Para adicionar um projeto a esta lista, leia o <a href="/docs/contribute/style/content-guide/#third-party-content">guia de conteúdo</a> antes de enviar sua alteração."""
other = """Esta seção contém links para projetos de terceiros que fornecem a funcionalidade exigida pelo Kubernetes. Os autores do projeto Kubernetes não são responsáveis por esses projetos. Esta página obedece as <a href="https://github.com/cncf/foundation/blob/master/website-guidelines.md" target="_blank">diretrizes de conteúdo do site CNCF</a>, listando os itens em ordem alfabética. Para adicionar um projeto a esta lista, leia o <a href="/docs/contribute/style/content-guide/#third-party-content">guia de conteúdo</a> antes de enviar sua alteração."""
[thirdparty_message_edit_disclaimer]
other = "Aviso de conteúdo de terceiros"
[thirdparty_message_single_item]
other = """&#128711; Este item aponta para um projeto ou produto de terceiros que não é parte do Kubernetes. <a class="alert-more-info" href="#third-party-content-disclaimer">Mais informações</a>"""
[thirdparty_message_disclaimer]
other = """<p>Itens nesta página referem-se a produtos ou projetos de terceiros que fornecem a funcionalidade requerida pelo Kubernetes. Os autores do projeto Kubernetes não são responsáveis por estes produtos ou projetos de terceiros. Veja as <a href="https://github.com/cncf/foundation/blob/master/website-guidelines.md" target="_blank">diretrizes de conteúdo do site CNCF</a> para mais detalhes.</p><p>Você deve ler o <a href="/pt-br/docs/contribute/style/content-guide/#third-party-content">guia de conteúdo</a> antes de propor alterações que incluam links extras de terceiros.</p>"""
[thirdparty_message_vendor]
other = """Itens nesta página referem-se a fornecedores externos ao Kubernetes. Os autores do projeto Kubernetes não são responsáveis por estes produtos ou projetos de terceiros. Para adicionar um fornecedor, produto ou projeto a esta lista, consulte o <a href="/docs/contribute/style/content-guide/#third-party-content">guia de conteúdo</a> antes de enviar uma alteração. <a href="#third-party-content-disclaimer">Mais informações.</a>"""
[translated_by]
other = "Traduzido Por"
[ui_search_placeholder]
other = "Procurar"