Merge main into dev-1.32 to keep in sync

pull/48252/head
Rodolfo Albuquerque 2024-10-08 09:34:16 -03:00
commit ae1af38c53
72 changed files with 3522 additions and 505 deletions

View File

@ -16,7 +16,6 @@ aliases:
- salaxander
- sftim
- tengqm
- drewhagen # RT 1.31 temp acting Docs lead
sig-docs-localization-owners: # Admins for localization content
- a-mccarthy
- divya-mohan0209
@ -64,8 +63,7 @@ aliases:
- salaxander
- sftim
- tengqm
- Princesso # RT 1.31 Docs Lead
- drewhagen # RT 1.31 temp acting Docs lead
- chanieljdan # RT 1.32 Docs Lead
sig-docs-en-reviews: # PR reviews for English content
- dipesh-rawat
- divya-mohan0209
@ -81,8 +79,8 @@ aliases:
- shannonxtreme
- tengqm
- windsonsea
- Princesso # RT 1.31 Docs Lead
- drewhagen # RT 1.31 temp acting Docs lead
- Princesso
- drewhagen
sig-docs-es-owners: # Admins for Spanish content
- electrocucaracha
- krol3
@ -228,12 +226,12 @@ aliases:
- MaxymVlasov
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- bentheelder
- aojea
- BenTheElder
- justaugustus
- mrbobbytables
- pacoxu
- palnabarun
- pohly
- saschagrunert
- soltysh
# authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES
sig-release-leads:

View File

@ -169,7 +169,7 @@ sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
## Contributing to the docs {#contributing-to-the-docs}
Clicking the **Fork** button at the top right of the GitHub page creates a copy of this repository linked to your GitHub account. This copy is called a *fork*. You can make as many changes as you like in your fork, and whenever you want those changes reflected in this repository, create a Pull Reqeust from your fork.
Clicking the **Fork** button at the top right of the GitHub page creates a copy of this repository linked to your GitHub account. This copy is called a *fork*. You can make as many changes as you like in your fork, and whenever you want those changes reflected in this repository, create a Pull Request from your fork.
When a Pull Request is opened, a reviewer takes responsibility for providing clear, actionable feedback. As the owner of the Pull Request, **it is your responsibility to edit your Pull Request and address that feedback.**

View File

@ -893,6 +893,11 @@ section#cncf {
}
}
footer.row {
margin-left: initial;
margin-right: initial;
}
/* DOCUMENTATION */
// nav-tabs and tab-content

View File

@ -266,16 +266,26 @@ body.td-404 main .error-details {
/* FOOTER */
footer {
background-color: #303030;
background-color: #202020;
/* darkened later in this file */
background-image: url("/images/texture.png");
padding: 1rem !important;
min-height: initial !important;
justify-content: center;
> div, > p {
max-width: 95%;
@media only screen and (min-width: 768px) {
max-width: calc(min(80rem,90vw)); // avoid spreading too wide
}
color: inherit;
background: transparent;
a:hover {
color: inherit;
background: transparent;
text-decoration: underline;
}
}
> .footer__links {
@ -313,6 +323,50 @@ footer {
}
}
footer {
background-image: linear-gradient(rgba(0, 0, 0, 0.527),rgba(0, 0, 0, 0.5)) , url("/images/texture.png");
color: #e9e9e9;
}
// Custom footer sizing
@media (min-width: 800px) and (max-width: 1279px) {
footer {
ul.footer-icons {
min-width: 17.5vw;
display: flex;
flex-wrap: nowrap;
flex-direction: row;
justify-content: space-evenly;
}
.col-sm-2 {
flex: 0 0 22.5%;
max-width: 22.5%;
}
.footer-main.text-center {
flex: 0 0 50%;
max-width: 50%;
}
}
}
@media (max-width: 799px) {
footer ul.footer-icons {
display: flex;
flex-wrap: nowrap;
flex-direction: column;
align-items: flex-start;
row-gap: 0.5em;
}
footer div.order-1 ul.footer-icons {
margin-left: auto;
}
footer div.order-3 ul.footer-icons {
margin-right: auto;
}
}
/* SIDE-DRAWER MENU */
.pi-pushmenu .sled {

View File

@ -127,7 +127,7 @@ To install kubectl on Linux, the following options are available:
```shell
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo apt-get install -y ca-certificates curl gnupg
```
If you are using Debian 9 (stretch) or older, you must additionally install the `apt-transport-https` package:
@ -136,16 +136,20 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten:
sudo apt-get install -y apt-transport-https
```
2. Download the public Google Cloud signing key:
2. Download the public signing key for the Kubernetes package repositories:
```shell
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
# If the directory `/etc/apt/keyrings` does not exist, create it before running the curl command; see the following line
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring
```
3. Kubernetes to the `apt` repository:
3. Add the Kubernetes `apt` repository:
```shell
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found work correctly
```
4. Update the `apt` package index with the new repository and install kubectl:

View File

@ -19,7 +19,7 @@ Kubernetes?**
**Kensei**: Hi, thanks for the opportunity! I'm Kensei Nakada
([@sanposhiho](https://github.com/sanposhiho/)), a software engineer at
[Tetrate.io](https://tetrate.io/). I have been contributing to Kubernetes in my free time for more
than 3 years, and now I'm an approver of SIG-Scheduling in Kubernetes. Also, I'm a founder/owner of
than 3 years, and now I'm an approver of SIG Scheduling in Kubernetes. Also, I'm a founder/owner of
two SIG subprojects,
[kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator) and
[kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension).
@ -32,14 +32,14 @@ brief overview of SIG Scheduling and explain its role within the Kubernetes ecos
**KN**: As the name implies, our responsibility is to enhance scheduling within
Kubernetes. Specifically, we develop the components that determine which Node is the best place for
each Pod. In Kubernetes, our main focus is on maintaining the
[kube-scheduler](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/), along
[kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/), along
with other scheduling-related components as part of our SIG subprojects.
**AP: I see, got it! That makes me curious--what recent innovations or developments has SIG
Scheduling introduced to Kubernetes scheduling?**
**KN**: From a feature perspective, there have been [several
enhancements](https://kubernetes.io/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/)
**KN**: From a feature perspective, there have been
[several enhancements](/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/)
to `PodTopologySpread` recently. `PodTopologySpread` is a relatively new feature in the scheduler,
and we are still in the process of gathering feedback and making improvements.
@ -53,59 +53,58 @@ reducing the likelihood of wasting scheduling cycles.
**A: That sounds interesting! Are there any other interesting topics or projects you are currently
working on within SIG Scheduling?**
**KN**: I'm leading the development of `QueueingHint` which I just shared. Given that it's a big new
challenge for us, we've been facing many unexpected challenges, especially around the scalability,
and were trying to solve each of them to eventually enable it by default.
And also, I believe
[kube-scheduler-wasm-extention](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)
(SIG sub project) that I started last year would be interesting to many people. Kubernetes has
[kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)
(a SIG subproject) that I started last year would be interesting to many people. Kubernetes has
various extensions from many components. Traditionally, extensions are provided via webhooks
([extender](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/scheduler_extender.md)
in the scheduler) or Go SDK ([Scheduling
Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/) in the
scheduler). However, these come with drawbacks - performance issues with webhooks and the need to
in the scheduler) or Go SDK ([Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/)
in the scheduler). However, these come with drawbacks - performance issues with webhooks and the need to
rebuild and replace schedulers with Go SDK, posing difficulties for those seeking to extend the
scheduler but lacking familiarity with it. The project is trying to introduce a new solution to
this general challenge - a [WebAssembly](https://webassembly.org/) based extension. Wasm allows
users to build plugins easily, without worrying about recompiling or replacing their scheduler, and
sidestepping performance concerns.
Through this project, sig-scheduling has been learning valuable insights about WebAssembly's
Through this project, SIG Scheduling has been learning valuable insights about WebAssembly's
interaction with large Kubernetes objects. And I believe the experience that we're gaining should be
useful broadly within the community, beyond sig-scheduling.
useful broadly within the community, beyond SIG Scheduling.
**A: Definitely! Now, there are currently 8 subprojects inside SIG Scheduling. Would you like to
**A: Definitely! Now, there are 8 subprojects inside SIG Scheduling. Would you like to
talk about them? Are there some interesting contributions by those teams you want to highlight?**
**KN**: Let me pick up three sub projects; Kueue, KWOK and descheduler.
**KN**: Let me pick up three subprojects: Kueue, KWOK and descheduler.
[Kueue](https://github.com/kubernetes-sigs/kueue):
[Kueue](https://github.com/kubernetes-sigs/kueue)
: Recently, many people have been trying to manage batch workloads with Kubernetes, and in 2022,
Kubernetes community founded
[WG-Batch](https://github.com/kubernetes/community/blob/master/wg-batch/README.md) for better
support for such batch workloads in Kubernetes. [Kueue](https://github.com/kubernetes-sigs/kueue)
is a project that takes a crucial role for it. It's a job queueing controller, deciding when a job
should wait, when a job should be admitted to start, and when a job should be preempted. Kueue aims
to be installed on a vanilla Kubernetes cluster while cooperating with existing matured controllers
(scheduler, cluster-autoscaler, kube-controller-manager, etc).
[KWOK](https://github.com/kubernetes-sigs/kwok):
[KWOK](https://github.com/kubernetes-sigs/kwok)
: KWOK is a component in which you can create a cluster of thousands of Nodes in seconds. It's
mostly useful for simulation/testing as a lightweight cluster, and actually another SIG
subproject, [kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator),
uses KWOK in the background.
[descheduler](https://github.com/kubernetes-sigs/descheduler):
: Descheduler is a component recreating pods that are running on undesired Nodes. In Kubernetes,
scheduling constraints (`PodAffinity`, `NodeAffinity`, `PodTopologySpread`, etc) are honored only at
Pod scheduling time, but it's not guaranteed that the constraints remain satisfied afterwards.
Descheduler evicts Pods violating their scheduling constraints (or other undesired conditions) so
that they're recreated and rescheduled.
[descheduler](https://github.com/kubernetes-sigs/descheduler)
[Descheduling Framework](https://github.com/kubernetes-sigs/descheduler/blob/master/keps/753-descheduling-framework/README.md).
: One very interesting on-going project, similar to [Scheduling
Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/) in the
[Descheduling Framework](https://github.com/kubernetes-sigs/descheduler/blob/master/keps/753-descheduling-framework/README.md)
: One very interesting on-going project, similar to
[Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/) in the
scheduler, aiming to make descheduling logic extensible and allow maintainers to focus on building
a core engine of descheduler.
@ -125,27 +124,26 @@ improving our components over the years.
**AP: Kubernetes is a community-driven project. Any recommendations for new contributors or
beginners looking to get involved and contribute to SIG scheduling? Where should they start?**
**KN**: Let me start with a general recommendation for contributing to any SIG: a common approach is
to look for
**KN**: Let me start with a general recommendation for contributing to any SIG: a common approach is to look for
[good-first-issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
However, you'll soon realize that many people worldwide are trying to contribute to the Kubernetes
repository.
I suggest starting by examining the implementation of a component that interests you. If you have
any questions about it, ask in the corresponding Slack channel (e.g., #sig-scheduling for the
scheduler, #sig-node for kubelet, etc). Once you have a rough understanding of the implementation,
look at issues within the SIG (e.g.,
[sig-scheduling](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Asig%2Fscheduling)),
where you'll find more unassigned issues compared to good-first-issue ones. You may also want to
filter issues with the
[kind/cleanup](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue++label%3Akind%2Fcleanup+)
label, which often indicates lower-priority tasks and can be starting points.
Specifically for SIG Scheduling, you should first understand the [Scheduling
Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/), which is
the fundamental architecture of kube-scheduler. Most of the implementation is found in
[pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler). I suggest
starting with
Specifically for SIG Scheduling, you should first understand the
[Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/), which is
the fundamental architecture of kube-scheduler. Most of the implementation is found in
[pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler).
I suggest starting with
[ScheduleOne](https://github.com/kubernetes/kubernetes/blob/0590bb1ac495ae8af2a573f879408e48800da2c5/pkg/scheduler/schedule_one.go#L66)
function and then exploring deeper from there.
@ -154,15 +152,14 @@ sub-projects. These typically have fewer maintainers and offer more opportunitie
significant impact. Despite being called "sub" projects, many have a large number of users and a
considerable impact on the community.
And last but not least, remember contributing to the community isn't just about code. While I
talked a lot about the implementation contribution, there are many ways to contribute, and each one
is valuable. One comment to an issue, one feedback to an existing feature, one review comment in PR,
one clarification on the documentation; every small contribution helps drive the Kubernetes
ecosystem forward.
**AP: Those are some pretty useful tips! And if I may ask, how do you assist new contributors in
getting started, and what skills are contributors likely to learn by participating in SIG
Scheduling?**
getting started, and what skills are contributors likely to learn by participating in SIG Scheduling?**
**KN**: Our maintainers are available to answer your questions in the #sig-scheduling Slack
channel. By participating, you'll gain a deeper understanding of Kubernetes scheduling and have the
@ -178,9 +175,8 @@ pain points?**
**KN**: Scheduling in Kubernetes can be quite challenging because of the diverse needs of different
organizations with different business requirements. Supporting all possible use cases in
kube-scheduler is impossible. Therefore, extensibility is a key focus for us. A few years ago, we
rearchitected kube-scheduler with [Scheduling
Framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/), which
offers flexible extensibility for users to implement various scheduling needs through plugins. This
rearchitected kube-scheduler with [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/),
which offers flexible extensibility for users to implement various scheduling needs through plugins. This
allows maintainers to focus on the core scheduling features and the framework runtime.
Another major issue is maintaining sufficient scheduling throughput. Typically, a Kubernetes cluster
@ -190,7 +186,7 @@ and, consequently, the cluster's scalability. Although we have an internal perfo
unfortunately, we sometimes overlook performance degradation in less common scenarios. It's
difficult as even small changes, which look irrelevant to performance, can lead to degradation.
**AP: What are some upcoming goals or initiatives for SIG Scheduling? How do you envision the SIG evolving in the future?**
**KN**: Our primary goal is always to build and maintain _extensible_ and _stable_ scheduling
runtime, and I bet this goal will remain unchanged forever.
@ -198,7 +194,7 @@ runtime, and I bet this goal will remain unchanged forever.
As already mentioned, extensibility is key to solving the challenge of the diverse needs of
scheduling. Rather than trying to support every different use case directly in kube-scheduler, we
will continue to focus on enhancing extensibility so that it can accommodate various use
cases. [kube-scheduler-wasm-extention](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)
cases. [kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)
that I mentioned is also part of this initiative.
Regarding stability, introducing new optimizations like QueueHint is one of our
@ -217,7 +213,7 @@ about SIG Scheduling?**
**KN**: Scheduling is one of the most complicated areas in Kubernetes, and you may find it difficult
at first. But, as I shared earlier, you can find many opportunities for contributions, and many
maintainers are willing to help you understand things. We know your unique perspective and skills
are what makes our open source so powerful :)
are what makes our open source so powerful 😊
Feel free to reach out to us in Slack
([#sig-scheduling](https://kubernetes.slack.com/archives/C09TP78DV)) or

View File

@ -0,0 +1,62 @@
---
layout: blog
title: "Announcing the 2024 Steering Committee Election Results"
slug: steering-committee-results-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/10/02/steering-committee-results-2024
date: 2024-10-02T15:10:00-05:00
author: >
Bridget Kromhout
---
The [2024 Steering Committee Election](https://github.com/kubernetes/community/tree/master/elections/steering/2024) is now complete. The Kubernetes Steering Committee consists of 7 seats, 3 of which were up for election in 2024. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.
This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee's role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).
Thank you to everyone who voted in the election; your participation helps support the community's continued health and success.
## Results
Congratulations to the elected committee members whose two year terms begin immediately (listed in alphabetical order by GitHub handle):
* **Antonio Ojea ([@aojea](https://github.com/aojea)), Google**
* **Benjamin Elder ([@BenTheElder](https://github.com/bentheelder)), Google**
* **Sascha Grunert ([@saschagrunert](https://github.com/saschagrunert)), Red Hat**
They join continuing members:
* **Stephen Augustus ([@justaugustus](https://github.com/justaugustus)), Cisco**
* **Paco Xu 徐俊杰 ([@pacoxu](https://github.com/pacoxu)), DaoCloud**
* **Patrick Ohly ([@pohly](https://github.com/pohly)), Intel**
* **Maciej Szulik ([@soltysh](https://github.com/soltysh)), Defense Unicorns**
Benjamin Elder is a returning Steering Committee Member.
## Big thanks!
Thank you and congratulations on a successful election to this round's election officers:
* Bridget Kromhout ([@bridgetkromhout](https://github.com/bridgetkromhout))
* Christoph Blecker ([@cblecker](https://github.com/cblecker))
* Priyanka Saggu ([@Priyankasaggu11929](https://github.com/Priyankasaggu11929))
Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:
* Bob Killen ([@mrbobbytables](https://github.com/mrbobbytables))
* Nabarun Pal ([@palnabarun](https://github.com/palnabarun))
And thank you to all the candidates who came forward to run for election.
## Get involved with the Steering Committee
This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee [meeting notes](https://bit.ly/k8s-steering-wd) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They have an open meeting on [the first Monday at 8am PT of every month](https://github.com/kubernetes/steering). They can also be contacted at their public mailing list steering@kubernetes.io.
You can see what the Steering Committee meetings are all about by watching past meetings on the [YouTube Playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM).
If you want to meet some of the newly elected Steering Committee members, join us for the [Steering AMA](https://www.kubernetes.dev/events/2024/kcsna/schedule/#steering-ama) at the Kubernetes Contributor Summit North America 2024 in Salt Lake City.
---
_This post was adapted from one written by the [Contributor Comms Subproject](https://github.com/kubernetes/community/tree/master/communication/contributor-comms). If you want to write stories about the Kubernetes community, learn more about us._

View File

@ -5,6 +5,9 @@ reviewers:
- thockin
- msau42
title: Storage Classes
api_metadata:
- apiVersion: "storage.k8s.io/v1"
kind: "StorageClass"
content_type: concept
weight: 40
---

View File

@ -65,13 +65,14 @@ DELETE | delete
The resource and subresource are determined from the incoming request's path:
Kubelet API | resource | subresource
-------------|----------|------------
/stats/\* | nodes | stats
/metrics/\* | nodes | metrics
/logs/\* | nodes | log
/spec/\* | nodes | spec
*all others* | nodes | proxy
Kubelet API | resource | subresource
--------------------|----------|------------
/stats/\* | nodes | stats
/metrics/\* | nodes | metrics
/logs/\* | nodes | log
/spec/\* | nodes | spec
/checkpoint/\* | nodes | checkpoint
*all others* | nodes | proxy
The namespace and API group attributes are always an empty string, and
the resource name is always the name of the kubelet's `Node` API object.
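The path-to-subresource mapping in the table above can be sketched as a tiny Go helper. This is an illustrative sketch only (the function name `subresourceForPath` is made up here), not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// subresourceForPath mirrors the table above: it maps a kubelet API
// request path to the authorization subresource. The resource is always
// "nodes", and the resource name is the kubelet's Node object name.
func subresourceForPath(path string) string {
	switch {
	case strings.HasPrefix(path, "/stats/"):
		return "stats"
	case strings.HasPrefix(path, "/metrics/"):
		return "metrics"
	case strings.HasPrefix(path, "/logs/"):
		return "log"
	case strings.HasPrefix(path, "/spec/"):
		return "spec"
	case strings.HasPrefix(path, "/checkpoint/"):
		return "checkpoint"
	default:
		return "proxy" // all other paths
	}
}

func main() {
	for _, p := range []string{"/stats/summary", "/checkpoint/default/mypod/ctr", "/exec/ns/pod/ctr"} {
		fmt.Printf("%s -> nodes/%s\n", p, subresourceForPath(p))
	}
}
```

For example, a GET to `/stats/summary` would be authorized as `get` on resource `nodes` with subresource `stats`.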

View File

@ -0,0 +1,14 @@
---
title: Duration
id: duration
date: 2024-10-05
full_link:
short_description: >
A time interval specified as a string in the format accepted by Go's [time.Duration](https://pkg.go.dev/time), allowing for flexible time specifications using various units like seconds, minutes, and hours.
aka:
tags:
- fundamental
---
In Kubernetes APIs, a duration must be non-negative and is typically expressed with a suffix.
For example, `5s` for five seconds or `1m30s` for one minute and thirty seconds.

View File

@ -289,6 +289,19 @@ in the StatefulSet topic for more details.
Note the [PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/)
feature gate must be enabled for this label to be added to pods.
### resource.kubernetes.io/pod-claim-name
Type: Annotation
Example: `resource.kubernetes.io/pod-claim-name: "my-pod-claim"`
Used on: ResourceClaim
This annotation is assigned to generated ResourceClaims.
Its value corresponds to the name of the resource claim in the `.spec` of any Pod(s) for which the ResourceClaim was created.
This annotation is an internal implementation detail of [dynamic resource allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
You should not need to read or modify the value of this annotation.
### cluster-autoscaler.kubernetes.io/safe-to-evict
Type: Annotation

View File

@ -8,13 +8,6 @@ weight: 70
<!-- overview -->
{{< note >}}
While kubeadm is being used as the management tool for external etcd nodes
in this guide, please note that kubeadm does not plan to support certificate rotation
or upgrades for such nodes. The long-term plan is to empower the tool
[etcdadm](https://github.com/kubernetes-sigs/etcdadm) to manage these
aspects.
{{< /note >}}
By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision

View File

@ -17,7 +17,6 @@ The Kubernetes project recommends modifying DNS configuration using the `hostAli
(part of the `.spec` for a Pod), and not by using an init container or other means to edit `/etc/hosts`
directly.
Change made in other ways may be overwritten by the kubelet during Pod creation or restart.
made in other ways may be overwritten by the kubelet during Pod creation or restart.
<!-- steps -->

View File

@ -563,7 +563,7 @@ the Deployment and / or StatefulSet be removed from their
a change to that object is applied, for example via `kubectl apply -f
deployment.yaml`, this will instruct Kubernetes to scale the current number of Pods
to the value of the `spec.replicas` key. This may not be
desired and could be troublesome when an HPA is active.
desired and could be troublesome when an HPA is active, resulting in thrashing or flapping behavior.
Keep in mind that the removal of `spec.replicas` may incur a one-time
degradation of Pod counts as the default value of this key is 1 (reference

View File

@ -88,10 +88,6 @@ This page lists some of the available add-ons with their respective
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard)
is a dashboard with a web interface for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s)
is a tool for graphically visualizing your containers, Pods, Services, etc.
Use it together with a [Weave Cloud account](https://cloud.weave.works/)
or host the UI yourself.
## Infrastructure

View File

@ -1,6 +1,7 @@
---
title: Tutorials
main_menu: true
no_list: true
weight: 60
content_type: concept
---
@ -31,6 +32,10 @@ Before walking through each tutorial, we recommend bookmarking
* [Example: Configuring a Java Microservice](/docs/tutorials/configuration/configure-java-microservice/)
* [Configuring Redis Using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
## Creating Pods
* [Adopting Sidecar Containers](/docs/tutorials/configuration/pod-sidecar-containers/)
## Stateless Applications
* [Exposing an External IP Address to Access an Application in a Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
@ -38,19 +43,22 @@ Before walking through each tutorial, we recommend bookmarking
## Stateful Applications
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
* [Example: Deploying Cassandra with Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
## Clusters
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [Seccomp](/docs/tutorials/clusters/seccomp/)
* [StatefulSet Basics](/docs/tutorials/stateful-application/basic-stateful-set/)
* [Example: WordPress and MySQL with Persistent Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
* [Example: Deploying Cassandra with StatefulSets](/docs/tutorials/stateful-application/cassandra/)
* [Running ZooKeeper, a CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
## Services
* [Using Source IP](/docs/tutorials/services/source-ip/)
* [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
* [Using Source IP](/docs/tutorials/services/source-ip/)
## Security
* [Apply Pod Security Standards at the Cluster Level](/docs/tutorials/security/cluster-level-pss/)
* [Apply Pod Security Standards at the Namespace Level](/docs/tutorials/security/ns-level-pss/)
* [Restrict a Container's Access to Resources with AppArmor](/docs/tutorials/security/apparmor/)
* [Seccomp](/docs/tutorials/security/seccomp/)
## {{% heading "whatsnext" %}}

View File

@ -41,7 +41,6 @@ weight: 120
## Visualization & Control {#visualization-amp-control}
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a web interface that provides a dashboard for Kubernetes.
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing containers, Pods, Services, and more. Use it together with a [Weave Cloud account](https://cloud.weave.works/), or host the UI yourself.
## Infrastructure {#infrastructure}

View File

@ -73,7 +73,7 @@ Each container of a Pod can specify one or more of the following:
### Meaning of CPU {#meaning-of-cpu}
CPU limits and requests are measured in *cpu* units.
One CPU in Kuberenetes is equivalent to **1 vCPU/core** for cloud providers and **1 hyperthread** on bare-metal Intel processors.
One CPU in Kubernetes is equivalent to **1 vCPU/core** for cloud providers and **1 hyperthread** on bare-metal Intel processors.
Requests can also be specified as fractions.
A container whose `spec.containers[].resources.requests.cpu` is `0.5` is guaranteed half as much CPU as a container that requests 1 CPU.

View File

@ -44,7 +44,7 @@ kubectl get services --all-namespaces --field-selector metadata.namespace!=defa
## Chained selectors
As with [labels](/docs/concepts/overview/working-with-objects/labels) and other selectors, field selectors can be chained together as a comma-separated list.
The following `kubectl` command selects all Pods whose `status.phase` is not `Runnning` and whose `spec.restartPolicy` field equals `Always`:
The following `kubectl` command selects all Pods whose `status.phase` is not `Running` and whose `spec.restartPolicy` field equals `Always`:
```shell
kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
```

View File

@ -136,7 +136,7 @@ Kubernetes v1.8において、ローカルのエフェメラルストレージ
| `configmaps` | 名前空間内で存在可能なConfigMapの総数。 |
| `persistentvolumeclaims` | 名前空間内で存在可能な[PersistentVolumeClaim](/ja/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)の総数。 |
| `pods` | 名前空間内で存在可能な停止していないPodの総数。`.status.phase in (Failed, Succeeded)`がtrueのとき、Podは停止状態にあります。 |
| `replicationcontrollers` | 名前空間内で存在可能なReplicationControllerの総数。 |
| `resourcequotas` | 名前空間内で存在可能なResourceQuotaの総数。 |
| `services` | 名前空間内で存在可能なServiceの総数。 |
| `services.loadbalancers` | 名前空間内で存在可能なtype:LoadBalancerであるServiceの総数。 |
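A hypothetical ResourceQuota object combining several of the object-count fields from the table above might look like this (the name, namespace, and limits are placeholders chosen for illustration):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts      # placeholder name
  namespace: demo          # placeholder namespace
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    pods: "20"
    replicationcontrollers: "10"
    services: "5"
    services.loadbalancers: "1"
```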

View File

@ -45,7 +45,7 @@ APIの要素が特定バージョンのAPIグループに追加されると、
大幅に挙動が変更されることはありません。
{{< note >}}
歴史的な理由により、「core」(グループ名なし)と「extensions」という2つの「monolithic」APIグループがあります。
リソースはこれらのレガシーなAPIグループからより特定のドメインに特化したAPIグループに段階的に移行されます。
{{< /note >}}

View File

@ -84,7 +84,7 @@ no_list: true
プロダクション用のKubernetesクラスターの認証認可をセットアップするにあたって、いくつかの考慮事項があります。
- *認証モードの設定*: Kubernetes APIサーバー ([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/))の起動時に、*--authorization-mode*フラグを使用しサポートされた認証モードを設定しなければいけません。例えば、*/etc/kubernetes/manifests*配下の*kube-apiserver.yaml*ファイルで*--authorization-mode*フラグにNodeやRBACを設定することで、認証されたリクエストに対してードやRBACの認証を許可することができます。
- *ユーザー証明書とロールバインディングの作成(RBAC)*: RBAC認証を使用している場合、ユーザーはクラスター証明機関により署名された証明書署名要求(CSR)を作成でき、各ユーザーにRolesとClusterRolesをバインドすることができます。詳細は[証明書署名要求](/docs/reference/access-authn-authz/certificate-signing-requests/)をご覧ください。
- *属性を組み合わせたポリシーの作成(ABAC)*: ABAC認証を使用している場合、特定のリソース例えばPod、Namespace、またはAPIグループにアクセスするために、選択されたユーザーやグループに属性の組み合わせで形成されたポリシーを割り当てることができます。より多くの情報は[Examples](/docs/reference/access-authn-authz/abac/#examples)をご覧ください。
- *アドミッションコントローラーの考慮事項*: APIサーバーを経由してくるリクエストのための追加の認証形式に[Webhookトークン認証](/ja/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)があります。Webhookや他の特別な認証形式はAPIサーバーへアドミッションコントローラーを追加し有効化される必要があります。

View File

@ -229,7 +229,7 @@ Address 1: 10.244.2.8
Podの順序インデックス、ホスト名、SRVレコード、そしてAレコード名は変化していませんが、Podに紐付けられたIPアドレスは変化する可能性があります。このチュートリアルで使用しているクラスターでは、IPアドレスは変わりました。このようなことがあるため、他のアプリケーションがStatefulSet内のPodに接続するときには、IPアドレスで指定しないことが重要です。
StatefulSetの有効なメンバーを探して接続する必要がある場合は、headless ServiceのCNAME(`nginx.default.svc.cluster.local`)をクエリしなければなりません。CNAMEに紐付けられたSRVレコードには、StatefulSet内のRunningかつReadyなPodだけが含まれます。
アプリケーションがlivenessとreadinessをテストするコネクションのロジックをすでに実装している場合、PodのSRVレコード(`web-0.nginx.default.svc.cluster.local`、`web-1.nginx.default.svc.cluster.local`)をPodが安定しているものとして使用できます。PodがRunning and Readyな状態に移行すれば、アプリケーションはPodのアドレスを発見できるようになります。
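The headless Service behind the CNAME `nginx.default.svc.cluster.local` mentioned above could be sketched like this. The selector label is an assumption for illustration; the original manifest is not shown in this excerpt:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx            # yields the CNAME nginx.default.svc.cluster.local
spec:
  clusterIP: None        # "None" makes this a headless Service
  selector:
    app: nginx           # assumed Pod label
  ports:
  - name: web
    port: 80
```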

View File

@ -38,14 +38,14 @@ Kubernetes?**
**Kensei**: Hi, thanks for the opportunity! I'm Kensei Nakada
([@sanposhiho](https://github.com/sanposhiho/)), a software engineer at
[Tetrate.io](https://tetrate.io/). I have been contributing to Kubernetes in my free time for more
than 3 years, and now I'm an approver of SIG Scheduling in Kubernetes. Also, I'm a founder/owner of
two SIG subprojects,
[kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator) and
[kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension).
-->
**Kensei**: 嗨,感谢你给我这个机会!我是 Kensei Nakada
([@sanposhiho](https://github.com/sanposhiho/)),是来自 [Tetrate.io](https://tetrate.io/) 的一名软件工程师。
我在业余时间为 Kubernetes 贡献了超过 3 年的时间,现在我是 Kubernetes 中 SIG Scheduling 的一名 Approver。
同时,我还是两个 SIG 子项目的创始人/负责人:
[kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator) 和
[kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)。
@ -59,7 +59,7 @@ brief overview of SIG Scheduling and explain its role within the Kubernetes ecos
**KN**: As the name implies, our responsibility is to enhance scheduling within
Kubernetes. Specifically, we develop the components that determine which Node is the best place for
each Pod. In Kubernetes, our main focus is on maintaining the
[kube-scheduler](/docs/concepts/scheduling-eviction/kube-scheduler/), along
with other scheduling-related components as part of our SIG subprojects.
-->
## 关于 SIG Scheduling
@ -69,15 +69,15 @@ with other scheduling-related components as part of our SIG subprojects.
**KN**: 正如名字所示,我们的责任是增强 Kubernetes 中的调度特性。
具体来说,我们开发了一些组件,将每个 Pod 调度到最合适的 Node。
在 Kubernetes 中,我们的主要关注点是维护
[kube-scheduler](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/)
以及其他调度相关的组件,这些组件是 SIG Scheduling 的子项目。
<!--
**AP: I see, got it! That makes me curious--what recent innovations or developments has SIG
Scheduling introduced to Kubernetes scheduling?**
**KN**: From a feature perspective, there have been
[several enhancements](/blog/2023/04/17/fine-grained-pod-topology-spread-features-beta/)
to `PodTopologySpread` recently. `PodTopologySpread` is a relatively new feature in the scheduler,
and we are still in the process of gathering feedback and making improvements.
-->
@ -104,7 +104,7 @@ reducing the likelihood of wasting scheduling cycles.
**A: That sounds interesting! Are there any other interesting topics or projects you are currently
working on within SIG Scheduling?**
**KN**: I'm leading the development of `QueueingHint` which I just shared. Given that it's a big new
challenge for us, we've been facing many unexpected challenges, especially around the scalability,
and we're trying to solve each of them to eventually enable it by default.
-->
@ -115,15 +115,14 @@ and were trying to solve each of them to eventually enable it by default.
<!--
And also, I believe
[kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)
(a SIG subproject) that I started last year would be interesting to many people. Kubernetes has
various extensions from many components. Traditionally, extensions are provided via webhooks
([extender](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/scheduler_extender.md)
in the scheduler) or Go SDK ([Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/)
in the scheduler). However, these come with drawbacks - performance issues with webhooks and the need to
rebuild and replace schedulers with Go SDK, posing difficulties for those seeking to extend the
scheduler but lacking familiarity with it. The project is trying to introduce a new solution to
this general challenge - a [WebAssembly](https://webassembly.org/) based extension. Wasm allows
users to build plugins easily, without worrying about recompiling or replacing their scheduler, and
sidestepping performance concerns.
@ -138,32 +137,32 @@ Go SDK调度器中的[调度框架](/zh-cn/docs/concepts/scheduling-eviction/
Wasm 允许用户轻松构建插件,而无需担心重新编译或替换调度器,还能规避性能问题。
<!--
Through this project, SIG Scheduling has been learning valuable insights about WebAssembly's
interaction with large Kubernetes objects. And I believe the experience that we're gaining should be
useful broadly within the community, beyond SIG Scheduling.
**A: Definitely! Now, there are 8 subprojects inside SIG Scheduling. Would you like to
talk about them? Are there some interesting contributions by those teams you want to highlight?**
**KN**: Let me pick up three subprojects: Kueue, KWOK and descheduler.
-->
通过这个项目,sig-scheduling 正在积累 WebAssembly 与大型 Kubernetes 对象交互的宝贵洞察。
我相信我们所获得的经验应该对整个社区都很有用,而不仅限于 sig-scheduling 的范围。
通过这个项目,SIG Scheduling 正在积累 WebAssembly 与大型 Kubernetes 对象交互的宝贵洞察。
我相信我们所获得的经验应该对整个社区都很有用,而不仅限于 SIG Scheduling 的范围。
**A: 当然!目前 SIG Scheduling 有 8 个子项目。你想谈谈它们吗?有没有一些你想强调的有趣贡献?**
**KN**: 让我挑选三个子项目Kueue、KWOK 和 Descheduler。
<!--
[Kueue](https://github.com/kubernetes-sigs/kueue)
: Recently, many people have been trying to manage batch workloads with Kubernetes, and in 2022,
Kubernetes community founded
[WG-Batch](https://github.com/kubernetes/community/blob/master/wg-batch/README.md) for better
support for such batch workloads in Kubernetes. [Kueue](https://github.com/kubernetes-sigs/kueue)
is a project that takes a crucial role for it. It's a job queueing controller, deciding when a job
should wait, when a job should be admitted to start, and when a job should be preempted. Kueue aims
to be installed on a vanilla Kubernetes cluster while cooperating with existing matured controllers
(scheduler, cluster-autoscaler, kube-controller-manager, etc).
-->
[Kueue](https://github.com/kubernetes-sigs/kueue):
: 最近,许多人尝试使用 Kubernetes 管理批处理工作负载2022 年Kubernetes 社区成立了
@ -175,18 +174,18 @@ to be installed on a vanilla Kubernetes cluster while cooperating with existing
同时与现有的成熟控制器调度器、cluster-autoscaler、kube-controller-manager 等)协作。
<!--
[KWOK](https://github.com/kubernetes-sigs/kwok)
: KWOK is a component in which you can create a cluster of thousands of Nodes in seconds. It's
mostly useful for simulation/testing as a lightweight cluster, and actually another SIG sub
project [kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator)
uses KWOK background.
[descheduler](https://github.com/kubernetes-sigs/descheduler)
: Descheduler is a component recreating pods that are running on undesired Nodes. In Kubernetes,
scheduling constraints (`PodAffinity`, `NodeAffinity`, `PodTopologySpread`, etc) are honored only at
Pod schedule, but it's not guaranteed that the constraints are kept being satisfied afterwards.
Descheduler evicts Pods violating their scheduling constraints (or other undesired conditions) so
that they're recreated and rescheduled.
-->
[KWOK](https://github.com/kubernetes-sigs/kwok)
: KWOK 这个组件可以在几秒钟内创建一个包含数千个节点的集群。它主要用于模拟/测试轻量级集群,实际上另一个 SIG 子项目
@ -199,9 +198,9 @@ that theyre recreated and rescheduled.
以便这些 Pod 被重新创建和重新调度。
<!--
[Descheduling Framework](https://github.com/kubernetes-sigs/descheduler/blob/master/keps/753-descheduling-framework/README.md)
: One very interesting on-going project, similar to
[Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/) in the
scheduler, aiming to make descheduling logic extensible and allow maintainers to focus on building
a core engine of descheduler.
-->
@ -236,8 +235,7 @@ improving our components over the years.
**AP: Kubernetes is a community-driven project. Any recommendations for new contributors or
beginners looking to get involved and contribute to SIG scheduling? Where should they start?**
**KN**: Let me start with a general recommendation for contributing to any SIG: a common approach is to look for
[good-first-issue](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22).
However, you'll soon realize that many people worldwide are trying to contribute to the Kubernetes
repository.
@ -254,10 +252,10 @@ SIG Scheduling 做出贡献的初学者有什么建议?他们应该从哪里
<!--
I suggest starting by examining the implementation of a component that interests you. If you have
any questions about it, ask in the corresponding Slack channel (e.g., #sig-scheduling for the
scheduler, #sig-node for kubelet, etc). Once you have a rough understanding of the implementation,
look at issues within the SIG (e.g.,
[sig-scheduling](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Asig%2Fscheduling)),
where you'll find more unassigned issues compared to good-first-issue ones. You may also want to
filter issues with the
[kind/cleanup](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue++label%3Akind%2Fcleanup+)
label, which often indicates lower-priority tasks and can be starting points.
@ -271,11 +269,11 @@ Slack 频道中提问(例如,调度器的 #sig-schedulingkubelet 的 #sig
标签的 Issue这通常表示较低优先级的任务可以作为起点。
<!--
Specifically for SIG Scheduling, you should first understand the
[Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/), which is
the fundamental architecture of kube-scheduler. Most of the implementation is found in
[pkg/scheduler](https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler).
I suggest starting with
[ScheduleOne](https://github.com/kubernetes/kubernetes/blob/0590bb1ac495ae8af2a573f879408e48800da2c5/pkg/scheduler/schedule_one.go#L66)
function and then exploring deeper from there.
@ -295,15 +293,14 @@ considerable impact on the community.
但许多项目实际上有大量用户,并对社区产生了相当大的影响。
<!--
And last but not least, remember contributing to the community isn't just about code. While I
talked a lot about the implementation contribution, there are many ways to contribute, and each one
is valuable. One comment to an issue, one feedback to an existing feature, one review comment in PR,
one clarification on the documentation; every small contribution helps drive the Kubernetes
ecosystem forward.
**AP: Those are some pretty useful tips! And if I may ask, how do you assist new contributors in
getting started, and what skills are contributors likely to learn by participating in SIG Scheduling?**
-->
最后但同样重要的是,记住为社区做贡献不仅仅是编写代码。
虽然我谈到了很多关于实现的贡献,但还有许多其他方式可以做贡献,每一种都很有价值。
@ -336,9 +333,8 @@ pain points?**
**KN**: Scheduling in Kubernetes can be quite challenging because of the diverse needs of different
organizations with different business requirements. Supporting all possible use cases in
kube-scheduler is impossible. Therefore, extensibility is a key focus for us. A few years ago, we
rearchitected kube-scheduler with [Scheduling Framework](/docs/concepts/scheduling-eviction/scheduling-framework/),
which offers flexible extensibility for users to implement various scheduling needs through plugins. This
allows maintainers to focus on the core scheduling features and the framework runtime.
-->
**KN**: 在 Kubernetes 中进行调度可能相当具有挑战性,因为不同组织有不同的业务要求。
@ -361,7 +357,7 @@ difficult as even small changes, which look irrelevant to performance, can lead
但不巧的是,我们有时会忽视在不常见场景下的性能下降。即使是与性能无关的小改动也有难度,可能导致性能下降。
<!--
**AP: What are some upcoming goals or initiatives for SIG Scheduling? How do you envision the SIG evolving in the future?**
**KN**: Our primary goal is always to build and maintain _extensible_ and _stable_ scheduling
runtime, and I bet this goal will remain unchanged forever.
@ -369,7 +365,7 @@ runtime, and I bet this goal will remain unchanged forever.
As already mentioned, extensibility is key to solving the challenge of the diverse needs of
scheduling. Rather than trying to support every different use case directly in kube-scheduler, we
will continue to focus on enhancing extensibility so that it can accommodate various use
cases. [kube-scheduler-wasm-extension](https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension)
that I mentioned is also part of this initiative.
-->
**AP: 接下来 SIG Scheduling 有哪些即将实现的目标或计划?你如何看待 SIG 的未来发展?**
@ -405,7 +401,7 @@ about SIG Scheduling?**
**KN**: Scheduling is one of the most complicated areas in Kubernetes, and you may find it difficult
at first. But, as I shared earlier, you can find many opportunities for contributions, and many
maintainers are willing to help you understand things. We know your unique perspective and skills
are what makes our open source so powerful 😊
-->
## 结束语
@ -413,7 +409,7 @@ are what makes our open source so powerful :)
**KN**: 调度是 Kubernetes 中最复杂的领域之一,你可能一开始会觉得很困难。但正如我之前分享的,
你可以找到许多贡献的机会,许多维护者愿意帮助你理解各事项。
我们知道你独特的视角和技能是我们的开源项目能够如此强大的源泉 😊
<!--
Feel free to reach out to us in Slack

View File

@ -0,0 +1,123 @@
---
layout: blog
title: "公布 2024 年指导委员会选举结果"
slug: steering-committee-results-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/10/02/steering-committee-results-2024
date: 2024-10-02T15:10:00-05:00
author: >
Bridget Kromhout
translator: >
Xin Li (DaoCloud)
---
<!--
layout: blog
title: "Announcing the 2024 Steering Committee Election Results"
slug: steering-committee-results-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/10/02/steering-committee-results-2024
date: 2024-10-02T15:10:00-05:00
author: >
Bridget Kromhout
-->
<!--
The [2024 Steering Committee Election](https://github.com/kubernetes/community/tree/master/elections/steering/2024) is now complete. The Kubernetes Steering Committee consists of 7 seats, 3 of which were up for election in 2024. Incoming committee members serve a term of 2 years, and all members are elected by the Kubernetes Community.
This community body is significant since it oversees the governance of the entire Kubernetes project. With that great power comes great responsibility. You can learn more about the steering committee's role in their [charter](https://github.com/kubernetes/steering/blob/master/charter.md).
Thank you to everyone who voted in the election; your participation helps support the community's continued health and success.
-->
[2024 年指导委员会选举](https://github.com/kubernetes/community/tree/master/elections/steering/2024)现已完成。
Kubernetes 指导委员会由 7 个席位组成,其中 3 个席位于 2024 年进行选举。
新任委员会成员的任期为 2 年,所有成员均由 Kubernetes 社区选举产生。
这个社区机构非常重要,因为它负责监督整个 Kubernetes 项目的治理。
权力越大责任越大,你可以在其
[章程](https://github.com/kubernetes/steering/blob/master/charter.md)中了解有关指导委员会角色的更多信息。
感谢所有在选举中投票的人;你们的参与有助于支持社区的持续健康和成功。
<!--
## Results
Congratulations to the elected committee members whose two-year terms begin immediately (listed in alphabetical order by GitHub handle):
-->
## 结果
祝贺当选的委员会成员,其两年任期立即开始(按 GitHub 句柄按字母顺序列出):
* **Antonio Ojea ([@aojea](https://github.com/aojea)), Google**
* **Benjamin Elder ([@BenTheElder](https://github.com/bentheelder)), Google**
* **Sascha Grunert ([@saschagrunert](https://github.com/saschagrunert)), Red Hat**
<!--
They join continuing members:
-->
他们将与以下连任成员一起工作:
* **Stephen Augustus ([@justaugustus](https://github.com/justaugustus)), Cisco**
* **Paco Xu 徐俊杰 ([@pacoxu](https://github.com/pacoxu)), DaoCloud**
* **Patrick Ohly ([@pohly](https://github.com/pohly)), Intel**
* **Maciej Szulik ([@soltysh](https://github.com/soltysh)), Defense Unicorns**
<!--
Benjamin Elder is a returning Steering Committee Member.
-->
Benjamin Elder 是一位回归的指导委员会成员。
<!--
## Big Thanks!
Thank you and congratulations on a successful election to this round's election officers:
-->
## 十分感谢!
感谢并祝贺本轮选举官员成功完成选举工作:
* Bridget Kromhout ([@bridgetkromhout](https://github.com/bridgetkromhout))
* Christoph Blecker ([@cblecker](https://github.com/cblecker))
* Priyanka Saggu ([@Priyankasaggu11929](https://github.com/Priyankasaggu11929))
<!--
Thanks to the Emeritus Steering Committee Members. Your service is appreciated by the community:
-->
感谢名誉指导委员会成员,你们的服务受到社区的赞赏:
* Bob Killen ([@mrbobbytables](https://github.com/mrbobbytables))
* Nabarun Pal ([@palnabarun](https://github.com/palnabarun))
<!--
And thank you to all the candidates who came forward to run for election.
-->
感谢所有前来竞选的候选人。
<!--
## Get involved with the Steering Committee
This governing body, like all of Kubernetes, is open to all. You can follow along with Steering Committee [meeting notes](https://bit.ly/k8s-steering-wd) and weigh in by filing an issue or creating a PR against their [repo](https://github.com/kubernetes/steering). They have an open meeting on [the first Monday at 8am PT of every month](https://github.com/kubernetes/steering). They can also be contacted at their public mailing list steering@kubernetes.io.
-->
## 参与指导委员会
这个管理机构与所有 Kubernetes 一样,向所有人开放。
你可以关注指导委员会[会议记录](https://github.com/orgs/kubernetes/projects/40)
并通过提交 Issue 或针对其 [repo](https://github.com/kubernetes/steering) 创建 PR 来参与。
他们在[太平洋时间每月第一个周一上午 8:00](https://github.com/kubernetes/steering) 举行开放的会议。
你还可以通过其公共邮件列表 steering@kubernetes.io 与他们联系。
<!--
You can see what the Steering Committee meetings are all about by watching past meetings on the [YouTube Playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM).
If you want to meet some of the newly elected Steering Committee members, join us for the [Steering AMA](https://www.kubernetes.dev/events/2024/kcsna/schedule/#steering-ama) at the Kubernetes Contributor Summit North America 2024 in Salt Lake City.
-->
你可以通过在 [YouTube 播放列表](https://www.youtube.com/playlist?list=PL69nYSiGNLP1yP1B_nd9-drjoxp0Q14qM)上观看过去的会议来了解指导委员会会议的全部内容。
如果你想认识一些新当选的指导委员会成员,
欢迎参加在盐湖城举行的 2024 年北美 Kubernetes 贡献者峰会上的
[Steering AMA](https://www.kubernetes.dev/events/2024/kcsna/schedule/#steering-ama)。
---
<!--
_This post was adapted from one written by the [Contributor Comms Subproject](https://github.com/kubernetes/community/tree/master/communication/contributor-comms). If you want to write stories about the Kubernetes community, learn more about us._
-->
**这篇文章是由[贡献者通信子项目](https://github.com/kubernetes/community/tree/master/communication/contributor-comms)撰写的。
如果你想撰写有关 Kubernetes 社区的故事,请了解有关我们的更多信息。**

View File

@ -165,21 +165,27 @@ owner object:
* 在删除过程完成之前,通过 Kubernetes API 仍然可以看到该对象。
<!--
After the owner object enters the *deletion in progress* state, the controller
deletes dependents it knows about. After deleting all the dependent objects it knows about,
the controller deletes the owner object. At this point, the object is no longer visible in the
Kubernetes API.
During foreground cascading deletion, the only dependents that block owner
deletion are those that have the `ownerReference.blockOwnerDeletion=true` field
and are in the garbage collection controller cache. The garbage collection controller
cache may not contain objects whose resource type cannot be listed / watched successfully,
or objects that are created concurrent with deletion of an owner object.
See [Use foreground cascading deletion](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)
to learn more.
-->
当属主对象进入**删除进行中**状态后,控制器会删除其已知的依赖对象。
在删除所有已知的依赖对象后,控制器会删除属主对象。
这时,通过 Kubernetes API 就无法再看到该对象。
在前台级联删除过程中,唯一会阻止属主对象被删除的是那些带有
`ownerReference.blockOwnerDeletion=true` 字段并且存在于垃圾收集控制器缓存中的依赖对象。
垃圾收集控制器缓存中可能不包含那些无法成功被列举/监视的资源类型的对象,
或在属主对象删除的同时创建的对象。
参阅[使用前台级联删除](/zh-cn/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion)
以了解进一步的细节。

View File

@ -1,10 +1,9 @@
---
title: "调度、抢占和驱逐"
weight: 95
content_type: concept
no_list: true
---
<!--
title: "Scheduling, Preemption and Eviction"
weight: 95
@ -24,7 +23,7 @@ of terminating one or more Pods on Nodes.
匹配到合适的{{<glossary_tooltip text="节点" term_id="node">}}
以便 {{<glossary_tooltip text="kubelet" term_id="kubelet">}} 能够运行它们。
抢占Preemption指的是终止低{{<glossary_tooltip text="优先级" term_id="pod-priority">}}的
Pod 以便高优先级的 Pod 可以调度到 Node 上的过程。
驱逐Eviction是在资源匮乏的节点上主动让一个或多个 Pod 失效的过程。
<!--
@ -42,7 +41,7 @@ Pod 以便高优先级的 Pod 可以调度运行的过程。
* [Pod Scheduling Readiness](/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)
* [Descheduler](https://github.com/kubernetes-sigs/descheduler#descheduler-for-kubernetes)
-->
## 调度 {#scheduling}
* [Kubernetes 调度器](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/)
* [将 Pod 指派到节点](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)
@ -50,7 +49,6 @@ Pod 以便高优先级的 Pod 可以调度运行的过程。
* [Pod 拓扑分布约束](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/)
* [污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [动态资源分配](/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation)
* [调度框架](/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework)
* [调度器性能调试](/zh-cn/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* [扩展资源的资源装箱](/zh-cn/docs/concepts/scheduling-eviction/resource-bin-packing/)
* [Pod 调度就绪](/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness/)
@ -63,7 +61,7 @@ Pod 以便高优先级的 Pod 可以调度运行的过程。
* [Node-pressure Eviction](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
* [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
-->
## Pod 干扰 {#pod-disruption}
{{<glossary_definition term_id="pod-disruption" length="all">}}

View File

@ -8,6 +8,7 @@ title: API-initiated Eviction
content_type: concept
weight: 110
-->
{{< glossary_definition term_id="api-eviction" length="short" >}} <br />
<!--

View File

@ -30,8 +30,8 @@ or to co-locate Pods from two different services that communicate a lot into the
-->
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}}
以便**限制**其只能在特定的{{< glossary_tooltip text="节点" term_id="node" >}}上运行,
或优先在特定的节点上运行。有几种方法可以实现这点,
推荐的方法都是用[标签选择算符](/zh-cn/docs/concepts/overview/working-with-objects/labels/)来进行选择。
通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上,
而不是将 Pod 放置在可用资源不足的节点上等等)。但在某些情况下,你可能需要进一步控制
Pod 被部署到哪个节点。例如,确保 Pod 最终落在连接了 SSD 的机器上,
@ -1118,8 +1118,8 @@ The following operators can only be used with `nodeAffinity`.
-->
| 操作符 | 行为 |
| :------------: | :-------------: |
| `Gt` | 字段值将被解析为整数,并且该整数小于通过解析此选择算符命名的标签的值所得到的整数 |
| `Lt` | 字段值将被解析为整数,并且该整数大于通过解析此选择算符命名的标签的值所得到的整数 |
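A hypothetical `nodeAffinity` rule using the `Gt` operator from the table above could look like this. The node label key, Pod name, and image are placeholders invented for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gt-demo                            # placeholder name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/cpu-cores     # hypothetical node label
            operator: Gt
            values:
            - "4"                          # label value is parsed as an integer
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9       # placeholder image
```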
{{<note>}}
<!--

View File

@ -275,7 +275,7 @@ gets scheduled.
通知负责这些 ResourceClaim 的资源驱动程序,告知它们调度器认为适合该 Pod 的节点。
资源驱动程序通过排除没有足够剩余资源的节点来响应调度器。
一旦调度器有了这些信息,它就会选择一个节点,并将该选择存储在 PodScheduling 对象中。
然后,资源驱动程序为分配 ResourceClaim以便资源可用于该节点。
完成后Pod 就会被调度。
<!--

View File

@ -745,8 +745,8 @@ There are some implicit conventions worth noting here:
- Only the Pods holding the same namespace as the incoming Pod can be matching candidates.
- The scheduler only considers nodes that have all `topologySpreadConstraints[*].topologyKey` present at the same time.
Nodes missing any of these `topologyKeys` are bypassed. This implies that:
1. any Pods located on those bypassed nodes do not impact `maxSkew` calculation - in the
above [example](#example-conflicting-topologyspreadconstraints), suppose the node `node1`
@ -763,7 +763,8 @@ There are some implicit conventions worth noting here:
- 只有与新来的 Pod 具有相同命名空间的 Pod 才能作为匹配候选者。
- 调度器只会考虑同时具有全部 `topologySpreadConstraints[*].topologyKey` 的节点。
缺少任一 `topologyKey` 的节点将被忽略。这意味着:
1. 位于这些节点上的 Pod 不影响 `maxSkew` 计算,在上面的[例子](#example-conflicting-topologyspreadconstraints)中,
假设节点 `node1` 没有标签 "zone",则 2 个 Pod 将被忽略,因此新来的
@ -932,8 +933,8 @@ Pod 彼此的调度方式(更密集或更分散)。
`podAntiAffinity`
: 驱逐 Pod。如果将此设为 `requiredDuringSchedulingIgnoredDuringExecution` 模式,
则只有单个 Pod 可以调度到单个拓扑域;如果你选择 `preferredDuringSchedulingIgnoredDuringExecution`
则你将丢失强制执行此约束的能力。
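A `podAntiAffinity` rule in the `requiredDuringSchedulingIgnoredDuringExecution` mode described above could be sketched as follows. The Pod name, label, and image are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: anti-affinity-demo       # placeholder name
  labels:
    app: store                   # hypothetical label
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - store
        topologyKey: topology.kubernetes.io/zone   # at most one matching Pod per zone
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9               # placeholder image
```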
<!--
For finer control, you can specify topology spread constraints to distribute

View File

@ -180,11 +180,11 @@ layers expect.
## 运行阶段 {#lifecycle-phase-runtime}
<!--
The Runtime phase comprises three critical areas: [access](#protection-runtime-access),
[compute](#protection-runtime-compute), and [storage](#protection-runtime-storage).
-->
运行阶段包含三个关键领域:[计算](#protection-runtime-compute)
[访问](#protection-runtime-access)和[存储](#protection-runtime-storage)。
运行阶段包含三个关键领域:[访问](#protection-runtime-access)、
[计算](#protection-runtime-compute)和[存储](#protection-runtime-storage)。
<!--
View File
@ -1,5 +1,8 @@
---
title: 存储类
api_metadata:
- apiVersion: "storage.k8s.io/v1"
kind: "StorageClass"
content_type: concept
weight: 40
---
@ -10,6 +13,9 @@ reviewers:
- thockin
- msau42
title: Storage Classes
api_metadata:
- apiVersion: "storage.k8s.io/v1"
kind: "StorageClass"
content_type: concept
weight: 40
-->
View File
@ -18,8 +18,8 @@ for the repository.
This section covers the duties of a PR wrangler. For more information on giving good reviews,
see [Reviewing changes](/docs/contribute/review/).
-->
SIG Docs 的[批准人Approver](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers)们每周轮流负责
[管理仓库的 PR](https://github.com/kubernetes/website/wiki/PR-Wranglers)。
SIG Docs 的[批准人Approver](/zh-cn/docs/contribute/participate/roles-and-responsibilities/#approvers)
们每周轮流负责[管理仓库的 PR](https://github.com/kubernetes/website/wiki/PR-Wranglers)。
本节介绍 PR 管理者的职责。关于如何提供较好的评审意见,
可参阅[评审变更](/zh-cn/docs/contribute/review/)。
@ -33,7 +33,8 @@ Each day in a week-long shift as PR Wrangler:
- Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality
and adherence to the [Style](/docs/contribute/style/style-guide/) and
[Content](/docs/contribute/style/content-guide/) guides.
- Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`). Review as many PRs as you can.
- Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`).
Review as many PRs as you can.
-->
## 职责 {#duties}
@ -47,15 +48,15 @@ Each day in a week-long shift as PR Wrangler:
PR`size/XXL`),尽可能多地评审 PR。
<!--
- Make sure PR contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md).
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors
that haven't signed the CLA to do so.
- Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors
that haven't signed the CLA to do so.
- Provide feedback on changes and ask for technical reviews from members of other SIGs.
- Provide inline suggestions on the PR for the proposed content changes.
- If you need to verify content, comment on the PR and request more details.
- Assign relevant `sig/` label(s).
- If needed, assign reviewers from the `reviewers:` block in the file's front matter.
- You can also tag a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md)
for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
- Provide inline suggestions on the PR for the proposed content changes.
- If you need to verify content, comment on the PR and request more details.
- Assign relevant `sig/` label(s).
- If needed, assign reviewers from the `reviewers:` block in the file's front matter.
- You can also tag a [SIG](https://github.com/kubernetes/community/blob/master/sig-list.md)
for a review by commenting `@kubernetes/<sig>-pr-reviews` on the PR.
-->
- 确保贡献者签署 [CLA](https://github.com/kubernetes/community/blob/master/CLA.md)。
- 使用[此脚本](https://github.com/zparnold/k8s-docs-pr-botherer)自动提醒尚未签署
@ -69,13 +70,13 @@ Each day in a week-long shift as PR Wrangler:
[SIG](https://github.com/kubernetes/community/blob/master/sig-list.md) 来评审。
<!--
- Use the `/approve` comment to approve a PR for merging. Merge the PR when ready.
- PRs should have a `/lgtm` comment from another member before merging.
- Consider accepting technically accurate content that doesn't meet the
[style guidelines](/docs/contribute/style/style-guide/). As you approve the change,
open a new issue to address the style concern. You can usually write these style fix
issues as [good first issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue).
- Using style fixups as good first issues is a good way to ensure a supply of easier tasks
to help onboard new contributors.
- PRs should have a `/lgtm` comment from another member before merging.
- Consider accepting technically accurate content that doesn't meet the
[style guidelines](/docs/contribute/style/style-guide/). As you approve the change,
open a new issue to address the style concern. You can usually write these style fix
issues as [good first issues](https://kubernetes.dev/docs/guide/help-wanted/#good-first-issue).
- Using style fixups as good first issues is a good way to ensure a supply of easier tasks
to help onboard new contributors.
-->
- 使用 `/approve` 评论来批准可以合并的 PR在 PR 就绪时将其合并。
- PR 在被合并之前,应该有来自其他成员的 `/lgtm` 评论。
@ -87,7 +88,8 @@ Each day in a week-long shift as PR Wrangler:
这有助于接纳新的贡献者。
<!--
- Also check for pull requests against the [reference docs generator](https://github.com/kubernetes-sigs/reference-docs) code, and review those (or bring in help).
- Also check for pull requests against the [reference docs generator](https://github.com/kubernetes-sigs/reference-docs)
code, and review those (or bring in help).
- Support the [issue wrangler](/docs/contribute/participate/issue-wrangler/) to
triage and tag incoming issues daily.
See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
@ -101,11 +103,12 @@ Each day in a week-long shift as PR Wrangler:
{{< note >}}
<!--
PR wrangler duties do not apply to localization PRs (non-English PRs).
Localization teams have their own processes and teams for reviewing their language PRs.
However, it's often helpful to ensure language PRs are labeled correctly,
review small non-language dependent PRs (like a link update),
or tag reviewers or contributors in long-running PRs (ones opened more than 6 months ago and have not been updated in a month or more).
PR wrangler duties do not apply to localization PRs (non-English PRs).
Localization teams have their own processes and teams for reviewing their language PRs.
However, it's often helpful to ensure language PRs are labeled correctly,
review small non-language dependent PRs (like a link update),
or tag reviewers or contributors in long-running PRs
(ones opened more than 6 months ago and have not been updated in a month or more).
-->
PR 管理者的职责不适用于本地化 PR非英语 PR
本地化团队有自己的流程和团队来审查其语言 PR。
@ -201,10 +204,11 @@ These queries exclude localization PRs. All queries are against the main branch
Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure.
Close PRs where:
- The author hasn't signed the CLA for two weeks.
Authors can reopen the PR after signing the CLA. This is a low-risk way to make
sure nothing gets merged without a signed CLA.
Authors can reopen the PR after signing the CLA. This is a low-risk way to make
sure nothing gets merged without a signed CLA.
- The author has not responded to comments or feedback in 2 or more weeks.
@ -237,7 +241,7 @@ and closes them. PR wranglers should close issues after 14-30 days of inactivity
-->
一个名为 [`k8s-ci-robot`](https://github.com/k8s-ci-robot) 的自动服务会在 Issue 停滞 90
天后自动将其标记为过期;然后再等 30 天,如果仍然无人过问,则将其关闭。
PR 管理者应该在 issues 处于无人过问状态 14-30 天后关闭它们。
PR 管理者应该在 Issue 处于无人过问状态 14-30 天后关闭它们。
{{< /note >}}
<!--
@ -273,7 +277,7 @@ The program was introduced to help new contributors understand the PR wrangling
PR 管理轮值表,然后注册报名。
- 其他人可以通过 [#sig-docs Slack 频道](https://kubernetes.slack.com/messages/sig-docs)申请成为指定
PR 管理者某一周的影子。可以随时咨询 (`@bradtopol`) 或某一位
PR 管理者某一周的影子。可以随时咨询`@bradtopol`或某一位
[SIG Docs 联席主席/主管](https://github.com/kubernetes/community/tree/master/sig-docs#leadership)。
- 注册成为一名 PR 管理者的影子时,
View File
@ -4,7 +4,9 @@
# 内置链接检查工具
<!--
You can use [htmltest](https://github.com/wjdp/htmltest) to check for broken links in [`/content/en/`](https://git.k8s.io/website/content/en/). This is useful when refactoring sections of content, moving pages around, or renaming files or page headers.
You can use [htmltest](https://github.com/wjdp/htmltest) to check for broken links in
[`/content/en/`](https://git.k8s.io/website/content/en/). This is useful when refactoring
sections of content, moving pages around, or renaming files or page headers.
-->
你可以使用 [htmltest](https://github.com/wjdp/htmltest) 来检查
[`/content/en/`](https://git.k8s.io/website/content/en/) 下面的失效链接。
@ -16,15 +18,18 @@ You can use [htmltest](https://github.com/wjdp/htmltest) to check for broken lin
## 工作原理 {#how-the-tool-works}
<!--
`htmltest` scans links in the generated HTML files of the kubernetes website repository. It runs using a `make` command which does the following:
`htmltest` scans links in the generated HTML files of the kubernetes website repository.
It runs using a `make` command which does the following:
-->
`htmltest` 会扫描 Kubernetes website 仓库构建生成的 HTML 文件。它通过执行 `make` 命令完成下列操作:
<!--
- Builds the site and generates output HTML in the `/public` directory of your local `kubernetes/website` repository
- Builds the site and generates output HTML in the `/public` directory of your
local `kubernetes/website` repository
- Pulls the `wdjp/htmltest` Docker image
- Mounts your local `kubernetes/website` repository to the Docker image
- Scans the files generated in the `/public` directory and provides command line output when it encounters broken internal links
- Scans the files generated in the `/public` directory and provides command line output
when it encounters broken internal links
-->
- 构建站点并输出 HTML 到本地 `kubernetes/website` 仓库下的 `/public` 目录中
- 拉取 Docker 镜像 `wdjp/htmltest`
@ -37,7 +42,10 @@ You can use [htmltest](https://github.com/wjdp/htmltest) to check for broken lin
## 哪些链接不会检查 {#what-it-does-and-doesnot-check}
<!--
The link checker scans generated HTML files, not raw Markdown. The htmltest tool depends on a configuration file, [`.htmltest.yml`](https://git.k8s.io/website/.htmltest.yml), to determine which content to examine.
The link checker scans generated HTML files, not raw Markdown.
The htmltest tool depends on a configuration file,
[`.htmltest.yml`](https://git.k8s.io/website/.htmltest.yml),
to determine which content to examine.
The link checker scans the following:
-->
@ -47,8 +55,10 @@ The link checker scans the following:
该链接检查器扫描以下内容:
<!--
- All content generated from Markdown in [`/content/en/docs`](https://git.k8s.io/website/content/en/docs/) directory, excluding:
- Generated API references, for example https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
- All content generated from Markdown in
[`/content/en/docs`](https://git.k8s.io/website/content/en/docs/) directory, excluding:
- Generated API references, for example
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
- All internal links, excluding:
- Empty hashes (`<a href="#">` or `[title](#)`) and empty hrefs (`<a href="">` or `[title]()`)
- Internal links to images and other media files
@ -66,16 +76,18 @@ The link checker does not scan the following:
该链接检查器不会扫描以下内容:
<!--
- Links included in the top and side nav bars, footer links, or links in a page's `<head>` section, such as links to CSS stylesheets, scripts, and meta information
- Links included in the top and side nav bars, footer links, or links in a page's `<head>` section,
such as links to CSS stylesheets, scripts, and meta information
- Top level pages and their children, for example: `/training`, `/community`, `/case-studies/adidas`
- Blog posts
- API Reference documentation, for example: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
- API Reference documentation, for example
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
- Localizations
-->
- 包含在顶部和侧边导航栏的链接,以及页脚链接或者页面的 `<head>` 部分中的链接,例如 CSS 样式表、脚本以及元信息的链接。
- 顶级页面及其子页面,例如:`/training`、`/community`、`/case-studies/adidas`
- 博客文章
- API 参考文档,例如 https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
- 本地化内容
<!--
@ -87,6 +99,7 @@ The link checker does not scan the following:
You must install
-->
必须安装:
* [Docker](https://docs.docker.com/get-docker/)
* [make](https://www.gnu.org/software/make/)
@ -109,7 +122,7 @@ To run the link checker:
2. 执行如下命令:
```
```shell
make container-internal-linkcheck
```
@ -152,8 +165,10 @@ One way to fix this is to:
<!--
1. Navigate to the Markdown file with a broken link.
2. Using a text editor, do a full-text search (usually Ctrl+F or Command+F) for the broken link's URL, `#preserving-unknown-fields`.
3. Fix the link. For a broken page hash (or _anchor_) link, check whether the topic was renamed or removed.
2. Using a text editor, do a full-text search (usually Ctrl+F or Command+F) for the
broken link's URL, `#preserving-unknown-fields`.
3. Fix the link. For a broken page hash (or _anchor_) link,
check whether the topic was renamed or removed.
-->
1. 转到含有失效链接的 Markdown 文件。
2. 使用文本编辑器全文搜索失效链接的 URL通常使用 Ctrl+F 或 Command+F`#preserving-unknown-fields`。
View File
@ -148,7 +148,14 @@ The resource and subresource is determined from the incoming request's path:
资源和子资源是根据传入请求的路径确定的:
<!--
Kubelet API | resource | subresource
Kubelet API | resource | subresource
--------------------|----------|------------
/stats/\* | nodes | stats
/metrics/\* | nodes | metrics
/logs/\* | nodes | log
/spec/\* | nodes | spec
/checkpoint/\* | nodes | checkpoint
*all others* | nodes | proxy
-->
Kubelet API | 资源 | 子资源
-------------|----------|------------
@ -156,6 +163,7 @@ Kubelet API | 资源 | 子资源
/metrics/\* | nodes | metrics
/logs/\* | nodes | log
/spec/\* | nodes | spec
/checkpoint/\* | nodes | checkpoint
**其它所有** | nodes | proxy
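Authorization for the kubelet API paths above is typically granted through RBAC rules on the `nodes` subresources from this table; a minimal read-only sketch (the role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-api-reader       # hypothetical name
rules:
- apiGroups: [""]
  resources:                     # subresources from the table above
  - nodes/stats
  - nodes/metrics
  - nodes/log
  - nodes/checkpoint
  verbs: ["get"]
```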
<!--
View File
@ -134,6 +134,9 @@ stored as extra 'private claims' in the issued JWT.
When a bound token is presented to the kube-apiserver, the service account authenticator
will extract and verify these claims.
If the referenced object or the ServiceAccount is pending deletion (for example, due to finalizers),
then for any instant that is 60 seconds (or more) after the `.metadata.deletionTimestamp` date,
authentication with that token would fail.
If the referenced object no longer exists (or its `metadata.uid` does not match),
the request will not be authenticated.
-->
@ -141,6 +144,9 @@ the request will not be authenticated.
将作为额外的“私有声明”存储在所发布的 JWT 中。
当将被绑定的令牌提供给 kube-apiserver 时,服务帐户身份认证组件将提取并验证这些声明。
如果所引用的对象或 ServiceAccount 正处于删除中(例如,由于 finalizer 的原因),
那么在 `.metadata.deletionTimestamp` 时间戳之后的 60 秒(或更长时间)后的某一时刻,
使用该令牌进行身份认证将会失败。
如果所引用的对象不再存在(或其 `metadata.uid` 不匹配),则请求将无法通过认证。
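A bound token as described above is obtained through the ServiceAccount `token` subresource; a minimal sketch of the request body (the Pod name is an assumption) might be:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
  - https://kubernetes.default.svc
  expirationSeconds: 3600
  boundObjectRef:                # ties the token's validity to this Pod
    apiVersion: v1
    kind: Pod
    name: my-pod                 # assumed Pod name
```

This is submitted to the `serviceaccounts/{name}/token` subresource (for example via `kubectl create token` with `--bound-object-kind`/`--bound-object-name`), not with `kubectl apply`.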
<!--
View File
@ -9,6 +9,9 @@ stages:
- stage: stable
defaultValue: true
fromVersion: "1.25"
toVersion: "1.30"
removed: true
---
<!--
Normalize HTTP get URL and Header passing for lifecycle
View File
@ -9,8 +9,27 @@ stages:
- stage: alpha
defaultValue: false
fromVersion: "1.28"
toVersion: "1.30"
- stage: beta
defaultValue: true
fromVersion: "1.31"
---
<!--
Allow the API server to serve consistent lists from cache.
Enhance Kubernetes API server performance by serving consistent **list** requests
directly from its watch cache, improving scalability and response times.
To serve consistent lists from the cache, Kubernetes requires a newer etcd version (v3.4.31+ or v3.5.13+)
that includes fixes to the watch progress request feature.
If an older etcd version is detected, Kubernetes automatically falls back to serving consistent reads from etcd.
Progress notifications ensure watch cache is consistent with etcd while reducing
the need for resource-intensive quorum reads from etcd.
See the Kubernetes documentation on [Semantics for **get** and **list**](/docs/reference/using-api/api-concepts/#semantics-for-get-and-list) for more details.
-->
允许 API 服务器从缓存中提供一致的 list 操作。
通过直接使用监视缓存来为 **list** 请求提供一致性的数据,提升 Kubernetes API 服务器的性能,
从而改善可扩展性和响应时间。为了从缓存获取一致的列表Kubernetes 需要使用较新的
Etcd 版本v3.4.31+ 或 v3.5.13+),这些版本包含了对监视进度请求特性的修复。
如果使用较旧的 Etcd 版本Kubernetes 会自动检测到并回退到从 Etcd 提供一致的读取操作。
进度通知能够确保监视缓存与 Etcd 保持一致,同时减少对 Etcd 进行资源密集型仲裁读取的需求。
更多细节请参阅 Kubernetes 文档
[**get** 和 **list** 语义](/zh-cn/docs/reference/using-api/api-concepts/#semantics-for-get-and-list)。
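Assuming this page documents the `ConsistentListFromCache` gate, it would be toggled on the API server command line; a sketch of the relevant fragment of a static Pod manifest (layout and other flags assumed):

```yaml
# Fragment of a kube-apiserver static Pod manifest (paths/flags assumed).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --feature-gates=ConsistentListFromCache=true
    # ...other flags...
```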
View File
@ -13,6 +13,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.29"
toVersion: "1.30"
- stage: stable
defaultValue: true
fromVersion: "1.31"
---
<!--
Enable support to CDI device IDs in the
View File
@ -8,10 +8,12 @@ _build:
stages:
- stage: alpha
defaultValue: false
fromVersion: "1.26"
fromVersion: "1.30"
---
<!--
Enables support for resources with custom parameters and a lifecycle
that is independent of a Pod.
that is independent of a Pod. Allocation of resources is handled
by the Kubernetes scheduler based on "structured parameters".
-->
启用对具有自定义参数和独立于 Pod 生命周期的资源的支持。
资源的分配由 Kubernetes 调度器根据“结构化参数”进行处理。
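A resource with a lifecycle independent of a Pod, as described above, is represented by a ResourceClaim; a minimal sketch (the API version and class name are assumptions and vary by release):

```yaml
apiVersion: resource.k8s.io/v1alpha2   # API version differs across releases
kind: ResourceClaim
metadata:
  name: gpu-claim                      # hypothetical name
spec:
  resourceClassName: example-gpu-class # assumed ResourceClass
```

The scheduler then allocates the claim based on "structured parameters" and places the Pod that references it.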
View File
@ -9,6 +9,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.27"
toVersion: "1.30"
- stage: stable
defaultValue: true
fromVersion: "1.31"
---
<!--
Enables Indexed Jobs to be scaled up or down by mutating both
View File
@ -9,6 +9,10 @@ stages:
- stage: alpha
defaultValue: false
fromVersion: "1.23"
toVersion: "1.30"
- stage: beta
defaultValue: true
fromVersion: "1.31"
---
<!--
View File
@ -2,7 +2,6 @@
title: kube-apiserver 配置 (v1)
content_type: tool-reference
package: apiserver.config.k8s.io/v1
auto_generated: true
---
<!--
title: kube-apiserver Configuration (v1)
@ -14,7 +13,6 @@ auto_generated: true
<!--
<p>Package v1 is the v1 version of the API.</p>
-->
<p>v1 包中包含 API 的 v1 版本。</p>
<!--
@ -23,6 +21,7 @@ auto_generated: true
## 资源类型
- [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration)
- [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration)
## `AdmissionConfiguration` {#apiserver-config-k8s-io-v1-AdmissionConfiguration}
@ -52,6 +51,125 @@ auto_generated: true
</tbody>
</table>
## `EncryptionConfiguration` {#apiserver-config-k8s-io-v1-EncryptionConfiguration}
<p>
<!--
EncryptionConfiguration stores the complete configuration for encryption providers.
It also allows the use of wildcards to specify the resources that should be encrypted.
Use '&ast;.&lt;group&gt;' to encrypt all resources within a group or '&ast;.&ast;' to encrypt all resources.
'&ast;.' can be used to encrypt all resources in the core group. '&ast;.&ast;' will encrypt all
resources, even custom resources that are added after API server start.
Use of wildcards that overlap within the same resource list or across multiple
entries are not allowed since part of the configuration would be ineffective.
Resource lists are processed in order, with earlier lists taking precedence.
-->
EncryptionConfiguration 存储加密驱动的完整配置。它还允许使用通配符来指定应该被加密的资源。
使用 “&ast;.&lt;group&gt;” 以加密组内的所有资源,或使用 “&ast;.&ast;” 以加密所有资源。
&ast;.” 可用于加密核心组中的所有资源。“&ast;.&ast;” 将加密所有资源,包括在 API 服务器启动后添加的自定义资源。
由于部分配置可能无效,所以不允许在同一资源列表中或跨多个条目使用重叠的通配符。
资源列表被按顺序处理,会优先处理较早的列表。
</p>
<p>
<!--
Example:
-->
示例:
</p>
<!--
- identity: {} # do not encrypt events even though *.* is specified below
-->
<pre><code>kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
- events
providers:
- identity: {} # 即使以下 *.* 被指定,也不会对事件加密
- resources:
- secrets
- configmaps
- pandas.awesome.bears.example
providers:
- aescbc:
keys:
- name: key1
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- resources:
- '*.apps'
providers:
- aescbc:
keys:
- name: key2
secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ/Cg==
- resources:
- '*.*'
providers:
- aescbc:
keys:
- name: key3
secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==</code></pre>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.config.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>EncryptionConfiguration</code></td></tr>
<tr><td><code>resources</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-ResourceConfiguration"><code>[]ResourceConfiguration</code></a>
</td>
<td>
<p>
<!--
resources is a list containing resources, and their corresponding encryption providers.
-->
resources 是一个包含资源及其对应加密驱动的列表。
</p>
</td>
</tr>
</tbody>
</table>
## `AESConfiguration` {#apiserver-config-k8s-io-v1-AESConfiguration}
<!--
**Appears in:**
-->
**出现在:**
- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)
<p>
<!--
AESConfiguration contains the API configuration for an AES transformer.
-->
AESConfiguration 包含针对 AES 转换器的 API 配置。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>keys</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-Key"><code>[]Key</code></a>
</td>
<td>
<p>
<!--
keys is a list of keys to be used for creating the AES transformer.
Each key has to be 32 bytes long for AES-CBC and 16, 24 or 32 bytes for AES-GCM.
-->
keys 是一个用于创建 AES 转换器的密钥列表。
对于 AES-CBC每个密钥的长度必须是 32 字节;
对于 AES-GCM每个密钥的长度可以是 16、24 或 32 字节。
</p>
</td>
</tr>
</tbody>
</table>
## `AdmissionPluginConfiguration` {#apiserver-config-k8s-io-v1-AdmissionPluginConfiguration}
<!--
@ -110,3 +228,328 @@ configuration. If present, it will be used instead of the path to the configurat
</tbody>
</table>
## `IdentityConfiguration` {#apiserver-config-k8s-io-v1-IdentityConfiguration}
<!--
**Appears in:**
-->
**出现在:**
- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)
<p>
<!--
IdentityConfiguration is an empty struct to allow identity transformer in provider configuration.
-->
IdentityConfiguration 是一个空结构体,允许在驱动配置中使用身份转换器。
</p>
## `KMSConfiguration` {#apiserver-config-k8s-io-v1-KMSConfiguration}
<!--
**Appears in:**
-->
**出现在:**
- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)
<p>
<!--
KMSConfiguration contains the name, cache size and path to configuration file for a KMS based envelope transformer.
-->
KMSConfiguration 包含 KMS 型信封转换器所用的配置文件的名称、缓存大小和路径。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>
<code>string</code>
</td>
<td>
<p>
<!--
apiVersion of KeyManagementService
-->
KeyManagementService 的 apiVersion
</p>
</td>
</tr>
<tr><td><code>name</code> <B><!--[Required]-->[必需]</B><br/>
<code>string</code>
</td>
<td>
<p>
<!--
name is the name of the KMS plugin to be used.
-->
name 是要使用的 KMS 插件的名称。
</p>
</td>
</tr>
<tr><td><code>cachesize</code><br/>
<code>int32</code>
</td>
<td>
<p>
<!--
cachesize is the maximum number of secrets which are cached in memory. The default value is 1000.
Set to a negative value to disable caching. This field is only allowed for KMS v1 providers.
-->
cachesize 是内存中缓存的最大 Secret 数量。默认值为 1000。
设置为负值将禁用缓存。此字段仅允许用于 KMS v1 驱动。
</p>
</td>
</tr>
<tr><td><code>endpoint</code> <B><!--[Required]-->[必需]</B><br/>
<code>string</code>
</td>
<td>
<p>
<!--
endpoint is the gRPC server listening address, for example &quot;unix:///var/run/kms-provider.sock&quot;.
-->
endpoint 是 gRPC 服务器的监听地址,例如 "unix:///var/run/kms-provider.sock"。
</p>
</td>
</tr>
<tr><td><code>timeout</code><br/>
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
</td>
<td>
<p>
<!--
timeout for gRPC calls to kms-plugin (ex. 5s). The default is 3 seconds.
-->
timeout 是 gRPC 调用到 KMS 插件的超时时间(例如 5s。默认值为 3 秒。
</p>
</td>
</tr>
</tbody>
</table>
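A KMSConfiguration entry is used inside an EncryptionConfiguration; a sketch with a KMS v2 provider (the plugin name and socket path are assumptions):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      apiVersion: v2                   # cachesize is only valid for KMS v1
      name: my-kms-plugin              # assumed plugin name
      endpoint: unix:///var/run/kms-provider.sock
      timeout: 3s
  - identity: {}                       # fallback for reading unencrypted data
```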
## `Key` {#apiserver-config-k8s-io-v1-Key}
<!--
**Appears in:**
-->
**出现在:**
- [AESConfiguration](#apiserver-config-k8s-io-v1-AESConfiguration)
- [SecretboxConfiguration](#apiserver-config-k8s-io-v1-SecretboxConfiguration)
<p>
<!--
Key contains name and secret of the provided key for a transformer.
-->
Key 包含为转换器所提供的密钥的名称和 Secret。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B><!--[Required]-->[必需]</B><br/>
<code>string</code>
</td>
<td>
<p>
<!--
name is the name of the key to be used while storing data to disk.
-->
name 是在将数据存储到磁盘时所使用的密钥名称。
</p>
</td>
</tr>
<tr><td><code>secret</code> <B><!--[Required]-->[必需]</B><br/>
<code>string</code>
</td>
<td>
<p>
<!--
secret is the actual key, encoded in base64.
-->
secret 是实际的密钥,以 base64 编码。
</p>
</td>
</tr>
</tbody>
</table>
## `ProviderConfiguration` {#apiserver-config-k8s-io-v1-ProviderConfiguration}
<!--
**Appears in:**
-->
**出现在:**
- [ResourceConfiguration](#apiserver-config-k8s-io-v1-ResourceConfiguration)
<p>
<!--
ProviderConfiguration stores the provided configuration for an encryption provider.
-->
ProviderConfiguration 存储为加密驱动提供的配置。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>aesgcm</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-AESConfiguration"><code>AESConfiguration</code></a>
</td>
<td>
<p>
<!--
aesgcm is the configuration for the AES-GCM transformer.
-->
aesgcm 是 AES-GCM 转换器的配置。
</p>
</td>
</tr>
<tr><td><code>aescbc</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-AESConfiguration"><code>AESConfiguration</code></a>
</td>
<td>
<p>
<!--
aescbc is the configuration for the AES-CBC transformer.
-->
aescbc 是 AES-CBC 转换器的配置。
</p>
</td>
</tr>
<tr><td><code>secretbox</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-SecretboxConfiguration"><code>SecretboxConfiguration</code></a>
</td>
<td>
<p>
<!--
secretbox is the configuration for the Secretbox based transformer.
-->
secretbox 是基于 Secretbox 的转换器的配置。
</p>
</td>
</tr>
<tr><td><code>identity</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-IdentityConfiguration"><code>IdentityConfiguration</code></a>
</td>
<td>
<p>
<!--
identity is the (empty) configuration for the identity transformer.
-->
identity 是身份转换器的(空)配置。
</p>
</td>
</tr>
<tr><td><code>kms</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-KMSConfiguration"><code>KMSConfiguration</code></a>
</td>
<td>
<p>
<!--
kms contains the name, cache size and path to configuration file for a KMS based envelope transformer.
-->
kms 包含 KMS 型信封转换器所用的配置文件的名称、缓存大小和路径。
</p>
</td>
</tr>
</tbody>
</table>
## `ResourceConfiguration` {#apiserver-config-k8s-io-v1-ResourceConfiguration}
<!--
**Appears in:**
-->
**出现在:**
- [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration)
<p>
<!--
ResourceConfiguration stores per resource configuration.
-->
ResourceConfiguration 存储每个资源的配置。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>resources</code> <B><!--[Required]-->[必需]</B><br/>
<code>[]string</code>
</td>
<td>
<p>
<!--
resources is a list of kubernetes resources which have to be encrypted. The resource names are derived from <code>resource</code> or <code>resource.group</code> of the group/version/resource.
eg: pandas.awesome.bears.example is a custom resource with 'group': awesome.bears.example, 'resource': pandas.
Use '&ast;.&ast;' to encrypt all resources and '&ast;.&lt;group&gt;' to encrypt all resources in a specific group.
eg: '&ast;.awesome.bears.example' will encrypt all resources in the group 'awesome.bears.example'.
eg: '&ast;.' will encrypt all resources in the core group (such as pods, configmaps, etc).
-->
resources 是一个需要加密的 Kubernetes 资源列表。
资源名称来源于组/版本/资源的 “<code>resource</code>” 或 “<code>resource.group</code>”。
例如pandas.awesome.bears.example 是一个自定义资源,其 “group” 为 awesome.bears.example
“resource” 为 pandas。使用 “&ast;.&ast;” 以加密所有资源,使用 “&ast;.&lt;group&gt;” 以加密特定组中的所有资源。
例如,“&ast;.awesome.bears.example” 将加密 “awesome.bears.example” 组中的所有资源。
再比如,“&ast;.” 将加密核心组中的所有资源(如 Pod、ConfigMap 等)。
</p>
</td>
</tr>
<tr><td><code>providers</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-ProviderConfiguration"><code>[]ProviderConfiguration</code></a>
</td>
<td>
<p>
<!--
providers is a list of transformers to be used for reading and writing the resources to disk.
eg: aesgcm, aescbc, secretbox, identity, kms.
-->
providers 是从磁盘读取资源和写入资源到磁盘要使用的转换器的列表。
例如aesgcm、aescbc、secretbox、identity、kms。
</p>
</td>
</tr>
</tbody>
</table>
## `SecretboxConfiguration` {#apiserver-config-k8s-io-v1-SecretboxConfiguration}
<!--
**Appears in:**
-->
**出现在:**
- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)
<p>
<!--
SecretboxConfiguration contains the API configuration for a Secretbox transformer.
-->
SecretboxConfiguration 包含 Secretbox 转换器的 API 配置。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
<tbody>
<tr><td><code>keys</code> <B><!--[Required]-->[必需]</B><br/>
<a href="#apiserver-config-k8s-io-v1-Key"><code>[]Key</code></a>
</td>
<td>
<p>
<!--
keys is a list of keys to be used for creating the Secretbox transformer.
Each key has to be 32 bytes long.
-->
keys 是一个用于创建 Secretbox 转换器的密钥列表。每个密钥的长度必须为 32 字节。
</p>
</td>
</tr>
</tbody>
</table>
View File
@ -13,7 +13,7 @@ auto_generated: true
<!--
## Resource Types
-->
## 资源类型
## 资源类型 {#resource-types}
- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig)
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
@ -97,7 +97,7 @@ JSONOptions 包含为 &quot;json&quot; 日志格式提供的选项。
<td>
<!--
(Members of <code>OutputRoutingOptions</code> are embedded into this type.)
<span class="text-muted">No descrtputRoutingOptions contains options that are supported biption provided.</span>
<span class="text-muted">No description provided.</span>
-->
<code>OutputRoutingOptions</code> 的成员嵌入到此类型中。)
<span class="text-muted">没有提供描述。</span>
@ -108,11 +108,13 @@ JSONOptions 包含为 &quot;json&quot; 日志格式提供的选项。
## `LogFormatFactory` {#LogFormatFactory}
<p>
<!--
LogFormatFactory provides support for a certain additional,
non-default log format.
-->
<p>LogFormatFactory 提供了对某些附加的、非默认的日志格式的支持。</p>
LogFormatFactory 提供了对某些附加的、非默认的日志格式的支持。
</p>
## `LoggingConfiguration` {#LoggingConfiguration}
@ -123,10 +125,12 @@ non-default log format.
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
<p>
<!--
LoggingConfiguration contains logging options.
-->
LoggingConfiguration 包含日志选项。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
@ -273,8 +277,8 @@ certain global defaults.
<!--
OutputRoutingOptions contains options that are supported by both &quot;text&quot; and &quot;json&quot;.
-->
</p>
OutputRoutingOptions 包含 &quot;text&quot;&quot;json&quot; 支持的选项。
</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th><!--Description-->描述</th></tr></thead>
@ -311,7 +315,7 @@ Only available when the LoggingAlphaOptions feature gate is enabled.</p>
</tbody>
</table>
## `TextOptions` {#TextOptions}
## `TextOptions` {#TextOptions}
<!--
**Appears in:**
@ -327,7 +331,6 @@ TextOptions contains options for logging format &quot;text&quot;.
TextOptions 包含用于记录 &quot;text&quot; 格式的选项。
</p>
<table class="table">
<thead><tr><th width="30%">Field</th><th><!--Description-->描述</th></tr></thead>
<tbody>
@ -406,10 +409,12 @@ flushFrequency field, and new fields should use metav1.Duration.
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
<p>
<!--
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
-->
<p>TracingConfiguration 为 OpenTelemetry 追踪客户端提供版本化的配置信息。</p>
TracingConfiguration 为 OpenTelemetry 追踪客户端提供版本化的配置信息。
</p>
<table class="table">
<thead><tr><th width="30%">字段</th><th>描述</th></tr></thead>
@ -458,12 +463,14 @@ rate, but otherwise never samples.
- [LoggingConfiguration](#LoggingConfiguration)
<p>
<!--
VModuleConfiguration is a collection of individual file names or patterns
and the corresponding verbosity threshold.
-->
VModuleConfiguration 是一个集合,其中包含一个个文件名(或文件名模式)
及其对应的详细程度阈值。
</p>
## `VerbosityLevel` {#VerbosityLevel}
@ -479,13 +486,16 @@ VModuleConfiguration 是一个集合,其中包含一个个文件名(或文
- [LoggingConfiguration](#LoggingConfiguration)
<p>
<!--
VerbosityLevel represents a klog or logr verbosity threshold.
-->
VerbosityLevel 表示 klog 或 logr 的详细程度verbosity阈值。
</p>
## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig}
<p>
<!--
CredentialProviderConfig is the configuration containing information about
each exec credential provider. Kubelet reads this configuration from disk and enables
@ -493,6 +503,7 @@ each provider as specified by the CredentialProvider type.
-->
CredentialProviderConfig 包含有关每个 exec 凭据提供者的配置信息。
Kubelet 从磁盘上读取这些配置信息,并根据 CredentialProvider 类型启用各个提供者。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
@ -525,10 +536,12 @@ auth keys, the value from the provider earlier in this list is used.
## `KubeletConfiguration` {#kubelet-config-k8s-io-v1beta1-KubeletConfiguration}
<p>
<!--
KubeletConfiguration contains the configuration for the Kubelet
-->
KubeletConfiguration 中包含 Kubelet 的配置。
</p>
<table class="table">
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
@ -1132,8 +1145,8 @@ collected based on being unused for too long.
Default: &quot;0s&quot; (disabled)
-->
<p><code>imageMaximumGCAge</code> 是对未使用镜像进行垃圾收集之前允许其存在的时长。
此字段的默认值为 &quot;0s&quot;,表示禁用此字段,这意味着镜像不会因为过长时间不使用而被垃圾收集。</p>
<p>默认值:&quot;0s&quot;(已禁用)</p>
</td>
</tr>
@ -2109,8 +2122,7 @@ Default: &quot;&quot;
-->
<p><code>systemReservedCgroup</code> 帮助 kubelet 识别用来为 OS 系统级守护进程实施
<code>systemReserved</code> 计算资源预留时使用的顶级控制组CGroup
更多细节参阅<a href="https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">节点可分配资源</a></p>
<p>默认值:&quot;&quot;</p>
</td>
</tr>
@ -2129,8 +2141,7 @@ Default: &quot;&quot;
-->
<p><code>kubeReservedCgroup</code> 帮助 kubelet 识别用来为 Kubernetes 节点系统级守护进程实施
<code>kubeReserved</code> 计算资源预留时使用的顶级控制组CGroup
更多细节参阅<a href="https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">节点可分配资源</a></p>
<p>默认值:&quot;&quot;</p>
</td>
</tr>
@ -2158,8 +2169,7 @@ Default: [&quot;pods&quot;]
<p>如果列表中包含 <code>system-reserved</code>,则必须设置 <code>systemReservedCgroup</code></p>
<p>如果列表中包含 <code>kube-reserved</code>,则必须设置 <code>kubeReservedCgroup</code></p>
<p>这个字段只有在 <code>cgroupsPerQOS</code>被设置为 <code>true</code> 才被支持。</p>
<p>更多细节参阅<a href="https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">节点可分配资源</a></p>
<p>默认值:[&quot;pods&quot;]</p>
</td>
</tr>
@ -2365,6 +2375,7 @@ Default: nil
<a href="#kubelet-config-k8s-io-v1beta1-MemoryReservation"><code>[]MemoryReservation</code></a>
</td>
<td>
<p>
<!--
reservedMemory specifies a comma-separated list of memory reservations for NUMA nodes.
The parameter makes sense only in the context of the memory manager feature.
@ -2378,32 +2389,37 @@ reserved memory from all NUMA nodes should be equal to the amount of memory spec
by the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">node allocatable</a>.
If at least one node allocatable parameter has a non-zero value, you will need
to specify at least one NUMA node.
Also, avoid specifying:
-->
<code>reservedMemory</code> 给出一个逗号分隔的列表,为 NUMA 节点预留内存。
此参数仅在内存管理器功能特性语境下有意义。内存管理器不会为容器负载分配预留内存。
例如,如果你的 NUMA0 节点内存为 10Gi<code>reservedMemory</code> 设置为在 NUMA0
上预留 1Gi 内存,内存管理器会认为其上只有 9Gi 内存可供分配。
你可以设置不同数量的 NUMA 节点和内存类型。你也可以完全忽略这个字段,不过你要清楚,
所有 NUMA 节点上预留内存的总量要等于通过
<a href="https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">节点可分配资源</a>设置的内存量。
如果至少有一个节点可分配参数设置值非零,则你需要设置至少一个 NUMA 节点。
此外,避免如下设置:
</p>
<ol>
<!--
<li>Duplicates, the same NUMA node, and memory type, but with a different value.</li>
<li>zero limits for any memory type.</li>
<li>NUMAs nodes IDs that do not exist under the machine.</li>
<li>memory types except for memory and hugepages-&lt;size&gt;</li>
</ol>
-->
<li>在配置值中存在重复项NUMA 节点和内存类型相同,但配置值不同,这是不允许的。</li>
<li>为任何内存类型设置限制值为零。</li>
<li>NUMA 节点 ID 在宿主系统上不存在。</li>
<li><code>memory</code><code>hugepages-&lt;size&gt;</code> 之外的内存类型。</li>
</ol>
<p>
<!--
Default: nil
-->
默认值nil
</p>
</td>
</tr>
@ -2415,7 +2431,7 @@ Also, avoid specifying:</p>
enableProfilingHandler enables profiling via web interface host:port/debug/pprof/
Default: true
-->
<p><code>enableProfilingHandler</code> 启用通过 host:port/debug/pprof/ 接口来执行性能分析。</p>
<p><code>enableProfilingHandler</code> 启用通过 <code>host:port/debug/pprof/</code> 接口来执行性能分析。</p>
<p>默认值true</p>
</td>
</tr>
@ -2428,7 +2444,7 @@ Default: true
enableDebugFlagsHandler enables flags endpoint via web interface host:port/debug/flags/v
Default: true
-->
<p><code>enableDebugFlagsHandler</code> 启用通过 <code>host:port/debug/flags/v</code> Web
接口上的标志设置。</p>
<p>默认值true</p>
</td>
@ -2470,7 +2486,7 @@ Default: 0.8
</tr>
<tr><td><code>registerWithTaints</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#taint-v1-core"><code>[]core/v1.Taint</code></a>
</td>
<td>
<!--
@ -2563,6 +2579,25 @@ Linux 支持 UNIX 域套接字,而 Windows 支持命名管道和 TCP 端点。
如果未指定,则使用 containerRuntimeEndpoint 中的值。</p>
</td>
</tr>
<tr><td><code>failCgroupV1</code><br/>
<code>bool</code>
</td>
<td>
<p>
<!--
FailCgroupV1 prevents the kubelet from starting on hosts
that use cgroup v1. By default, this is set to 'false', meaning
the kubelet is allowed to start on cgroup v1 hosts unless this
option is explicitly enabled.
Default: false
-->
<code>failCgroupV1</code> 防止 kubelet 在使用 cgroup v1 的主机上启动。
默认情况下,此选项设置为 &quot;false&quot;,这意味着除非此选项被显式启用,
否则 kubelet 被允许在 cgroup v1 主机上启动。
默认值false
</p>
</td>
</tr>
</tbody>
</table>
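下面是一个经过删减的 KubeletConfiguration 示意清单(非自动生成文档的一部分,
NUMA 节点编号和预留量为假设值),演示上文介绍的 `reservedMemory` 与
`failCgroupV1` 字段的写法:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false        # 允许 kubelet 在 cgroup v1 主机上启动(默认行为)
reservedMemory:
  - numaNode: 0            # 示例NUMA 节点 0
    limits:
      memory: 1Gi          # 在该节点上预留 1Gi 内存
```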
@ -2585,7 +2620,7 @@ SerializedNodeConfigSource 允许对 `v1.NodeConfigSource` 执行序列化操作
<tr><td><code>kind</code><br/>string</td><td><code>SerializedNodeConfigSource</code></td></tr>
<tr><td><code>source</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a>
</td>
<td>
<!--
@ -2657,8 +2692,8 @@ a single subdomain segment, so *.io does not match *.k8s.io.
-->
<p><code>matchImages</code> 中的每个条目都是一个模式字符串,其中可以包含端口号和路径。
域名部分可以包含统配符,但端口或路径部分不可以。通配符可以用作子域名,例如
<code>&ast;.k8s.io</code><code>k8s.&ast;.io</code>,以及顶级域名,如 <code>k8s.&ast;</code>
对类似 <code>app&ast;.k8s.io</code> 这类部分子域名的匹配也是支持的。
每个通配符只能用来匹配一个子域名段,所以 <code>&ast;.io</code> 不会匹配 <code>&ast;.k8s.io</code></p>
<!--
A match exists between an image and a matchImage when all of the below are true:
@ -3069,7 +3104,7 @@ MemoryReservation 为每个 NUMA 节点设置不同类型的内存预留。
</tr>
<tr><td><code>limits</code> <B><!-- [Required] -->[必需]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
</td>
<td>
<!--span class="text-muted">No description provided.</span-->

View File

@ -87,8 +87,12 @@ SelfSubjectReviewStatus 由 kube-apiserver 进行填充并发送回用户。
- **userInfo.groups** ([]string)
<!--
*Atomic: will be replaced during a merge*
The names of groups this user is a part of.
-->
**原子性:将在合并期间被替换**
此用户所属的用户组的名称。

View File

@ -77,8 +77,12 @@ TokenReviewSpec 是对令牌身份验证请求的描述。
- **audiences** ([]string)
<!--
*Atomic: will be replaced during a merge*
Audiences is a list of the identifiers that the resource server presented with the token identifies as. Audience-aware token authenticators will verify that the token was intended for at least one of the audiences in this list. If no audiences are provided, the audience will default to the audience of the Kubernetes apiserver.
-->
**原子性:将在合并期间被替换**
audiences 是带有令牌的资源服务器标识为受众的标识符列表。
受众感知令牌身份验证器将验证令牌是否适用于此列表中的至少一个受众。
如果未提供受众,受众将默认为 Kubernetes API 服务器的受众。
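下面是一个示意性的 TokenReview 清单(非自动生成文档的一部分,令牌为占位符,
受众值仅作演示),展示如何在 `spec` 中设置 `audiences`

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<此处填入待校验的不透明令牌>"   # 占位符,请勿原样使用
  audiences:
    - "https://kubernetes.default.svc"    # 示例受众标识符
```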
@ -102,8 +106,13 @@ TokenReviewStatus 是令牌认证请求的结果。
- **audiences** ([]string)
<!--
*Atomic: will be replaced during a merge*
Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token. An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token's audiences. A client of the TokenReview API that sets the spec.audiences field should validate that a compatible audience identifier is returned in the status.audiences field to ensure that the TokenReview server is audience aware. If a TokenReview returns an empty status.audience field where status.authenticated is "true", the token is valid against the audience of the Kubernetes API server.
-->
**原子性:将在合并期间被替换**
audiences 是身份验证者选择的与 TokenReview 和令牌兼容的受众标识符。标识符是
TokenReviewSpec 受众和令牌受众的交集中的任何标识符。设置 spec.audiences
字段的 TokenReview API 的客户端应验证在 status.audiences 字段中返回了兼容的受众标识符,
@ -146,6 +155,12 @@ TokenReviewStatus 是令牌认证请求的结果。
- **user.groups** ([]string)
<!--
Atomic: will be replaced during a merge
-->
**原子性:将在合并期间被替换**
<!--
The names of groups this user is a part of.
-->

View File

@ -7,7 +7,6 @@ content_type: "api_reference"
description: "ClusterRoleBinding 引用 ClusterRole但不包含它。"
title: "ClusterRoleBinding"
weight: 6
auto_generated: false
---
<!--
---
@ -87,12 +86,18 @@ ClusterRoleBinding 引用 ClusterRole但不包含它。
name 是被引用的资源的名称
<!--
- **subjects** ([]Subject)
*Atomic: will be replaced during a merge*
Subjects holds references to the objects the role applies to.
<a name="Subject"></a>
*Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names.*
-->
- **subjects** ([]Subject)
**原子性:将在合并期间被替换**
Subjects 包含角色所适用的对象的引用。
<a name="Subject"></a>

View File

@ -99,12 +99,6 @@ resourceAuthorizationAttributes 和 nonResourceAuthorizationAttributes 二者必
<a name="ResourceAttributes"></a>
*ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface*
-->
- **resourceAttributes** (ResourceAttributes)
@ -112,12 +106,224 @@ resourceAuthorizationAttributes 和 nonResourceAuthorizationAttributes 二者必
<a name="ResourceAttributes"></a>
**resourceAttributes 包括提供给 Authorizer 接口进行资源请求鉴权时所用的属性。**
<!--
- **resourceAttributes.fieldSelector** (FieldSelectorAttributes)
fieldSelector describes the limitation on access based on field. It can only limit access, not broaden it.
This field is alpha-level. To use this field, you must enable the `AuthorizeWithSelectors` feature gate (disabled by default).
-->
- **resourceAttributes.fieldSelector** (FieldSelectorAttributes)
fieldSelector 描述基于字段的访问限制。此字段只能限制访问权限,而不能扩大访问权限。
此字段处于 Alpha 级别。要使用此字段,你必须启用 `AuthorizeWithSelectors` 特性门控(默认禁用)。
<!--
<a name="FieldSelectorAttributes"></a>
*FieldSelectorAttributes indicates a field limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid.*
-->
<a name="FieldSelectorAttributes"></a>
FieldSelectorAttributes 表示一个限制访问的字段。建议 Webhook 的开发者们:
* 确保 rawSelector 和 requirements 未被同时设置
* 如果设置了 fieldSelector则考虑 requirements 字段
* 如果设置了 fieldSelector不要尝试解析或考虑 rawSelector 字段。
这是为了避免出现另一个 CVE-2022-2880即我们不希望不同系统以一致的方式解析某个查询
有关细节参见 https://www.oxeye.io/resources/golang-parameter-smuggling-attack
对于 kube-apiserver 的 SubjectAccessReview 端点:
* 如果 rawSelector 为空且 requirements 为空,则请求未被限制。
* 如果 rawSelector 存在且 requirements 为空,则 rawSelector 将被解析,并在解析成功的情况下进行限制。
* 如果 rawSelector 为空且 requirements 存在,则应优先使用 requirements。
* 如果 rawSelector 存在requirements 也存在,则请求无效。
<!--
- **resourceAttributes.fieldSelector.rawSelector** (string)
rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.
-->
- **resourceAttributes.fieldSelector.rawSelector** (string)
rawSelector 是字段选择算符的序列化形式,将被包含在查询参数中。
建议 Webhook 实现忽略 rawSelector。只要 requirements 不存在,
kube-apiserver 的 SubjectAccessReview 将解析 rawSelector。
<!--
- **resourceAttributes.fieldSelector.requirements** ([]FieldSelectorRequirement)
*Atomic: will be replaced during a merge*
requirements is the parsed interpretation of a field selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.
<a name="FieldSelectorRequirement"></a>
*FieldSelectorRequirement is a selector that contains values, a key, and an operator that relates the key and values.*
-->
- **resourceAttributes.fieldSelector.requirements** ([]FieldSelectorRequirement)
**原子:将在合并期间被替换**
requirements 是字段选择算符已解析的解释。资源实例必须满足所有 requirements 才能匹配此选择算符。
Webhook 实现应处理 requirements但如何处理由 Webhook 自行决定。
由于 requirements 只能限制请求,因此如果不理解 requirements可以安全地将请求鉴权为无限制请求。
<a name="FieldSelectorRequirement"></a>
**FieldSelectorRequirement 是一个选择算符,包含值、键以及与将键和值关联起来的运算符。**
<!--
- **resourceAttributes.fieldSelector.requirements.key** (string), required
key is the field selector key that the requirement applies to.
- **resourceAttributes.fieldSelector.requirements.operator** (string), required
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. The list of operators may grow in the future.
- **resourceAttributes.fieldSelector.requirements.values** ([]string)
*Atomic: will be replaced during a merge*
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.
-->
- **resourceAttributes.fieldSelector.requirements.key** (string),必需
key 是 requirements 应用到的字段选择算符键。
- **resourceAttributes.fieldSelector.requirements.operator** (string),必需
operator 表示键与一组值之间的关系。有效的运算符有 In、NotIn、Exists、DoesNotExist。
运算符列表可能会在未来增加。
- **resourceAttributes.fieldSelector.requirements.values** ([]string)
**原子:将在合并期间被替换**
values 是一个字符串值的数组。如果运算符是 In 或 NotIn则 values 数组必须非空。
如果运算符是 Exists 或 DoesNotExist则 values 数组必须为空。
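下面是一个示意性的 SubjectAccessReview 清单(非自动生成文档的一部分,
用户名、节点名均为假设值;使用 `fieldSelector` 需要启用 `AuthorizeWithSelectors`
特性门控),演示上述 `resourceAttributes.fieldSelector.requirements` 的写法:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: "jane"                     # 假设的用户名
  resourceAttributes:
    verb: "list"
    group: ""
    resource: "pods"
    fieldSelector:
      requirements:
        - key: "spec.nodeName"
          operator: "In"
          values:
            - "node-1"             # 假设的节点名
```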
<!--
- **resourceAttributes.group** (string)
Group is the API Group of the Resource. "*" means all.
-->
- **resourceAttributes.group** (string)
group 是资源的 API 组。
"*" 表示所有组。
<!--
- **resourceAttributes.labelSelector** (LabelSelectorAttributes)
labelSelector describes the limitation on access based on labels. It can only limit access, not broaden it.
This field is alpha-level. To use this field, you must enable the `AuthorizeWithSelectors` feature gate (disabled by default).
-->
- **resourceAttributes.labelSelector** (LabelSelectorAttributes)
labelSelector 描述基于标签的访问限制。此字段只能限制访问权限,而不能扩大访问权限。
此字段处于 Alpha 级别。要使用此字段,你必须启用 `AuthorizeWithSelectors` 特性门控(默认禁用)。
<!--
<a name="LabelSelectorAttributes"></a>
*LabelSelectorAttributes indicates a label limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid.*
-->
<a name="LabelSelectorAttributes"></a>
LabelSelectorAttributes 表示通过标签限制的访问。建议 Webhook 开发者们:
* 确保 rawSelector 和 requirements 未被同时设置
* 如果设置了 labelSelector则考虑 requirements 字段
* 如果设置了 labelSelector不要尝试解析或考虑 rawSelector 字段。
这是为了避免出现另一个 CVE-2022-2880即我们不希望不同系统以一致的方式解析某个查询
有关细节参见 https://www.oxeye.io/resources/golang-parameter-smuggling-attack
对于 kube-apiserver 的 SubjectAccessReview 端点:
* 如果 rawSelector 为空且 requirements 为空,则请求未被限制。
* 如果 rawSelector 存在且 requirements 为空,则 rawSelector 将被解析,并在解析成功的情况下进行限制。
* 如果 rawSelector 为空且 requirements 存在,则应优先使用 requirements。
* 如果 rawSelector 存在requirements 也存在,则请求无效。
<!--
- **resourceAttributes.labelSelector.rawSelector** (string)
rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.
-->
- **resourceAttributes.labelSelector.rawSelector** (string)
rawSelector 是字段选择算符的序列化形式,将被包含在查询参数中。
建议 Webhook 实现忽略 rawSelector。只要 requirements 不存在,
kube-apiserver 的 SubjectAccessReview 将解析 rawSelector。
<!--
- **resourceAttributes.labelSelector.requirements** ([]LabelSelectorRequirement)
*Atomic: will be replaced during a merge*
requirements is the parsed interpretation of a label selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.
<a name="LabelSelectorRequirement"></a>
*A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.*
-->
- **resourceAttributes.labelSelector.requirements** ([]LabelSelectorRequirement)
**原子:将在合并期间被替换**
requirements 是标签选择算符已解析的解释。资源实例必须满足所有 requirements才能匹配此选择算符。
Webhook 实现应处理 requirements但如何处理由 Webhook 自行决定。
由于 requirements 只能限制请求,因此如果不理解 requirements可以安全地将请求鉴权为无限制请求。
  <a name="LabelSelectorRequirement"></a>
  **LabelSelectorRequirement 是一个选择算符,包含值、键以及将键和值关联起来的运算符。**
<!--
- **resourceAttributes.labelSelector.requirements.key** (string), required
key is the label key that the selector applies to.
- **resourceAttributes.labelSelector.requirements.operator** (string), required
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
- **resourceAttributes.labelSelector.requirements.values** ([]string)
*Atomic: will be replaced during a merge*
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
-->
- **resourceAttributes.labelSelector.requirements.key** (string),必需
key 是选择算符应用到的标签键。
- **resourceAttributes.labelSelector.requirements.operator** (string),必需
operator 表示键与一组值之间的关系。有效的运算符有 In、NotIn、Exists、DoesNotExist。
- **resourceAttributes.labelSelector.requirements.values** ([]string)
**原子:将在合并期间被替换**
values 是一个字符串值的数组。如果运算符是 In 或 NotIn则 values 数组必须非空。
如果运算符是 Exists 或 DoesNotExist则 values 数组必须为空。
此数组在策略性合并补丁Strategic Merge Patch期间被替换。
<!--
- **resourceAttributes.name** (string)
Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all.
-->
- **resourceAttributes.name** (string)
  name 是 "get" 所请求或 "delete" 所删除的资源的名称。""(空)表示所有。

View File

@ -0,0 +1,930 @@
---
api_metadata:
apiVersion: "coordination.k8s.io/v1alpha1"
import: "k8s.io/api/coordination/v1alpha1"
kind: "LeaseCandidate"
content_type: "api_reference"
description: "LeaseCandidate 定义 Lease 对象的候选者。"
title: "LeaseCandidate v1alpha1"
weight: 6
---
<!--
api_metadata:
apiVersion: "coordination.k8s.io/v1alpha1"
import: "k8s.io/api/coordination/v1alpha1"
kind: "LeaseCandidate"
content_type: "api_reference"
description: "LeaseCandidate defines a candidate for a Lease object."
title: "LeaseCandidate v1alpha1"
weight: 6
auto_generated: true
-->
`apiVersion: coordination.k8s.io/v1alpha1`
`import "k8s.io/api/coordination/v1alpha1"`
## LeaseCandidate {#LeaseCandidate}
<!--
LeaseCandidate defines a candidate for a Lease object. Candidates are created such that coordinated leader election will pick the best leader from the list of candidates.
-->
LeaseCandidate 定义一个 Lease 对象的候选者。
通过创建候选者,协同式领导者选举能够从候选者列表中选出最佳的领导者。
<hr>
- **apiVersion**: coordination.k8s.io/v1alpha1
- **kind**: LeaseCandidate
<!--
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidateSpec" >}}">LeaseCandidateSpec</a>)
spec contains the specification of the Lease. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidateSpec" >}}">LeaseCandidateSpec</a>)
spec 包含 Lease 的规约。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
## LeaseCandidateSpec {#LeaseCandidateSpec}
<!--
LeaseCandidateSpec is a specification of a Lease.
-->
LeaseCandidateSpec 是 Lease 的规约。
<hr>
<!--
- **leaseName** (string), required
LeaseName is the name of the lease for which this candidate is contending. This field is immutable.
-->
- **leaseName** (string),必需
leaseName 是此候选者正在争夺的租约的名称。此字段是不可变更的。
<!--
- **preferredStrategies** ([]string), required
*Atomic: will be replaced during a merge*
PreferredStrategies indicates the list of strategies for picking the leader for coordinated leader election. The list is ordered, and the first strategy supersedes all other strategies. The list is used by coordinated leader election to make a decision about the final election strategy. This follows as - If all clients have strategy X as the first element in this list, strategy X will be used. - If a candidate has strategy [X] and another candidate has strategy [Y, X], Y supersedes X and strategy Y
will be used.
- If a candidate has strategy [X, Y] and another candidate has strategy [Y, X], this is a user error and leader
election will not operate the Lease until resolved.
(Alpha) Using this field requires the CoordinatedLeaderElection feature gate to be enabled.
-->
- **preferredStrategies** ([]string),必需
**原子:将在合并期间被替换**
preferredStrategies 表示协同式领导者选举在选择领导者时所用的策略的列表。
此列表是有序的,第一个策略优先于所有其他策略。此列表将由协同式领导者选举用于决定最终的选举策略。
具体规则为:
- 如果所有客户端的策略列表的第一个元素为 X则策略 X 将被使用。
- 如果一个候选者的策略为 [X],而另一个候选者的策略为 [Y, X],则 Y 优先于 X策略 Y 将被使用。
- 如果一个候选者的策略为 [X, Y],而另一个候选者的策略为 [Y, X],则这是一个用户错误,
并且在解决此错误之前领导者选举将不会操作 Lease。
Alpha使用此字段需要启用 CoordinatedLeaderElection 特性门控。
<!--
- **binaryVersion** (string)
BinaryVersion is the binary version. It must be in a semver format without leading `v`. This field is required when strategy is "OldestEmulationVersion"
- **emulationVersion** (string)
EmulationVersion is the emulation version. It must be in a semver format without leading `v`. EmulationVersion must be less than or equal to BinaryVersion. This field is required when strategy is "OldestEmulationVersion"
-->
- **binaryVersion** (string)
binaryVersion 是可执行文件的版本。它必须采用不带前缀 `v` 的语义版本格式。
当策略为 "OldestEmulationVersion" 时,此字段是必需的。
- **emulationVersion** (string)
emulationVersion 是仿真版本。它必须采用不带前缀 `v` 的语义版本格式。
emulationVersion 必须小于或等于 binaryVersion。当策略为 "OldestEmulationVersion" 时,此字段是必需的。
<!--
- **pingTime** (MicroTime)
PingTime is the last time that the server has requested the LeaseCandidate to renew. It is only done during leader election to check if any LeaseCandidates have become ineligible. When PingTime is updated, the LeaseCandidate will respond by updating RenewTime.
<a name="MicroTime"></a>
*MicroTime is version of Time with microsecond level precision.*
-->
- **pingTime** (MicroTime)
pingTime 是服务器最近一次请求 LeaseCandidate 续订的时间。
此操作仅在领导者选举期间进行,用以检查是否有 LeaseCandidates 变得不合格。
当 pingTime 更新时LeaseCandidate 会通过更新 renewTime 来响应。
<a name="MicroTime"></a>
**MicroTime 是微秒级精度的 Time 版本**
<!--
- **renewTime** (MicroTime)
RenewTime is the time that the LeaseCandidate was last updated. Any time a Lease needs to do leader election, the PingTime field is updated to signal to the LeaseCandidate that they should update the RenewTime. Old LeaseCandidate objects are also garbage collected if it has been hours since the last renew. The PingTime field is updated regularly to prevent garbage collection for still active LeaseCandidates.
<a name="MicroTime"></a>
*MicroTime is version of Time with microsecond level precision.*
-->
- **renewTime** (MicroTime)
renewTime 是 LeaseCandidate 被最近一次更新的时间。每当 Lease 需要进行领导者选举时,
pingTime 字段会被更新,以向 LeaseCandidate 发出应更新 renewTime 的信号。
如果自上次续订以来已经过去几个小时,旧的 LeaseCandidate 对象也会被垃圾收集。
pingTime 字段会被定期更新,以防止对仍处于活动状态的 LeaseCandidates 进行垃圾收集。
<a name="MicroTime"></a>
**MicroTime 是微秒级精度的 Time 版本**
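下面是一个示意性的 LeaseCandidate 清单(非自动生成文档的一部分,名称与版本号均为假设值;
使用此 API 需要启用 CoordinatedLeaderElection 特性门控),把上述字段组合成一个完整对象:

```yaml
apiVersion: coordination.k8s.io/v1alpha1
kind: LeaseCandidate
metadata:
  name: controller-a-node1        # 假设的候选者名称
  namespace: kube-system
spec:
  leaseName: controller-a         # 此候选者争夺的 Lease 名称(示例)
  binaryVersion: "1.31.0"         # 不带前缀 v 的语义版本
  emulationVersion: "1.31.0"      # 必须小于或等于 binaryVersion
  preferredStrategies:
    - OldestEmulationVersion
```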
## LeaseCandidateList {#LeaseCandidateList}
<!--
LeaseCandidateList is a list of Lease objects.
-->
LeaseCandidateList 是 Lease 对象的列表。
<hr>
- **apiVersion**: coordination.k8s.io/v1alpha1
- **kind**: LeaseCandidateList
<!--
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>), required
items is a list of schema objects.
-->
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
标准的列表元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>),必需
items 是模式对象的列表。
<!--
## Operations {#Operations}
<hr>
### `get` read the specified LeaseCandidate
#### HTTP Request
-->
## 操作 {#Operations}
<hr>
### `get` 读取指定的 LeaseCandidate
#### HTTP 请求
GET /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the LeaseCandidate
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
LeaseCandidate 的名称。
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): OK
401: Unauthorized
<!--
### `list` list or watch objects of kind LeaseCandidate
#### HTTP Request
-->
### `list` 列举或监视类别为 LeaseCandidate 的对象
#### HTTP 请求
GET /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates
<!--
#### Parameters
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **allowWatchBookmarks** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
-->
#### 参数
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **allowWatchBookmarks** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidateList" >}}">LeaseCandidateList</a>): OK
401: Unauthorized
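<!--
The request path and query parameters above can be assembled as in the following illustrative sketch (not generated reference content; the function name is a hypothetical example).
-->
上述请求路径和查询参数可以按如下方式拼装。以下是一个示意性的 Python 片段(并非自动生成的参考内容,其中的函数名仅为演示用的假设):

```python
from urllib.parse import urlencode

def build_list_url(namespace, **query):
    # 路径来自上文的 HTTP 请求定义
    path = f"/apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates"
    # 查询参数(如 labelSelector、limit 等)按本节参数表传入
    return f"{path}?{urlencode(query)}" if query else path

# 例如:按标签列举,且单页最多返回 10 个对象
url = build_list_url("kube-system", labelSelector="app=demo", limit=10)
```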
<!--
### `list` list or watch objects of kind LeaseCandidate
#### HTTP Request
-->
### `list` 列举或监视类别为 LeaseCandidate 的对象
#### HTTP 请求
GET /apis/coordination.k8s.io/v1alpha1/leasecandidates
<!--
#### Parameters
- **allowWatchBookmarks** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
-->
#### 参数
- **allowWatchBookmarks** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidateList" >}}">LeaseCandidateList</a>): OK
401: Unauthorized
<!--
### `create` create a LeaseCandidate
#### HTTP Request
-->
### `create` 创建 LeaseCandidate
#### HTTP 请求
POST /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates
<!--
#### Parameters
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): OK
201 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): Created
202 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): Accepted
401: Unauthorized
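<!--
A minimal sketch of building the create request body; spec fields are omitted here, see the LeaseCandidate spec page. The function name is a hypothetical example.
-->
下面是拼装创建请求的一个最小示意(非自动生成内容,函数名为演示用的假设;`spec` 字段请参见 LeaseCandidate 规约页面,此处从略):

```python
import json

def build_create_request(namespace, name):
    # POST 路径来自上文的 HTTP 请求定义
    path = f"/apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates"
    # 最小化的请求体;实际使用时还需按规约补充 spec 字段
    body = {
        "apiVersion": "coordination.k8s.io/v1alpha1",
        "kind": "LeaseCandidate",
        "metadata": {"name": name, "namespace": namespace},
    }
    return path, json.dumps(body)

path, payload = build_create_request("kube-system", "candidate-a")
```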
<!--
### `update` replace the specified LeaseCandidate
#### HTTP Request
-->
### `update` 替换指定的 LeaseCandidate
#### HTTP 请求
PUT /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the LeaseCandidate
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
LeaseCandidate 的名称。
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): OK
201 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): Created
401: Unauthorized
<!--
### `patch` partially update the specified LeaseCandidate
#### HTTP Request
-->
### `patch` 部分更新指定的 LeaseCandidate
#### HTTP 请求
PATCH /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the LeaseCandidate
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
LeaseCandidate 的名称。
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): OK
201 (<a href="{{< ref "../cluster-resources/lease-candidate-v1alpha1#LeaseCandidate" >}}">LeaseCandidate</a>): Created
401: Unauthorized
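<!--
PATCH requests must declare the patch type in the Content-Type header (a general Kubernetes API convention), e.g. application/merge-patch+json for JSON Merge Patch. An illustrative sketch; the spec key below is a placeholder.
-->
PATCH 请求需要在 `Content-Type` 头部中声明补丁类型(这是 Kubernetes API 的通用约定),
例如 JSON Merge Patch 使用 `application/merge-patch+json`。
以下为示意性片段(函数名为演示用的假设;`spec` 中的具体字段请参见 LeaseCandidate 规约,示例中的键仅为占位):

```python
import json

def build_merge_patch(namespace, name, spec_changes):
    # PATCH 路径来自上文的 HTTP 请求定义
    path = (f"/apis/coordination.k8s.io/v1alpha1/"
            f"namespaces/{namespace}/leasecandidates/{name}")
    headers = {"Content-Type": "application/merge-patch+json"}
    return path, headers, json.dumps({"spec": spec_changes})

# "exampleField" 仅为占位示例,并非真实的 spec 字段名
path, headers, body = build_merge_patch(
    "kube-system", "candidate-a", {"exampleField": "value"})
```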
<!--
### `delete` delete a LeaseCandidate
#### HTTP Request
-->
### `delete` 删除 LeaseCandidate
#### HTTP 请求
DELETE /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the LeaseCandidate
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
-->
#### 参数
- **name** (**路径参数**): string必需
LeaseCandidate 的名称。
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
202 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): Accepted
401: Unauthorized
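<!--
The optional request body is the DeleteOptions described above. An illustrative sketch; the function name is a hypothetical example.
-->
删除请求的可选请求体即上文所述的 DeleteOptions。以下为示意性片段(函数名为演示用的假设):

```python
import json

def build_delete_request(namespace, name, propagation_policy="Background",
                         grace_period_seconds=None):
    # DELETE 路径来自上文的 HTTP 请求定义
    path = (f"/apis/coordination.k8s.io/v1alpha1/"
            f"namespaces/{namespace}/leasecandidates/{name}")
    # 请求体为 DeleteOptions;propagationPolicy、gracePeriodSeconds
    # 的含义见 DeleteOptions 参考页面
    options = {
        "apiVersion": "v1",
        "kind": "DeleteOptions",
        "propagationPolicy": propagation_policy,
    }
    if grace_period_seconds is not None:
        options["gracePeriodSeconds"] = grace_period_seconds
    return path, json.dumps(options)

path, body = build_delete_request("kube-system", "candidate-a",
                                  grace_period_seconds=0)
```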
<!--
### `deletecollection` delete collection of LeaseCandidate
#### HTTP Request
-->
### `deletecollection` 删除 LeaseCandidate 的集合
#### HTTP 请求
DELETE /apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates
<!--
#### Parameters
- **namespace** (*in path*): string, required
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
-->
#### 参数
- **namespace** (**路径参数**): string必需
<a href="{{< ref "../common-parameters/common-parameters#namespace" >}}">namespace</a>
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
401: Unauthorized
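<!--
Collection deletion is typically scoped with a selector from the parameter list above. An illustrative sketch; the function name is a hypothetical example.
-->
删除集合时通常会用上文参数表中的选择算符来限定范围。以下为示意性片段(函数名为演示用的假设):

```python
from urllib.parse import urlencode

def build_delete_collection_url(namespace, label_selector=None):
    # DELETE 路径来自上文的 HTTP 请求定义
    path = f"/apis/coordination.k8s.io/v1alpha1/namespaces/{namespace}/leasecandidates"
    if label_selector:
        # 仅删除匹配 labelSelector 的对象
        return f"{path}?{urlencode({'labelSelector': label_selector})}"
    return path

url = build_delete_collection_url("kube-system", label_selector="app=demo")
```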

View File

@ -6,7 +6,7 @@ api_metadata:
content_type: "api_reference"
description: "Namespace 为名字提供作用域。"
title: "Namespace"
weight: 2
weight: 7
---
<!--
@ -17,7 +17,7 @@ api_metadata:
content_type: "api_reference"
description: "Namespace provides a scope for Names."
title: "Namespace"
weight: 2
weight: 7
auto_generated: true
-->
@ -69,6 +69,12 @@ NamespaceSpec 用于描述 Namespace 的属性。
finalizers 是一个不透明的值列表,只有此列表为空时才能从存储中永久删除对象。 更多信息: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/namespaces/
<!--
*Atomic: will be replaced during a merge*
-->
**原子性:将在合并期间被替换**
## NamespaceStatus {#NamespaceStatus}
<!--
NamespaceStatus is information about the current status of a Namespace.
@ -79,11 +85,15 @@ NamespaceStatus 表示 Namespace 的当前状态信息。
- **conditions** ([]NamespaceCondition)
<!--
*Patch strategy: merge on key `type`*
*Map: unique values on key type will be kept during a merge*
Represents the latest available observations of a namespace's current state.
-->
  **补丁策略:基于 `type` 键合并**
**Map`type` 的唯一值将在合并期间保留**
表示命名空间当前状态的最新可用状况。
<a name="NamespaceCondition"></a>

View File

@ -43,10 +43,14 @@ DeleteOptions may be provided when deleting an API object.
<!--
- **dryRun** ([]string)
*Atomic: will be replaced during a merge*
When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
-->
- **dryRun** ([]string)
**原子性:将在合并期间被替换**
该值如果存在,则表示不应保留修改。
无效或无法识别的 `dryRun` 指令将导致错误响应并且不会进一步处理请求。有效值为:
@ -124,4 +128,8 @@ DeleteOptions may be provided when deleting an API object.
表示是否以及如何执行垃圾收集。可以设置此字段或 `orphanDependents` 字段,但不能同时设置二者。
默认策略由 `metadata.finalizers` 中现有终结器Finalizer集合和特定资源的默认策略决定。
可接受的值为:`Orphan` - 令依赖对象成为孤儿对象;`Background` - 允许垃圾收集器在后台删除依赖项;`Foreground` - 一个级联策略,前台删除所有依赖项。
可选值为:
- `Orphan` 令依赖对象成为孤儿对象;
- `Background` 允许垃圾收集器在后台删除依赖项;
- `Foreground` 一个级联策略,前台删除所有依赖项。

View File

@ -4,7 +4,7 @@ api_metadata:
import: "k8s.io/apimachinery/pkg/apis/meta/v1"
kind: "LabelSelector"
content_type: "api_reference"
description: "标签选择是对一组资源的标签查询。"
description: "标签选择算符是对一组资源的标签查询。"
title: "LabelSelector"
weight: 2
---
@ -25,15 +25,17 @@ auto_generated: true
<!--
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
-->
标签选择是对一组资源的标签查询。
标签选择算符是对一组资源的标签查询。
`matchLabels``matchExpressions` 的结果按逻辑与的关系组合。
一个 `empty` 标签选择器匹配所有对象。一个 `null` 标签选择器不匹配任何对象。
一个 `empty` 标签选择算符匹配所有对象。一个 `null` 标签选择算符不匹配任何对象。
<hr>
<!--
- **matchExpressions** ([]LabelSelectorRequirement)
*Atomic: will be replaced during a merge*
matchExpressions is a list of label selector requirements. The requirements are ANDed.
<a name="LabelSelectorRequirement"></a>
@ -41,10 +43,12 @@ A label selector is a label query over a set of resources. The result of matchLa
-->
- **matchExpressions** ([]LabelSelectorRequirement)
`matchExpressions` 是标签选择器要求的列表,这些要求的结果按逻辑与的关系来计算。
**原子性:将在合并期间被替换**
`matchExpressions` 是标签选择算符要求的列表,这些要求的结果按逻辑与的关系来计算。
<a name="LabelSelectorRequirement"></a>
**标签选择器要求是包含值、键和关联键和值的运算符的选择器。**
**标签选择算符要求是包含值、键和关联键和值的运算符的选择算符。**
<!--
- **matchExpressions.key** (string), required
@ -54,7 +58,7 @@ A label selector is a label query over a set of resources. The result of matchLa
- **matchExpressions.key** (string),必需
`key` 是选择应用的标签键。
`key` 是选择算符应用的标签键。
<!--
- **matchExpressions.operator** (string), required
@ -69,11 +73,15 @@ A label selector is a label query over a set of resources. The result of matchLa
<!--
- **matchExpressions.values** ([]string)
*Atomic: will be replaced during a merge*
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
-->
- **matchExpressions.values** ([]string)
**原子性:将在合并期间被替换**
`values` 是一个字符串值数组。如果运算符为 `In``NotIn`,则 `values` 数组必须为非空。
如果运算符是 `Exists``DoesNotExist`,则 `values` 数组必须为空。
该数组在策略性合并补丁Strategic Merge Patch期间被替换。
@ -89,5 +97,5 @@ A label selector is a label query over a set of resources. The result of matchLa
`matchLabels` 映射中的单个 {`key`,`value`} 键值对相当于 `matchExpressions` 的一个元素,
其键字段为 `key`,运算符为 `In``values` 数组仅包含 `value`
所表达的需求最终要按逻辑与的关系组合。
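<!--
The matchLabels-to-matchExpressions equivalence described above can be sketched as follows (illustrative only; the function name is a hypothetical example).
-->
上文所述的 `matchLabels` 与 `matchExpressions` 的等价关系可以示意如下(仅作演示,函数名为假设):

```python
def match_labels_to_expressions(match_labels):
    # 每个 {key: value} 键值对等价于一个 operator 为 In、
    # values 仅包含该 value 的 matchExpressions 元素
    return [
        {"key": k, "operator": "In", "values": [v]}
        for k, v in match_labels.items()
    ]

exprs = match_labels_to_expressions({"app": "demo"})
```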

View File

@ -34,13 +34,10 @@ LocalObjectReference 包含足够的信息,可以让你在同一命名空间
<!--
- **name** (string)
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
-->
- **name** (string)
被引用者的名称。
更多信息: https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names。
被引用者的名称。该字段实际上是必需的,但由于向后兼容性允许为空。
这种类型的实例如果此处具有空值,几乎肯定是错误的。
更多信息https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/#names。

View File

@ -8,7 +8,6 @@ description: "ObjectMeta 是所有持久化资源必须具有的元数据,其
title: "ObjectMeta"
weight: 7
---
<!--
api_metadata:
apiVersion: ""
@ -45,6 +44,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<!--
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.
-->
generateName 是一个可选前缀,由服务器使用,**仅在**未提供 name 字段时生成唯一名称。
如果使用此字段,则返回给客户端的名称将与传递的名称不同。该值还将与唯一的后缀组合。
提供的值与 name 字段具有相同的验证规则,并且可能会根据所需的后缀长度被截断,以使该值在服务器上唯一。
@ -54,7 +54,8 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
-->
如果指定了此字段并且生成的名称存在,则服务器将不会返回 409 ——相反,它将返回 201 Created 或 500
如果指定了此字段并且生成的名称存在,则服务器将不会返回 409。相反它将返回 201 Created 或 500
原因是 ServerTimeout 指示在分配的时间内找不到唯一名称,客户端应重试(可选,在 Retry-After 标头中指定的时间之后)。
仅在未指定 name 时应用。更多信息:
@ -68,7 +69,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces
-->
namespace 定义了一个值空间,其中每个名称必须唯一。空命名空间相当于 “default” 命名空间,但 “default” 是规范表示。
namespace 定义了一个值空间,其中每个名称必须唯一。空命名空间相当于 “default” 命名空间,但 “default” 是规范表示。
并非所有对象都需要限定在命名空间中——这些对象的此字段的值将为空。
必须是 DNS_LABEL。无法更新。更多信息
@ -94,11 +95,19 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
它们不可查询,在修改对象时应保留。更多信息:
https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/annotations
<!-- ### System {#System} -->
<!--
### System {#System}
-->
### 系统字段 {#System}
- **finalizers** ([]string)
<!--
*Set: unique values will be kept during a merge*
-->
**集合:唯一值将在合并期间被保留**
<!--
Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.
-->
@ -115,6 +124,12 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
- **managedFields** ([]ManagedFieldsEntry)
<!--
*Atomic: will be replaced during a merge*
-->
**原子性:将在合并期间被替换**
<!--
ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object.
@ -179,13 +194,17 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
2. `v:<value>`,其中 `<value>` 是列表项的精确 json 格式值
3. `i:<index>`,其中 `<index>` 是列表中项目的位置
4. `k:<keys>`,其中 `<keys>` 是列表项的关键字段到其唯一值的映射。
如果一个键映射到一个空的 Fields 值,则该键表示的字段是集合的一部分。
确切的格式在 sigs.k8s.io/structured-merge-diff 中定义。
- **managedFields.manager** (string)
<!-- Manager is an identifier of the workflow managing these fields. -->
<!--
Manager is an identifier of the workflow managing these fields.
-->
manager 是管理这些字段的工作流的标识符。
- **managedFields.operation** (string)
@ -228,6 +247,8 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<!--
*Patch strategy: merge on key `uid`*
*Map: unique values on key uid will be kept during a merge*
List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.
@ -237,6 +258,8 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
**补丁策略:根据 `uid` 键执行合并操作**
**映射:在合并期间将根据键 uid 保留唯一值**
此对象所依赖的对象列表。如果列表中的所有对象都已被删除,则该对象将被垃圾回收。
如果此对象由控制器管理则此列表中的条目将指向此控制器controller 字段设置为 true。
管理控制器不能超过一个。
@ -245,26 +268,37 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
**OwnerReference 包含足够可以让你识别属主对象的信息。
属主对象必须与依赖对象位于同一命名空间中,或者是集群作用域的,因此没有命名空间字段。**
- **ownerReferences.apiVersion** (string)<!-- required -->必选
<!-- API version of the referent. -->
- **ownerReferences.apiVersion** (string)<!-- required -->必需
<!--
API version of the referent.
-->
被引用资源的 API 版本。
- **ownerReferences.kind** (string)<!-- required -->必选
- **ownerReferences.kind** (string)<!-- required -->必需
<!--
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
-->
<!-- Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds -->
被引用资源的类别。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- **ownerReferences.name** (string)<!-- required -->
- **ownerReferences.name** (string)<!-- required -->
<!-- Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names-->
<!--
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
-->
被引用资源的名称。更多信息:
https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names/
- **ownerReferences.uid** (string)<!-- required -->
- **ownerReferences.uid** (string)<!-- required -->
<!-- UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids -->
<!--
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids
-->
被引用资源的 uid。更多信息
https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/names#uids
@ -274,6 +308,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<!--
If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.
-->
如果为 true**并且** 如果属主具有 “foregroundDeletion” 终结器,
则在删除此引用之前,无法从键值存储中删除属主。
默认为 false。要设置此字段用户需要属主的 “delete” 权限,
@ -281,11 +316,16 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
- **ownerReferences.controller** (boolean)
<!-- If true, this reference points to the managing controller. -->
<!--
If true, this reference points to the managing controller.
-->
如果为 true则此引用指向管理的控制器。
<!-- ### Read-only {#Read-only} -->
### 只读字段 {#Read-only}
<!--
### Read-only {#Read-only}
-->
### 只读字段 {#Read-only}
- **creationTimestamp** (Time)
@ -297,6 +337,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<a name="Time"></a>
*Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
-->
creationTimestamp 是一个时间戳,表示创建此对象时的服务器时间。
不能保证在单独的操作中按发生前的顺序设置。
客户端不得设置此值。它以 RFC3339 形式表示,并采用 UTC。
@ -313,6 +354,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<!--
Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.
-->
此对象从系统中删除之前允许正常终止的秒数。
仅当设置了 deletionTimestamp 时才设置。
只能缩短。只读。
@ -342,6 +384,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<a name="Time"></a>
*Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
-->
请求体面删除时由系统填充。只读。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
@ -354,6 +397,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
<!--
A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.
-->
表示期望状态的特定生成的序列号。由系统填充。只读。
- **resourceVersion** (string)
@ -363,6 +407,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
Populated by the system. Read-only. Value must be treated as opaque by clients and passed unmodified back to the server. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
-->
一个不透明的值,表示此对象的内部版本,客户端可以使用该值来确定对象是否已被更改。
可用于乐观并发、变更检测以及对资源或资源集的监听操作。
客户端必须将这些值视为不透明的,且未更改地传回服务器。
@ -378,6 +423,7 @@ ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户
DEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release.
-->
selfLink 是表示此对象的 URL。由系统填充。只读。
**已弃用**。Kubernetes 将在 1.20 版本中停止传播该字段,并计划在 1.21 版本中删除该字段。

View File

@ -6,7 +6,7 @@ api_metadata:
content_type: "api_reference"
description: "CSINode 包含节点上安装的所有 CSI 驱动有关的信息。"
title: "CSINode"
weight: 9
weight: 4
---
<!--
api_metadata:
@ -16,7 +16,7 @@ api_metadata:
content_type: "api_reference"
description: "CSINode holds information about all CSI drivers installed on a node."
title: "CSINode"
weight: 9
weight: 4
-->
`apiVersion: storage.k8s.io/v1`
@ -68,6 +68,8 @@ CSINodeSpec 包含一个节点上安装的所有 CSI 驱动规约有关的信息
*Patch strategy: merge on key `name`*
*Map: unique values on key name will be kept during a merge*
drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty.
<a name="CSINodeDriver"></a>
@ -81,6 +83,8 @@ CSINodeSpec 包含一个节点上安装的所有 CSI 驱动规约有关的信息
**补丁策略:按照键 `name` 合并**
**映射:键 `name` 的唯一值将在合并过程中保留**
drivers 是节点上存在的所有 CSI 驱动的信息列表。如果列表中的所有驱动均被卸载,则此字段可以为空。
<a name="CSINodeDriver"></a>
@ -136,11 +140,15 @@ CSINodeSpec 包含一个节点上安装的所有 CSI 驱动规约有关的信息
<!--
- **drivers.topologyKeys** ([]string)
*Atomic: will be replaced during a merge*
topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. "company.com/zone", "company.com/region"). When a driver is initialized on a node, it provides the same topology keys along with values. Kubelet will expose these topology keys as labels on its own node object. When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. This can be empty if driver does not support topology.
-->
- **drivers.topologyKeys** ([]string)
**原子性:合并期间将被替换**
topologyKeys 是驱动支持的键的列表。
在集群上初始化一个驱动时,该驱动将提供一组自己理解的拓扑键
(例如 “company.com/zone”、“company.com/region”

View File

@ -6,7 +6,7 @@ api_metadata:
content_type: "api_reference"
description: "StorageClass 为可以动态制备 PersistentVolume 的存储类描述参数。"
title: "StorageClass"
weight: 6
weight: 8
---
<!--
api_metadata:
@ -16,7 +16,8 @@ api_metadata:
content_type: "api_reference"
description: "StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned."
title: "StorageClass"
weight: 6
weight: 8
auto_generated: true
-->
`apiVersion: storage.k8s.io/v1`
@ -24,6 +25,7 @@ weight: 6
`import "k8s.io/api/storage/v1"`
## StorageClass {#StorageClass}
<!--
StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.
@ -41,13 +43,16 @@ StorageClass 是不受名字空间作用域限制的;按照 etcd 设定的存
<!--
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **provisioner** (string), required
provisioner indicates the type of the provisioner.
- **allowVolumeExpansion** (boolean)
allowVolumeExpansion shows whether the storage class allow volume expand
allowVolumeExpansion shows whether the storage class allow volume expand.
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
@ -64,6 +69,7 @@ StorageClass 是不受名字空间作用域限制的;按照 etcd 设定的存
<!--
- **allowedTopologies** ([]TopologySelectorTerm)
*Atomic: will be replaced during a merge*
allowedTopologies restrict the node topologies where volumes can be dynamically provisioned. Each volume plugin defines its own supported topology specifications. An empty TopologySelectorTerm list means there is no topology restriction. This field is only honored by servers that enable the VolumeScheduling feature.
@ -79,50 +85,70 @@ StorageClass 是不受名字空间作用域限制的;按照 etcd 设定的存
空的 TopologySelectorTerm 列表意味着没有拓扑限制。
只有启用 VolumeScheduling 功能特性的服务器才能使用此字段。
<a name="TopologySelectorTerm"></a>
**拓扑选择器条件表示标签查询的结果。
一个 null 或空的拓扑选择器条件不会匹配任何对象。各个条件的要求按逻辑与的关系来计算。
此选择器作为 NodeSelectorTerm 所提供功能的子集。此功能为 Alpha 特性,将来可能会变更。**
此选择器作为 NodeSelectorTerm 所提供功能的子集。这是一个 Alpha 特性,将来可能会变更。**
<!--
<!--
- **allowedTopologies.matchLabelExpressions** ([]TopologySelectorLabelRequirement)
*Atomic: will be replaced during a merge*
A list of topology selector requirements by labels.
<a name="TopologySelectorLabelRequirement"></a>
*A topology selector requirement is a selector that matches given label. This is an alpha feature and may change in the future.*
-->
- **allowedTopologies.matchLabelExpressions.key** (string), required
The label key that the selector applies to.
- **allowedTopologies.matchLabelExpressions.values** ([]string), required
An array of string values. One value must match the label to be selected. Each entry in Values is ORed.
-->
- **allowedTopologies.matchLabelExpressions** ([]TopologySelectorLabelRequirement)
**原子性:将在合并期间被替换**
按标签设置的拓扑选择器要求的列表。
<a name="TopologySelectorLabelRequirement"></a>
**拓扑选择器要求是与给定标签匹配的一个选择器。此功能为 Alpha 特性,将来可能会变更。**
<a name="TopologySelectorLabelRequirement"></a>
**拓扑选择器要求是与给定标签匹配的一个选择器。这是一个 Alpha 特性,将来可能会变更。**
<!--
- **allowedTopologies.matchLabelExpressions.key** (string), required
The label key that the selector applies to.
- **allowedTopologies.matchLabelExpressions.values** ([]string), required
*Atomic: will be replaced during a merge*
An array of string values. One value must match the label to be selected. Each entry in Values is ORed.
-->
- **allowedTopologies.matchLabelExpressions.key** (string),必需
选择器所针对的标签键。
- **allowedTopologies.matchLabelExpressions.values** ([]string),必需
字符串数组。一个值必须与要选择的标签匹配。values 中的每个条目按逻辑或的关系来计算。
**原子性:将在合并期间被替换**
字符串值的数组。一个值必须与要选择的标签匹配。values 中的每个条目按逻辑或的关系来计算。
<!--
- **mountOptions** ([]string)
*Atomic: will be replaced during a merge*
mountOptions controls the mountOptions for dynamically provisioned PersistentVolumes of this storage class. e.g. ["ro", "soft"]. Not validated - mount of the PVs will simply fail if one is invalid.
- **parameters** (map[string]string)
parameters holds the parameters for the provisioner that should create volumes of this storage class.
-->
- **mountOptions** ([]string)
mountOptions 控制此存储类动态制备的 PersistentVolume 的挂载配置。
(例如 ["ro", "soft"])。
系统对选项作检查——如果有一个选项无效,则这些 PV 的挂载将失败。
**原子性:将在合并期间被替换**
mountOptions 控制此存储类动态制备的 PersistentVolume 的挂载配置,例如 ["ro", "soft"]。
针对此字段无合法性检查 —— 如果有一个选项无效,则这些 PV 的挂载将失败。
- **parameters** (map[string]string)
@ -134,6 +160,7 @@ StorageClass 是不受名字空间作用域限制的;按照 etcd 设定的存
reclaimPolicy controls the reclaimPolicy for dynamically provisioned PersistentVolumes of this storage class. Defaults to Delete.
- **volumeBindingMode** (string)
volumeBindingMode indicates how PersistentVolumeClaims should be provisioned and bound. When unset, VolumeBindingImmediate is used. This field is only honored by servers that enable the VolumeScheduling feature.
-->
- **reclaimPolicy** (string)
@ -147,6 +174,7 @@ StorageClass 是不受名字空间作用域限制的;按照 etcd 设定的存
只有启用 VolumeScheduling 功能特性的服务器才能使用此字段。
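上述字段可以组合成一个最小的 StorageClass 清单。以下是一个示意性示例（其中制备器名称 `example.com/fast-ssd` 及 parameters 中的键值均为假设值，实际取值取决于所使用的 CSI 驱动）：

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: example.com/fast-ssd    # 假设的制备器（CSI 驱动）名称
parameters:
  type: ssd                          # 假设的驱动参数
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
mountOptions:                        # 不作合法性检查；选项无效时 PV 挂载会失败
  - ro
  - soft
```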
## StorageClassList {#StorageClassList}
<!--
StorageClassList is a collection of storage classes.
-->
@ -160,9 +188,11 @@ StorageClassList 是存储类的集合。
<!--
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>), required
items is the list of StorageClasses
-->
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
@ -184,7 +214,9 @@ StorageClassList 是存储类的集合。
<hr>
### `get` 读取指定的 StorageClass
#### HTTP 请求
GET /apis/storage.k8s.io/v1/storageclasses/{name}
<!--
@ -194,9 +226,10 @@ GET /apis/storage.k8s.io/v1/storageclasses/{name}
- **pretty** (*in query*): string
-->
#### 参数
- **name** (**路径参数**): string必需
StorageClass 的名称
- **pretty** (**查询参数**): string
@ -206,6 +239,7 @@ GET /apis/storage.k8s.io/v1/storageclasses/{name}
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): OK
401: Unauthorized
@ -215,7 +249,9 @@ GET /apis/storage.k8s.io/v1/storageclasses/{name}
#### HTTP Request
-->
### `list` 列出或观测类别为 StorageClass 的对象
#### HTTP 请求
GET /apis/storage.k8s.io/v1/storageclasses
<!--
@ -232,6 +268,7 @@ GET /apis/storage.k8s.io/v1/storageclasses
- **watch** (*in query*): boolean
-->
#### 参数
- **allowWatchBookmarks** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
@ -280,6 +317,7 @@ GET /apis/storage.k8s.io/v1/storageclasses
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClassList" >}}">StorageClassList</a>): OK
401: Unauthorized
@ -289,7 +327,9 @@ GET /apis/storage.k8s.io/v1/storageclasses
#### HTTP Request
-->
### `create` 创建 StorageClass
#### HTTP 请求
POST /apis/storage.k8s.io/v1/storageclasses
<!--
@ -301,6 +341,7 @@ POST /apis/storage.k8s.io/v1/storageclasses
- **pretty** (*in query*): string
-->
#### 参数
- **body**: <a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>,必需
- **dryRun** (**查询参数**): string
@ -323,6 +364,7 @@ POST /apis/storage.k8s.io/v1/storageclasses
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): OK
201 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): Created
@ -336,7 +378,9 @@ POST /apis/storage.k8s.io/v1/storageclasses
#### HTTP Request
-->
### `update` 替换指定的 StorageClass
#### HTTP 请求
PUT /apis/storage.k8s.io/v1/storageclasses/{name}
<!--
@ -350,9 +394,10 @@ PUT /apis/storage.k8s.io/v1/storageclasses/{name}
- **pretty** (*in query*): string
-->
#### 参数
- **name** (**路径参数**): string必需
StorageClass 的名称
- **body**: <a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>,必需
@ -376,6 +421,7 @@ PUT /apis/storage.k8s.io/v1/storageclasses/{name}
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): OK
201 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): Created
@ -387,7 +433,9 @@ PUT /apis/storage.k8s.io/v1/storageclasses/{name}
#### HTTP Request
-->
### `patch` 部分更新指定的 StorageClass
#### HTTP 请求
PATCH /apis/storage.k8s.io/v1/storageclasses/{name}
<!--
@ -402,9 +450,10 @@ PATCH /apis/storage.k8s.io/v1/storageclasses/{name}
- **pretty** (*in query*): string
-->
#### 参数
- **name** (**路径参数**): string必需
StorageClass 的名称
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
@ -432,6 +481,7 @@ PATCH /apis/storage.k8s.io/v1/storageclasses/{name}
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): OK
201 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): Created
@ -443,7 +493,9 @@ PATCH /apis/storage.k8s.io/v1/storageclasses/{name}
#### HTTP Request
-->
### `delete` 删除 StorageClass
#### HTTP 请求
DELETE /apis/storage.k8s.io/v1/storageclasses/{name}
<!--
@ -457,9 +509,10 @@ DELETE /apis/storage.k8s.io/v1/storageclasses/{name}
- **propagationPolicy** (*in query*): string
-->
#### 参数
- **name** (**路径参数**): string必需
StorageClass 的名称
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
@ -483,6 +536,7 @@ DELETE /apis/storage.k8s.io/v1/storageclasses/{name}
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): OK
202 (<a href="{{< ref "../config-and-storage-resources/storage-class-v1#StorageClass" >}}">StorageClass</a>): Accepted
@ -494,7 +548,9 @@ DELETE /apis/storage.k8s.io/v1/storageclasses/{name}
#### HTTP Request
-->
### `deletecollection` 删除 StorageClass 的集合
#### HTTP 请求
DELETE /apis/storage.k8s.io/v1/storageclasses
<!--
@ -513,6 +569,7 @@ DELETE /apis/storage.k8s.io/v1/storageclasses
- **timeoutSeconds** (*in query*): integer
-->
#### 参数
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (**查询参数**): string
@ -567,6 +624,7 @@ DELETE /apis/storage.k8s.io/v1/storageclasses
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
401: Unauthorized

View File

@ -0,0 +1,680 @@
---
api_metadata:
apiVersion: "storage.k8s.io/v1beta1"
import: "k8s.io/api/storage/v1beta1"
kind: "VolumeAttributesClass"
content_type: "api_reference"
description: "VolumeAttributesClass 表示由 CSI 驱动所定义的可变更卷属性的规约。"
title: "VolumeAttributesClass v1beta1"
weight: 12
---
<!--
api_metadata:
apiVersion: "storage.k8s.io/v1beta1"
import: "k8s.io/api/storage/v1beta1"
kind: "VolumeAttributesClass"
content_type: "api_reference"
description: "VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver."
title: "VolumeAttributesClass v1beta1"
weight: 12
auto_generated: true
-->
`apiVersion: storage.k8s.io/v1beta1`
`import "k8s.io/api/storage/v1beta1"`
## VolumeAttributesClass {#VolumeAttributesClass}
<!--
VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver. The class can be specified during dynamic provisioning of PersistentVolumeClaims, and changed in the PersistentVolumeClaim spec after provisioning.
-->
VolumeAttributesClass 表示由 CSI 驱动所定义的可变更卷属性的规约。
此类可以在动态制备 PersistentVolumeClaim 期间被指定,
并且可以在制备之后在 PersistentVolumeClaim 规约中更改。
<hr>
- **apiVersion**: storage.k8s.io/v1beta1
- **kind**: VolumeAttributesClass
<!--
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **driverName** (string), required
Name of the CSI driver This field is immutable.
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
标准的对象元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **driverName** (string),必需
CSI 驱动的名称。此字段是不可变更的。
<!--
- **parameters** (map[string]string)
parameters hold volume attributes defined by the CSI driver. These values are opaque to the Kubernetes and are passed directly to the CSI driver. The underlying storage provider supports changing these attributes on an existing volume, however the parameters field itself is immutable. To invoke a volume update, a new VolumeAttributesClass should be created with new parameters, and the PersistentVolumeClaim should be updated to reference the new VolumeAttributesClass.
This field is required and must contain at least one key/value pair. The keys cannot be empty, and the maximum number of parameters is 512, with a cumulative max size of 256K. If the CSI driver rejects invalid parameters, the target PersistentVolumeClaim will be set to an "Infeasible" state in the modifyVolumeStatus field.
-->
- **parameters** (map[string]string)
parameters 保存由 CSI 驱动所定义的卷属性。这些值对 Kubernetes 是不透明的,被直接传递给 CSI 驱动。
下层存储驱动支持更改现有卷的这些属性,但 parameters 字段本身是不可变更的。
要触发一次卷更新,应该使用新的参数创建新的 VolumeAttributesClass
并且应更新 PersistentVolumeClaim使之引用新的 VolumeAttributesClass。
此字段是必需的,必须至少包含一个键/值对。键不能为空,参数最多 512 个,累计最大尺寸为 256K。
如果 CSI 驱动拒绝无效参数,则目标 PersistentVolumeClaim
的状态中 modifyVolumeStatus 字段将被设置为 “Infeasible”。
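根据以上字段说明，一个示意性的 VolumeAttributesClass 清单如下（其中驱动名称
`example.com/csi-driver` 以及 parameters 中的键值均为假设值，具体参数键由各 CSI 驱动自行定义）：

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: VolumeAttributesClass
metadata:
  name: silver
driverName: example.com/csi-driver   # 假设的 CSI 驱动名称；此字段不可变更
parameters:
  iops: "500"                        # 假设的参数键值，对 Kubernetes 不透明
  throughput: "50"
```

要更改已制备卷的属性，应创建一个带有新 parameters 的新 VolumeAttributesClass，
并更新 PersistentVolumeClaim 使之引用该新类，而不是修改现有对象的 parameters。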
## VolumeAttributesClassList {#VolumeAttributesClassList}
<!--
VolumeAttributesClassList is a collection of VolumeAttributesClass objects.
-->
VolumeAttributesClassList 是 VolumeAttributesClass 对象的集合。
<hr>
- **apiVersion**: storage.k8s.io/v1beta1
- **kind**: VolumeAttributesClassList
<!--
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>), required
items is the list of VolumeAttributesClass objects.
-->
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
标准的列表元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **items** ([]<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>),必需
items 是 VolumeAttributesClass 对象的列表。
<!--
## Operations {#Operations}
<hr>
### `get` read the specified VolumeAttributesClass
#### HTTP Request
-->
## 操作 {#Operations}
<hr>
### `get` 读取指定的 VolumeAttributesClass
#### HTTP 请求
GET /apis/storage.k8s.io/v1beta1/volumeattributesclasses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the VolumeAttributesClass
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
VolumeAttributesClass 的名称。
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): OK
401: Unauthorized
<!--
### `list` list or watch objects of kind VolumeAttributesClass
#### HTTP Request
-->
### `list` 列出或观测类别为 VolumeAttributesClass 的对象
#### HTTP 请求
GET /apis/storage.k8s.io/v1beta1/volumeattributesclasses
<!--
#### Parameters
- **allowWatchBookmarks** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
-->
#### 参数
- **allowWatchBookmarks** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#allowWatchBookmarks" >}}">allowWatchBookmarks</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
- **watch** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#watch" >}}">watch</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClassList" >}}">VolumeAttributesClassList</a>): OK
401: Unauthorized
<!--
### `create` create a VolumeAttributesClass
#### HTTP Request
-->
### `create` 创建 VolumeAttributesClass
#### HTTP 请求
POST /apis/storage.k8s.io/v1beta1/volumeattributesclasses
<!--
#### Parameters
- **body**: <a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **body**: <a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): OK
201 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): Created
202 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): Accepted
401: Unauthorized
<!--
### `update` replace the specified VolumeAttributesClass
#### HTTP Request
-->
### `update` 替换指定的 VolumeAttributesClass
#### HTTP 请求
PUT /apis/storage.k8s.io/v1beta1/volumeattributesclasses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the VolumeAttributesClass
- **body**: <a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
VolumeAttributesClass 的名称。
- **body**: <a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): OK
201 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): Created
401: Unauthorized
<!--
### `patch` partially update the specified VolumeAttributesClass
#### HTTP Request
-->
### `patch` 部分更新指定的 VolumeAttributesClass
#### HTTP 请求
PATCH /apis/storage.k8s.io/v1beta1/volumeattributesclasses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the VolumeAttributesClass
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>, required
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
-->
#### 参数
- **name** (**路径参数**): string必需
VolumeAttributesClass 的名称。
- **body**: <a href="{{< ref "../common-definitions/patch#Patch" >}}">Patch</a>,必需
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldManager** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldManager" >}}">fieldManager</a>
- **fieldValidation** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldValidation" >}}">fieldValidation</a>
- **force** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#force" >}}">force</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): OK
201 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): Created
401: Unauthorized
<!--
### `delete` delete a VolumeAttributesClass
#### HTTP Request
-->
### `delete` 删除 VolumeAttributesClass
#### HTTP 请求
DELETE /apis/storage.k8s.io/v1beta1/volumeattributesclasses/{name}
<!--
#### Parameters
- **name** (*in path*): string, required
name of the VolumeAttributesClass
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
-->
#### 参数
- **name** (**路径参数**): string必需
VolumeAttributesClass 的名称。
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **gracePeriodSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): OK
202 (<a href="{{< ref "../config-and-storage-resources/volume-attributes-class-v1beta1#VolumeAttributesClass" >}}">VolumeAttributesClass</a>): Accepted
401: Unauthorized
<!--
### `deletecollection` delete collection of VolumeAttributesClass
#### HTTP Request
-->
### `deletecollection` 删除 VolumeAttributesClass 的集合
#### HTTP 请求
DELETE /apis/storage.k8s.io/v1beta1/volumeattributesclasses
<!--
#### Parameters
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (*in query*): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (*in query*): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (*in query*): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
-->
#### 参数
- **body**: <a href="{{< ref "../common-definitions/delete-options#DeleteOptions" >}}">DeleteOptions</a>
- **continue** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#continue" >}}">continue</a>
- **dryRun** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#dryRun" >}}">dryRun</a>
- **fieldSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#fieldSelector" >}}">fieldSelector</a>
- **gracePeriodSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#gracePeriodSeconds" >}}">gracePeriodSeconds</a>
- **labelSelector** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#labelSelector" >}}">labelSelector</a>
- **limit** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#limit" >}}">limit</a>
- **pretty** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#pretty" >}}">pretty</a>
- **propagationPolicy** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#propagationPolicy" >}}">propagationPolicy</a>
- **resourceVersion** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersion" >}}">resourceVersion</a>
- **resourceVersionMatch** (**查询参数**): string
<a href="{{< ref "../common-parameters/common-parameters#resourceVersionMatch" >}}">resourceVersionMatch</a>
- **sendInitialEvents** (**查询参数**): boolean
<a href="{{< ref "../common-parameters/common-parameters#sendInitialEvents" >}}">sendInitialEvents</a>
- **timeoutSeconds** (**查询参数**): integer
<a href="{{< ref "../common-parameters/common-parameters#timeoutSeconds" >}}">timeoutSeconds</a>
<!--
#### Response
-->
#### 响应
200 (<a href="{{< ref "../common-definitions/status#Status" >}}">Status</a>): OK
401: Unauthorized

View File

@@ -6,7 +6,7 @@ api_metadata:
content_type: "api_reference"
description: "ControllerRevision 实现了状态数据的不可变快照。"
title: "ControllerRevision"
weight: 7
weight: 8
---
<!--
@@ -17,7 +17,7 @@ api_metadata:
content_type: "api_reference"
description: "ControllerRevision implements an immutable snapshot of state data."
title: "ControllerRevision"
weight: 7
weight: 8
auto_generated: true
-->

View File

@@ -6,7 +6,7 @@ api_metadata:
content_type: "api_reference"
description: "Deployment 使得 Pod 和 ReplicaSet 能够进行声明式更新。"
title: "Deployment"
weight: 5
weight: 6
---
<!--
api_metadata:
@@ -16,7 +16,7 @@ api_metadata:
content_type: "api_reference"
description: "Deployment enables declarative updates for Pods and ReplicaSets."
title: "Deployment"
weight: 5
weight: 6
auto_generated: true
-->
@@ -291,7 +291,9 @@ DeploymentStatus 是最近观测到的 Deployment 状态。
- **conditions** ([]DeploymentCondition)
*Patch strategy: merge on key `type`*
*Map: unique values on key type will be kept during a merge*
Represents the latest available observations of a deployment's current state.
<a name="DeploymentCondition"></a>
@@ -300,7 +302,9 @@ DeploymentStatus 是最近观测到的 Deployment 状态。
- **conditions** ([]DeploymentCondition)
**补丁策略:按照键 `type` 合并**
**Map键 `type` 的唯一值将在合并期间保留**
表示 Deployment 当前状态的最新可用观测值。
<a name="DeploymentCondition"></a>

View File

@@ -6,7 +6,7 @@ api_metadata:
content_type: "api_reference"
description: "PriorityClass 定义了从优先级类名到优先级数值的映射。"
title: "PriorityClass"
weight: 13
weight: 14
auto_generated: false
---
@@ -18,7 +18,7 @@ api_metadata:
content_type: "api_reference"
description: "PriorityClass defines mapping from a priority class name to the priority integer value."
title: "PriorityClass"
weight: 13
weight: 14
auto_generated: true
-->

View File

@@ -164,6 +164,8 @@ ReplicaSetStatus 表示 ReplicaSet 的当前状态。
- **conditions** ([]ReplicaSetCondition)
*Patch strategy: merge on key `type`*
*Map: unique values on key type will be kept during a merge*
Represents the latest available observations of a replica set's current state.
@@ -173,6 +175,8 @@ ReplicaSetStatus 表示 ReplicaSet 的当前状态。
- **conditions** ([]ReplicaSetCondition)
**补丁策略:按照键 `type` 合并**
**Map键 `type` 的唯一值将在合并期间保留**
表示副本集当前状态的最新可用观测值。

View File

@@ -572,6 +572,33 @@ feature gate must be enabled for this label to be added to pods.
请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
特性门控必须被启用,才能将此标签添加到 Pod 上。
<!--
### resource.kubernetes.io/pod-claim-name
Type: Annotation
Example: `resource.kubernetes.io/pod-claim-name: "my-pod-claim"`
Used on: ResourceClaim
This annotation is assigned to generated ResourceClaims.
Its value corresponds to the name of the resource claim in the `.spec` of any Pod(s) for which the ResourceClaim was created.
This annotation is an internal implementation detail of [dynamic resource allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
You should not need to read or modify the value of this annotation.
-->
### resource.kubernetes.io/pod-claim-name {#resource-kubernetes-io-pod-claim-name}
类别:注解
示例:`resource.kubernetes.io/pod-claim-name: "my-pod-claim"`
用于ResourceClaim
该注解被赋予自动生成的 ResourceClaim。
注解的值对应于触发 ResourceClaim 创建的 Pod 在 `.spec` 中的资源声明名称。
此注解是[动态资源分配](/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/)的内部实现细节。
你不需要读取或修改此注解的值。
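As an illustrative sketch only (the object and claim names here are hypothetical, and the exact `resource.k8s.io` API version varies by Kubernetes release), a generated ResourceClaim carrying this annotation might look like:

```yaml
# Hypothetical generated object: the control plane creates it for a Pod
# named "my-pod" whose .spec.resourceClaims has an entry "my-pod-claim".
# The apiVersion shown is an assumption; check your cluster's release.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: my-pod-my-pod-claim   # generated name (illustrative)
  annotations:
    resource.kubernetes.io/pod-claim-name: "my-pod-claim"
```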
<!--
### cluster-autoscaler.kubernetes.io/safe-to-evict
@@ -2373,13 +2400,12 @@ Example: `nginx.ingress.kubernetes.io/configuration-snippet: " more_set_headers
Used on: Ingress
You can use this annotation to set extra configuration on an Ingress that
uses the [NGINX Ingress Controller] (https://github.com/kubernetes/ingress-nginx/)
You can use this annotation to set extra configuration on an Ingress that
uses the [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx/).
The `configuration-snippet` annotation is ignored
by default since version 1.9.0 of the ingress controller.
The NGINX ingress controller setting `allow-snippet-annotations.`
has to be explicitly enabled to
use this annotation.
The NGINX ingress controller setting `allow-snippet-annotations`
has to be explicitly enabled to use this annotation.
Enabling the annotation can be dangerous in a multi-tenant cluster, as it can lead people with otherwise
limited permissions being able to retrieve all Secrets in the cluster.
-->

View File

@@ -139,7 +139,8 @@ PodSecurity 执行中违反的特定策略及对应字段。
Example: `authorization.k8s.io/decision: "forbid"`
This annotation indicates whether or not a request was authorized in Kubernetes audit logs.
Value must be **forbid** or **allow**. This annotation indicates whether or not a request
was authorized in Kubernetes audit logs.
See [Auditing](/docs/tasks/debug/debug-cluster/audit/) for more information.
-->
@@ -147,6 +148,7 @@ See [Auditing](/docs/tasks/debug/debug-cluster/audit/) for more information.
例子:`authorization.k8s.io/decision: "forbid"`
值必须是 **forbid** 或者 **allow**
此注解在 Kubernetes 审计日志中表示请求是否获得授权。
有关详细信息,请参阅[审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)。

View File

@@ -13,18 +13,6 @@ weight: 70
<!-- overview -->
{{< note >}}
<!--
While kubeadm is being used as the management tool for external etcd nodes
in this guide, please note that kubeadm does not plan to support certificate rotation
or upgrades for such nodes. The long-term plan is to empower the tool
[etcdadm](https://github.com/kubernetes-sigs/etcdadm) to manage these
aspects.
-->
在本指南中,使用 kubeadm 作为外部 etcd 节点管理工具,请注意 kubeadm 不计划支持此类节点的证书更换或升级。
对于长期规划是使用 [etcdadm](https://github.com/kubernetes-sigs/etcdadm) 增强工具来管理这些方面。
{{< /note >}}
<!--
By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision

View File

@@ -14,6 +14,10 @@ weight: 350
This page provides an overview of the steps you should follow to upgrade a
Kubernetes cluster.
The Kubernetes project recommends upgrading to the latest patch releases promptly, and
ensuring that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.
The way that you upgrade a cluster depends on how you initially deployed it
and on any subsequent changes.
@@ -21,6 +25,9 @@ At a high level, the steps you perform are:
-->
本页概述升级 Kubernetes 集群的步骤。
Kubernetes 项目建议及时升级到最新的补丁版本,并确保使用受支持的 Kubernetes 次要版本。
遵循这一建议有助于保障安全。
升级集群的方式取决于你最初部署它的方式、以及后续更改它的方式。
从高层规划的角度看,要执行的步骤是:

View File

@@ -1,14 +1,14 @@
---
title: 升级 kubeadm 集群
content_type: task
weight: 40
weight: 30
---
<!--
reviewers:
- sig-cluster-lifecycle
title: Upgrading kubeadm clusters
content_type: task
weight: 40
weight: 30
-->
<!-- overview -->
@@ -41,6 +41,14 @@ please refer to following pages instead:
- [将 kubeadm 集群从 {{< skew currentVersionAddMinor -4 >}} 升级到 {{< skew currentVersionAddMinor -3 >}}](https://v{{< skew currentVersionAddMinor -3 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [将 kubeadm 集群从 {{< skew currentVersionAddMinor -5 >}} 升级到 {{< skew currentVersionAddMinor -4 >}}](https://v{{< skew currentVersionAddMinor -4 "-" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
<!--
The Kubernetes project recommends upgrading to the latest patch releases promptly, and
ensuring that you are running a supported minor release of Kubernetes.
Following this recommendation helps you to stay secure.
-->
Kubernetes 项目建议立即升级到最新的补丁版本,并确保你运行的是受支持的 Kubernetes 次要版本。
遵循此建议可帮助你保持安全。
<!--
The upgrade workflow at high level is the following:
-->

View File

@@ -49,7 +49,7 @@ GPU vendor. Here are some links to vendors' instructions:
作为集群管理员,你要在节点上安装来自对应硬件厂商的 GPU 驱动程序,并运行来自
GPU 厂商的对应设备插件。以下是一些厂商说明的链接:
* [AMD](https://github.com/RadeonOpenCompute/k8s-device-plugin#deployment)
* [AMD](https://github.com/ROCm/k8s-device-plugin#deployment)
* [Intel](https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/gpu_plugin/README.html)
* [NVIDIA](https://github.com/NVIDIA/k8s-device-plugin#quick-start)
<!--

View File

@@ -4,7 +4,6 @@ content_type: task
weight: 60
min-kubernetes-server-version: 1.7
---
<!--
reviewers:
- rickypai
@@ -20,14 +19,18 @@ min-kubernetes-server-version: 1.7
<!--
Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.
Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten on during Pod creation/restart.
The Kubernetes project recommends modifying DNS configuration using the `hostAliases` field
(part of the `.spec` for a Pod), and not by using an init container or other means to edit `/etc/hosts`
directly.
Changes made in other ways may be overwritten by the kubelet during Pod creation or restart.
-->
当 DNS 配置以及其它选项不合理的时候,通过向 Pod 的 `/etc/hosts` 文件中添加条目,
可以在 Pod 级别覆盖对主机名的解析。你可以通过 PodSpec 的 HostAliases
字段来添加这些自定义条目。
建议通过使用 HostAliases 来进行修改,因为该文件由 Kubelet 管理,并且
可以在 Pod 创建/重启过程中被重写。
Kubernetes 项目建议使用 `hostAliases` 字段Pod `.spec` 的一部分)来修改 DNS 配置,
而不是通过使用 Init 容器或其他方式直接编辑 `/etc/hosts`
以其他方式所做的更改可能会在 Pod 创建或重启过程中被 kubelet 重写。
<!-- steps -->
@@ -36,7 +39,7 @@ Modification not using HostAliases is not suggested because the file is managed
Start an Nginx Pod which is assigned a Pod IP:
-->
## 默认 hosts 文件内容
## 默认 hosts 文件内容 {#default-hosts-file-content}
让我们从一个 Nginx Pod 开始,该 Pod 被分配一个 IP
@@ -65,7 +68,7 @@ nginx 1/1 Running 0 13s 10.200.0.4 worker0
<!--
The hosts file content would look like this:
-->
主机文件的内容如下所示:
hosts 文件的内容如下所示:
```shell
kubectl exec nginx -- cat /etc/hosts
@@ -97,7 +100,7 @@ For example: to resolve `foo.local`, `bar.local` to `127.0.0.1` and `foo.remote`
`bar.remote` to `10.1.2.3`, you can configure HostAliases for a Pod under
`.spec.hostAliases`:
-->
## 通过 HostAliases 增加额外条目
## 通过 HostAliases 增加额外条目 {#adding-additional-entries-with-hostaliases}
除了默认的样板内容,你可以向 `hosts` 文件添加额外的条目。
例如,要将 `foo.local`、`bar.local` 解析为 `127.0.0.1`
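The mapping described above can be sketched as a Pod manifest (a minimal example; the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod   # illustrative name
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox:1.28
    # Print the kubelet-managed hosts file, which now includes the aliases.
    command: ["cat", "/etc/hosts"]
```

The kubelet appends these entries to the generated `/etc/hosts` after the default boilerplate, so they survive Pod restarts.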

View File

@@ -220,12 +220,12 @@ profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
```
<!--
The profile needs to loaded onto all nodes, since you don't know where the pod will be scheduled.
For this example we'll use SSH to install the profiles, but other approaches are
The profile needs to be loaded onto all nodes, since you don't know where the pod will be scheduled.
For this example you can use SSH to install the profiles, but other approaches are
discussed in [Setting up nodes with profiles](#setting-up-nodes-with-profiles).
-->
由于不知道 Pod 将被调度到哪里,该配置文件需要加载到所有节点上。
在本例中,我们将使用 SSH 来安装概要文件,
在本例中,你可以使用 SSH 来安装配置文件,
但是在[使用配置文件设置节点](#setting-up-nodes-with-profiles)中讨论了其他方法。
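One way to distribute the profile over SSH is a loop like the following (a sketch: the node names and SSH/sudo access are assumptions about your environment):

```shell
# Hypothetical list of node addresses; replace with your cluster's nodes.
NODES="k8s-node-1 k8s-node-2 k8s-node-3"

for NODE in $NODES; do
  # apparmor_parser loads the profile into the kernel on each node.
  ssh "$NODE" 'sudo apparmor_parser -q <<EOF
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
EOF'
done
```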
<!--

View File

@@ -1,52 +1,44 @@
{{ $links := .Site.Params.links }}
<footer class="d-print-none">
<div class="footer__links">
<nav>
{{ with site.GetPage "page" "docs/tutorials/stateless-application/hello-minikube" }}<a href="{{ .RelPermalink }}">{{ .LinkTitle }}</a>{{ end }}
{{ $sections := slice "docs/home" "blog" "training" "partners" "community" "case-studies" }}
{{ range $sections }}
{{ with site.GetPage "section" . }}<a class="text-white" href="{{ .RelPermalink }}">{{ .LinkTitle }}</a>{{ end }}
{{ end }}
</nav>
</div>
<div class="container-fluid">
<footer class="bg-dark py-5 row d-print-none">
<div class="container-fluid mx-sm-5">
<div class="row">
<div class="col-6 col-sm-2 text-xs-center order-sm-2">
{{ template "footer-main-block" . }}
<div class="col col-sm-2 text-xs-center order-1">
{{ with $links }}
{{ with index . "user"}}
{{ template "footer-links-block" . }}
{{ end }}
{{ end }}
</div>
<div class="col-6 col-sm-2 text-right text-xs-center order-sm-3">
<div class="col col-sm-2 text-right text-xs-center order-3">
{{ with $links }}
{{ with index . "developer"}}
{{ template "footer-links-block" . }}
{{ end }}
{{ end }}
</div>
<div class="col-12 col-sm-8 text-center order-sm-2">
{{ with .Site.Params.copyright_k8s }}<small class="text-white">&copy; {{ now.Year}} {{ T "main_documentation_license" | safeHTML }}</small>{{ end }}
<br/>
{{ with .Site.Params.copyright_linux }}<small class="text-white">Copyright &copy; {{ now.Year }} {{ T "main_copyright_notice" | safeHTML }}</small>{{ end }}
<br/>
<small class="text-white">{{ T "china_icp_license" }} 京ICP备17074266号-3</small>
{{ with .Site.Params.privacy_policy }}<small class="ml-1"><a href="{{ . }}" target="_blank">{{ T "footer_privacy_policy" }}</a></small>{{ end }}
{{ if not .Site.Params.ui.footer_about_disable }}
{{ with .Site.GetPage "about" }}<p class="mt-2"><a href="{{ .RelPermalink }}">{{ .Title }}</a></p>{{ end }}
{{ end }}
</div>
</div>
</div>
</footer>
{{ define "footer-links-block" }}
<ul class="list-inline mb-0">
<ul class="list-inline mb-0 footer-icons">
{{ range . }}
<li class="list-inline-item mx-2 h3" data-toggle="tooltip" data-placement="top" title="{{ .name }}" aria-label="{{ .name }}">
<a class="text-white" target="_blank" href="{{ .url }}">
<a class="text-white" target="_blank" rel="noopener" href="{{ .url }}" aria-label="{{ .name }}">
<i class="{{ .icon }}"></i>
</a>
</li>
{{ end }}
</ul>
{{ end }}
{{ define "footer-main-block" }}
<div class="col-5 col-sm-7 text-center order-2 footer-main">
<p><span class="copyright-notice">&copy; {{ now.Year }} {{ T "main_documentation_license" | safeHTML }}</span></p>
<p><span class="copyright-notice">&copy; {{ now.Year }} {{ T "main_copyright_notice" | safeHTML }}</span></p>
<p><span class="certification-notice">{{ T "china_icp_license" }} 京ICP备17074266号-3</span></p>
{{ with .Site.Params.privacy_policy }}<p><span class="ml-1 privacy-policy"><a href="{{ . }}" target="_blank">{{ T "footer_privacy_policy" }}</a></span></p>{{ end }}
{{ if not .Site.Params.ui.footer_about_disable }}
{{ with .Site.GetPage "about" }}<p class="mt-2"><a href="{{ .RelPermalink }}">{{ .Title }}</a></p>{{ end }}
{{ end }}
</div>
{{ end }}