Merge branch 'main' into kubeconfig

pull/35091/head
Michael 2022-07-17 19:38:19 +08:00
commit 747a4026a6
20 changed files with 2042 additions and 250 deletions

View File

@ -21,15 +21,15 @@ Die Add-Ons in den einzelnen Kategorien sind alphabetisch sortiert - Die Reihenf
* [ACI](https://www.github.com/noironetworks/aci-containers) bietet Container-Networking und Network-Security mit Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/introduction/) ist ein Networking- und Network-Policy-Provider. Calico unterstützt eine Reihe von Networking-Optionen, damit Du die richtige für deinen Use-Case wählen kannst. Dies beinhaltet Non-Overlaying- und Overlaying-Networks mit oder ohne BGP. Calico nutzt die gleiche Engine, um Network-Policies für Hosts, Pods und (falls Du Istio & Envoy benutzt) Anwendungen auf Service-Mesh-Ebene durchzusetzen.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) vereint Flannel und Calico um Networking- und Network-Policies bereitzustellen.
* [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) vereint Flannel und Calico um Networking- und Network-Policies bereitzustellen.
* [Cilium](https://github.com/cilium/cilium) ist ein L3-Network- und Network-Policy-Plugin, welches transparent HTTP/API/L7-Policies durchsetzen kann. Sowohl Routing- als auch Overlay/Encapsulation-Modes werden unterstützt. Außerdem kann Cilium auf andere CNI-Plugins aufsetzen.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) ermöglicht das nahtlose Verbinden von Kubernetes mit einer Reihe an CNI-Plugins wie z.B. Calico, Canal, Flannel, Romana, oder Weave.
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) ermöglicht das nahtlose Verbinden von Kubernetes mit einer Reihe an CNI-Plugins wie z.B. Calico, Canal, Flannel, Romana, oder Weave.
* [Contiv](https://contivpp.io/) bietet konfigurierbares Networking (Native L3 auf BGP, Overlay mit vxlan, Klassisches L2, Cisco-SDN/ACI) für verschiedene Anwendungszwecke und auch ein umfangreiches Policy-Framework. Das Contiv-Projekt ist vollständig [Open Source](http://github.com/contiv). Der [Installer](http://github.com/contiv/install) bietet sowohl kubeadm- als auch nicht-kubeadm-basierte Installationen.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), basierend auf [Tungsten Fabric](https://tungsten.io), ist eine Open Source, multi-Cloud Netzwerkvirtualisierungs- und Policy-Management Plattform. Contrail und Tungsten Fabric sind mit Orchestratoren wie z.B. Kubernetes, OpenShift, OpenStack und Mesos integriert und bieten Isolationsmodi für Virtuelle Maschinen, Container (bzw. Pods) und Bare-Metal-Workloads.
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) ist ein Overlay-Network-Provider, der mit Kubernetes genutzt werden kann.
* [Knitter](https://github.com/ZTE/Knitter/) ist eine Network-Lösung, die Mehrfach-Network in Kubernetes ermöglicht.
* Multus ist ein Multi-Plugin für Mehrfachnetzwerk-Unterstützung um alle CNI-Plugins (z.B. Calico, Cilium, Contiv, Flannel), zusätzlich zu SRIOV-, DPDK-, OVS-DPDK- und VPP-Basierten Workloads in Kubernetes zu unterstützen.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) bietet eine Integration zwischen VMware NSX-T und einem Orchestrator wie z.B. Kubernetes. Außerdem bietet es eine Integration zwischen NSX-T und Containerbasierten CaaS/PaaS-Plattformen wie z.B. Pivotal Container Service (PKS) und OpenShift.
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) ist ein Multi-Plugin für Mehrfachnetzwerk-Unterstützung um alle CNI-Plugins (z.B. Calico, Cilium, Contiv, Flannel), zusätzlich zu SRIOV-, DPDK-, OVS-DPDK- und VPP-Basierten Workloads in Kubernetes zu unterstützen.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP) bietet eine Integration zwischen VMware NSX-T und einem Orchestrator wie z.B. Kubernetes. Außerdem bietet es eine Integration zwischen NSX-T und Containerbasierten CaaS/PaaS-Plattformen wie z.B. Pivotal Container Service (PKS) und OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) ist eine SDN-Plattform, die Policy-basiertes Networking zwischen Kubernetes-Pods und Nicht-Kubernetes-Umgebungen inklusive Sichtbarkeit und Security-Monitoring bereitstellt.
* [Romana](https://github.com/romana/romana) ist eine Layer 3 Network-Lösung für Pod-Netzwerke welche auch die [NetworkPolicy API](/docs/concepts/services-networking/network-policies/) unterstützt. Details zur Installation als kubeadm Add-On sind [hier](https://github.com/romana/romana/tree/master/containerize) verfügbar.
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) bietet Networking und Network-Policies und arbeitet auf beiden Seiten einer Network-Partition, ohne auf eine externe Datenbank angewiesen zu sein.

View File

@ -484,10 +484,10 @@ conflict. You must resolve all merge conflicts in your PR.
1. Fetch changes from `kubernetes/website`'s `upstream/main` and rebase your branch:
```shell
git fetch upstream
git rebase upstream/main
```
```shell
git fetch upstream
git rebase upstream/main
```
1. Inspect the results of the rebase:
@ -512,7 +512,7 @@ conflict. You must resolve all merge conflicts in your PR.
1. Continue the rebase:
``
```shell
git rebase --continue
```

View File

@ -6,16 +6,14 @@ slug: kubernetes-1-24-release-announcement
---
<!--
---
layout: blog
title: "Kubernetes 1.24: Stargazer"
date: 2022-05-03
slug: kubernetes-1-24-release-announcement
---
-->
<!--
**Authors**: [Kubernetes 1.24 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.24/release-team.md)
**Authors**: [Kubernetes 1.24 Release Team](https://git.k8s.io/sig-release/releases/release-1.24/release-team.md)
We are excited to announce the release of Kubernetes 1.24, the first release of 2022!
@ -23,13 +21,12 @@ This release consists of 46 enhancements: fourteen enhancements have graduated t
fifteen enhancements are moving to beta, and thirteen enhancements are entering alpha.
Also, two features have been deprecated, and two features have been removed.
-->
**作者**: [Kubernetes 1.24 发布团队](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.24/release-team.md)
**作者**: [Kubernetes 1.24 发布团队](https://git.k8s.io/sig-release/releases/release-1.24/release-team.md)
我们很高兴地宣布 Kubernetes 1.24 的发布,这是 2022 年的第一个版本!
这个版本包括 46 个增强功能14 个增强功能已经升级到稳定版15 个增强功能正在进入 Beta 版,
13 个增强功能正在进入 Alpha 阶段。另外,有两个功能被废弃了,还有两个功能被删除了。
13 个增强功能正在进入 Alpha 阶段。另外,有两个功能被废弃了,还有两个功能被删除了。
<!--
## Major Themes
@ -42,14 +39,13 @@ or use cri-dockerd if you are relying on Docker Engine as your container runtime
For more information about ensuring your cluster is ready for this removal, please
see [this guide](/blog/2022/03/31/ready-for-dockershim-removal/).
-->
## 主要议题
### 从 kubelet 中删除 Dockershim
在 v1.20 版本中被废弃后dockershim 组件已被从 Kubernetes v1.24 版本的 kubelet 中移除。
从 v1.24 开始,如果你依赖 Docker Engine 作为容器运行时,
则需要使用其他[受支持的运行时](/docs/setup/production-environment/container-runtimes/)之一
则需要使用其他[受支持的运行时](/zh-cn/docs/setup/production-environment/container-runtimes/)之一
(如 containerd 或 CRI-O),或使用 cri-dockerd。
有关确保群集已准备好进行此删除的更多信息,请参阅[本指南](/zh-cn/blog/2022/03/31/ready-for-dockershim-removal/)。
@ -59,7 +55,6 @@ see [this guide](/blog/2022/03/31/ready-for-dockershim-removal/).
[New beta APIs will not be enabled in clusters by default](https://github.com/kubernetes/enhancements/issues/3136).
Existing beta APIs and new versions of existing beta APIs will continue to be enabled by default.
-->
### 默认情况下关闭 Beta API
[新的 beta API 默认不会在集群中启用](https://github.com/kubernetes/enhancements/issues/3136)。
@ -73,7 +68,6 @@ signatures,
and there is experimental support for [verifying image signatures](/docs/tasks/administer-cluster/verify-signed-images/).
Signing and verification of release artifacts is part of [increasing software supply chain security for the Kubernetes release process](https://github.com/kubernetes/enhancements/issues/3027).
-->
### 签署发布工件
发布工件使用 [cosign](https://github.com/sigstore/cosign) 签名进行[签名](https://github.com/kubernetes/enhancements/issues/3031)
@ -86,10 +80,9 @@ Signing and verification of release artifacts is part of [increasing software su
Kubernetes 1.24 offers beta support for publishing its APIs in the [OpenAPI v3 format](https://github.com/kubernetes/enhancements/issues/2896).
-->
### OpenAPI v3
Kubernetes 1.24 提供了以 [OpenAPI v3 格式](https://github.com/kubernetes/enhancements/issues/2896)发布其 API 的 beta 支持。
Kubernetes 1.24 提供了以 [OpenAPI v3 格式](https://github.com/kubernetes/enhancements/issues/2896)发布其 API 的 Beta 支持。
<!--
### Storage Capacity and Volume Expansion Are Generally Available
@ -101,7 +94,6 @@ and enhances scheduling of pods that use CSI volumes with late binding.
[Volume expansion](https://github.com/kubernetes/enhancements/issues/284) adds support
for resizing existing persistent volumes.
-->
### 存储容量和卷扩展普遍可用
[存储容量跟踪](https://github.com/kubernetes/enhancements/issues/1472)支持通过
@ -116,10 +108,9 @@ for resizing existing persistent volumes.
This feature adds [a new option to PriorityClasses](https://github.com/kubernetes/enhancements/issues/902),
which can enable or disable pod preemption.
-->
### NonPreemptingPriority 到稳定
此功能[为 PriorityClasses 添加了一个新选项](https://github.com/kubernetes/enhancements/issues/902),可以启用或禁用 pod 抢占。
此功能[为 PriorityClasses 添加了一个新选项](https://github.com/kubernetes/enhancements/issues/902),可以启用或禁用 Pod 抢占。
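As an illustrative sketch (not taken from this announcement), the option is the `preemptionPolicy` field on a PriorityClass; setting it to `Never` gives Pods high scheduling priority without letting them preempt other Pods:

```yaml
# Hypothetical example: a high-priority class whose Pods do not preempt others.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting   # example name, not from the post
value: 1000000
preemptionPolicy: Never               # disable preemption; the default is PreemptLowerPriority
globalDefault: false
description: "High priority, but never preempts running Pods."
```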
<!--
### Storage Plugin Migration
@ -130,7 +121,6 @@ The [Azure Disk](https://github.com/kubernetes/enhancements/issues/1490)
and [OpenStack Cinder](https://github.com/kubernetes/enhancements/issues/1489) plugins
have both been migrated.
-->
### 存储插件迁移
目前正在进行[迁移树内存储插件的内部组件](https://github.com/kubernetes/enhancements/issues/625)工作,
@ -145,12 +135,10 @@ has entered beta and is available by default. You can now [configure startup, li
natively within Kubernetes without exposing an HTTP endpoint or
using an extra executable.
-->
### gRPC 探针升级到 Beta
在 Kubernetes 1.24 中,[gRPC 探测功能](https://github.com/kubernetes/enhancements/issues/2727)
已进入测试版,默认可用。现在,你可以在 Kubernetes 中为你的 gRPC
已进入测试版,默认可用。现在,你可以在 Kubernetes 中为你的 gRPC
应用程序原生地[配置启动、活跃度和就绪性探测](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes)
而无需暴露 HTTP 端点或使用额外的可执行文件。
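A minimal sketch of what such a probe looks like (the name, image, and port below are illustrative, not from this post):

```yaml
# Hypothetical example: a native gRPC liveness probe, no HTTP endpoint or helper binary required.
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo                         # example name
spec:
  containers:
  - name: server
    image: registry.example.com/grpc-server:1.0 # placeholder image
    ports:
    - containerPort: 9090
    livenessProbe:
      grpc:
        port: 9090                              # port serving the gRPC Health Checking service
      initialDelaySeconds: 5
      periodSeconds: 10
```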
@ -163,7 +151,6 @@ has now graduated to Beta.
This allows the kubelet to dynamically retrieve credentials for a container image registry
using exec plugins rather than storing credentials on the node's filesystem.
-->
### Kubelet 凭证提供者毕业至 Beta
kubelet 最初在 Kubernetes 1.20 中作为 Alpha 发布,现在它对[镜像凭证提供者](/zh-cn/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/)
@ -175,7 +162,6 @@ kubelet 最初在 Kubernetes 1.20 中作为 Alpha 发布,现在它对[镜像
Kubernetes 1.24 has introduced [contextual logging](https://github.com/kubernetes/enhancements/issues/3077)
that enables the caller of a function to control all aspects of logging (output formatting, verbosity, additional values, and names).
-->
### Alpha 中的上下文日志记录
Kubernetes 1.24 引入了[上下文日志](https://github.com/kubernetes/enhancements/issues/3077)
@ -190,11 +176,12 @@ to Services.
With the manual enablement of this feature, the cluster will prefer automatic assignment from
the pool of Service IP addresses, thereby reducing the risk of collision.
-->
### 避免 IP 分配给服务的冲突
Kubernetes 1.24 引入了一项新的选择加入功能,允许你[为服务的静态 IP 地址分配软保留范围](/docs/concepts/services-networking/service/#service-ip-static-sub-range)。
Kubernetes 1.24 引入了一项新的选择加入功能,
允许你[为服务的静态 IP 地址分配软保留范围](/zh-cn/docs/concepts/services-networking/service/#service-ip-static-sub-range)。
通过手动启用此功能,集群将更喜欢从服务 IP 地址池中自动分配,从而降低冲突风险。
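As a sketch (not from this post), a Service can still request a specific `ClusterIP` explicitly; with this feature enabled, dynamically allocated addresses are preferred from a different part of the range, reducing the chance of colliding with such static choices:

```yaml
# Hypothetical example: a Service that asks for a specific ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: my-static-ip-service       # example name
spec:
  clusterIP: 10.96.0.20            # assumes 10.96.0.0/16 is the cluster's Service CIDR
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```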
<!--
A Service `ClusterIP` can be assigned:
@ -203,7 +190,6 @@ A Service `ClusterIP` can be assigned:
Service `ClusterIP` are unique; hence, trying to create a Service with a `ClusterIP` that has already been allocated will return an error.
-->
服务的 `ClusterIP` 可以按照以下两种方式分配:
* 动态,这意味着集群将自动在配置的服务 IP 范围内选择一个空闲 IP。
@ -216,10 +202,10 @@ Service `ClusterIP` are unique; hence, trying to create a Service with a `Cluste
After being deprecated in Kubernetes 1.22, Dynamic Kubelet Configuration has been removed from the kubelet. The feature will be removed from the API server in Kubernetes 1.26.
-->
### 从 Kubelet 中删除动态 Kubelet 配置
在 Kubernetes 1.22 中被弃用后,动态 Kubelet 配置已从 kubelet 中删除。该功能将从 Kubernetes 1.26 的 API 服务器中删除。
在 Kubernetes 1.22 中被弃用后,动态 Kubelet 配置已从 kubelet 中删除。
该功能将从 Kubernetes 1.26 的 API 服务器中删除。
<!--
## CNI Version-Related Breaking Change
@ -232,7 +218,6 @@ For example, the following container runtimes are being prepared, or have alread
* containerd v1.6.4 and later, v1.5.11 and later
* CRI-O 1.24 and later
-->
## CNI 版本相关的重大更改
在升级到 Kubernetes 1.24 之前,请确认你正在使用/升级到经过测试可以在此版本中正常工作的容器运行时。
@ -251,11 +236,11 @@ With containerd v1.6.0v1.6.3, if you do not upgrade the CNI plugins and/or
declare the CNI config version, you might encounter the following "Incompatible
CNI versions" or "Failed to destroy network for sandbox" error conditions.
-->
当 CNI 插件尚未升级和/或 CNI 配置版本未在 CNI 配置文件中声明时,在 containerd v1.6.0-v1.6.3
当 CNI 插件尚未升级和/或 CNI 配置版本未在 CNI 配置文件中声明时,在 containerd v1.6.0-v1.6.3
中存在 pod CNI 网络设置和拆除的服务问题。containerd 团队报告说,“这些问题在 containerd v1.6.4 中得到解决。”
在 containerd v1.6.0-v1.6.3 版本中,如果你不升级 CNI 插件和/或声明 CNI 配置版本,你可能会遇到以下
”CNI 版本不兼容“或“为沙箱销毁网络失败”的错误情况。
在 containerd v1.6.0-v1.6.3 版本中,如果你不升级 CNI 插件和/或声明 CNI 配置版本,
你可能会遇到以下 “Incompatible CNI versions” 或 “Failed to destroy network for sandbox” 的错误情况。
<!--
## CSI Snapshot
@ -263,18 +248,17 @@ CNI versions" or "Failed to destroy network for sandbox" error conditions.
_This information was added after initial publication._
[VolumeSnapshot v1beta1 CRD has been removed](https://github.com/kubernetes/enhancements/issues/177).
Volume snapshot and restore functionality for Kubernetes and the Container Storage Interface (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, moved to GA in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.20 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [Volume Snapshot GA blog](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/) for more information.
Volume snapshot and restore functionality for Kubernetes and the Container Storage Interface (CSI), which provides standardized APIs design (CRDs) and adds PV snapshot/restore support for CSI volume drivers, moved to GA in v1.20. VolumeSnapshot v1beta1 was deprecated in v1.20 and is now unsupported. Refer to [KEP-177: CSI Snapshot](https://git.k8s.io/enhancements/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot) and [Volume Snapshot GA blog](/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/) for more information.
-->
## CSI 快照
_此信息是在首次发布后添加的。_
**此信息是在首次发布后添加的。**
[VolumeSnapshot v1beta1 CRD 已被移除](https://github.com/kubernetes/enhancements/issues/177)。
Kubernetes 和容器存储接口 (CSI) 的卷快照和恢复功能,提供标准化的 API 设计 (CRD) 并添加了对 CSI 卷驱动程序的
PV 快照/恢复支持,在 v1.20 中移至 GA。VolumeSnapshot v1beta1 在 v1.20 中被弃用,现在不受支持。
有关详细信息,请参阅[KEP-177: CSI 快照](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot)
和[卷快照 GA 博客](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/)。
有关详细信息,请参阅 [KEP-177: CSI 快照](https://git.k8s.io/enhancements/keps/sig-storage/177-volume-snapshot#kep-177-csi-snapshot)
和[卷快照 GA 博客](/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/)。
<!--
## Other Updates
@ -283,12 +267,11 @@ PV 快照/恢复支持,在 v1.20 中移至 GA。VolumeSnapshot v1beta1 在 v1.
This release saw fourteen enhancements promoted to stable:
-->
## 其他更新
### 毕业到稳定
在此版本中,有 14 项增强功能升级为稳定版:
在此版本中,有 14 项增强功能升级为稳定版:
<!--
* [Container Storage Interface (CSI) Volume Expansion](https://github.com/kubernetes/enhancements/issues/284)
@ -305,24 +288,26 @@ This release saw fourteen enhancements promoted to stable:
* [Leader Migration for Controller Managers](https://github.com/kubernetes/enhancements/issues/2436): kube-controller-manager and cloud-controller-manager can apply new controller-to-controller-manager assignment in HA control plane without downtime.
* [CSR Duration](https://github.com/kubernetes/enhancements/issues/2784): Extend the CertificateSigningRequest API with a mechanism to allow clients to request a specific duration for the issued certificate.
-->
* [容器存储接口CSI卷扩展](https://github.com/kubernetes/enhancements/issues/284)
* [Pod 开销](https://github.com/kubernetes/enhancements/issues/688): 核算与 Pod 沙箱绑定的资源,但不包括特定的容器。
* [向 PriorityClass 添加非抢占选项](https://github.com/kubernetes/enhancements/issues/902)
* [存储容量跟踪](https://github.com/kubernetes/enhancements/issues/1472)
* [存储容量跟踪](https://github.com/kubernetes/enhancements/issues/1472)
* [OpenStack Cinder In-Tree 到 CSI 驱动程序迁移](https://github.com/kubernetes/enhancements/issues/1489)
* [Azure 磁盘树到 CSI 驱动程序迁移](https://github.com/kubernetes/enhancements/issues/1490)
* [高效的监视恢复](https://github.com/kubernetes/enhancements/issues/1904): kube-apiserver 重新启动后,可以高效地恢复监视。
* [Service Type=LoadBalancer 类字段](https://github.com/kubernetes/enhancements/issues/1959):
引入新的服务注解 `service.kubernetes.io/load-balancer-class` ,允许在同一个集群中实现多个 `type: LoadBalancer` 服务在同一集群中的多个实现。
* [指数化的作业](https://github.com/kubernetes/enhancements/issues/2214): 为有固定完成数的作业的 Pod 添加完成指数。
* [在 Jobs API 中增加 Suspend 字段](https://github.com/kubernetes/enhancements/issues/2232):
在 Jobs API 中增加一个 suspend 字段,允许协调者在创建作业时对 pod 的创建有更多控制。
* [Pod Affinity NamespaceSelector](https://github.com/kubernetes/enhancements/issues/2249):
为 pod affinity/anti-affinity 规范添加一个 `namespaceSelector` 字段。
* [控制器管理器的领导者迁移](https://github.com/kubernetes/enhancements/issues/2436):
kube-controller-manager 和 cloud-controller-manager 可以在 HA 控制平面中应用新的控制器到控制器管理器分配,而无需停机。
* [CSR 期限](https://github.com/kubernetes/enhancements/issues/2784): 用一种机制来扩展证书签名请求 API允许客户为签发的证书请求一个特定的期限。
* [高效的监视恢复](https://github.com/kubernetes/enhancements/issues/1904)
kube-apiserver 重新启动后,可以高效地恢复监视。
* [Service Type=LoadBalancer 类字段](https://github.com/kubernetes/enhancements/issues/1959)
引入新的服务注解 `service.kubernetes.io/load-balancer-class`
允许在同一个集群中提供 `type: LoadBalancer` 服务的多个实现。
* [带索引的 Job](https://github.com/kubernetes/enhancements/issues/2214):为带有固定完成计数的 Job 的 Pod 添加完成索引。
* [在 Job API 中增加 suspend 字段](https://github.com/kubernetes/enhancements/issues/2232)
在 Job API 中增加一个 suspend 字段,允许协调者在创建作业时对 Pod 的创建进行更多控制。
* [Pod 亲和性 NamespaceSelector](https://github.com/kubernetes/enhancements/issues/2249)
为 Pod 亲和性/反亲和性规约添加一个 `namespaceSelector` 字段。
* [控制器管理器的领导者迁移](https://github.com/kubernetes/enhancements/issues/2436)
kube-controller-manager 和 cloud-controller-manager 可以在 HA 控制平面中重新分配新的控制器到控制器管理器,而无需停机。
* [CSR 期限](https://github.com/kubernetes/enhancements/issues/2784)
用一种机制来扩展证书签名请求 API允许客户为签发的证书请求一个特定的期限。
<!--
### Major Changes
@ -332,7 +317,6 @@ This release saw two major changes:
* [Dockershim Removal](https://github.com/kubernetes/enhancements/issues/2221)
* [Beta APIs are off by Default](https://github.com/kubernetes/enhancements/issues/3136)
-->
### 主要变化
此版本有两个主要变化:
@ -343,12 +327,11 @@ This release saw two major changes:
<!--
### Release Notes
Check out the full details of the Kubernetes 1.24 release in our [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md).
Check out the full details of the Kubernetes 1.24 release in our [release notes](https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.24.md).
-->
### 发行说明
在我们的[发行说明](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md) 中查看 Kubernetes 1.24 版本的完整详细信息。
在我们的[发行说明](https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.24.md) 中查看 Kubernetes 1.24 版本的完整详细信息。
<!--
### Availability
@ -358,12 +341,11 @@ To get started with Kubernetes, check out these [interactive tutorials](/docs/tu
Kubernetes clusters using containers as “nodes”, with [kind](https://kind.sigs.k8s.io/).
You can also easily install 1.24 using [kubeadm](/docs/setup/independent/create-cluster-kubeadm/).
-->
### 可用性
Kubernetes 1.24 可在 [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.24.0) 上下载。
要开始使用 Kubernetes请查看这些[交互式教程](/docs/tutorials/)或在本地运行。
使用[kind](https://kind.sigs.k8s.io/),可以将容器作为 Kubernetes 集群的“节点”。
要开始使用 Kubernetes请查看这些[交互式教程](/zh-cn/docs/tutorials/)或在本地运行。
使用 [kind](https://kind.sigs.k8s.io/),可以将容器作为 Kubernetes 集群的 “节点”。
你还可以使用 [kubeadm](/zh-cn/docs/setup/independent/create-cluster-kubeadm/) 轻松安装 1.24。
<!--
@ -377,11 +359,10 @@ Special thanks to James Laverack, our release lead, for guiding us through a suc
and to all of the release team members for the time and effort they put in to deliver the v1.24
release for the Kubernetes community.
-->
### 发布团队
如果没有组成 Kubernetes 1.24 发布团队的坚定个人的共同努力,这个版本是不可能实现的。
该团队齐心协力交付每个 Kubernetes 版本中的所有组件,包括代码、文档、发行说明等。
如果没有组成 Kubernetes 1.24 发布团队的坚定个人的共同努力,这个版本是不可能实现的。
该团队齐心协力交付每个 Kubernetes 版本中的所有组件,包括代码、文档、发行说明等。
特别感谢我们的发布负责人 James Laverack 指导我们完成了一个成功的发布周期,
并感谢所有发布团队成员投入时间和精力为 Kubernetes 社区提供 v1.24 版本。
@ -395,14 +376,13 @@ release for the Kubernetes community.
The theme for Kubernetes 1.24 is _Stargazer_.
-->
### 发布主题和徽标
### 发布主题和徽标
**Kubernetes 1.24: 观星者**
{{< figure src="/images/blog/2022-05-03-kubernetes-release-1.24/kubernetes-1.24.png" alt="" class="release-logo" >}}
Kubernetes 1.24 的主题是 is _观星者_.
Kubernetes 1.24 的主题是**观星者Stargazer**。
<!--
Generations of people have looked to the stars in awe and wonder, from ancient astronomers to the
@ -413,12 +393,12 @@ With this release we gaze upwards, to what is possible when our community comes
is the work of hundreds of contributors across the globe and thousands of end-users supporting
applications that serve millions. Every one is a star in our sky, helping us chart our course.
-->
从古代天文学家到建造 James Webb 太空望远镜的科学家,几代人都怀着敬畏和惊奇的心情仰望星空。
星星启发了我们,点燃了我们的想象力,并引导我们在艰难的海上度过了漫长的夜晚。
通过此版本我们向上凝视当我们的社区聚集在一起时可能发生的事情。Kubernetes 是全球数百名贡献者和数千名最终用户支持的成果
为数百万服务的应用程序。每个人都是我们天空中的一颗星星,帮助我们规划路线。
通过此版本,我们仰望星空,思考当我们的社区齐心协力时所能实现的一切。
Kubernetes 是全球数百名贡献者工作的成果,并得到数千名最终用户的支持,
他们所支撑的应用程序为数百万人提供服务。每个人都是我们天空中的一颗星星,帮助我们规划路线。
<!--
The release logo is made by [Britnee Laverack](https://www.instagram.com/artsyfie/), and depicts a telescope set upon starry skies and the
[Pleiades](https://en.wikipedia.org/wiki/Pleiades), often known in mythology as the “Seven Sisters”. The number seven is especially auspicious
@ -427,7 +407,6 @@ for the Kubernetes project, and is a reference back to our original “Project S
This release of Kubernetes is named for those that would look towards the night sky and wonder — for
all the stargazers out there. ✨
-->
发布标志由 [Britnee Laverack](https://www.instagram.com/artsyfie/) 制作,
描绘了一架仰望星空与[昴星团](https://en.wikipedia.org/wiki/Pleiades)的望远镜;昴星团在神话中通常被称为“七姐妹”。
数字 7 对于 Kubernetes 项目特别吉祥,是对我们最初的“项目七”名称的引用。
@ -444,23 +423,23 @@ all the stargazers out there. ✨
* Using Kubernetes, the Dutch organization [Stichting Open Nederland](http://www.stichtingopennederland.nl/) created a testing portal in just one-and-a-half months to help safely reopen events in the Netherlands. The [Testing for Entry (Testen voor Toegang)](https://www.testenvoortoegang.org/) platform [leveraged the performance and scalability of Kubernetes to help individuals book over 400,000 COVID-19 testing appointments per day. ](https://www.cncf.io/case-studies/true/)
* Working alongside SparkFabrik and utilizing Backstage, [Santagostino created the developer platform Samaritan to centralize services and documentation, manage the entire lifecycle of services, and simplify the work of Santagostino developers](https://www.cncf.io/case-studies/santagostino/).
-->
### 用户亮点
* 了解领先的零售电子商务公司 [La Redoute 如何使用 Kubernetes 以及其他 CNCF 项目来转变和简化](https://www.cncf.io/case-studies/la-redoute/)
其从开发到运营的软件交付生命周期。
* 了解领先的零售电子商务公司
[La Redoute 如何使用 Kubernetes 以及其他 CNCF 项目来转变和简化](https://www.cncf.io/case-studies/la-redoute/)
其从开发到运营的软件交付生命周期。
* 为了确保对 API 调用的更改不会导致任何中断,[Salt Security 完全在 Kubernetes 上构建了它的微服务,
它通过 gRPC 进行通信,而 Linkerd 确保消息是加密的](https://www.cncf.io/case-studies/salt-security/)。
它通过 gRPC 进行通信,而 Linkerd 确保消息是加密的](https://www.cncf.io/case-studies/salt-security/)。
* 为了从私有云迁移到公共云,[Allianz Direct 工程师在短短三个月内重新设计了其 CI/CD 管道,
同时设法将 200 个工作流压缩到 10-15 个](https://www.cncf.io/case-studies/allianz/)。
同时设法将 200 个工作流压缩到 10-15 个](https://www.cncf.io/case-studies/allianz/)。
* 看看[英国金融科技公司 Bink 是如何用 Linkerd 更新其内部的 Kubernetes 分布,以建立一个云端的平台,
根据需要进行扩展,同时允许他们密切关注性能和稳定性](https://www.cncf.io/case-studies/bink/)。
* 利用Kubernetes荷兰组织 [Stichting Open Nederland](http://www.stichtingopennederland.nl/)
在短短一个半月内创建了一个测试门户网站,以帮助安全地重新开放荷兰的活动。[入门测试 (Testen voor Toegang)](https://www.testenvoortoegang.org/)
平台[利用 Kubernetes 的性能和可扩展性来帮助个人每天预订超过 400,000 个 COVID-19 测试预约。](https://www.cncf.io/case-studies/true/)
根据需要进行扩展,同时允许他们密切关注性能和稳定性](https://www.cncf.io/case-studies/bink/)。
* 利用Kubernetes荷兰组织 [Stichting Open Nederland](http://www.stichtingopennederland.nl/)
在短短一个半月内创建了一个测试门户网站,以帮助安全地重新开放荷兰的活动。
[入门测试 (Testen voor Toegang)](https://www.testenvoortoegang.org/)
平台[利用 Kubernetes 的性能和可扩展性来帮助个人每天预订超过 400,000 个 COVID-19 测试预约](https://www.cncf.io/case-studies/true/)。
* 与 SparkFabrik 合作并利用 Backstage[Santagostino 创建了开发人员平台 Samaritan 来集中服务和文档,
管理服务的整个生命周期,并简化 Santagostino 开发人员的工作](https://www.cncf.io/case-studies/santagostino/)。
管理服务的整个生命周期,并简化 Santagostino 开发人员的工作](https://www.cncf.io/case-studies/santagostino/)。
<!--
### Ecosystem Updates
@ -469,18 +448,15 @@ all the stargazers out there. ✨
* In the [2021 Cloud Native Survey](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/), the CNCF saw record Kubernetes and container adoption. Take a look at the [results of the survey](https://www.cncf.io/reports/cncf-annual-survey-2021/).
* The [Linux Foundation](https://www.linuxfoundation.org/) and [The Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) announced the availability of a new [Cloud Native Developer Bootcamp](https://training.linuxfoundation.org/training/cloudnativedev-bootcamp/?utm_source=lftraining&utm_medium=pr&utm_campaign=clouddevbc0322) to provide participants with the knowledge and skills to design, build, and deploy cloud native applications. Check out the [announcement](https://www.cncf.io/announcements/2022/03/15/new-cloud-native-developer-bootcamp-provides-a-clear-path-to-cloud-native-careers/) to learn more.
-->
### 生态系统更新
* KubeCon + CloudNativeCon Europe 2022 将于 2022 年 5 月 16 日至 20 日在西班牙巴伦西亚举行!
你可以在 [活动网站](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/)
上找到有关会议和注册的更多信息。
* 在 [2021 Cloud Native Survey](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/)
CNCF 看到了创纪录的 Kubernetes 和容器采用。看看[调查结果](https://www.cncf.io/reports/cncf-annual-survey-2021/)。
你可以在[活动网站](https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/)上找到有关会议和注册的更多信息。
* 在 [2021 年云原生调查](https://www.cncf.io/announcements/2022/02/10/cncf-sees-record-kubernetes-and-container-adoption-in-2021-cloud-native-survey/)
CNCF 看到了创纪录的 Kubernetes 和容器采用。参阅[调查结果](https://www.cncf.io/reports/cncf-annual-survey-2021/)。
* [Linux 基金会](https://www.linuxfoundation.org/)和[云原生计算基金会](https://www.cncf.io/) (CNCF)
宣布推出新的 [云原生开发者训练营](https://training.linuxfoundation.org/training/cloudnativedev-bootcamp/?utm_source=lftraining&utm_medium=pr&utm_campaign=clouddevbc0322)
为参与者提供设计、构建和部署云原生应用程序的知识和技能。查看[公告](https://www.cncf.io/announcements/2022/03/15/new-cloud-native-developer-bootcamp-provides-a-clear-path-to-cloud-native-careers/)以了解更多信息。
宣布推出新的 [云原生开发者训练营](https://training.linuxfoundation.org/training/cloudnativedev-bootcamp/?utm_source=lftraining&utm_medium=pr&utm_campaign=clouddevbc0322)
为参与者提供设计、构建和部署云原生应用程序的知识和技能。查看[公告](https://www.cncf.io/announcements/2022/03/15/new-cloud-native-developer-bootcamp-provides-a-clear-path-to-cloud-native-careers/)以了解更多信息。
<!--
### Project Velocity
@ -492,8 +468,6 @@ are contributing, and is an illustration of the depth and breadth of effort that
In the v1.24 release cycle, which [ran for 17 weeks](https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24) (January 10 to May 3), we saw contributions from [1029 companies](https://k8s.devstats.cncf.io/d/9/companies-table?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions) and [1179 individuals](https://k8s.devstats.cncf.io/d/66/developer-activity-counts-by-companies?orgId=1&var-period_name=v1.23.0%20-%20v1.24.0&var-metric=contributions&var-repogroup_name=Kubernetes&var-country_name=All&var-companies=All&var-repo_name=kubernetes%2Fkubernetes).
-->
### 项目速度
[CNCF K8s DevStats](https://k8s.devstats.cncf.io/d/12/dashboards?orgId=1&refresh=15m) 项目
@ -512,19 +486,17 @@ the major features of this release, as well as deprecations and removals to help
For more information and registration, visit the [event page](https://community.cncf.io/e/mck3kd/)
on the CNCF Online Programs site.
-->
## 即将发布的网络研讨会
在太平洋时间 2022 年 5 月 24 日星期二上午 9:45 至上午 11 点加入 Kubernetes 1.24 发布团队的成员,
了解此版本的主要功能以及弃用和删除,以帮助规划升级。有关更多信息和注册,请访问 CNCF 在线计划网站上的
[活动页面](https://community.cncf.io/e/mck3kd/)。
了解此版本的主要功能以及弃用和删除,以帮助规划升级。有关更多信息和注册,
请访问 CNCF 在线计划网站上的[活动页面](https://community.cncf.io/e/mck3kd/)。
<!--
## Get Involved
The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests.
Have something youd like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below:
The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://git.k8s.io/community/sig-list.md) (SIGs) that align with your interests.
Have something youd like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://git.k8s.io/community/communication), and through the channels below:
* Find out more about contributing to Kubernetes at the [Kubernetes Contributors](https://www.kubernetes.dev/) website
* Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates
@ -533,14 +505,13 @@ Have something youd like to broadcast to the Kubernetes community? Share your
* Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes).
* Share your Kubernetes [story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
* Read more about whats happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
* Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)
* Learn more about the [Kubernetes Release Team](https://git.k8s.io/sig-release/release-team)
-->
## 参与进来
参与 Kubernetes 的最简单方法是加入符合你兴趣的众多 [特别兴趣组](https://github.com/kubernetes/community/blob/master/sig-list.md)(SIG) 之一。
你有什么想向 Kubernetes 社区广播的内容吗?在我们的每周的[社区会议](https://github.com/kubernetes/community/tree/master/communication)
上分享你的声音,并通过以下渠道:
参与 Kubernetes 的最简单方法是加入符合你兴趣的众多 [特别兴趣组](https://git.k8s.io/community/sig-list.md)(SIG) 之一。
你有什么想向 Kubernetes 社区广播的内容吗?
在我们的每周的[社区会议](https://git.k8s.io/community/communication)上分享你的声音,并通过以下渠道:
* 在 [Kubernetes Contributors](https://www.kubernetes.dev/) 网站上了解有关为 Kubernetes 做出贡献的更多信息
* 在 Twitter 上关注我们 [@Kubernetesio](https://twitter.com/kubernetesio) 以获取最新更新
@ -548,5 +519,5 @@ Have something youd like to broadcast to the Kubernetes community? Share your
* 加入 [Slack](http://slack.k8s.io/) 社区
* 在 [Server Fault](https://serverfault.com/questions/tagged/kubernetes) 上发布问题(或回答问题)。
* 分享你的 Kubernetes [故事](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
* 在[博客](https://kubernetes.io/blog/)上阅读有关 Kubernetes 正在发生的事情的更多信息
* 详细了解 [Kubernetes 发布团队](https://github.com/kubernetes/sig-release/tree/master/release-team)
* 在[博客](/zh-cn/blog/)上阅读有关 Kubernetes 正在发生的事情的更多信息
* 详细了解 [Kubernetes 发布团队](https://git.k8s.io/sig-release/release-team)

View File

@ -14,7 +14,7 @@ slug: annual-report-summary-2021
<!--
**Author:** Paris Pittman (Steering Committee)
-->
**作者:**Paris Pittman指导委员会
**作者:** Paris Pittman指导委员会
<!--
Last year, we published our first [Annual Report Summary](/blog/2021/06/28/announcing-kubernetes-community-group-annual-reports/) for 2020 and it's already time for our second edition!

View File

@ -1170,14 +1170,13 @@ see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
<!--
* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.
* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
* Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
-->
* 进一步了解节点[组件](/zh-cn/docs/concepts/overview/components/#node-components)。
* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。
* 阅读架构设计文档中有关
[Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
[Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
的章节。
* 了解[污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。

View File

@ -474,7 +474,7 @@ Weave Net 可以作为 [CNI 插件](https://www.weave.works/docs/net/latest/cni-
<!--
The early design of the networking model and its rationale, and some future
plans are described in more detail in the
[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
[networking design document](https://git.k8s.io/design-proposals-archive/network/networking.md).
-->
网络模型的早期设计、运行原理以及未来的一些计划,
都在[联网设计文档](https://git.k8s.io/community/contributors/design-proposals/network/networking.md)里有更详细的描述。
都在[联网设计文档](https://git.k8s.io/design-proposals-archive/network/networking.md)里有更详细的描述。

View File

@ -13,9 +13,9 @@ Every Kubernetes object also has a [_UID_](#uids) that is unique across your who
For example, you can only have one Pod named `myapp-1234` within the same [namespace](/docs/concepts/overview/working-with-objects/namespaces/), but you can have one Pod and one Deployment that are each named `myapp-1234`.
-->
集群中的每一个对象都有一个[_名称_](#names)来标识在同类资源中的唯一性。
集群中的每一个对象都有一个[**名称**](#names)来标识在同类资源中的唯一性。
每个 Kubernetes 对象也有一个 [_UID_](#uids) 来标识在整个集群中的唯一性。
每个 Kubernetes 对象也有一个 [**UID**](#uids) 来标识在整个集群中的唯一性。
比如,在同一个[名字空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/)
中只能有一个名为 `myapp-1234` 的 Pod,但是可以有一个 Pod 和一个 Deployment 都命名为 `myapp-1234`。
@ -171,9 +171,9 @@ UUIDs 是标准化的,见 ISO/IEC 9834-8 和 ITU-T X.667。
<!--
* Read about [labels](/docs/concepts/overview/working-with-objects/labels/) in Kubernetes.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md) design document.
* See the [Identifiers and Names in Kubernetes](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md) design document.
-->
* 进一步了解 Kubernetes [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)
* 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/community/contributors/design-proposals/architecture/identifiers.md)的设计文档
* 参阅 [Kubernetes 标识符和名称](https://git.k8s.io/design-proposals-archive/architecture/identifiers.md)的设计文档

View File

@ -28,7 +28,7 @@ A _LimitRange_ provides constraints that can:
- Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
-->
一个 _LimitRange限制范围_ 对象提供的限制能够做到:
一个 **LimitRange限制范围** 对象提供的限制能够做到:
- 在一个命名空间中实施对每个 Pod 或 Container 最小和最大的资源使用量的限制。
- 在一个命名空间中实施对每个 PersistentVolumeClaim 能申请的最小和最大的存储空间大小的限制。
@ -40,13 +40,14 @@ A _LimitRange_ provides constraints that can:
LimitRange support has been enabled by default since Kubernetes 1.10.
LimitRange support is enabled by default for many Kubernetes distributions.
A LimitRange is enforced in a particular namespace when there is a
LimitRange object in that namespace.
-->
## 启用 LimitRange
对 LimitRange 的支持自 Kubernetes 1.10 版本默认启用。
LimitRange 支持在很多 Kubernetes 发行版本中也是默认启用的
当某命名空间中有一个 LimitRange 对象时,将在该命名空间中实施 LimitRange 限制
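As a minimal sketch of such an object (the namespace and values below are illustrative, not from this page), a LimitRange that injects default CPU requests and limits into containers might look like:

```yaml
# Hypothetical example: default and maximum CPU settings for containers in one namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults        # example name
  namespace: example        # example namespace
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 250m             # injected when a container sets no CPU request
    default:
      cpu: 500m             # injected when a container sets no CPU limit
    max:
      cpu: "1"              # upper bound enforced for any container in this namespace
```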
<!--
The name of a LimitRange object must be a valid
@ -58,7 +59,7 @@ LimitRange 的名称必须是合法的
<!--
### Overview of Limit Range
- The administrator creates one `LimitRange` in one namespace.
- The administrator creates one LimitRange in one namespace.
- Users create resources like Pods, Containers, and PersistentVolumeClaims in the namespace.
- The `LimitRanger` admission controller enforces defaults and limits for all Pods and Containers that do not set compute resource requirements and tracks usage to ensure it does not exceed resource minimum, maximum and ratio defined in any LimitRange present in the namespace.
- If creating or updating a resource (Pod, Container, PersistentVolumeClaim) that violates a LimitRange constraint, the request to the API server will fail with an HTTP status code `403 FORBIDDEN` and a message explaining the constraint that have been violated.
@ -106,21 +107,21 @@ Neither contention nor changes to a LimitRange will affect already created resou
## {{% heading "whatsnext" %}}
<!--
See [LimitRanger design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
Refer to the [LimitRanger design document](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md) for more information.
-->
参阅 [LimitRanger 设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md)获取更多信息。
参阅 [LimitRanger 设计文档](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_limit_range.md)获取更多信息。
<!--
For examples on using limits, see:
- See [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- See [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- See [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- See [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- Check [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- See a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/).
- [how to configure minimum and maximum CPU constraints per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
- [how to configure minimum and maximum Memory constraints per namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
- [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).
-->
关于使用限值的例子,可参
关于使用限值的例子,可参阅:
- [如何配置每个命名空间最小和最大的 CPU 约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)。
- [如何配置每个命名空间最小和最大的内存约束](/zh-cn/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)。

View File

@ -52,14 +52,15 @@ Resource quotas work like this:
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
-->
- 不同的团队可以在不同的命名空间下工作。这可以通过 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
- 不同的团队可以在不同的命名空间下工作。这可以通过
[RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) 强制执行。
- 集群管理员可以为每个命名空间创建一个或多个 ResourceQuota 对象。
- 当用户在命名空间下创建资源(如 Pod、Service 等Kubernetes 的配额系统会
跟踪集群的资源使用情况,以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
- 当用户在命名空间下创建资源(如 Pod、Service 等Kubernetes 的配额系统会跟踪集群的资源使用情况,
以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。
- 如果资源创建或者更新请求违反了配额约束那么该请求会报错HTTP 403 FORBIDDEN
并在消息中给出有可能违反的约束。
- 如果命名空间下的计算资源 (如 `cpu``memory`)的配额被启用,则用户必须为
这些资源设定请求值request和约束值limit否则配额系统将拒绝 Pod 的创建。
- 如果命名空间下的计算资源 (如 `cpu``memory`)的配额被启用,
则用户必须为这些资源设定请求值request和约束值limit否则配额系统将拒绝 Pod 的创建。
提示: 可使用 `LimitRanger` 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。
若想避免这类问题,请参考
@ -161,7 +162,7 @@ The following resource types are supported:
### Resource Quota For Extended Resources
In addition to the resources mentioned above, in release 1.10, quota support for
[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) is added.
[extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) is added.
-->
### 扩展资源的资源配额
@ -316,12 +317,9 @@ Job 而导致集群拒绝服务。
<!--
It is possible to do generic object count quota on a limited set of resources.
In addition, it is possible to further constrain quota for particular resources by their type.
The following types are supported:
-->
在有限的一组资源上实施一般性的对象数量配额也是可能的。
此外,还可以进一步按资源的类型设置其配额。
支持以下类型:
@ -466,10 +464,10 @@ one value. For example:
```
<!--
If the `operator` is `Exists` or `DoesNotExist`, the `values field must *NOT* be
If the `operator` is `Exists` or `DoesNotExist`, the `values` field must *NOT* be
specified.
-->
如果 `operator``Exists``DoesNotExist`,则*不*可以设置 `values` 字段。
如果 `operator``Exists``DoesNotExist`,则****可以设置 `values` 字段。
<!--
### Resource Quota Per PriorityClass
@ -495,8 +493,8 @@ A quota is matched and consumed only if `scopeSelector` in the quota spec select
When quota is scoped for priority class using `scopeSelector` field, quota object
is restricted to track only following resources:
-->
如果配额对象通过 `scopeSelector` 字段设置其作用域为优先级类,则配额对象只能
跟踪以下资源:
如果配额对象通过 `scopeSelector` 字段设置其作用域为优先级类,
则配额对象只能跟踪以下资源:
* `pods`
* `cpu`
@ -713,27 +711,27 @@ Operators can use `CrossNamespacePodAffinity` quota scope to limit which namespa
have pods with affinity terms that cross namespaces. Specifically, it controls which pods are allowed
to set `namespaces` or `namespaceSelector` fields in pod affinity terms.
-->
集群运维人员可以使用 `CrossNamespacePodAffinity` 配额作用域来
限制哪个名字空间中可以存在包含跨名字空间亲和性规则的 Pod。
更为具体一点,此作用域用来配置哪些 Pod 可以在其 Pod 亲和性规则
中设置 `namespaces``namespaceSelector` 字段。
集群运维人员可以使用 `CrossNamespacePodAffinity`
配额作用域来限制哪个名字空间中可以存在包含跨名字空间亲和性规则的 Pod。
更为具体一点,此作用域用来配置哪些 Pod 可以在其 Pod 亲和性规则中设置
`namespaces``namespaceSelector` 字段。
<!--
Preventing users from using cross-namespace affinity terms might be desired since a pod
with anti-affinity constraints can block pods from all other namespaces
from getting scheduled in a failure domain.
-->
禁止用户使用跨名字空间的亲和性规则可能是一种被需要的能力,因为带有
反亲和性约束的 Pod 可能会阻止所有其他名字空间的 Pod 被调度到某失效域中。
禁止用户使用跨名字空间的亲和性规则可能是一种被需要的能力,
因为带有反亲和性约束的 Pod 可能会阻止所有其他名字空间的 Pod 被调度到某失效域中。
<!--
Using this scope operators can prevent certain namespaces (`foo-ns` in the example below)
from having pods that use cross-namespace pod affinity by creating a resource quota object in
that namespace with `CrossNamespaceAffinity` scope and hard limit of 0:
-->
使用此作用域操作符可以避免某些名字空间(例如下面例子中的 `foo-ns`)运行
特别的 Pod这类 Pod 使用跨名字空间的 Pod 亲和性约束,在该名字空间中创建
了作用域为 `CrossNamespaceAffinity` 的、硬性约束为 0 的资源配额对象。
使用此作用域操作符可以避免某些名字空间(例如下面例子中的 `foo-ns`)运行特别的 Pod
这类 Pod 使用跨名字空间的 Pod 亲和性约束,在该名字空间中创建了作用域为
`CrossNamespaceAffinity` 的、硬性约束为 0 的资源配额对象。
```yaml
apiVersion: v1
@ -752,12 +750,12 @@ spec:
<!--
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
only allow it for specific namespaces, they could configure `CrossNamespaceAffinity`
as a limited resource by setting the kube-apiserver flag -admission-control-config-file
as a limited resource by setting the kube-apiserver flag --admission-control-config-file
to the path of the following configuration file:
-->
如果集群运维人员希望默认禁止使用 `namespaces``namespaceSelector`
仅仅允许在特定名字空间中这样做,他们可以将 `CrossNamespaceAffinity` 作为一个
被约束的资源。方法是为 `kube-apiserver` 设置标志
如果集群运维人员希望默认禁止使用 `namespaces``namespaceSelector`
仅仅允许在特定名字空间中这样做,他们可以将 `CrossNamespaceAffinity`
作为一个被约束的资源。方法是为 `kube-apiserver` 设置标志
`--admission-control-config-file`,使之指向如下的配置文件:
```yaml
@ -779,8 +777,8 @@ With the above configuration, pods can use `namespaces` and `namespaceSelector`
if the namespace where they are created have a resource quota object with
`CrossNamespaceAffinity` scope and a hard limit greater than or equal to the number of pods using those fields.
-->
基于上面的配置,只有名字空间中包含作用域为 `CrossNamespaceAffinity`
硬性约束大于或等于使用 `namespaces``namespaceSelector` 字段的 Pods
基于上面的配置,只有名字空间中包含作用域为 `CrossNamespaceAffinity`
硬性约束大于或等于使用 `namespaces``namespaceSelector` 字段的 Pod
个数时,才可以在该名字空间中继续创建在其 Pod 亲和性规则中设置 `namespaces`
`namespaceSelector` 的新 Pod。
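As an illustrative sketch (the labels, names, and image are hypothetical), a Pod that would count against such a quota because its affinity term reaches into other namespaces might look like:

```yaml
# Hypothetical example: a Pod whose affinity term crosses namespaces via namespaceSelector,
# so it is tracked by a CrossNamespaceAffinity-scoped ResourceQuota.
apiVersion: v1
kind: Pod
metadata:
  name: cross-ns-affinity-demo        # example name
  namespace: foo-ns                   # namespace used in the surrounding example
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache                # example label on the target Pods
        namespaceSelector:            # selecting namespaces makes the term cross-namespace
          matchLabels:
            team: storage
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
```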
@ -987,18 +985,18 @@ should be allowed in a namespace, if and only if, a matching quota object exists
(例如 "cluster-services")的 Pod。
<!--
With this mechanism, operators will be able to restrict usage of certain high
With this mechanism, operators are able to restrict usage of certain high
priority classes to a limited number of namespaces and not every namespace
will be able to consume these priority classes by default.
-->
通过这种机制,操作人员能够限制某些高优先级类仅出现在有限数量的命名空间中,
通过这种机制,操作人员能够限制某些高优先级类仅出现在有限数量的命名空间中,
而并非每个命名空间默认情况下都能够使用这些优先级类。
<!--
To enforce this, kube-apiserver flag `-admission-control-config-file` should be
To enforce this, `kube-apiserver` flag `--admission-control-config-file` should be
used to pass path to the following configuration file:
-->
要实现此目的,应设置 kube-apiserver 的标志 `--admission-control-config-file`
要实现此目的,应设置 `kube-apiserver` 的标志 `--admission-control-config-file`
指向如下配置文件:
```yaml
@ -1057,14 +1055,13 @@ and it is to be created in a namespace other than `kube-system`.
## {{% heading "whatsnext" %}}
<!--
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
- See [ResourceQuota design doc](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md) for more information.
- See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
- Read [Quota support for priority class design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md).
- Read [Quota support for priority class design doc](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md).
- See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
-->
- 查看[资源配额设计文档](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md)
- 查看[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。
- 阅读[优先级类配额支持的设计文档](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/pod-priority-resourcequota.md)。
了解更多信息。
- 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)
- 参阅[资源配额设计文档](https://git.k8s.io/design-proposals-archive/resource-management/admission_control_resource_quota.md)。
- 参阅[如何使用资源配额的详细示例](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)。
- 参阅[优先级类配额支持的设计文档](https://git.k8s.io/design-proposals-archive/scheduling/pod-priority-resourcequota.md)了解更多信息。
- 参阅 [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765)。

View File

@ -18,13 +18,13 @@ weight: 20
<!--
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on particular set of
{{< glossary_tooltip text="Node(s)" term_id="node" >}}.
There are several ways to do this, and the recommended approaches all use
{{< glossary_tooltip text="node(s)" term_id="node" >}}.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
(e.g. spread your pods across nodes so as not place the pod on a node with insufficient free resources, etc.)
(for example, spreading your Pods across nodes so as not place Pods on a node with insufficient free resources).
However, there are some circumstances where you may want to control which node
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate pods from two different
the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
services that communicate a lot into the same availability zone.
-->
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}}
@ -172,6 +172,19 @@ define. Some of the benefits of affinity and anti-affinity include:
* 你可以使用节点上(或其他拓扑域中)运行的其他 Pod 的标签来实施调度约束,
而不是只能使用节点本身的标签。这个能力让你能够定义规则允许哪些 Pod 可以被放置在一起。
<!--
The affinity feature consists of two types of affinity:
* *Node affinity* functions like the `nodeSelector` field but is more expressive and
allows you to specify soft rules.
* *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
on other Pods.
-->
亲和性功能由两种类型的亲和性组成:
* **节点亲和性**功能类似于 `nodeSelector` 字段,但它的表达能力更强,并且允许你指定软规则。
* **Pod 间亲和性/反亲和性**允许你根据其他 Pod 的标签来约束 Pod。
<!--
### Node affinity
@ -222,15 +235,16 @@ For example, consider the following Pod spec:
<!--
In this example, the following rules apply:
* The node *must* have a label with the key `kubernetes.io/os` and
the value `linux`.
* The node *must* have a label with the key `topology.kubernetes.io/zone` and
the value of that label *must* be either `antarctica-east1` or `antarctica-west1`.
* The node *preferably* has a label with the key `another-node-label-key` and
the value `another-node-label-value`.
-->
在这一示例中,所应用的规则如下:
* 节点必须包含键名为 `kubernetes.io/os` 的标签,并且其取值为 `linux`
* 节点 **最好** 具有键名为 `another-node-label-key` 且取值为
* 节点**必须**包含一个键名为 `topology.kubernetes.io/zone` 的标签,
并且该标签的取值**必须**为 `antarctica-east1``antarctica-west1`
* 节点**最好**具有一个键名为 `another-node-label-key` 且取值为
`another-node-label-value` 的标签。
<!--
@ -269,7 +283,7 @@ satisfied.
<!--
If you specify multiple `matchExpressions` associated with a single `nodeSelectorTerms`,
then the Pod can be scheduled onto a node only if all the `matchExpressions` are
satisfied.
satisfied.
-->
如果你指定了多个与同一 `nodeSelectorTerms` 关联的 `matchExpressions`
则只有当所有 `matchExpressions` 都满足时 Pod 才可以被调度到节点上。
@ -341,8 +355,8 @@ must have existing nodes with the `kubernetes.io/os=linux` label.
<!--
When configuring multiple [scheduling profiles](/docs/reference/scheduling/config/#multiple-profiles), you can associate
a profile with a Node affinity, which is useful if a profile only applies to a specific set of nodes.
To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes.
To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
in the [scheduler configuration](/docs/reference/scheduling/config/). For example:
-->
在配置多个[调度方案](/zh-cn/docs/reference/scheduling/config/#multiple-profiles)时,
@ -410,7 +424,7 @@ Inter-pod affinity and anti-affinity allow you to constrain which nodes your
Pods can be scheduled on based on the labels of **Pods** already running on that
node, instead of the node labels.
-->
### pod 间亲和性与反亲和性 {#inter-pod-affinity-and-anti-affinity}
### Pod 间亲和性与反亲和性 {#inter-pod-affinity-and-anti-affinity}
Pod 间亲和性与反亲和性使你可以基于已经在节点上运行的 **Pod** 的标签来约束
Pod 可以调度到的节点,而不是基于节点上的标签。
@ -552,9 +566,9 @@ same zone currently running Pods with the `Security=S2` Pod label.
<!--
To get yourself more familiar with the examples of Pod affinity and anti-affinity,
refer to the [design proposal](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/podaffinity.md).
refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
-->
查阅[设计文档](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/podaffinity.md)
查阅[设计文档](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md)
以进一步熟悉 Pod 亲和性与反亲和性的示例。
<!--
@ -571,8 +585,7 @@ exceptions for performance and security reasons:
有一些限制:
<!--
* For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both
`requiredDuringSchedulingIgnoredDuringExecution`
* For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
and `preferredDuringSchedulingIgnoredDuringExecution`.
* For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules,
the admission controller `LimitPodHardAntiAffinityTopology` limits
@ -634,6 +647,14 @@ Pod 间亲和性与反亲和性在与更高级别的集合(例如 ReplicaSet
Deployment 等)一起使用时,它们可能更加有用。
这些规则使得你可以配置一组工作负载,使其位于相同定义拓扑(例如,节点)中。
<!--
Take, for example, a three-node cluster running a web application with an
in-memory cache like redis. You could use inter-pod affinity and anti-affinity
to co-locate the web servers with the cache as much as possible.
-->
以一个三节点的集群为例,该集群运行一个带有 Redis 这种内存缓存的 Web 应用程序。
你可以使用 Pod 间亲和性和反亲和性来尽可能地将 Web 服务器与缓存并置。
<!--
In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
@ -803,16 +824,16 @@ The above Pod will only run on the node `kube-01`.
<!--
* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) .
* Read the design docs for [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md).
* Read the design docs for [node affinity](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
and for [inter-pod affinity/anti-affinity](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
* Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level
resource allocation decisions.
* Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/).
* Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
-->
* 进一步阅读[污点与容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)文档。
* 阅读[节点亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
和[Pod 间亲和性与反亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
* 阅读[节点亲和性](https://git.k8s.io/design-proposals-archive/scheduling/nodeaffinity.md)
和[Pod 间亲和性与反亲和性](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md)
的设计文档。
* 了解[拓扑管理器](/zh-cn/docs/tasks/administer-cluster/topology-manager/)如何参与节点层面资源分配决定。
* 了解如何使用 [nodeSelector](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/)。

View File

@ -129,7 +129,7 @@ memory is reclaimable under pressure.
`memory.available` 的值来自 cgroupfs而不是像 `free -m` 这样的工具。
这很重要,因为 `free -m` 在容器中不起作用,如果用户使用
[节点可分配资源](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
这一功能特性,资源不足的判定是基于 CGroup 层次结构中的用户 Pod 所处的局部及 CGroup 根节点作出的。
这一功能特性,资源不足的判定是基于 cgroup 层次结构中的用户 Pod 所处的局部及 cgroup 根节点作出的。
这个[脚本](/zh-cn/examples/admin/resource/memory-available.sh)
重现了 kubelet 为计算 `memory.available` 而执行的相同步骤。
kubelet 在其计算中排除了 inactive_file(即非活动 LRU 列表上由文件承载的内存的字节数),
@ -154,15 +154,26 @@ kubelet 支持以下文件系统分区:
kubelet 会自动发现这些文件系统并忽略其他文件系统。kubelet 不支持其他配置。
{{<note>}}
<!--
Some kubelet garbage collection features are deprecated in favor of eviction:
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context |
-->
一些 kubelet 垃圾收集功能已被弃用,以鼓励使用驱逐机制。

| 现有标志 | 新的标志 | 原因 |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` 或 `--eviction-soft` | 现有的驱逐信号可以触发镜像垃圾收集 |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | 驱逐回收具有相同的行为 |
| `--maximum-dead-containers` | - | 一旦旧的日志存储在容器的上下文之外就会被弃用 |
| `--maximum-dead-containers-per-container` | - | 一旦旧的日志存储在容器的上下文之外就会被弃用 |
| `--minimum-container-ttl-duration` | - | 一旦旧的日志存储在容器的上下文之外就会被弃用 |
{{</note>}}
<!--
### Eviction thresholds
@ -247,7 +258,7 @@ You can use the following flags to configure soft eviction thresholds:
如果驱逐条件持续时长超过指定的宽限期,可以触发 Pod 驱逐。
* `eviction-soft-grace-period`:一组驱逐宽限期,
  如 `memory.available=1m30s`,定义软驱逐条件在触发 Pod 驱逐之前必须保持多长时间。
* `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。
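下面是一个简化的 KubeletConfiguration 草案,演示用配置文件字段表达与上述标志等价的软驱逐设置;阈值与时长取值仅作示意:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionSoft:                  # 软驱逐条件,对应 eviction-soft
  memory.available: "1.5Gi"
evictionSoftGracePeriod:       # 条件需要持续多久才触发驱逐,对应 eviction-soft-grace-period
  memory.available: "1m30s"
evictionMaxPodGracePeriod: 60  # 终止 Pod 时允许的最大宽限期(秒),对应 eviction-max-pod-grace-period
```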
<!--
#### Hard eviction thresholds {#hard-eviction-thresholds}
@ -320,7 +331,7 @@ kubelet 根据下表将驱逐信号映射为节点状况:
| 节点条件 | 驱逐信号 | 描述 |
|---------|--------|------|
| `MemoryPressure` | `memory.available` | 节点上的可用内存已满足驱逐条件 |
| `DiskPressure` | `nodefs.available`、`nodefs.inodesFree`、`imagefs.available` 或 `imagefs.inodesFree` | 节点的根文件系统或镜像文件系统上的可用磁盘空间和 inode 已满足驱逐条件 |
| `PIDPressure` | `pid.available` | (Linux) 节点上的可用进程标识符已低于驱逐条件 |
kubelet 根据配置的 `--node-status-update-frequency` 更新节点条件,默认为 `10s`
@ -472,7 +483,7 @@ requests.
The kubelet sorts pods differently based on whether the node has a dedicated
`imagefs` filesystem:
-->
当 kubelet 因 inode 或 PID 不足而驱逐 Pod 时,
它使用优先级来确定驱逐顺序,因为 inode 和 PID 没有请求。
kubelet 根据节点是否具有专用的 `imagefs` 文件系统对 Pod 进行不同的排序:
@ -648,7 +659,7 @@ Consider the following scenario:
* 节点内存容量:`10Gi`
* 操作员希望为系统守护进程(内核、`kubelet` 等)保留 10% 的内存容量
* 操作员希望在内存利用率达到 95% 时驱逐 Pod以减少系统 OOM 的概率。
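在这个场景下,一种可能的配置草案如下:把 500Mi即 10Gi 的 5%,对应 95% 的利用率)作为硬驱逐阈值,并把这 500Mi 连同为守护进程保留的 1Gi 一起计入 `systemReserved`;数值由上述比例推算,仅作示意:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  memory: "1.5Gi"              # 守护进程的 1Gi 加上 500Mi 驱逐阈值
evictionHard:
  memory.available: "500Mi"    # 可用内存低于 500Mi即利用率达到 95%)时触发硬驱逐
```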
<!--
For this to work, the kubelet is launched as follows:
@ -757,7 +768,7 @@ You can work around that behavior by setting the memory limit and memory request
the same for containers likely to perform intensive I/O activity. You will need
to estimate or measure an optimal memory limit value for that container.
-->
更多细节请参见 [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
你可以通过为可能执行 I/O 密集型活动的容器设置相同的内存限制和内存请求来应对该行为。
你将需要估计或测量该容器的最佳内存限制值。
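例如,可以像下面这样把容器的内存请求与内存限制设为相同的值(容器名、镜像与数值仅作示意):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: io-intensive-demo       # 示例名称,仅作说明
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        memory: "1Gi"           # 请求与限制取相同值
      limits:
        memory: "1Gi"
```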
@ -765,14 +776,14 @@ to estimate or measure an optimal memory limit value for that container.
## {{% heading "whatsnext" %}}
<!--
* Learn about [API-initiated Eviction](/docs/concepts/scheduling-eviction/api-eviction/)
* Learn about [Pod Priority and Preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* Learn about [PodDisruptionBudgets](/docs/tasks/run-application/configure-pdb/)
* Learn about [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/) (QoS)
* Check out the [Eviction API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)
-->
* 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)
* 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* 了解 [PodDisruptionBudgets](/zh-cn/docs/tasks/run-application/configure-pdb/)
* 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)QoS
* 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core)


@ -540,11 +540,11 @@ It mounts a directory and writes the requested data in plain text files.
这种卷类型挂载一个目录并在纯文本文件中写入所请求的数据。
<!--
A container using the downward API as a [`subPath`](#using-subpath) volume mount does not
receive updates when field values change.
-->
{{< note >}}
容器以 [subPath](#using-subpath) 卷挂载方式使用 downward API 时,在字段值更改时将不能接收到它的更新。
{{< /note >}}
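下面是一个简化的示例草案,演示这里所说的以 `subPath` 方式挂载 downward API 卷的情形;名称与路径仅作示意,此时标签更新后文件内容不会被刷新:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-subpath-demo   # 示例名称,仅作说明
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo/labels
      subPath: labels           # 以 subPath 方式挂载,字段值变化后不会收到更新
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```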
<!--


@ -158,6 +158,14 @@ PUT | update
PATCH | patch
DELETE | delete针对单个资源、deletecollection针对集合
{{< caution >}}
<!--
The `get`, `list` and `watch` verbs can all return the full details of a resource. In terms of the returned data they are equivalent. For example, `list` on `secrets` will still reveal the `data` attributes of any returned resources.
-->
`get`、`list` 和 `watch` 动作都可以返回一个资源的完整详细信息。就返回的数据而言,它们是等价的。
例如,对 `secrets` 使用 `list` 仍然会显示所有已返回资源的 `data` 属性。
{{< /caution >}}
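例如,下面这个示意性的 Role 只授予了 `list` 动作,但凭借它读取到的 Secret 对象仍然包含 `data` 字段;名称与命名空间仅作示意:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-lister           # 示例名称,仅作说明
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["list"]               # 仅授予 list返回的对象仍含 data 字段
```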
<!--
Kubernetes sometimes checks authorization for additional permissions using specialized verbs. For example:


@ -4,13 +4,12 @@ api_metadata:
import: "k8s.io/api/networking/v1"
kind: "IngressClass"
content_type: "api_reference"
description: "IngressClass 代表 Ingress 的类, 被 Ingress 的规约引用。"
description: "IngressClass 代表 Ingress 的类被 Ingress 的规约引用。"
title: "IngressClass"
weight: 5
---
<!--
---
api_metadata:
apiVersion: "networking.k8s.io/v1"
import: "k8s.io/api/networking/v1"
@ -20,7 +19,6 @@ description: "IngressClass represents the class of the Ingress, referenced by th
title: "IngressClass"
weight: 5
auto_generated: true
---
-->
`apiVersion: networking.k8s.io/v1`
@ -34,8 +32,9 @@ IngressClass represents the class of the Ingress, referenced by the Ingress Spec
-->
## IngressClass {#IngressClass}
IngressClass 代表 Ingress 的类,被 Ingress 的规约引用。
`ingressclass.kubernetes.io/is-default-class`
注解可以用来标明一个 IngressClass 应该被视为默认的 Ingress 类。
当某个 IngressClass 资源将此注解设置为 true 时,
没有指定类的新 Ingress 资源将被分配到此默认类。
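下面是一个简化的 IngressClass 示例草案,演示如何通过该注解把某个类标记为默认;名称与 `controller` 取值仅作示意:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-default-class                             # 示例名称,仅作说明
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # 标记为默认 Ingress 类
spec:
  controller: example.com/ingress-controller              # 控制器名称仅作示意
```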
@ -50,22 +49,23 @@ IngressClass 代表 Ingress 的类, 被 Ingress 的规约引用。
<!--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
-->
标准的对象元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- **spec** (<a href="{{< ref "../service-resources/ingress-class-v1#IngressClassSpec" >}}">IngressClassSpec</a>)
<!--
Spec is the desired state of the IngressClass. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
-->
spec 是 IngressClass 的期望状态。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
## IngressClassSpec {#IngressClassSpec}
<!--
IngressClassSpec provides information about the class of an Ingress.
-->
IngressClassSpec 提供有关 Ingress 类的信息。
<hr>
@ -120,14 +120,16 @@ IngressClassSpec 提供有关 Ingress 类的信息。
apiGroup 是被引用资源的组。
如果未指定 apiGroup则被指定的 kind 必须在核心 API 组中。
对于任何其他第三方类型,apiGroup 是必需的。
- **parameters.namespace** (string)
<!--
Namespace is the namespace of the resource being referenced. This field is required when scope is set to "Namespace" and must be unset when scope is set to "Cluster".
-->
namespace 是被引用资源的命名空间。
当范围被设置为 “Namespace” 时,此字段是必需的;
当范围被设置为 “Cluster” 时,此字段必须不设置。
- **parameters.scope** (string)
<!--
@ -164,7 +166,7 @@ IngressClassList 是 IngressClasses 的集合。
-->
- **items** ([]<a href="{{< ref "../service-resources/ingress-class-v1#IngressClass" >}}">IngressClass</a>),必需
items 是 IngressClasses 的列表
<!--
## Operations {#Operations}


@ -7,7 +7,6 @@ content_type: "api_reference"
description: "ControllerRevision 实现了状态数据的不可变快照。"
title: "ControllerRevision"
weight: 7
auto_generated: false
---
<!--
@ -22,7 +21,6 @@ weight: 7
auto_generated: true
-->
`apiVersion: apps/v1`
`import "k8s.io/api/apps/v1"`
@ -71,8 +69,8 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
-->
- **metadata** (<a href="{{< ref "../common-definitions/object-meta#ObjectMeta" >}}">ObjectMeta</a>)
标准的对象元数据。更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
<!--
- **revision** (int64), required
@ -97,7 +95,7 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
*RawExtension is used to hold extensions in external versions.
-->
<a name="RawExtension"></a>
**RawExtension 用于以外部版本来保存扩展数据。**
<!--
To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types.
@ -112,7 +110,9 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
AOption string `json:"aOption"`
}
-->
内部包:
```go
type MyAPIObject struct {
runtime.TypeMeta `json:",inline"`
MyPlugin runtime.Object `json:"myPlugin"`
@ -120,6 +120,7 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
type PluginA struct {
AOption string `json:"aOption"`
}
```
<!--
// External package: type MyAPIObject struct {
@ -129,7 +130,9 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
AOption string `json:"aOption"`
}
-->
外部包:
```go
type MyAPIObject struct {
runtime.TypeMeta `json:",inline"`
MyPlugin runtime.RawExtension `json:"myPlugin"`
@ -137,6 +140,7 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
type PluginA struct {
AOption string `json:"aOption"`
}
```
<!--
// On the wire, the JSON will look something like this: {
@ -148,7 +152,8 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
},
}
-->
在网络上JSON 看起来像这样:
```json
{
"kind":"MyAPIObject",
"apiVersion":"v1",
@ -157,6 +162,7 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
"aOption":"foo",
},
}
```
<!--
So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.)*
@ -167,7 +173,7 @@ API 服务器将无法成功验证所有尝试改变 data 字段的请求。
下一步是复制(使用 pkg/conversion到内部结构中。
runtime 包的 DefaultScheme 安装了转换函数,它将解析存储在 RawExtension 中的 JSON
将其转换为正确的对象类型,并将其存储在 Object 中。
TODO如果对象是未知类型将创建并存储一个 `runtime.Unknown` 对象。)
<!--
## ControllerRevisionList {#ControllerRevisionList}
@ -199,7 +205,8 @@ ControllerRevisionList 是一个包含 ControllerRevision 对象列表的资源
-->
- **metadata** (<a href="{{< ref "../common-definitions/list-meta#ListMeta" >}}">ListMeta</a>)
更多信息:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
<!--
- **items** ([]<a href="{{< ref "../workload-resources/controller-revision-v1#ControllerRevision" >}}">ControllerRevision</a>), required


@ -152,12 +152,12 @@ Here's a summary of each level:
## API 组
<!--
[API groups](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md)
make it easier to extend the Kubernetes API.
The API group is specified in a REST path and in the `apiVersion` field of a
serialized object.
-->
[API 组](https://git.k8s.io/design-proposals-archive/api-machinery/api-group.md)
能够简化对 Kubernetes API 的扩展。
API 组信息出现在 REST 路径中,也出现在序列化对象的 `apiVersion` 字段中。
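例如,下面这个示意性的 Job 清单中,`apiVersion` 同时给出了 API 组(`batch`)和版本(`v1`),对应的 REST 路径是 `/apis/batch/v1`;名称与镜像仅作示意:

```yaml
apiVersion: batch/v1            # API 组为 batch版本为 v1
kind: Job
metadata:
  name: demo-job                # 示例名称,仅作说明
spec:
  template:
    spec:
      containers:
      - name: demo
        image: busybox:1.36
        command: ["echo", "hello"]
      restartPolicy: Never
```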


@ -462,10 +462,10 @@ PersistentVolume are not present on the Pod resource itself.
<!--
* Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/).
* Read the [Persistent Storage design document](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md).
-->
* 进一步了解 [PersistentVolumes](/zh-cn/docs/concepts/storage/persistent-volumes/)
* 阅读[持久存储设计文档](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md)
<!--
### Reference


@ -721,7 +721,7 @@ Pod 的安全上下文适用于 Pod 中的容器,也适用于 Pod 所挂载的
<!--
* `fsGroup`: Volumes that support ownership management are modified to be owned
and writable by the GID specified in `fsGroup`. See the
[Ownership Management design document](https://git.k8s.io/design-proposals-archive/storage/volume-ownership-management.md)
for more details.
* `seLinuxOptions`: Volumes that support SELinux labeling are relabeled to be accessible
@ -732,7 +732,7 @@ Pod 的安全上下文适用于 Pod 中的容器,也适用于 Pod 所挂载的
-->
* `fsGroup`:支持属主管理的卷会被修改,将其属主变更为 `fsGroup` 所指定的 GID
并且对该 GID 可写。进一步的细节可参阅
[属主变更设计文档](https://git.k8s.io/design-proposals-archive/storage/volume-ownership-management.md)。
* `seLinuxOptions`:支持 SELinux 标签的卷会被重新打标签,以便可被 `seLinuxOptions`
下所设置的标签访问。通常你只需要设置 `level` 部分。
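下面是一个同时设置 `fsGroup` 与 `seLinuxOptions` 的简化 Pod 示例草案;名称、GID 与 SELinux 级别取值仅作示意:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-volume-demo   # 示例名称,仅作说明
spec:
  securityContext:
    fsGroup: 2000                       # 卷的属主 GID 会被改为 2000并对其可写
    seLinuxOptions:
      level: "s0:c123,c456"             # 通常只需要设置 level 部分
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
```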
@ -771,11 +771,11 @@ kubectl delete pod security-context-demo-4
* [PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core)
* [SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
* [Tuning Docker with the newest security enhancements](https://github.com/containerd/containerd/blob/main/docs/cri/config.md)
* [Security Contexts design document](https://git.k8s.io/design-proposals-archive/auth/security_context.md)
* [Ownership Management design document](https://git.k8s.io/design-proposals-archive/storage/volume-ownership-management.md)
* [Pod Security Policies](/docs/concepts/security/pod-security-policy/)
* [AllowPrivilegeEscalation design
document](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md)
document](https://git.k8s.io/design-proposals-archive/auth/no-new-privs.md)
* For more information about security mechanisms in Linux, see
[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features)
-->


@ -147,14 +147,12 @@ releases may also occur in between these.
<!--
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| August 2022 | 2022-08-12 | 2022-08-17 |
| September 2022 | 2022-09-09 | 2022-09-14 |
| October 2022 | 2022-10-07 | 2022-10-12 |
-->
| 月度补丁发布 | Cherry Pick 截止日期 | 目标日期 |
| ------------- | -------------------- | ----------- |
| 2022 年 8 月 | 2022-08-12 | 2022-08-17 |
| 2022 年 9 月 | 2022-09-09 | 2022-09-14 |
| 2022 年 10 月 | 2022-10-07 | 2022-10-12 |
@ -164,12 +162,13 @@ releases may also occur in between these.
### 1.24
Next patch release is **1.24.4**
End of Life for **1.24** is **2023-07-28**
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|---------------|----------------------|-------------|------|
| 1.24.4 | 2022-08-12 | 2022-08-17 | |
| 1.24.3 | 2022-07-08 | 2022-07-13 | |
| 1.24.2 | 2022-06-10 | 2022-06-15 | |
| 1.24.1 | 2022-05-20 | 2022-05-24 | |
@ -178,12 +177,13 @@ End of Life for **1.24** is **2023-09-29**
### 1.24
下一个补丁版本是 **1.24.4**
**1.24** 的生命周期结束时间为 **2023-07-28**
| 补丁发布 | Cherry Pick 截止日期 | 目标日期 | 说明 |
|----------|----------------------|------------|------|
| 1.24.4 | 2022-08-12 | 2022-08-17 | |
| 1.24.3 | 2022-07-08 | 2022-07-13 | |
| 1.24.2 | 2022-06-10 | 2022-06-15 | |
| 1.24.1 | 2022-05-20 | 2022-05-24 | |
@ -191,10 +191,15 @@ End of Life for **1.24** is **2023-09-29**
### 1.23
<!--
Next patch release is **1.23.10**
**1.23** enters maintenance mode on **2022-12-28**.
End of Life for **1.23** is **2023-02-28**.
-->
下一个补丁版本是 **1.23.10**
**1.23** 于 **2022-12-28** 进入维护模式。
**1.23** 的生命周期结束时间为 **2023-02-28**
@ -203,9 +208,23 @@ End of Life for **1.23** is **2023-02-28**.
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
| 1.23.10 | 2022-08-12 | 2022-08-17 | |
| 1.23.9 | 2022-07-08 | 2022-07-13 | |
| 1.23.8 | 2022-06-10 | 2022-06-15 | |
| 1.23.7 | 2022-05-20 | 2022-05-24 | |
| 1.23.6 | 2022-04-08 | 2022-04-13 | |
| 1.23.5 | 2022-03-11 | 2022-03-16 | |
| 1.23.4 | 2022-02-11 | 2022-02-16 | |
| 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) |
| 1.23.2 | 2022-01-14 | 2022-01-19 | |
| 1.23.1 | 2021-12-14 | 2021-12-16 | |
-->
| 补丁发布 | Cherry Pick 截止日期 | 目标日期 | 说明 |
|---------------|----------------------|-------------|------|
| 1.23.10 | 2022-08-12 | 2022-08-17 | |
| 1.23.9 | 2022-07-08 | 2022-07-13 | |
| 1.23.8 | 2022-06-10 | 2022-06-15 | |
| 1.23.7 | 2022-05-20 | 2022-05-24 | |
@ -219,19 +238,39 @@ End of Life for **1.23** is **2023-02-28**.
### 1.22
<!--
Next patch release is **1.22.13**
**1.22** enters maintenance mode on **2022-08-28**
End of Life for **1.22** is **2022-10-28**
-->
下一个补丁版本是 **1.22.13**
**1.22** 于 **2022-08-28** 进入维护模式
**1.22** 的生命周期结束时间为 **2022-10-28**
<!--
| PATCH RELEASE | CHERRY PICK DEADLINE | TARGET DATE | NOTE |
|---------------|----------------------|-------------|------|
| 1.22.13 | 2022-08-12 | 2022-08-17 | |
| 1.22.12 | 2022-07-08 | 2022-07-13 | |
| 1.22.11 | 2022-06-10 | 2022-06-15 | |
| 1.22.10 | 2022-05-20 | 2022-05-24 | |
| 1.22.9 | 2022-04-08 | 2022-04-13 | |
| 1.22.8 | 2022-03-11 | 2022-03-16 | |
| 1.22.7 | 2022-02-11 | 2022-02-16 | |
| 1.22.6 | 2022-01-14 | 2022-01-19 | |
| 1.22.5 | 2021-12-10 | 2021-12-15 | |
| 1.22.4 | 2021-11-12 | 2021-11-17 | |
| 1.22.3 | 2021-10-22 | 2021-10-27 | |
| 1.22.2 | 2021-09-10 | 2021-09-15 | |
| 1.22.1 | 2021-08-16 | 2021-08-19 | |
-->
| 补丁发布 | Cherry Pick 截止日期 | 目标日期 | 说明 |
|---------------|----------------------|-------------|------|
| 1.22.13 | 2022-08-12 | 2022-08-17 | |
| 1.22.12 | 2022-07-08 | 2022-07-13 | |
| 1.22.11 | 2022-06-10 | 2022-06-15 | |
| 1.22.10 | 2022-05-20 | 2022-05-24 | |