[zh] resync blog 2015-04-00 and 2023-02-06

pull/40345/head
Zhuzhenghao 2023-03-27 23:50:35 +08:00
parent 0acbfbd810
commit 3b503fc968
3 changed files with 68 additions and 36 deletions

View File

@@ -19,7 +19,6 @@ Google has been running containerized workloads in production for more than a de
batch frameworks like [MapReduce](http://research.google.com/archive/mapreduce.html) and [Millwheel](http://research.google.com/pubs/pub41378.html),
virtually everything at Google runs as a container. Today, we took the wraps off of Borg, Google's long-rumored internal container-oriented cluster management system, and published the details at the academic computer systems conference [Eurosys](http://eurosys2015.labri.fr/). You can find the paper [here](https://research.google.com/pubs/pub43438.html).
<!--
Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We've incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.
-->
@@ -33,9 +32,9 @@ To give you a flavor, here are four Kubernetes features that came from our exper
The following four Kubernetes features grew out of our experience with Borg:
<!--
1) [Pods](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). A pod is the unit of scheduling in Kubernetes. It is a resource envelope in which one or more containers run. Containers that are part of the same pod are guaranteed to be scheduled together onto the same machine, and can share state via local volumes.
1) [Pods](/docs/concepts/workloads/pods/). A pod is the unit of scheduling in Kubernetes. It is a resource envelope in which one or more containers run. Containers that are part of the same pod are guaranteed to be scheduled together onto the same machine, and can share state via local volumes.
-->
1) [Pods](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md).
1) [Pods](/zh-cn/docs/concepts/workloads/pods/).
A pod is the unit of scheduling in Kubernetes.
It is a resource envelope in which one or more containers run.
Containers that are part of the same pod are guaranteed to be scheduled together onto the same machine, and can share state via local volumes.
@@ -54,12 +53,10 @@ Pods not only support these use cases, but they also provide an environment simi
-->
Pods not only support these use cases, but they also provide an environment similar to running multiple processes in a single VM -- Kubernetes users can deploy multiple co-located, cooperating processes in a pod without having to give up the one-application-per-container deployment model.
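As a hedged sketch of the pod model described above (the names, images, and paths here are made up for illustration and are not from the original post), two co-located containers can share state through a local `emptyDir` volume:

```yaml
# Hypothetical example: two cooperating containers in one pod
# sharing state via a local emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-data           # local volume visible to both containers
      emptyDir: {}
  containers:
    - name: web                   # serves files produced by the sidecar
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-generator     # co-located cooperating process
      image: busybox
      command: ["sh", "-c", "echo hello > /data/index.html; sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers are in one pod, they are scheduled onto the same machine and see the same volume contents, which is exactly the "resource envelope" guarantee described above.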
<!--
2) [Services](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md). Although Borg's primary role is to manage the lifecycles of tasks and machines, the applications that run on Borg benefit from many other cluster services, including naming and load balancing. Kubernetes supports naming and load balancing using the service abstraction: a service has a name and maps to a dynamic set of pods defined by a label selector (see next section). Any container in the cluster can connect to the service using the service name.
2) [Services](/docs/concepts/services-networking/service/). Although Borg's primary role is to manage the lifecycles of tasks and machines, the applications that run on Borg benefit from many other cluster services, including naming and load balancing. Kubernetes supports naming and load balancing using the service abstraction: a service has a name and maps to a dynamic set of pods defined by a label selector (see next section). Any container in the cluster can connect to the service using the service name.
-->
2) [Services](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md).
2) [Services](/zh-cn/docs/concepts/services-networking/service/).
Although Borg's primary role is to manage the lifecycles of tasks and machines, the applications that run on Borg benefit from many other cluster services, including naming and load balancing.
Kubernetes supports naming and load balancing using the service abstraction: a service has a name and maps to a dynamic set of pods defined by a label selector (see the next section).
Any container in the cluster can connect to the service using the service name.
@@ -68,30 +65,24 @@ Under the covers, Kubernetes automatically load-balances connections to the serv
-->
Under the covers, Kubernetes automatically load-balances connections to the service among the pods that match the label selector, and keeps track of where the pods are running as they get rescheduled over time due to failures.
<!--
3) [Labels](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/labels.md).
A container in Borg is usually one replica in a collection of identical or nearly identical containers that correspond to one tier of an Internet service (e.g. the front-ends for Google Maps) or to the workers of a batch job (e.g. a MapReduce). The collection is called a Job, and each replica is called a Task. While the Job is a very useful abstraction, it can be limiting. For example, users often want to manage their entire service (composed of many Jobs) as a single entity, or to uniformly manage several related instances of their service, for example separate canary and stable release tracks.
At the other end of the spectrum, users frequently want to reason about and control subsets of tasks within a Job -- the most common example is during rolling updates, when different subsets of the Job need to have different configurations.
3) [Labels](/docs/concepts/overview/working-with-objects/labels/). A container in Borg is usually one replica in a collection of identical or nearly identical containers that correspond to one tier of an Internet service (e.g. the front-ends for Google Maps) or to the workers of a batch job (e.g. a MapReduce). The collection is called a Job, and each replica is called a Task. While the Job is a very useful abstraction, it can be limiting. For example, users often want to manage their entire service (composed of many Jobs) as a single entity, or to uniformly manage several related instances of their service, for example separate canary and stable release tracks. At the other end of the spectrum, users frequently want to reason about and control subsets of tasks within a Job -- the most common example is during rolling updates, when different subsets of the Job need to have different configurations.
-->
3) [Labels](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/labels.md).
3) [Labels](/zh-cn/docs/concepts/overview/working-with-objects/labels/).
A container in Borg is usually one replica in a collection of identical or nearly identical containers that correspond to one tier of an Internet service (e.g. the front-ends for Google Maps) or to the workers of a batch job (e.g. a MapReduce).
The collection is called a Job, and each replica is called a Task.
While the Job is a very useful abstraction, it can be limiting.
For example, users often want to manage their entire service (composed of many Jobs) as a single entity, or to uniformly manage several related instances of their service, for example separate canary and stable release tracks.
At the other end of the spectrum, users frequently want to reason about and control subsets of tasks within a Job -- the most common example is during rolling updates, when different subsets of the Job need to have different configurations.
<!--
Kubernetes supports more flexible collections than Borg by organizing pods using labels, which are arbitrary key/value pairs that users attach to pods (and in fact to any object in the system). Users can create groupings equivalent to Borg Jobs by using a “job:\<jobname\>” label on their pods, but they can also use additional labels to tag the service name, service instance (production, staging, test), and in general, any subset of their pods. A label query (called a “label selector”) is used to select which set of pods an operation should be applied to. Taken together, labels and [replication controllers](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md) allow for very flexible update semantics, as well as for operations that span the equivalent of Borg Jobs.
Kubernetes supports more flexible collections than Borg by organizing pods using labels, which are arbitrary key/value pairs that users attach to pods (and in fact to any object in the system). Users can create groupings equivalent to Borg Jobs by using a “job:\<jobname\>” label on their pods, but they can also use additional labels to tag the service name, service instance (production, staging, test), and in general, any subset of their pods. A label query (called a “label selector”) is used to select which set of pods an operation should be applied to. Taken together, labels and [replication controllers](/docs/concepts/workloads/controllers/replicationcontroller/) allow for very flexible update semantics, as well as for operations that span the equivalent of Borg Jobs.
-->
Kubernetes supports more flexible collections than Borg by organizing pods using labels, which are arbitrary key/value pairs that users attach to pods (and in fact to any object in the system).
Users can create groupings equivalent to Borg Jobs by using a "job:\<jobname\>" label on their pods, but they can also use additional labels to tag the service name, service instance (production, staging, test), and in general, any subset of their pods.
A label query (called a "label selector") is used to select which set of pods an operation should be applied to.
Taken together, labels and [replication controllers](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md) allow for very flexible update semantics, as well as for operations that span the equivalent of Borg Jobs.
Taken together, labels and [replication controllers](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/) allow for very flexible update semantics, as well as for operations that span the equivalent of Borg Jobs.
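The grouping-by-label idea above can be sketched with assumed data (the pod names and labels below are made up, and plain `grep` filtering only approximates real label-selector semantics):

```shell
# Hypothetical pod inventory, not a real cluster: each line is "pod-name  labels".
pods='frontend-1  job=frontend,track=stable
frontend-2  job=frontend,track=canary
worker-1    job=mapreduce,track=stable'

# Group pods the way a Borg Job would (analogous to `kubectl get pods -l job=frontend`):
printf '%s\n' "$pods" | grep 'job=frontend'

# Narrow to a subset for a rolling update (analogous to `-l job=frontend,track=canary`):
printf '%s\n' "$pods" | grep 'job=frontend' | grep 'track=canary'
```

The second query is the rolling-update case from the text: the same labels that define the Job-equivalent group also carve out an arbitrary subset of it.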
<!--
4) IP-per-Pod. In Borg, all tasks on a machine use the IP address of that host, and thus share the host's port space. While this means Borg can use a vanilla network, it imposes a number of burdens on infrastructure and application developers: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need, and take as start-up arguments which ports to use; the Borglet (node agent) must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.
@@ -99,7 +90,6 @@ Kubernetes supports more flexible collections than Borg by organizing pods using
4) IP-per-Pod. In Borg, all tasks on a machine use the IP address of that host, and thus share the host's port space.
While this means Borg can use a vanilla network, it imposes a number of burdens on infrastructure and application developers: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need, and take as start-up arguments which ports to use; the Borglet (node agent) must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.
<!--
Thanks to the advent of software-defined overlay networks such as [flannel](https://coreos.com/blog/introducing-rudder/) or those built into [public clouds](https://cloud.google.com/compute/docs/networking), Kubernetes is able to give every pod and service its own IP address. This removes the infrastructure complexity of managing ports, and allows developers to choose any ports they want rather than requiring their software to adapt to the ones chosen by the infrastructure. The latter point is crucial for making it easy to run off-the-shelf open-source applications on Kubernetes--pods can be treated much like VMs or physical hosts, with access to the full port space, oblivious to the fact that they may be sharing the same physical machine with other pods.
-->
@@ -107,11 +97,8 @@ Thanks to the advent of software-defined overlay networks such as [flannel](http
This removes the infrastructure complexity of managing ports, and allows developers to choose any ports they want rather than requiring their software to adapt to the ones chosen by the infrastructure.
The latter point is crucial for making it easy to run off-the-shelf open-source applications on Kubernetes -- pods can be treated much like VMs or physical hosts, with access to the full port space, oblivious to the fact that they may be sharing the same physical machine with other pods.
<!--
With the growing popularity of container-based microservice architectures, the lessons Google has learned from running such systems internally have become of increasing interest to the external DevOps community. By revealing some of the inner workings of our cluster manager Borg, and building our next-generation cluster manager as both an open-source project (Kubernetes) and a publicly available hosted service ([Google Container Engine](http://cloud.google.com/container-engine)), we hope these lessons can benefit the broader community outside of Google and advance the state-of-the-art in container scheduling and cluster management.
-->
With the growing popularity of container-based microservice architectures, the lessons Google has learned from running such systems internally have become of increasing interest to the external DevOps community.
By revealing some of the inner workings of our cluster manager Borg, and building our next-generation cluster manager as both an open-source project (Kubernetes) and a publicly available hosted service ([Google Container Engine](http://cloud.google.com/container-engine)), we hope these lessons can benefit the broader community outside of Google and advance the state-of-the-art in container scheduling and cluster management.

View File

@@ -13,6 +13,21 @@ title: "PodSecurityPolicy Deprecation: Past, Present, and Future"
-->
Author: Tabitha Sable (Kubernetes SIG Security)
{{% pageinfo color="primary" %}}
<!--
**Update:** *With the release of Kubernetes v1.25, PodSecurityPolicy has been removed.*
-->
**Update:** *With the release of Kubernetes v1.25, PodSecurityPolicy has been removed.*
<!--
*You can read more information about the removal of PodSecurityPolicy in the
[Kubernetes 1.25 release notes](/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes).*
-->
*You can read more information about the removal of PodSecurityPolicy in the
[Kubernetes 1.25 release notes](/zh-cn/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes).*
{{% /pageinfo %}}
<!--
PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week.
This starts the countdown to its removal, but doesn't change anything else.

View File

@@ -19,9 +19,16 @@ slug: k8s-gcr-io-freeze-announcement
**Translator**: Michael Yao (Daocloud)
<!--
The Kubernetes project runs a community-owned image registry called `registry.k8s.io` to host its container images. On the 3rd of April 2023, the old registry `k8s.gcr.io` will be frozen and no further images for Kubernetes and related subprojects will be pushed to the old registry.
The Kubernetes project runs a community-owned image registry called `registry.k8s.io`
to host its container images. On the 3rd of April 2023, the old registry `k8s.gcr.io`
will be frozen and no further images for Kubernetes and related subprojects will be
pushed to the old registry.
This registry `registry.k8s.io` replaced the old one and has been generally available for several months. We have published a [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/) about its benefits to the community and the Kubernetes project. This post also announced that future versions of Kubernetes will not be available in the old registry. Now that time has come.
This registry `registry.k8s.io` replaced the old one and has been generally available
for several months. We have published a [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/)
about its benefits to the community and the Kubernetes project. This post also
announced that future versions of Kubernetes will not be available in the old
registry. Now that time has come.
-->
The Kubernetes project runs a community-owned image registry called `registry.k8s.io` to host its container images.
On the 3rd of April 2023, the old registry `k8s.gcr.io` will be frozen and no further images for Kubernetes and related subprojects will be pushed to the old registry.
@@ -32,7 +39,9 @@ benefits brought to the Kubernetes project. This blog post also announced that future versions of Kuber
<!--
What does this change mean for contributors:
- If you are a maintainer of a subproject, you will need to update your manifests and Helm charts to use the new registry.
- If you are a maintainer of a subproject, you will need to update your manifests
and Helm charts to use the new registry.
-->
What this change means for contributors:
@@ -40,10 +49,18 @@ What does this change mean for contributors:
<!--
What does this change mean for end users:
- 1.27 Kubernetes release will not be published to the old registry.
- Patch releases for 1.24, 1.25, and 1.26 will no longer be published to the old registry from April. Please read the timelines below for details of the final patch releases in the old registry.
- Starting in 1.25, the default image registry has been set to `registry.k8s.io`. This value is overridable in `kubeadm` and `kubelet` but setting it to `k8s.gcr.io` will fail for new releases after April as they won't be present in the old registry.
- If you want to increase the reliability of your cluster and remove dependency on the community-owned registry or you are running Kubernetes in networks where external traffic is restricted, you should consider hosting local image registry mirrors. Some cloud vendors may offer hosted solutions for this.
- Patch releases for 1.24, 1.25, and 1.26 will no longer be published to the old
registry from April. Please read the timelines below for details of the final
patch releases in the old registry.
- Starting in 1.25, the default image registry has been set to `registry.k8s.io`.
This value is overridable in `kubeadm` and `kubelet` but setting it to `k8s.gcr.io`
will fail for new releases after April as they won't be present in the old registry.
- If you want to increase the reliability of your cluster and remove dependency on
the community-owned registry or you are running Kubernetes in networks where
external traffic is restricted, you should consider hosting local image registry
mirrors. Some cloud vendors may offer hosted solutions for this.
-->
What this change means for end users:
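The registry override mentioned in the bullets above can be pinned explicitly at cluster creation. A hedged sketch (field names follow kubeadm's `v1beta3` configuration API; verify against the kubeadm documentation for your release):

```yaml
# Hypothetical kubeadm ClusterConfiguration fragment pinning the new registry.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.k8s.io   # default since 1.25; setting k8s.gcr.io
                                   # will fail for releases after April
```

Passing this file via `kubeadm init --config` avoids depending on the frozen registry for new control-plane images.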
@@ -77,7 +94,8 @@ What does this change mean for end users:
<!--
## What's next
Please make sure your cluster does not have dependencies on old image registry. For example, you can run this command to list the images used by pods:
Please make sure your cluster does not have dependencies on old image registry.
For example, you can run this command to list the images used by pods:
-->
## What's next {#whats-next}
@@ -91,14 +109,19 @@ uniq -c
```
<!--
There may be other dependencies on the old image registry. Make sure you review any potential dependencies to keep your cluster healthy and up to date.
There may be other dependencies on the old image registry. Make sure you review
any potential dependencies to keep your cluster healthy and up to date.
-->
There may be other dependencies on the old image registry. Make sure you review any potential dependencies to keep your cluster healthy and up to date.
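Only the `uniq -c` tail of the command block survives in this hunk. A runnable sketch of such an image-counting pipeline, using made-up sample data in place of real `kubectl` output:

```shell
# Hypothetical sample of space-separated image references, standing in for
# the output of a `kubectl get pods` jsonpath query (not a real cluster):
images='registry.k8s.io/pause:3.9 k8s.gcr.io/pause:3.9 registry.k8s.io/pause:3.9'

# Split on whitespace, sort, and count how often each image appears;
# any remaining k8s.gcr.io entries are dependencies on the frozen registry.
echo "$images" | tr -s '[[:space:]]' '\n' | sort | uniq -c
```

Running the same pipeline over real cluster output makes lingering `k8s.gcr.io` references easy to spot in the counts.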
<!--
## Acknowledgments
__Change is hard__, and evolving our image-serving platform is needed to ensure a sustainable future for the project. We strive to make things better for everyone using Kubernetes. Many contributors from all corners of our community have been working long and hard to ensure we are making the best decisions possible, executing plans, and doing our best to communicate those plans.
__Change is hard__, and evolving our image-serving platform is needed to ensure
a sustainable future for the project. We strive to make things better for everyone
using Kubernetes. Many contributors from all corners of our community have been
working long and hard to ensure we are making the best decisions possible,
executing plans, and doing our best to communicate those plans.
-->
## Acknowledgments {#acknowledgments}
@@ -107,7 +130,14 @@ __Change is hard__, but only by evolving our image-serving platform can we ensure a sustainable future for Kubernete
ensuring that we make the best decisions possible, execute our plans, and do our best to communicate those plans.
<!--
Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen, and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle, Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, Tim Allclair from the Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson Jr. from Google.
Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine,
Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen,
and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle,
Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from
SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, Tim Allclair from
the Security Response Committee. Also a big thank you to our friends acting as
liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson
Jr. from Google.
-->
Our heartfelt thanks to: