Merge release1.15 into master (#16512)
* initial commit
* promote AWS-NLB Support from alpha to beta (#14451) (#16459) (#16484)
* 1. Sync release-1.15 into master 2. Sync with en version
* 1. Add the lost yaml file
parent
513199d5af
commit
b23e9ab024
|
@@ -0,0 +1,4 @@
|
|||
---
|
||||
title: "Kubernetes 架构"
|
||||
weight: 30
|
||||
---
|
|
@@ -1,175 +1,395 @@
|
||||
---
|
||||
title: 云控制器管理器的基础概念
|
||||
content_template: templates/concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Concepts Underlying the Cloud Controller Manager
|
||||
content_template: templates/concept
|
||||
weight: 30
|
||||
---
|
||||
-->
|
||||
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<!--
|
||||
The cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud specific vendor code and the Kubernetes core to evolve independent of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.
|
||||
-->
|
||||
|
||||
云控制器管理器(cloud controller manager,CCM)这个概念(不要与二进制文件混淆)创建的初衷是为了让特定的云服务供应商代码和 Kubernetes 核心相互独立演化。云控制器管理器与其他主要组件(如 Kubernetes 控制器管理器、API 服务器和调度程序)一起运行。它也可以作为 Kubernetes 的插件启动,在这种情况下,它会运行在 Kubernetes 之上。
|
||||
|
||||
<!--
|
||||
The cloud controller manager's design is based on a plugin mechanism that allows new cloud providers to integrate with Kubernetes easily by using plugins. There are plans in place for on-boarding new cloud providers on Kubernetes and for migrating cloud providers from the old model to the new CCM model.
|
||||
-->
|
||||
|
||||
云控制器管理器基于插件机制设计,允许新的云服务供应商通过插件轻松地与 Kubernetes 集成。目前已经有在 Kubernetes 上加入新的云服务供应商计划,并为云服务供应商提供从原先的旧模式迁移到新 CCM 模式的方案。
|
||||
|
||||
<!--
|
||||
This document discusses the concepts behind the cloud controller manager and gives details about its associated functions.
|
||||
-->
|
||||
|
||||
本文讨论了云控制器管理器背后的概念,并提供了相关功能的详细信息。
|
||||
|
||||
<!--
|
||||
Here's the architecture of a Kubernetes cluster without the cloud controller manager:
|
||||
-->
|
||||
|
||||
这是没有云控制器管理器的 Kubernetes 集群的架构:
|
||||
|
||||
<!--
|
||||
![Pre CCM Kube Arch](/images/docs/pre-ccm-arch.png)
|
||||
-->
|
||||
|
||||
![没有云控制器管理器的 Kubernetes 架构](/images/docs/pre-ccm-arch.png)
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
<!--
|
||||
## Design
|
||||
-->
|
||||
|
||||
|
||||
## 设计
|
||||
|
||||
<!--
|
||||
In the preceding diagram, Kubernetes and the cloud provider are integrated through several different components:
|
||||
-->
|
||||
|
||||
在上图中,Kubernetes 和云服务供应商通过几个不同的组件进行了集成,分别是:
|
||||
|
||||
<!--
|
||||
* Kubelet
|
||||
* Kubernetes controller manager
|
||||
* Kubernetes API server
|
||||
-->
|
||||
|
||||
* Kubelet
|
||||
* Kubernetes 控制器管理器
|
||||
* Kubernetes API 服务器
|
||||
|
||||
<!--
|
||||
The CCM consolidates all of the cloud-dependent logic from the preceding three components to create a single point of integration with the cloud. The new architecture with the CCM looks like this:
|
||||
-->
|
||||
|
||||
CCM 整合了前三个组件中的所有依赖于云的逻辑,以创建与云的单一集成点。CCM 的新架构如下所示:
|
||||
|
||||
<!--
|
||||
![CCM Kube Arch](/images/docs/post-ccm-arch.png)
|
||||
-->
|
||||
|
||||
![含有云控制器管理器的 Kubernetes 架构](/images/docs/post-ccm-arch.png)
|
||||
|
||||
<!--
|
||||
## Components of the CCM
|
||||
-->
|
||||
## CCM 的组成部分
|
||||
|
||||
<!--
|
||||
The CCM breaks away some of the functionality of Kubernetes controller manager (KCM) and runs it as a separate process. Specifically, it breaks away those controllers in the KCM that are cloud dependent. The KCM has the following cloud dependent controller loops:
|
||||
-->
|
||||
|
||||
CCM 将 Kubernetes 控制器管理器(KCM)的一部分功能拆分出来,并作为一个单独的进程运行。具体来说,它拆分出的是 KCM 中那些依赖于云的控制器。KCM 包含以下依赖于云的控制循环:
|
||||
|
||||
<!--
|
||||
* Node controller
|
||||
* Volume controller
|
||||
* Route controller
|
||||
* Service controller
|
||||
-->
|
||||
|
||||
* 节点控制器
|
||||
* 卷控制器
|
||||
* 路由控制器
|
||||
* 服务控制器
|
||||
<!--
|
||||
In version 1.9, the CCM runs the following controllers from the preceding list:
|
||||
-->
|
||||
|
||||
在 1.9 版本中,CCM 运行前述列表中的以下控制器:
|
||||
|
||||
<!--
|
||||
* Node controller
|
||||
* Route controller
|
||||
* Service controller
|
||||
-->
|
||||
|
||||
* 节点控制器
|
||||
* 路由控制器
|
||||
* 服务控制器
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Volume controller was deliberately chosen to not be a part of CCM. Due to the complexity involved and due to the existing efforts to abstract away vendor specific volume logic, it was decided that volume controller will not be moved to CCM.
|
||||
-->
|
||||
|
||||
卷控制器被有意排除在 CCM 之外。考虑到其中涉及的复杂性,而且对特定于供应商的卷逻辑进行抽象的工作仍在进行中,卷控制器没有被移入 CCM。
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The original plan to support volumes using CCM was to use Flex volumes to support pluggable volumes. However, a competing effort known as CSI is being planned to replace Flex.
|
||||
-->
|
||||
|
||||
使用 CCM 支持 volume 的最初计划是使用 Flex volume 来支持可插拔卷,但是现在正在计划一项名为 CSI 的项目以取代 Flex。
|
||||
|
||||
<!--
|
||||
Considering these dynamics, we decided to have an intermediate stop gap measure until CSI becomes ready.
|
||||
-->
|
||||
|
||||
考虑到这些正在进行中的变化,在 CSI 准备就绪之前,我们决定停止当前的工作。
|
||||
|
||||
<!--
|
||||
## Functions of the CCM
|
||||
-->
|
||||
|
||||
## CCM 的功能
|
||||
|
||||
<!--
|
||||
The CCM inherits its functions from components of Kubernetes that are dependent on a cloud provider. This section is structured based on those components.
|
||||
-->
|
||||
|
||||
CCM 的功能继承自 Kubernetes 中依赖于云提供商的组件。本节就是基于这些组件来组织的。
|
||||
|
||||
<!--
|
||||
### 1. Kubernetes controller manager
|
||||
-->
|
||||
|
||||
### 1. Kubernetes 控制器管理器
|
||||
|
||||
<!--
|
||||
The majority of the CCM's functions are derived from the KCM. As mentioned in the previous section, the CCM runs the following control loops:
|
||||
-->
|
||||
|
||||
CCM 的大多数功能都来自 KCM。如上一节所述,CCM 运行以下控制循环:
|
||||
|
||||
<!--
|
||||
* Node controller
|
||||
* Route controller
|
||||
* Service controller
|
||||
-->
|
||||
|
||||
* 节点控制器
|
||||
* 路由控制器
|
||||
* 服务控制器
|
||||
|
||||
<!--
|
||||
#### Node controller
|
||||
-->
|
||||
|
||||
#### 节点控制器
|
||||
|
||||
<!--
|
||||
The Node controller is responsible for initializing a node by obtaining information about the nodes running in the cluster from the cloud provider. The node controller performs the following functions:
|
||||
-->
|
||||
|
||||
节点控制器负责通过从云提供商获取集群中所运行节点的信息来初始化节点。节点控制器执行以下功能:
|
||||
|
||||
<!--
|
||||
1. Initialize a node with cloud specific zone/region labels.
|
||||
2. Initialize a node with cloud specific instance details, for example, type and size.
|
||||
3. Obtain the node's network addresses and hostname.
|
||||
4. In case a node becomes unresponsive, check the cloud to see if the node has been deleted from the cloud.
|
||||
If the node has been deleted from the cloud, delete the Kubernetes Node object.
|
||||
-->
|
||||
|
||||
1. 使用特定于云的域(zone)/区(region)标签初始化节点;
|
||||
2. 使用特定于云的实例详细信息初始化节点,例如,类型和大小;
|
||||
3. 获取节点的网络地址和主机名;
|
||||
4. 如果节点无响应,检查云平台以确认该节点是否已从云中删除;如果已删除,则删除对应的 Kubernetes Node 对象。
|
||||
|
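下面是一个经节点控制器初始化之后的 Node 对象的简要示意,对应上述前三项功能。其中的节点名、标签取值和地址均为虚构,标签键为这一时期 Kubernetes 使用的 beta 标签:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1          # 虚构的节点名,仅作示意
  labels:
    # 1. 特定于云的域(zone)/区(region)标签
    failure-domain.beta.kubernetes.io/region: us-east-1
    failure-domain.beta.kubernetes.io/zone: us-east-1a
    # 2. 特定于云的实例详细信息(类型)
    beta.kubernetes.io/instance-type: m5.large
status:
  # 3. 节点的网络地址和主机名
  addresses:
  - type: InternalIP
    address: 10.0.0.1
  - type: Hostname
    address: node-1
```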
||||
<!--
|
||||
#### Route controller
|
||||
-->
|
||||
|
||||
#### 路由控制器
|
||||
|
||||
<!--
|
||||
The Route controller is responsible for configuring routes in the cloud appropriately so that containers on different nodes in the Kubernetes cluster can communicate with each other. The route controller is only applicable for Google Compute Engine clusters.
|
||||
-->
|
||||
|
||||
路由控制器负责恰当地配置云中的路由,以便 Kubernetes 集群中不同节点上的容器可以相互通信。路由控制器仅适用于 Google Compute Engine 集群。
|
||||
|
||||
<!--
|
||||
#### Service Controller
|
||||
-->
|
||||
|
||||
#### 服务控制器
|
||||
|
||||
<!--
|
||||
The Service controller is responsible for listening to service create, update, and delete events. Based on the current state of the services in Kubernetes, it configures cloud load balancers (such as ELB , Google LB, or Oracle Cloud Infrastructure LB) to reflect the state of the services in Kubernetes. Additionally, it ensures that service backends for cloud load balancers are up to date.
|
||||
-->
|
||||
|
||||
服务控制器负责监听服务的创建、更新和删除事件。根据 Kubernetes 中各个服务的当前状态,它配置云负载均衡器(如 ELB、Google LB 或者 Oracle Cloud Infrastructure LB)以反映 Kubernetes 中的服务状态。此外,它还确保云负载均衡器的服务后端是最新的。
|
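例如,下面这类 `type: LoadBalancer` 的 Service 会触发服务控制器在云上创建负载均衡器(示意性示例,其中的名称和标签均为虚构):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # 虚构的服务名
spec:
  type: LoadBalancer      # 由服务控制器负责在云上创建并同步负载均衡器
  selector:
    app: my-app           # 虚构的标签
  ports:
  - port: 80
    targetPort: 8080
```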
||||
|
||||
<!--
|
||||
### 2. Kubelet
|
||||
-->
|
||||
|
||||
### 2. Kubelet
|
||||
|
||||
<!--
|
||||
The Node controller contains the cloud-dependent functionality of the kubelet. Prior to the introduction of the CCM, the kubelet was responsible for initializing a node with cloud-specific details such as IP addresses, region/zone labels and instance type information. The introduction of the CCM has moved this initialization operation from the kubelet into the CCM.
|
||||
-->
|
||||
|
||||
节点控制器包含 kubelet 中依赖于云的功能。在引入 CCM 之前,kubelet 负责使用特定于云的详细信息(如 IP 地址、域/区标签和实例类型信息)初始化节点。CCM 的引入将这一初始化操作从 kubelet 转移到了 CCM 中。
|
||||
|
||||
<!--
|
||||
In this new model, the kubelet initializes a node without cloud-specific information. However, it adds a taint to the newly created node that makes the node unschedulable until the CCM initializes the node with cloud-specific information. It then removes this taint.
|
||||
-->
|
||||
|
||||
在这个新模型中,kubelet 初始化节点时不再包含特定于云的信息。但是,它会为新创建的节点添加污点,使节点不可调度,直到 CCM 使用特定于云的信息初始化节点后才清除该污点,使得节点可被调度。
|
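作为示意,新创建且尚未被 CCM 初始化的节点上的污点大致如下(污点键取自运行外部云控制器管理器的相关文档;节点名为虚构):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-1            # 虚构的节点名
spec:
  taints:
  # CCM 完成初始化之前,该污点使节点不可被调度
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```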
||||
|
||||
<!--
|
||||
## Plugin mechanism
|
||||
-->
|
||||
|
||||
## 插件机制
|
||||
|
||||
<!--
|
||||
The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the CloudProvider Interface defined [here](https://github.com/kubernetes/cloud-provider/blob/9b77dc1c384685cb732b3025ed5689dd597a5971/cloud.go#L42-L62).
|
||||
-->
|
||||
|
||||
云控制器管理器使用 Go 接口允许插入任何云的实现。具体来说,它使用[此处](https://github.com/kubernetes/cloud-provider/blob/9b77dc1c384685cb732b3025ed5689dd597a5971/cloud.go#L42-L62)定义的 CloudProvider 接口。
|
||||
|
||||
<!--
|
||||
The implementation of the four shared controllers highlighted above, and some scaffolding along with the shared cloudprovider interface, will stay in the Kubernetes core. Implementations specific to cloud providers will be built outside of the core and implement interfaces defined in the core.
|
||||
-->
|
||||
|
||||
上面强调的四个共享控制器的实现,以及一些辅助设施(scaffolding)和共享的 cloudprovider 接口,将被保留在 Kubernetes 核心中。但特定于云提供商的实现将在核心之外构建,并实现核心中定义的接口。
|
||||
|
||||
<!--
|
||||
For more information about developing plugins, see [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/).
|
||||
-->
|
||||
|
||||
有关开发插件的更多信息,请参阅[开发云控制器管理器](/docs/tasks/administer-cluster/developing-cloud-controller-manager/)。
|
||||
|
||||
<!--
|
||||
## Authorization
|
||||
-->
|
||||
|
||||
## 授权
|
||||
|
||||
<!--
|
||||
This section breaks down the access required on various API objects by the CCM to perform its operations.
|
||||
-->
|
||||
|
||||
本节分解了 CCM 执行操作时对各类 API 对象所需的访问权限。
|
||||
|
||||
<!--
|
||||
### Node Controller
|
||||
-->
|
||||
|
||||
### 节点控制器
|
||||
|
||||
<!--
|
||||
The Node controller only works with Node objects. It requires full access to get, list, create, update, patch, watch, and delete Node objects.
|
||||
-->
|
||||
|
||||
节点控制器仅操作 Node 对象。它需要完全的访问权限来获取、列举、创建、更新、修补、监视和删除 Node 对象。
|
||||
|
||||
<!--
|
||||
v1/Node:
|
||||
|
||||
- Get
|
||||
- List
|
||||
- Create
|
||||
- Update
|
||||
- Patch
|
||||
- Watch
|
||||
- Delete
|
||||
-->
|
||||
|
||||
v1/Node:
|
||||
|
||||
- Get
|
||||
- List
|
||||
- Create
|
||||
- Update
|
||||
- Patch
|
||||
- Watch
|
||||
- Delete
|
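与上述动词对应的 RBAC 规则片段大致如下(示意性片段,其中的 ClusterRole 名称为虚构;针对 CCM 的完整 ClusterRole 见下文):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ccm-node-controller-example   # 虚构的名称,仅作示意
rules:
- apiGroups: [""]                     # "" 表示核心 API 组
  resources: ["nodes"]
  verbs: ["get", "list", "create", "update", "patch", "watch", "delete"]
```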
||||
|
||||
<!--
|
||||
### Route controller
|
||||
-->
|
||||
|
||||
### 路由控制器
|
||||
|
||||
<!--
|
||||
The route controller listens to Node object creation and configures routes appropriately. It requires get access to Node objects.
|
||||
-->
|
||||
|
||||
路由控制器侦听 Node 对象的创建事件并恰当地配置路由,它需要对 Node 对象的 get 访问权限。
|
||||
|
||||
v1/Node:
|
||||
|
||||
- Get
|
||||
|
||||
<!--
|
||||
### Service controller
|
||||
-->
|
||||
|
||||
### 服务控制器
|
||||
|
||||
<!--
|
||||
The service controller listens to Service object create, update and delete events and then configures endpoints for those Services appropriately.
|
||||
-->
|
||||
|
||||
服务控制器侦听 Service 对象创建、更新和删除事件,然后适当地为这些服务配置端点。
|
||||
|
||||
<!--
|
||||
To access Services, it requires list, and watch access. To update Services, it requires patch and update access.
|
||||
-->
|
||||
|
||||
要访问服务,它需要列表和监视访问权限。要更新服务,它需要修补和更新访问权限。
|
||||
|
||||
<!--
|
||||
To set up endpoints for the Services, it requires access to create, list, get, watch, and update.
|
||||
-->
|
||||
|
||||
要为服务设置端点,需要访问 create、list、get、watch 和 update。
|
||||
|
||||
v1/Service:
|
||||
|
||||
- List
|
||||
- Get
|
||||
- Watch
|
||||
- Patch
|
||||
- Update
|
||||
|
||||
<!--
|
||||
### Others
|
||||
-->
|
||||
|
||||
### 其它
|
||||
|
||||
<!--
|
||||
The implementation of the core of CCM requires access to create events, and to ensure secure operation, it requires access to create ServiceAccounts.
|
||||
-->
|
||||
|
||||
CCM 核心的实现需要创建事件(Event)的访问权限;为了确保操作安全,还需要创建 ServiceAccount 的访问权限。
|
||||
|
||||
v1/Event:
|
||||
|
||||
- Create
|
||||
- Patch
|
||||
- Update
|
||||
|
||||
v1/ServiceAccount:
|
||||
|
||||
- Create
|
||||
|
||||
<!--
|
||||
The RBAC ClusterRole for the CCM looks like this:
|
||||
-->
|
||||
|
||||
针对 CCM 的 RBAC ClusterRole 看起来像这样:
|
||||
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
|
@@ -233,17 +453,50 @@ rules:
|
|||
- update
|
||||
```
|
||||
|
||||
<!--
|
||||
## Vendor Implementations
|
||||
-->
|
||||
|
||||
## 供应商实现
|
||||
|
||||
<!--
|
||||
The following cloud providers have implemented CCMs:
|
||||
-->
|
||||
|
||||
以下云服务提供商已实现了 CCM:
|
||||
|
||||
<!--
|
||||
* [AWS](https://github.com/kubernetes/cloud-provider-aws)
|
||||
* [Azure](https://github.com/kubernetes/cloud-provider-azure)
|
||||
* [BaiduCloud](https://github.com/baidu/cloud-provider-baiducloud)
|
||||
* [Digital Ocean](https://github.com/digitalocean/digitalocean-cloud-controller-manager)
|
||||
* [GCP](https://github.com/kubernetes/cloud-provider-gcp)
|
||||
* [Linode](https://github.com/linode/linode-cloud-controller-manager)
|
||||
* [OpenStack](https://github.com/kubernetes/cloud-provider-openstack)
|
||||
* [Oracle](https://github.com/oracle/oci-cloud-controller-manager)
|
||||
-->
|
||||
|
||||
* [AWS](https://github.com/kubernetes/cloud-provider-aws)
|
||||
* [Azure](https://github.com/kubernetes/cloud-provider-azure)
|
||||
* [BaiduCloud](https://github.com/baidu/cloud-provider-baiducloud)
|
||||
* [Digital Ocean](https://github.com/digitalocean/digitalocean-cloud-controller-manager)
|
||||
* [GCP](https://github.com/kubernetes/cloud-provider-gcp)
|
||||
* [Linode](https://github.com/linode/linode-cloud-controller-manager)
|
||||
* [OpenStack](https://github.com/kubernetes/cloud-provider-openstack)
|
||||
* [Oracle](https://github.com/oracle/oci-cloud-controller-manager)
|
||||
|
||||
<!--
|
||||
## Cluster Administration
|
||||
-->
|
||||
|
||||
## 集群管理
|
||||
|
||||
<!--
|
||||
Complete instructions for configuring and running the CCM are provided
|
||||
[here](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager).
|
||||
-->
|
||||
|
||||
[这里](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager)提供了有关配置和运行 CCM 的完整说明。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
|
@@ -0,0 +1,4 @@
|
|||
---
|
||||
title: "容器"
|
||||
weight: 50
|
||||
---
|
|
@@ -0,0 +1,217 @@
|
|||
---
|
||||
title: 容器生命周期钩子
|
||||
content_template: templates/concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<!--
|
||||
This page describes how kubelet managed Containers can use the Container lifecycle hook framework
|
||||
to run code triggered by events during their management lifecycle.
|
||||
-->
|
||||
这个页面描述了 kubelet 管理的容器如何使用容器生命周期钩子框架来运行在其管理生命周期中由事件触发的代码。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
<!--
|
||||
## Overview
|
||||
-->
|
||||
|
||||
## 概述
|
||||
|
||||
<!--
|
||||
Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular,
|
||||
Kubernetes provides Containers with lifecycle hooks.
|
||||
The hooks enable Containers to be aware of events in their management lifecycle
|
||||
and run code implemented in a handler when the corresponding lifecycle hook is executed.
|
||||
-->
|
||||
类似于许多具有组件生命周期钩子的编程语言框架(例如 Angular),Kubernetes 为容器提供了生命周期钩子。
|
||||
钩子使容器能够了解其管理生命周期中的事件,并在执行相应的生命周期钩子时运行在处理程序中实现的代码。
|
||||
|
||||
<!--
|
||||
## Container hooks
|
||||
-->
|
||||
|
||||
## 容器钩子
|
||||
|
||||
<!--
|
||||
There are two hooks that are exposed to Containers:
|
||||
-->
|
||||
暴露给容器的钩子有两个:
|
||||
|
||||
`PostStart`
|
||||
|
||||
<!--
|
||||
This hook executes immediately after a container is created.
|
||||
However, there is no guarantee that the hook will execute before the container ENTRYPOINT.
|
||||
No parameters are passed to the handler.
|
||||
-->
|
||||
这个钩子在创建容器之后立即执行。
|
||||
但是,不能保证钩子会在容器入口点之前执行。
|
||||
没有参数传递给处理程序。
|
||||
|
||||
`PreStop`
|
||||
|
||||
<!--
|
||||
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
|
||||
It is blocking, meaning it is synchronous,
|
||||
so it must complete before the call to delete the container can be sent.
|
||||
No parameters are passed to the handler.
|
||||
-->
|
||||
|
||||
当容器因 API 请求或者管理事件(诸如存活探针失败、资源抢占、资源竞争等)而被终止时,此钩子会在容器终止之前被立即调用。如果容器已经处于终止或者完成状态,则对 preStop 钩子的调用将失败。
|
||||
它是阻塞的,同时也是同步的,因此它必须在删除容器的调用之前完成。
|
||||
没有参数传递给处理程序。
|
||||
|
||||
<!--
|
||||
A more detailed description of the termination behavior can be found in
|
||||
[Termination of Pods](/docs/concepts/workloads/pods/pod/#termination-of-pods).
|
||||
-->
|
||||
有关终止行为的更详细描述,请参见[终止 Pod](/docs/concepts/workloads/pods/pod/#termination-of-pods)。
|
||||
|
||||
<!--
|
||||
### Hook handler implementations
|
||||
-->
|
||||
|
||||
### 钩子处理程序的实现
|
||||
|
||||
<!--
|
||||
Containers can access a hook by implementing and registering a handler for that hook.
|
||||
There are two types of hook handlers that can be implemented for Containers:
|
||||
-->
|
||||
容器可以通过实现和注册该钩子的处理程序来访问该钩子。
|
||||
针对容器,有两种类型的钩子处理程序可供实现:
|
||||
|
||||
<!--
|
||||
* Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container.
|
||||
Resources consumed by the command are counted against the Container.
|
||||
* HTTP - Executes an HTTP request against a specific endpoint on the Container.
|
||||
-->
|
||||
|
||||
* Exec - 在容器的 cgroups 和名字空间中执行特定的命令,例如 `pre-stop.sh`。
|
||||
命令所消耗的资源计入容器的资源消耗。
|
||||
* HTTP - 对容器上的特定端点执行 HTTP 请求。
|
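下面是一个同时注册了两种处理程序的 Pod 示意(字段结构取自 Pod API;Pod 名、镜像和端点取值均为虚构):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo          # 虚构的 Pod 名
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:                   # Exec 处理程序:在容器内执行命令
          command: ["/bin/sh", "-c", "echo PostStart > /tmp/message"]
      preStop:
        httpGet:                # HTTP 处理程序:对容器的特定端点发起请求
          path: /shutdown       # 虚构的端点
          port: 8080
```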
||||
|
||||
<!--
|
||||
### Hook handler execution
|
||||
-->
|
||||
|
||||
### 钩子处理程序执行
|
||||
|
||||
<!--
|
||||
When a Container lifecycle management hook is called,
|
||||
the Kubernetes management system executes the handler in the Container registered for that hook.
|
||||
-->
|
||||
当调用容器生命周期管理钩子时,Kubernetes 管理系统在为该钩子注册的容器中执行处理程序。
|
||||
|
||||
<!--
|
||||
Hook handler calls are synchronous within the context of the Pod containing the Container.
|
||||
This means that for a `PostStart` hook,
|
||||
the Container ENTRYPOINT and hook fire asynchronously.
|
||||
However, if the hook takes too long to run or hangs,
|
||||
the Container cannot reach a `running` state.
|
||||
-->
|
||||
钩子处理程序调用在包含容器的 Pod 上下文中是同步的。
|
||||
这意味着对于 `PostStart` 钩子,容器入口点和钩子异步触发。
|
||||
但是,如果钩子运行或挂起的时间太长,则容器无法达到 `running` 状态。
|
||||
|
||||
<!--
|
||||
The behavior is similar for a `PreStop` hook.
|
||||
If the hook hangs during execution,
|
||||
the Pod phase stays in a `Terminating` state and is killed after `terminationGracePeriodSeconds` of pod ends.
|
||||
If a `PostStart` or `PreStop` hook fails,
|
||||
it kills the Container.
|
||||
-->
|
||||
`PreStop` 钩子的行为与此类似。
|
||||
如果钩子在执行过程中挂起,Pod 的阶段将停留在 `Terminating` 状态,并在 `terminationGracePeriodSeconds` 超时之后被杀死。
|
||||
如果 `PostStart` 或 `PreStop` 钩子失败,它会杀死容器。
|
||||
|
||||
<!--
|
||||
Users should make their hook handlers as lightweight as possible.
|
||||
There are cases, however, when long running commands make sense,
|
||||
such as when saving state prior to stopping a Container.
|
||||
-->
|
||||
用户应该使他们的钩子处理程序尽可能的轻量级。
|
||||
但在某些情况下,长时间运行的命令也是有意义的,比如在停止容器之前保存状态。
|
||||
|
||||
<!--
|
||||
### Hook delivery guarantees
|
||||
-->
|
||||
|
||||
### 钩子递送保证
|
||||
|
||||
<!--
|
||||
Hook delivery is intended to be *at least once*,
|
||||
which means that a hook may be called multiple times for any given event,
|
||||
such as for `PostStart` or `PreStop`.
|
||||
It is up to the hook implementation to handle this correctly.
|
||||
-->
|
||||
钩子的递送应该是*至少一次*,这意味着对于任何给定的事件,例如 `PostStart` 或 `PreStop`,钩子可能被调用多次。
|
||||
如何正确处理,是钩子实现所要考虑的问题。
|
||||
|
||||
<!--
|
||||
Generally, only single deliveries are made.
|
||||
If, for example, an HTTP hook receiver is down and is unable to take traffic,
|
||||
there is no attempt to resend.
|
||||
In some rare cases, however, double delivery may occur.
|
||||
For instance, if a kubelet restarts in the middle of sending a hook,
|
||||
the hook might be resent after the kubelet comes back up.
|
||||
-->
|
||||
通常情况下,只会进行单次递送。
|
||||
例如,如果 HTTP 钩子接收器宕机,无法接收流量,则不会尝试重新发送。
|
||||
然而,在某些罕见情况下,也可能发生重复递送。
|
||||
例如,如果 kubelet 在发送钩子的过程中重新启动,钩子可能会在 kubelet 恢复后重新发送。
|
||||
|
||||
<!--
|
||||
### Debugging Hook handlers
|
||||
-->
|
||||
|
||||
### 调试钩子处理程序
|
||||
|
||||
<!--
|
||||
The logs for a Hook handler are not exposed in Pod events.
|
||||
If a handler fails for some reason, it broadcasts an event.
|
||||
For `PostStart`, this is the `FailedPostStartHook` event,
|
||||
and for `PreStop`, this is the `FailedPreStopHook` event.
|
||||
You can see these events by running `kubectl describe pod <pod_name>`.
|
||||
Here is some example output of events from running this command:
|
||||
-->
|
||||
钩子处理程序的日志不会在 Pod 事件中公开。
|
||||
如果处理程序由于某种原因失败,它会广播一个事件。
|
||||
对于 `PostStart`,这是 `FailedPostStartHook` 事件,对于 `PreStop`,这是 `FailedPreStopHook` 事件。
|
||||
您可以通过运行 `kubectl describe pod <pod_name>` 命令来查看这些事件。
|
||||
下面是运行这个命令的一些事件输出示例:
|
||||
|
||||
```
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
|
||||
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
|
||||
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
|
||||
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
|
||||
1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
|
||||
38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
|
||||
37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
|
||||
38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
|
||||
1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
<!--
|
||||
* Learn more about the [Container environment](/docs/concepts/containers/container-environment-variables/).
|
||||
* Get hands-on experience
|
||||
[attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
|
||||
-->
|
||||
|
||||
* 了解更多关于[容器环境](/docs/concepts/containers/container-environment-variables/)。
|
||||
* 获取实践经验[将处理程序附加到容器生命周期事件](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)。
|
||||
|
||||
{{% /capture %}}
|
|
@@ -0,0 +1,4 @@
|
|||
---
|
||||
title: 扩展 Kubernetes
|
||||
weight: 40
|
||||
---
|
|
@@ -0,0 +1,4 @@
|
|||
---
|
||||
title: 扩展 Kubernetes API
|
||||
weight: 20
|
||||
---
|
|
@@ -0,0 +1,68 @@
|
|||
---
|
||||
title: 通过聚合层扩展 Kubernetes API
|
||||
content_template: templates/concept
|
||||
weight: 10
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<!--
|
||||
The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs.
|
||||
-->
|
||||
|
||||
聚合层允许 Kubernetes 通过额外的 API 进行扩展,而不局限于 Kubernetes 核心 API 提供的功能。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
<!--
|
||||
## Overview
|
||||
|
||||
The aggregation layer enables installing additional Kubernetes-style APIs in your cluster. These can either be pre-built, existing 3rd party solutions, such as [service-catalog](https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md), or user-created APIs like [apiserver-builder](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/README.md), which can get you started.
|
||||
-->
|
||||
## 概述
|
||||
|
||||
聚合层允许你在集群中安装额外的 Kubernetes 风格的 API。这些 API 可以是已经构建好的、现成的第三方解决方案,例如 [service-catalog](https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md);也可以是用户自己创建的 API,[apiserver-builder](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/README.md) 可以帮助你着手构建。
|
||||
|
||||
<!--
|
||||
The aggregation layer runs in-process with the kube-apiserver. Until an extension resource is registered, the aggregation layer will do nothing. To register an API, users must add an APIService object, which "claims" the URL path in the Kubernetes API. At that point, the aggregation layer will proxy anything sent to that API path (e.g. /apis/myextension.mycompany.io/v1/…) to the registered APIService.
|
||||
-->
|
||||
|
||||
聚合层在 kube-apiserver 进程内运行。在扩展资源注册之前,聚合层不做任何事情。要注册 API,用户必须添加一个 APIService 对象,用它来申领 Kubernetes API 中的 URL 路径。自此以后,聚合层将会把发给该 API 路径的所有内容(例如 /apis/myextension.mycompany.io/v1/…)代理到已注册的 APIService。
|
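一个 APIService 对象的简要示意如下,它申领了正文中提到的 `/apis/myextension.mycompany.io/v1/…` 路径(其中的 Service 名、名字空间均为虚构,并出于示意目的跳过了 TLS 校验):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.myextension.mycompany.io
spec:
  group: myextension.mycompany.io     # 申领的 API 组
  version: v1
  groupPriorityMinimum: 1000
  versionPriority: 15
  insecureSkipTLSVerify: true         # 仅作示意;生产环境应配置 caBundle
  service:                            # 将请求代理到的 extension-apiserver
    name: myextension-server          # 虚构的 Service 名
    namespace: my-namespace           # 虚构的名字空间
```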
||||
|
||||
<!--
|
||||
Ordinarily, the APIService will be implemented by an *extension-apiserver* in a pod running in the cluster. This extension-apiserver will normally need to be paired with one or more controllers if active management of the added resources is needed. As a result, the apiserver-builder will actually provide a skeleton for both. As another example, when the service-catalog is installed, it provides both the extension-apiserver and controller for the services it provides.
|
||||
-->
|
||||
|
||||
正常情况下,APIService 会实现为运行于集群中某 Pod 内的 extension-apiserver。如果需要对增加的资源进行动态管理,extension-apiserver 经常需要和一个或多个控制器一起使用。因此,apiserver-builder 同时提供用来管理新资源的 API 框架和控制器框架。另外一个例子,当安装了 service-catalog 时,它会为自己提供的服务提供 extension-apiserver 和控制器。
|
||||
|
||||
<!--
|
||||
Extension-apiservers should have low latency connections to and from the kube-apiserver.
|
||||
In particular, discovery requests are required to round-trip from the kube-apiserver in five seconds or less.
|
||||
If your deployment cannot achieve this, you should consider how to change it. For now, setting the
|
||||
`EnableAggregatedDiscoveryTimeout=false` feature gate on the kube-apiserver
|
||||
will disable the timeout restriction. It will be removed in a future release.
|
||||
-->
|
||||
|
||||
Extension-apiserver 与 kube-apiserver 之间的连接应该具有低延迟。
|
||||
特别是,发现请求需要在五秒钟或更短的时间内完成与 kube-apiserver 之间的往返。
|
||||
如果您的部署无法实现此目的,则应考虑如何进行改进。
|
||||
目前,为 kube-apiserver 设置 `EnableAggregatedDiscoveryTimeout=false` 特性门控可以禁用超时限制。该特性门控将在未来的版本中被移除。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
<!--
|
||||
* To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/).
|
||||
* Then, [setup an extension api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) to work with the aggregation layer.
|
||||
* Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
|
||||
-->
|
||||
|
||||
* 阅读[配置聚合层](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 文档,了解如何在自己的环境中启用聚合器(aggregator)。
|
||||
* 然后[安装扩展的 api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) 来开始使用聚合层。
|
||||
* 也可以学习怎样[使用定制资源定义(CRD)扩展 Kubernetes API](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
@@ -0,0 +1,4 @@
|
|||
---
|
||||
title: 计算、存储和网络扩展
|
||||
weight: 30
|
||||
---
|
|
@@ -0,0 +1,269 @@
|
|||
---
|
||||
title: 网络插件
|
||||
content_template: templates/concept
|
||||
weight: 10
|
||||
---
|
||||
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state state="alpha" >}}
|
||||
<!--
|
||||
{{< warning >}}Alpha features change rapidly. {{< /warning >}}
|
||||
-->
|
||||
{{< warning >}}Alpha 特性变化很快。{{< /warning >}}
|
||||
|
||||
<!--
|
||||
Network plugins in Kubernetes come in a few flavors:
|
||||
|
||||
* CNI plugins: adhere to the appc/CNI specification, designed for interoperability.
|
||||
* Kubenet plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins
|
||||
-->
|
||||
Kubernetes 中的网络插件有几种类型:
|
||||
|
||||
* CNI 插件: 遵守 appc/CNI 规约,为互操作性设计。
|
||||
* Kubenet 插件:使用 `bridge` 和 `host-local` CNI 插件实现了基本的 `cbr0`。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
<!--
|
||||
## Installation
|
||||
|
||||
The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it found, and executes the selected plugin at appropriate times in the pod lifecycle (this is only true for Docker, as rkt manages its own CNI plugins). There are two Kubelet command line parameters to keep in mind when using plugins:
|
||||
|
||||
* `cni-bin-dir`: Kubelet probes this directory for plugins on startup
|
||||
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this is simply "cni".
|
||||
-->
|
||||
## 安装
|
||||
|
||||
kubelet 有一个单独的默认网络插件,以及一个对整个集群通用的默认网络。
|
||||
它在启动时探测插件,记住找到的内容,并在 pod 生命周期的适当时间执行所选插件(这仅适用于 Docker,因为 rkt 管理自己的 CNI 插件)。
|
||||
在使用插件时,需要记住两个 Kubelet 命令行参数:
|
||||
|
||||
* `cni-bin-dir`: Kubelet 在启动时探测这个目录中的插件
|
||||
* `network-plugin`: 要从 `cni-bin-dir` 中使用的网络插件。它必须与从插件目录中探测到的插件所报告的名称匹配。对于 CNI 插件,其值为 "cni"。
|
||||
|
||||
<!--
|
||||
## Network Plugin Requirements
|
||||
|
||||
Besides providing the [`NetworkPlugin` interface](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go) to configure and clean up pod networking, the plugin may also need specific support for kube-proxy. The iptables proxy obviously depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the `net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge (but instead something like Open vSwitch or some other mechanism) it should ensure container traffic is appropriately routed for the proxy.
|
||||
|
||||
By default if no kubelet network plugin is specified, the `noop` plugin is used, which sets `net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy.
|
||||
-->
|
||||
## 网络插件要求
|
||||
|
||||
除了提供[`NetworkPlugin` 接口](https://github.com/kubernetes/kubernetes/tree/{{< param "fullversion" >}}/pkg/kubelet/dockershim/network/plugins.go)来配置和清理 pod 网络之外,该插件还可能需要对 kube-proxy 的特定支持。
|
||||
iptables 代理显然依赖于 iptables,插件可能需要确保容器流量对 iptables 可用。
|
||||
例如,如果插件将容器连接到 Linux 网桥,插件必须将 `net/bridge/bridge-nf-call-iptables` 系统参数设置为`1`,以确保 iptables 代理正常工作。
|
||||
如果插件不使用 Linux 网桥(而是类似 Open vSwitch 或者其它一些机制),它应当确保容器流量被正确地路由到代理。
|
||||
|
||||
默认情况下,如果未指定 kubelet 网络插件,则使用 `noop` 插件,该插件设置 `net/bridge/bridge-nf-call-iptables=1`,以确保简单的配置(如带网桥的 Docker )与 iptables 代理正常工作。
|
||||
|
||||
<!--
|
||||
### CNI
|
||||
|
||||
The CNI plugin is selected by passing Kubelet the `--network-plugin=cni` command-line option. Kubelet reads a file from `--cni-conf-dir` (default `/etc/cni/net.d`) and uses the CNI configuration from that file to set up each pod's network. The CNI configuration file must match the [CNI specification](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), and any required CNI plugins referenced by the configuration must be present in `--cni-bin-dir` (default `/opt/cni/bin`).
|
||||
|
||||
If there are multiple CNI configuration files in the directory, the first one in lexicographic order of file name is used.
|
||||
|
||||
In addition to the CNI plugin specified by the configuration file, Kubernetes requires the standard CNI [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) plugin, at minimum version 0.2.0
|
||||
-->
|
||||
### CNI
|
||||
|
||||
通过给 Kubelet 传递 `--network-plugin=cni` 命令行选项来选择 CNI 插件。
|
||||
Kubelet 从 `--cni-conf-dir` (默认是 `/etc/cni/net.d`) 读取文件并使用该文件中的 CNI 配置来设置每个 pod 的网络。
|
||||
CNI 配置文件必须与 [CNI 规约](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration)匹配,并且配置引用的任何所需的 CNI 插件都必须存在于 `--cni-bin-dir`(默认是 `/opt/cni/bin`)。
|
||||
|
||||
如果这个目录中有多个 CNI 配置文件,则使用按文件名的字典顺序排列的第一个配置文件。
|
||||
|
||||
除了配置文件指定的 CNI 插件外,Kubernetes 还需要标准的 CNI [`lo`](https://github.com/containernetworking/plugins/blob/master/plugins/main/loopback/loopback.go) 插件,最低版本是0.2.0。
|
||||
|
||||
<!--
|
||||
#### Support hostPort
|
||||
|
||||
The CNI networking plugin supports `hostPort`. You can use the official [portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
|
||||
plugin offered by the CNI plugin team or use your own plugin with portMapping functionality.
|
||||
|
||||
If you want to enable `hostPort` support, you must specify `portMappings capability` in your `cni-conf-dir`.
|
||||
For example:
|
||||
-->
|
||||
#### 支持 hostPort
|
||||
|
||||
CNI 网络插件支持 `hostPort`。 您可以使用官方 [portmap](https://github.com/containernetworking/plugins/tree/master/plugins/meta/portmap)
|
||||
插件,它由 CNI 插件团队提供,或者使用您自己的带有 portMapping 功能的插件。
|
||||
|
||||
如果你想要启用 `hostPort` 支持,则必须在 `cni-conf-dir` 中的 CNI 配置文件里声明 `portMappings` 能力。
|
||||
例如:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "k8s-pod-network",
|
||||
"cniVersion": "0.3.0",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "calico",
|
||||
"log_level": "info",
|
||||
"datastore_type": "kubernetes",
|
||||
"nodename": "127.0.0.1",
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
"subnet": "usePodCidr"
|
||||
},
|
||||
"policy": {
|
||||
"type": "k8s"
|
||||
},
|
||||
"kubernetes": {
|
||||
"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"capabilities": {"portMappings": true}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Support traffic shaping
|
||||
|
||||
The CNI networking plugin also supports pod ingress and egress traffic shaping. You can use the official [bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth)
|
||||
plugin offered by the CNI plugin team or use your own plugin with bandwidth control functionality.
|
||||
|
||||
If you want to enable traffic shaping support, you must add a `bandwidth` plugin to your CNI configuration file
|
||||
(default `/etc/cni/net.d`).
|
||||
-->
|
||||
#### 支持流量整形
|
||||
|
||||
CNI 网络插件还支持 pod 入口和出口流量整形。
|
||||
您可以使用 CNI 插件团队提供的 [bandwidth](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth) 插件,
|
||||
也可以使用您自己的具有带宽控制功能的插件。
|
||||
|
||||
如果您想要启用流量整形支持,你必须将 `bandwidth` 插件添加到 CNI 配置文件
|
||||
(默认是 `/etc/cni/net.d`)。
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "k8s-pod-network",
|
||||
"cniVersion": "0.3.0",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "calico",
|
||||
"log_level": "info",
|
||||
"datastore_type": "kubernetes",
|
||||
"nodename": "127.0.0.1",
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
"subnet": "usePodCidr"
|
||||
},
|
||||
"policy": {
|
||||
"type": "k8s"
|
||||
},
|
||||
"kubernetes": {
|
||||
"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "bandwidth",
|
||||
"capabilities": {"bandwidth": true}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
<!--
|
||||
Now you can add the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations to your pod.
|
||||
For example:
|
||||
-->
|
||||
现在,您可以将 `kubernetes.io/ingress-bandwidth` 和 `kubernetes.io/egress-bandwidth` 注解添加到 pod 中。
|
||||
例如:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
annotations:
|
||||
kubernetes.io/ingress-bandwidth: 1M
|
||||
kubernetes.io/egress-bandwidth: 1M
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
### kubenet
|
||||
|
||||
Kubenet is a very basic, simple network plugin, on Linux only. It does not, of itself, implement more advanced features like cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.
|
||||
|
||||
Kubenet creates a Linux bridge named `cbr0` and creates a veth pair for each pod with the host end of each pair connected to `cbr0`. The pod end of the pair is assigned an IP address allocated from a range assigned to the node either through configuration or by the controller-manager. `cbr0` is assigned an MTU matching the smallest MTU of an enabled normal interface on the host.
|
||||
|
||||
The plugin requires a few things:
|
||||
|
||||
* The standard CNI `bridge`, `lo` and `host-local` plugins are required, at minimum version 0.2.0. Kubenet will first search for them in `/opt/cni/bin`. Specify `cni-bin-dir` to supply additional search path. The first found match will take effect.
|
||||
* Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
|
||||
* Kubelet should also be run with the `--non-masquerade-cidr=<clusterCidr>` argument to ensure traffic to IPs outside this range will use IP masquerade.
|
||||
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.
|
||||
-->
|
||||
### kubenet
|
||||
|
||||
Kubenet 是一个非常基本的、简单的网络插件,仅适用于 Linux。
|
||||
它本身并不实现更高级的功能,如跨节点网络或网络策略。
|
||||
它通常与为节点间通信设置路由规则的云驱动一起使用,或者用于单节点环境。
|
||||
|
||||
Kubenet 创建名为 `cbr0` 的 Linux 网桥,并为每个 Pod 创建一个 veth 对,每个 veth 对的主机端都连接到 `cbr0`。
|
||||
这个 veth 对的 pod 端会被分配一个 IP 地址,该 IP 地址隶属于节点所被分配的 IP 地址范围内。节点的 IP 地址范围则通过配置或控制器管理器来设置。
|
||||
`cbr0` 被分配一个 MTU,该 MTU 匹配主机上已启用的正常接口的最小 MTU。
|
||||
|
||||
使用此插件还需要一些其他条件:
|
||||
|
||||
* 需要标准的 CNI `bridge`、`lo` 以及 `host-local` 插件,最低版本是0.2.0。Kubenet 首先在 `/opt/cni/bin` 中搜索它们。 指定 `cni-bin-dir` 以提供其它的搜索路径。首次找到的匹配将生效。
|
||||
* Kubelet 必须和 `--network-plugin=kubenet` 参数一起运行,才能启用该插件。
|
||||
* Kubelet 还应该和 `--non-masquerade-cidr=<clusterCidr>` 参数一起运行,以确保超出此范围的 IP 流量将使用 IP 伪装。
|
||||
* 节点必须被分配一个 IP 子网,通过kubelet 命令行的 `--pod-cidr` 选项或控制器管理器的命令行选项 `--allocate-node-cidrs=true --cluster-cidr=<cidr>` 来设置。
|
||||
|
||||
<!--
|
||||
### Customizing the MTU (with kubenet)
|
||||
|
||||
The MTU should always be configured correctly to get the best networking performance. Network plugins will usually try
|
||||
to infer a sensible MTU, but sometimes the logic will not result in an optimal MTU. For example, if the
|
||||
Docker bridge or another interface has a small MTU, kubenet will currently select that MTU. Or if you are
|
||||
using IPSEC encapsulation, the MTU must be reduced, and this calculation is out-of-scope for
|
||||
most network plugins.
|
||||
|
||||
Where needed, you can specify the MTU explicitly with the `network-plugin-mtu` kubelet option. For example,
|
||||
on AWS the `eth0` MTU is typically 9001, so you might specify `--network-plugin-mtu=9001`. If you're using IPSEC you
|
||||
might reduce it to allow for encapsulation overhead e.g. `--network-plugin-mtu=8873`.
|
||||
|
||||
This option is provided to the network-plugin; currently **only kubenet supports `network-plugin-mtu`**.
|
||||
-->
|
||||
### 自定义 MTU(使用 kubenet)
|
||||
|
||||
要获得最佳的网络性能,必须确保 MTU 的取值配置正确。
|
||||
网络插件通常会尝试推断出一个合理的 MTU,但有时候这个逻辑不会产生一个最优的 MTU。
|
||||
例如,如果 Docker 网桥或其他接口有一个小的 MTU, kubenet 当前将选择该 MTU。
|
||||
或者如果您正在使用 IPSEC 封装,则必须减少 MTU,并且这种计算超出了大多数网络插件的能力范围。
|
||||
|
||||
如果需要,您可以使用 `network-plugin-mtu` kubelet 选项显式的指定 MTU。
|
||||
例如:在 AWS 上 `eth0` MTU 通常是 9001,因此您可以指定 `--network-plugin-mtu=9001`。
|
||||
如果您正在使用 IPSEC ,您可以减少它以允许封装开销,例如 `--network-plugin-mtu=8873`。
|
||||
|
||||
此选项会传递给网络插件; 当前 **仅 kubenet 支持 `network-plugin-mtu`**。
|
||||
|
||||
<!--
|
||||
## Usage Summary
|
||||
|
||||
* `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`).
|
||||
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
|
||||
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.
|
||||
-->
|
||||
## 使用总结
|
||||
|
||||
* `--network-plugin=cni` 用来表明我们要使用 `cni` 网络插件,实际的 CNI 插件可执行文件位于 `--cni-bin-dir`(默认是 `/opt/cni/bin`)下, CNI 插件配置位于 `--cni-conf-dir`(默认是 `/etc/cni/net.d`)下。
|
||||
* `--network-plugin=kubenet` 用来表明我们要使用 `kubenet` 网络插件,CNI `bridge` 和 `host-local` 插件位于 `/opt/cni/bin` 或 `cni-bin-dir` 中。
|
||||
* `--network-plugin-mtu=9001` 指定了我们使用的 MTU,当前仅被 `kubenet` 网络插件使用。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
{{% /capture %}}
|
|
@@ -1,4 +1,4 @@
|
|||
---
|
||||
title: 概述
|
||||
weight: 20
|
||||
---
|
|
@@ -1,103 +1,265 @@
|
|||
---
|
||||
title: Kubernetes API
|
||||
content_template: templates/concept
|
||||
weight: 30
|
||||
card:
|
||||
name: concepts
|
||||
weight: 30
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<!--
|
||||
Overall API conventions are described in the [API conventions doc](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
|
||||
|
||||
API endpoints, resource types and samples are described in [API Reference](/docs/reference).
|
||||
|
||||
Remote access to the API is discussed in the [Controlling API Access doc](/docs/reference/access-authn-authz/controlling-access/).
|
||||
|
||||
The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [kubectl](/docs/reference/kubectl/overview/) command-line tool can be used to create, update, delete, and get API objects.
|
||||
|
||||
Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources.
|
||||
|
||||
Kubernetes itself is decomposed into multiple components, which interact through its API.
|
||||
-->
|
||||
[API 约定文档](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)描述了整体的 API 约定。
|
||||
|
||||
[API参考文档](/docs/reference)描述了API整体规范。
|
||||
|
||||
[访问文档](/docs/admin/accessing-the-api)讨论了通过远程访问API的相关问题。
|
||||
|
||||
Kubernetes API是系统描述性配置的基础。 [Kubectl](/docs/user-guide/kubectl/) 命令行工具被用于创建、更新、删除、获取API对象。
|
||||
|
||||
Kubernetes 也以 API 资源的形式存储自己的序列化状态(目前存储在 [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/) 中)。
|
||||
|
||||
Kubernetes 被分成多个组件,各部分通过API相互交互。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
<!--
|
||||
## API changes
|
||||
|
||||
In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following the [API deprecation policy](/docs/reference/using-api/deprecation-policy/).
|
||||
|
||||
What constitutes a compatible change and how to change the API are detailed by the [API change document](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md).
|
||||
-->
|
||||
## API 变更
|
||||
|
||||
根据经验,任何成功的系统都需要随着新的用例出现或现有用例发生变化而不断地演进与调整。因此,我们希望 Kubernetes API 也能够持续地演进和调整。同时,在较长一段时间内,我们也会保持与现有客户端的良好兼容性。一般情况下,可以预期会经常增加新的 API 资源和资源字段;删除资源或者字段则必须遵循 [API 废弃策略](/docs/reference/using-api/deprecation-policy/)。
|
||||
|
||||
参考[API变更文档](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md),了解兼容性变更的要素以及如何变更API的流程。
|
||||
|
||||
<!--
|
||||
## OpenAPI and Swagger definitions
|
||||
|
||||
Complete API details are documented using [OpenAPI](https://www.openapis.org/).
|
||||
|
||||
Starting with Kubernetes 1.10, the Kubernetes API server serves an OpenAPI spec via the `/openapi/v2` endpoint.
|
||||
The requested format is specified by setting HTTP headers:
|
||||
-->
|
||||
|
||||
## OpenAPI 和 Swagger 定义
|
||||
|
||||
完整的 API 细节记录在 [OpenAPI](https://www.openapis.org/) 中。
|
||||
|
||||
随着 Kubernetes 1.10 版本的正式启用,Kubernetes API 服务通过 `/openapi/v2` 接口提供 OpenAPI 规范。
|
||||
请求的格式是通过设置 HTTP 头部来指定的:
|
||||
|
||||
Header | Possible Values
|
||||
------ | ---------------
|
||||
Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (the default content-type is `application/json` for `*/*` or not passing this header)
|
||||
Accept-Encoding | `gzip` (not passing this header is acceptable)
|
||||
|
||||
<!--
|
||||
Prior to 1.14, format-separated endpoints (`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`)
|
||||
serve the OpenAPI spec in different formats. These endpoints are deprecated, and are removed in Kubernetes 1.14.
|
||||
|
||||
**Examples of getting OpenAPI spec**:
|
||||
|
||||
Before 1.10 | Starting with Kubernetes 1.10
|
||||
-->
|
||||
|
||||
在 1.14 版本之前,按格式区分的接口(`/swagger.json`、`/swagger-2.0.0.json`、`/swagger-2.0.0.pb-v1`、`/swagger-2.0.0.pb-v1.gz`)
|
||||
以不同的格式提供 OpenAPI 规范。这些接口已被废弃,并已在 Kubernetes 1.14 中移除。
|
||||
|
||||
**获取 OpenAPI 规范的例子**:
|
||||
|
||||
1.10 之前 | 从 1.10 开始
|
||||
----------- | -----------------------------
|
||||
GET /swagger.json | GET /openapi/v2 **Accept**: application/json
|
||||
GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
|
||||
GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip
|
||||
|
||||
<!--
|
||||
Kubernetes implements an alternative Protobuf based serialization format for the API that is primarily intended for intra-cluster communication, documented in the [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) and the IDL files for each schema are located in the Go packages that define the API objects.
|
||||
|
||||
Prior to 1.14, the Kubernetes apiserver also exposes an API that can be used to retrieve
|
||||
the [Swagger v1.2](http://swagger.io/) Kubernetes API spec at `/swaggerapi`.
|
||||
This endpoint is deprecated, and will be removed in Kubernetes 1.14.
|
||||
-->
|
||||
|
||||
Kubernetes实现了另一种基于Protobuf的序列化格式,该格式主要用于集群内通信,并在[设计方案](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md)中进行了说明,每个模式的IDL文件位于定义API对象的Go软件包中。
|
||||
在 1.14 版本之前,Kubernetes apiserver 也通过 `/swaggerapi` 接口提供可用于获取
|
||||
[Swagger v1.2](http://swagger.io/) Kubernetes API 规范的 API。
|
||||
该接口已被废弃,并已在 Kubernetes 1.14 中移除。
|
||||
|
||||
<!--
|
||||
## API versioning
|
||||
|
||||
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports
|
||||
multiple API versions, each at a different API path, such as `/api/v1` or
|
||||
`/apis/extensions/v1beta1`.
|
||||
-->
|
||||
|
||||
## API 版本
|
||||
|
||||
为了使删除字段或者重构资源表示更加容易,Kubernetes 支持
|
||||
多个API版本。每一个版本都在不同API路径下,例如 `/api/v1` 或者
|
||||
`/apis/extensions/v1beta1`。
|
||||
|
||||
<!--
|
||||
We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-life and/or experimental APIs. The JSON and Protobuf serialization schemas follow the same guidelines for schema changes - all descriptions below cover both formats.
|
||||
|
||||
Note that API versioning and Software versioning are only indirectly related. The [API and release
|
||||
versioning proposal](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) describes the relationship between API versioning and
|
||||
software versioning.
|
||||
-->
|
||||
|
||||
我们选择在API级别进行版本化,而不是在资源或字段级别进行版本化,以确保API提供清晰,一致的系统资源和行为视图,并控制对已废止的API和/或实验性API的访问。 JSON和Protobuf序列化模式遵循架构更改的相同准则 - 下面的所有描述都同时适用于这两种格式。
|
||||
|
||||
请注意,API版本控制和软件版本控制只有间接相关性。
|
||||
[API和发行版本建议](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) 描述了API版本与软件版本之间的关系。
|
||||
|
||||
不同的API版本名称意味着不同级别的软件稳定性和支持程度。 每个级别的标准在[API变更文档](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions)中有更详细的描述。 内容主要概括如下:
|
||||
<!--
|
||||
Different API versions imply different levels of stability and support. The criteria for each level are described
|
||||
in more detail in the [API Changes documentation](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). They are summarized here:
|
||||
-->
|
||||
|
||||
|
||||
<!--
|
||||
- Alpha level:
|
||||
- The version names contain `alpha` (e.g. `v1alpha1`).
|
||||
- May be buggy. Enabling the feature may expose bugs. Disabled by default.
|
||||
- Support for feature may be dropped at any time without notice.
|
||||
- The API may change in incompatible ways in a later software release without notice.
|
||||
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
|
||||
- Beta level:
|
||||
- The version names contain `beta` (e.g. `v2beta3`).
|
||||
- Code is well tested. Enabling the feature is considered safe. Enabled by default.
|
||||
- Support for the overall feature will not be dropped, though details may change.
|
||||
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
|
||||
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
|
||||
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
|
||||
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
|
||||
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
|
||||
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
|
||||
- Stable level:
|
||||
- The version name is `vX` where `X` is an integer.
|
||||
- Stable versions of features will appear in released software for many subsequent versions.
|
||||
-->
|
||||
- Alpha 测试版本:
  - 版本名称包含 `alpha`(例如:`v1alpha1`)。
  - 可能包含缺陷。启用该特性可能会暴露出问题。特性默认被禁用。
  - 对特性的支持可能随时取消,且不会另行通知。
  - API 可能在后续软件版本中以不兼容的方式更改,且不会另行通知。
  - 由于缺陷风险较高且缺乏长期支持,建议仅在生命周期短暂的测试集群中使用。
- Beta 测试版本:
  - 版本名称包含 `beta`(例如:`v2beta3`)。
  - 代码已经过充分测试。启用该特性被认为是安全的。特性默认被启用。
  - 对整体特性的支持不会取消,但细节可能会发生变化。
  - 对象的模式和/或语义可能会在后续的 beta 版或稳定版中以不兼容的方式更改。发生这种情况时,我们将提供迁移到下一版本的说明。这可能需要删除、编辑并重新创建 API 对象,编辑过程可能需要一些斟酌。对于依赖该特性的应用,迁移可能会导致停机。
  - 建议仅用于非业务关键型场景,因为后续版本中可能存在不兼容的更改。如果您有多个可以独立升级的集群,则可以放宽此限制。
  - **请尝试我们的 beta 特性并提供反馈!一旦它们退出 beta 阶段,我们可能就难以再做出更多更改了。**
- 稳定版本:
  - 版本名称为 `vX`,其中 `X` 为整数。
  - 特性的稳定版本会出现在后续许多发行版本中。
||||
|
||||
<!--
|
||||
## API groups
|
||||
|
||||
To make it easier to extend the Kubernetes API, we implemented [*API groups*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md).
|
||||
The API group is specified in a REST path and in the `apiVersion` field of a serialized object.
|
||||
-->
|
||||
|
||||
## API 组
|
||||
|
||||
为了更容易地扩展 Kubernetes API,我们实现了 [*API 组*](https://git.k8s.io/community/contributors/design-proposals/api-machinery/api-group.md)。
API 组在 REST 路径和序列化对象的 `apiVersion` 字段中指定。
|
||||
|
||||
<!--
|
||||
Currently there are several API groups in use:
|
||||
|
||||
1. The *core* group, often referred to as the *legacy group*, is at the REST path `/api/v1` and uses `apiVersion: v1`.
|
||||
|
||||
1. The named groups are at REST path `/apis/$GROUP_NAME/$VERSION`, and use `apiVersion: $GROUP_NAME/$VERSION`
|
||||
(e.g. `apiVersion: batch/v1`). Full list of supported API groups can be seen in [Kubernetes API reference](/docs/reference/).
|
||||
-->
|
||||
|
||||
目前有几个 API 组正在使用中:
|
||||
|
||||
1. 核心组(通常被称为遗留组)位于 REST 路径 `/api/v1`,并使用 `apiVersion: v1`。
|
||||
|
||||
1. 命名的组位于 REST 路径 `/apis/$GROUP_NAME/$VERSION`,并使用 `apiVersion: $GROUP_NAME/$VERSION`
(例如 `apiVersion: batch/v1`)。在 [Kubernetes API 参考](/docs/reference/)中可以看到受支持的 API 组的完整列表。
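顺带说明,可以使用 `kubectl api-versions` 直接列出集群当前启用的 API 组和版本(以下输出仅为示意,实际内容取决于集群版本和所启用的组):

```shell
kubectl api-versions
# 输出形如:
# apps/v1
# batch/v1
# v1
```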
|
||||
|
||||
<!--
|
||||
There are two supported paths to extending the API with [custom resources](/docs/concepts/api-extension/custom-resources/):
|
||||
|
||||
1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)
|
||||
is for users with very basic CRUD needs.
|
||||
1. Users needing the full set of Kubernetes API semantics can implement their own apiserver
|
||||
and use the [aggregator](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/)
|
||||
to make it seamless for clients.
|
||||
-->
|
||||
|
||||
社区支持以下两种方式,使用[自定义资源](/docs/concepts/api-extension/custom-resources/)来扩展 API:
|
||||
|
||||
1. [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)
|
||||
适用于具有非常基础的 CRUD 需求的用户。
|
||||
|
||||
1. 需要全套Kubernetes API语义的用户可以实现自己的apiserver,
|
||||
并使用[聚合器](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/)
|
||||
为客户提供无缝的服务。
|
||||
|
||||
<!--
|
||||
## Enabling API groups
|
||||
|
||||
Certain resources and API groups are enabled by default. They can be enabled or disabled by setting `--runtime-config`
|
||||
on apiserver. `--runtime-config` accepts comma separated values. For ex: to disable batch/v1, set
|
||||
`--runtime-config=batch/v1=false`, to enable batch/v2alpha1, set `--runtime-config=batch/v2alpha1`.
|
||||
The flag accepts comma separated set of key=value pairs describing runtime configuration of the apiserver.
|
||||
|
||||
IMPORTANT: Enabling or disabling groups or resources requires restarting apiserver and controller-manager
|
||||
to pick up the `--runtime-config` changes.
|
||||
-->
|
||||
|
||||
## 启用 API 组
|
||||
|
||||
某些资源和 API 组默认处于启用状态。可以通过在 apiserver 上设置 `--runtime-config` 来启用或禁用它们。
`--runtime-config` 接受逗号分隔的值。
例如:要禁用 batch/v1,请设置 `--runtime-config=batch/v1=false`;要启用 batch/v2alpha1,请设置 `--runtime-config=batch/v2alpha1`。
该标志接受一组描述 apiserver 运行时配置的、以逗号分隔的键值对。
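作为示意,下面给出一个 kube-apiserver 启动参数片段(其中 `--etcd-servers` 的取值是假设的,实际部署还需要补全其他参数):

```shell
# 禁用 batch/v1,同时启用 batch/v2alpha1
kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --runtime-config=batch/v1=false,batch/v2alpha1
```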
|
||||
|
||||
重要:启用或禁用组或资源需要重新启动 apiserver 和控制器管理器,以使 `--runtime-config` 的更改生效。
|
||||
|
||||
<!--
|
||||
## Enabling resources in the groups
|
||||
|
||||
DaemonSets, Deployments, HorizontalPodAutoscalers, Ingresses, Jobs and ReplicaSets are enabled by default.
|
||||
Other extensions resources can be enabled by setting `--runtime-config` on
|
||||
apiserver. `--runtime-config` accepts comma separated values. For example: to disable deployments and ingress, set
|
||||
`--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`
|
||||
-->
|
||||
|
||||
## 启用组中资源
|
||||
|
||||
DaemonSet、Deployment、HorizontalPodAutoscaler、Ingress、Job 和 ReplicaSet 是默认启用的。
其他扩展资源可以通过在 apiserver 上设置 `--runtime-config` 来启用。
`--runtime-config` 接受逗号分隔的值。例如:要禁用 Deployment 和 Ingress,
请设置 `--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`。
|
||||
|
||||
{{% /capture %}}
|
||||
|
|
|
@ -0,0 +1,173 @@
|
|||
---
|
||||
title: 注解
|
||||
content_template: templates/concept
|
||||
weight: 50
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Annotations
|
||||
content_template: templates/concept
|
||||
weight: 50
|
||||
---
|
||||
-->
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
你可以使用 Kubernetes 注解为对象附加任意的非标识的元数据。客户端程序(例如工具和库)能够获取这些元数据信息。
|
||||
<!--
|
||||
You can use Kubernetes annotations to attach arbitrary non-identifying metadata
|
||||
to objects. Clients such as tools and libraries can retrieve this metadata.
|
||||
-->
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
## 为对象附加元数据
|
||||
<!--
|
||||
## Attaching metadata to objects
|
||||
-->
|
||||
|
||||
您可以使用标签或注解将元数据附加到 Kubernetes 对象。
|
||||
标签可以用来选择对象和查找满足某些条件的对象集合。 相反,注解不用于标识和选择对象。
|
||||
注解中的元数据,可以很小,也可以很大,可以是结构化的,也可以是非结构化的,能够包含标签不允许的字符。
|
||||
|
||||
<!--
|
||||
You can use either labels or annotations to attach metadata to Kubernetes
|
||||
objects. Labels can be used to select objects and to find
|
||||
collections of objects that satisfy certain conditions. In contrast, annotations
|
||||
are not used to identify and select objects. The metadata
|
||||
in an annotation can be small or large, structured or unstructured, and can
|
||||
include characters not permitted by labels.
|
||||
-->
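二者的差异可以用下面两条命令直观对比(仅为示意;假设集群中已存在带 `app: nginx` 标签的 Pod,以及本页稍后创建的 `annotations-demo` Pod):

```shell
# 标签可以用作筛选条件
kubectl get pods -l app=nginx

# 注解不能用于筛选,只能随对象一起读取
kubectl get pod annotations-demo -o jsonpath='{.metadata.annotations}'
```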
|
||||
|
||||
|
||||
注解和标签一样,是键/值对:
|
||||
<!--
|
||||
Annotations, like labels, are key/value maps:
|
||||
-->
|
||||
|
||||
```json
|
||||
"metadata": {
|
||||
"annotations": {
|
||||
"key1" : "value1",
|
||||
"key2" : "value2"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
以下是一些例子,用来说明哪些信息可以使用注解来记录:
|
||||
<!--
|
||||
Here are some examples of information that could be recorded in annotations:
|
||||
-->
|
||||
|
||||
* 由声明性配置所管理的字段。
|
||||
将这些字段附加为注解,能够将它们与客户端或服务端设置的默认值、自动生成的字段以及通过自动调整大小或自动伸缩系统设置的字段区分开来。
|
||||
|
||||
<!--
|
||||
* Fields managed by a declarative configuration layer. Attaching these fields
|
||||
as annotations distinguishes them from default values set by clients or
|
||||
servers, and from auto-generated fields and fields set by
|
||||
auto-sizing or auto-scaling systems.
|
||||
-->
|
||||
|
||||
* 构建、发布或镜像信息(如时间戳、发布 ID、Git 分支、PR 数量、镜像哈希、仓库地址)。
|
||||
|
||||
<!--
|
||||
* Build, release, or image information like timestamps, release IDs, git branch,
|
||||
PR numbers, image hashes, and registry address.
|
||||
-->
|
||||
|
||||
* 指向日志记录、监控、分析或审计仓库的指针。
|
||||
|
||||
<!--
|
||||
* Pointers to logging, monitoring, analytics, or audit repositories.
|
||||
-->
|
||||
|
||||
* 可用于调试目的的客户端库或工具信息:例如,名称、版本和构建信息。
|
||||
|
||||
<!--
|
||||
* Client library or tool information that can be used for debugging purposes:
|
||||
for example, name, version, and build information.
|
||||
-->
|
||||
|
||||
* 用户或者工具/系统的来源信息,例如来自其他生态系统组件的相关对象的 URL。
|
||||
|
||||
<!--
|
||||
* User or tool/system provenance information, such as URLs of related objects
|
||||
from other ecosystem components.
|
||||
-->
|
||||
|
||||
* 轻量级上线(rollout)工具的元数据:例如,配置或检查点。
|
||||
|
||||
<!--
|
||||
* Lightweight rollout tool metadata: for example, config or checkpoints.
|
||||
-->
|
||||
|
||||
* 负责人员的电话或呼机号码,或指定在何处可以找到该信息的目录条目,如团队网站。
|
||||
|
||||
<!--
|
||||
* Phone or pager numbers of persons responsible, or directory entries that
|
||||
specify where that information can be found, such as a team web site.
|
||||
-->
|
||||
|
||||
|
||||
* 由最终用户下发给具体实现的指令,用来修改行为或使用非标准特性。
|
||||
<!--
|
||||
* Directives from the end-user to the implementations to modify behavior or
|
||||
engage non-standard features.
|
||||
-->
|
||||
|
||||
您可以将这类信息存储在外部数据库或目录中而不使用注解,但这样做就使得开发人员很难生成用于部署、管理、自检的客户端共享库和工具。
|
||||
<!--
|
||||
Instead of using annotations, you could store this type of information in an
|
||||
external database or directory, but that would make it much harder to produce
|
||||
shared client libraries and tools for deployment, management, introspection,
|
||||
and the like.
|
||||
-->
|
||||
|
||||
<!--
|
||||
## Syntax and character set
|
||||
|
||||
_Annotations_ are key/value pairs. Valid annotation keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
|
||||
|
||||
If the prefix is omitted, the annotation Key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add annotations to end-user objects must specify a prefix.
|
||||
-->
|
||||
|
||||
## 语法和字符集
|
||||
_注解_ 存储的形式是键/值对。有效的注解键分为两部分:可选的前缀和名称,以斜杠(`/`)分隔。名称段是必需项,不得超过 63 个字符,以字母数字字符(`[a-z0-9A-Z]`)开头和结尾,中间可以使用破折号(`-`)、下划线(`_`)、点(`.`)和字母数字字符。前缀是可选的。如果指定,则前缀必须是 DNS 子域:一系列由点(`.`)分隔的 DNS 标签,总计不超过 253 个字符,后跟斜杠(`/`)。
|
||||
如果省略前缀,则认为该注解键是用户私有的。由自动化系统组件(例如 `kube-scheduler`、`kube-controller-manager`、`kube-apiserver`、`kubectl` 或其他第三方自动化组件)向最终用户的对象添加注解时,必须指定前缀。
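下面是符合上述语法的一个示意(其中 `example.com/build-id` 是假设的、带 DNS 子域前缀的注解键):

```shell
# 为 Pod 添加带前缀的注解
kubectl annotate pod annotations-demo example.com/build-id=42

# 在键名后加 "-" 可删除该注解
kubectl annotate pod annotations-demo example.com/build-id-
```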
|
||||
|
||||
<!--
|
||||
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
|
||||
|
||||
For example, here’s the configuration file for a Pod that has the annotation `imageregistry: https://hub.docker.com/` :
|
||||
-->
|
||||
|
||||
`kubernetes.io/` 和 `k8s.io/` 前缀是为 Kubernetes 核心组件保留的。
|
||||
|
||||
例如,下面是一个 Pod 的配置文件,它带有注解 `imageregistry: https://hub.docker.com/`:
|
||||
```yaml
|
||||
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: annotations-demo
|
||||
annotations:
|
||||
imageregistry: "https://hub.docker.com/"
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.7.9
|
||||
ports:
|
||||
- containerPort: 80
|
||||
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
进一步了解[标签和选择器](/docs/concepts/overview/working-with-objects/labels/)。
|
||||
<!--
|
||||
Learn more about [Labels and Selectors](/docs/concepts/overview/working-with-objects/labels/).
|
||||
-->
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,262 @@
|
|||
---
|
||||
title: 推荐使用的标签
|
||||
content_template: templates/concept
|
||||
---
|
||||
<!--
|
||||
---
|
||||
title: Recommended Labels
|
||||
content_template: templates/concept
|
||||
---
|
||||
-->
|
||||
|
||||
{{% capture overview %}}
|
||||
<!--
|
||||
You can visualize and manage Kubernetes objects with more tools than kubectl and
|
||||
the dashboard. A common set of labels allows tools to work interoperably, describing
|
||||
objects in a common manner that all tools can understand.
|
||||
-->
|
||||
除了 kubectl 和 dashboard 之外,您还可以使用其他工具来可视化和管理 Kubernetes 对象。
|
||||
一组通用的标签可以让多个工具之间互操作,用所有工具都能理解的通用方式描述对象。
|
||||
|
||||
<!--
|
||||
In addition to supporting tooling, the recommended labels describe applications
|
||||
in a way that can be queried.
|
||||
-->
|
||||
除了支持工具外,推荐的标签还以一种可以查询的方式描述了应用程序。
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
<!--
|
||||
The metadata is organized around the concept of an _application_. Kubernetes is not
|
||||
a platform as a service (PaaS) and doesn't have or enforce a formal notion of an application.
|
||||
Instead, applications are informal and described with metadata. The definition of
|
||||
what an application contains is loose.
|
||||
-->
|
||||
元数据是围绕 _应用(application)_ 的概念来组织的。Kubernetes 不是
平台即服务(PaaS),它本身没有、也不强制要求正式的应用程序概念。
相反,应用程序是非正式的,通过元数据来描述。应用程序所包含内容的定义是宽松的。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
These are recommended labels. They make it easier to manage applications
|
||||
but aren't required for any core tooling.
|
||||
-->
|
||||
这些是推荐使用的标签。它们使应用程序的管理变得更容易,但并不是任何核心工具所必需的。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Shared labels and annotations share a common prefix: `app.kubernetes.io`. Labels
|
||||
without a prefix are private to users. The shared prefix ensures that shared labels
|
||||
do not interfere with custom user labels.
|
||||
-->
|
||||
共享标签和注解都使用同一个前缀:`app.kubernetes.io`。没有前缀的标签是用户私有的。共享前缀可以确保共享标签不会干扰用户自定义的标签。
|
||||
|
||||
<!--
|
||||
## Labels
|
||||
|
||||
In order to take full advantage of using these labels, they should be applied
|
||||
on every resource object.
|
||||
-->
|
||||
## 标签
|
||||
为了充分利用这些标签,应该在每个资源对象上都使用它们。
|
||||
|
||||
<!--
|
||||
| Key | Description | Example | Type |
|
||||
| ----------------------------------- | --------------------- | -------- | ---- |
|
||||
| `app.kubernetes.io/name` | The name of the application | `mysql` | string |
|
||||
| `app.kubernetes.io/instance` | A unique name identifying the instance of an application | `wordpress-abcxzy` | string |
|
||||
| `app.kubernetes.io/version` | The current version of the application (e.g., a semantic version, revision hash, etc.) | `5.7.21` | string |
|
||||
| `app.kubernetes.io/component` | The component within the architecture | `database` | string |
|
||||
| `app.kubernetes.io/part-of` | The name of a higher level application this one is part of | `wordpress` | string |
|
||||
| `app.kubernetes.io/managed-by` | The tool being used to manage the operation of an application | `helm` | string |
|
||||
|
||||
-->
|
||||
| 键 | 描述 | 示例 | 类型 |
|
||||
| ----------------------------------- | --------------------- | -------- | ---- |
|
||||
| `app.kubernetes.io/name` | 应用程序的名称 | `mysql` | 字符串 |
|
||||
| `app.kubernetes.io/instance` | 用于唯一确定应用实例的名称 | `wordpress-abcxzy` | 字符串 |
|
||||
| `app.kubernetes.io/version` | 应用程序的当前版本(例如,语义版本,修订版哈希等) | `5.7.21` | 字符串 |
|
||||
| `app.kubernetes.io/component` | 架构中的组件 | `database` | 字符串 |
|
||||
| `app.kubernetes.io/part-of` | 此组件所属的更高层级应用的名称 | `wordpress` | 字符串 |
|
||||
| `app.kubernetes.io/managed-by` | 用于管理应用程序的工具 | `helm` | 字符串 |
|
||||
<!--
|
||||
To illustrate these labels in action, consider the following StatefulSet object:
|
||||
-->
|
||||
为说明这些标签的实际使用情况,请看下面的 StatefulSet 对象:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: mysql
|
||||
app.kubernetes.io/instance: wordpress-abcxzy
|
||||
app.kubernetes.io/version: "5.7.21"
|
||||
app.kubernetes.io/component: database
|
||||
app.kubernetes.io/part-of: wordpress
|
||||
app.kubernetes.io/managed-by: helm
|
||||
```
|
||||
|
||||
<!--
|
||||
## Applications And Instances Of Applications
|
||||
|
||||
An application can be installed one or more times into a Kubernetes cluster and,
|
||||
in some cases, the same namespace. For example, wordpress can be installed more
|
||||
than once where different websites are different installations of wordpress.
|
||||
|
||||
The name of an application and the instance name are recorded separately. For
|
||||
example, WordPress has a `app.kubernetes.io/name` of `wordpress` while it has
|
||||
an instance name, represented as `app.kubernetes.io/instance` with a value of
|
||||
`wordpress-abcxzy`. This enables the application and instance of the application
|
||||
to be identifiable. Every instance of an application must have a unique name.
|
||||
-->
|
||||
## 应用和应用实例
|
||||
|
||||
应用可以在 Kubernetes 集群中安装一次或多次,在某些情况下还可以安装在同一命名空间中。例如,WordPress 可以被安装多次,不同的网站对应 WordPress 的不同安装实例。
|
||||
|
||||
应用的名称和实例的名称是分别记录的。例如,某 WordPress 实例的 `app.kubernetes.io/name` 为 `wordpress`,而其实例名称表现为 `app.kubernetes.io/instance` 的属性值 `wordpress-abcxzy`。这使得应用及其每个实例均可被识别。应用的每个实例都必须具有唯一的名称。
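基于这两个标签,可以分别按应用整体或按单个实例进行查询(仅为示意;假设对象已按下文方式设置了标签):

```shell
# 选择 wordpress 应用的所有 Pod(跨实例)
kubectl get pods -l app.kubernetes.io/name=wordpress

# 仅选择其中一个实例的 Pod
kubectl get pods -l app.kubernetes.io/name=wordpress,app.kubernetes.io/instance=wordpress-abcxzy
```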
|
||||
|
||||
<!--
|
||||
## Examples
|
||||
-->
|
||||
## 示例
|
||||
|
||||
<!--
|
||||
To illustrate different ways to use these labels the following examples have varying complexity.
|
||||
-->
|
||||
为了说明使用这些标签的不同方式,下面给出一些复杂程度各不相同的示例。
|
||||
|
||||
<!--
|
||||
### A Simple Stateless Service
|
||||
-->
|
||||
### 一个简单的无状态服务
|
||||
|
||||
<!--
|
||||
Consider the case for a simple stateless service deployed using `Deployment` and `Service` objects. The following two snippets represent how the labels could be used in their simplest form.
|
||||
-->
|
||||
考虑使用 `Deployment` 和 `Service` 对象部署的简单无状态服务的情况。以下两个代码段表示如何以最简单的形式使用标签。
|
||||
|
||||
<!--
|
||||
The `Deployment` is used to oversee the pods running the application itself.
|
||||
-->
|
||||
下面的 `Deployment` 用于监督运行应用本身的 pods。
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: myservice
|
||||
app.kubernetes.io/instance: myservice-abcxzy
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
The `Service` is used to expose the application.
|
||||
-->
|
||||
下面的 `Service` 用于暴露应用。
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: myservice
|
||||
app.kubernetes.io/instance: myservice-abcxzy
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
### Web Application With A Database
|
||||
-->
|
||||
### 带有一个数据库的 Web 应用程序
|
||||
|
||||
<!--
|
||||
Consider a slightly more complicated application: a web application (WordPress)
|
||||
using a database (MySQL), installed using Helm. The following snippets illustrate
|
||||
the start of objects used to deploy this application.
|
||||
|
||||
The start to the following `Deployment` is used for WordPress:
|
||||
-->
|
||||
考虑一个稍微复杂的应用:一个使用 Helm 安装的 Web 应用(WordPress),其中
|
||||
使用了数据库(MySQL)。以下代码片段展示了用于部署此应用的各个对象定义的开头部分。
|
||||
|
||||
以下 `Deployment` 的开头用于 WordPress:
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: wordpress
|
||||
app.kubernetes.io/instance: wordpress-abcxzy
|
||||
app.kubernetes.io/version: "4.9.4"
|
||||
app.kubernetes.io/managed-by: helm
|
||||
app.kubernetes.io/component: server
|
||||
app.kubernetes.io/part-of: wordpress
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
The `Service` is used to expose WordPress:
|
||||
-->
|
||||
这个 `Service` 用于暴露 WordPress:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: wordpress
|
||||
app.kubernetes.io/instance: wordpress-abcxzy
|
||||
app.kubernetes.io/version: "4.9.4"
|
||||
app.kubernetes.io/managed-by: helm
|
||||
app.kubernetes.io/component: server
|
||||
app.kubernetes.io/part-of: wordpress
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
MySQL is exposed as a `StatefulSet` with metadata for both it and the larger application it belongs to:
|
||||
-->
|
||||
|
||||
MySQL 以 `StatefulSet` 的形式暴露,其元数据同时涵盖它自身和它所属的更大应用(WordPress):
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: mysql
|
||||
app.kubernetes.io/instance: mysql-abcxzy
|
||||
app.kubernetes.io/managed-by: helm
|
||||
app.kubernetes.io/component: database
|
||||
app.kubernetes.io/part-of: wordpress
|
||||
app.kubernetes.io/version: "5.7.21"
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
The `Service` is used to expose MySQL as part of WordPress:
|
||||
-->
|
||||
|
||||
`Service` 用于将 MySQL 作为 WordPress 的一部分暴露:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: mysql
|
||||
app.kubernetes.io/instance: mysql-abcxzy
|
||||
app.kubernetes.io/managed-by: helm
|
||||
app.kubernetes.io/component: database
|
||||
app.kubernetes.io/part-of: wordpress
|
||||
app.kubernetes.io/version: "5.7.21"
|
||||
...
|
||||
```
|
||||
|
||||
<!--
|
||||
With the MySQL `StatefulSet` and `Service` you'll notice information about both MySQL and Wordpress, the broader application, are included.
|
||||
-->
|
||||
在 MySQL 的 `StatefulSet` 和 `Service` 中,您会注意到同时包含了关于 MySQL 和其所属的更大应用 WordPress 的信息。
|
||||
|
||||
{{% /capture %}}
|
|
@ -0,0 +1,108 @@
|
|||
---
|
||||
title: 字段选择器
|
||||
weight: 60
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: Field Selectors
|
||||
weight: 60
|
||||
---
|
||||
-->
|
||||
|
||||
字段选择器允许您根据一个或多个资源字段的值[筛选 Kubernetes 资源](/docs/concepts/overview/working-with-objects/kubernetes-objects)。
|
||||
下面是一些使用字段选择器查询的例子:
|
||||
<!--
|
||||
_Field selectors_ let you [select Kubernetes resources](/docs/concepts/overview/working-with-objects/kubernetes-objects) based on the value of one or more resource fields. Here are some example field selector queries:
|
||||
-->
|
||||
|
||||
* `metadata.name=my-service`
|
||||
* `metadata.namespace!=default`
|
||||
* `status.phase=Pending`
|
||||
|
||||
下面这个 `kubectl` 命令将筛选出[`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)字段值为 `Running` 的所有 Pod:
|
||||
<!--
|
||||
This `kubectl` command selects all Pods for which the value of the [`status.phase`](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) field is `Running`:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get pods --field-selector status.phase=Running
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
|
||||
字段选择器本质上是资源*过滤器*。默认情况下不应用任何选择器/过滤器,这意味着指定类型的所有资源都会被选中。
这使得以下两个 `kubectl` 查询是等价的:
|
||||
<!--
|
||||
Field selectors are essentially resource *filters*. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the following `kubectl` queries equivalent:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get pods
|
||||
kubectl get pods --field-selector ""
|
||||
```
|
||||
{{< /note >}}
|
||||
|
||||
## 支持的字段
|
||||
<!--
|
||||
## Supported fields
|
||||
-->
|
||||
|
||||
不同的 Kubernetes 资源类型支持不同的字段选择器。
|
||||
所有资源类型都支持 `metadata.name` 和 `metadata.namespace` 字段。
|
||||
使用不被支持的字段选择器会产生错误,例如:
|
||||
<!--
|
||||
Supported field selectors vary by Kubernetes resource type. All resource types support the `metadata.name` and `metadata.namespace` fields. Using unsupported field selectors produces an error. For example:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get ingress --field-selector foo.bar=baz
|
||||
```
|
||||
```
|
||||
Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz": "foo.bar" is not a known field selector: only "metadata.name", "metadata.namespace"
|
||||
```
|
||||
|
||||
## 支持的运算符
|
||||
<!--
|
||||
## Supported operators
|
||||
-->
|
||||
|
||||
您可以在字段选择器中使用 `=`、`==` 和 `!=` 运算符(`=` 和 `==` 含义相同)。
例如,下面这个 `kubectl` 命令将筛选所有不属于 `default` 命名空间的 Kubernetes Service:
|
||||
<!--
|
||||
You can use the `=`, `==`, and `!=` operators with field selectors (`=` and `==` mean the same thing). This `kubectl` command, for example, selects all Kubernetes Services that aren't in the `default` namespace:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get services --all-namespaces --field-selector metadata.namespace!=default
|
||||
```
|
||||
|
||||
## 链式选择器
|
||||
<!--
|
||||
## Chained selectors
|
||||
-->
|
||||
|
||||
同[标签](/docs/concepts/overview/working-with-objects/labels)和其他选择器一样,字段选择器可以通过使用逗号分隔的列表组成一个选择链。
|
||||
下面这个 `kubectl` 命令将筛选 `status.phase` 字段不等于 `Running` 同时 `spec.restartPolicy` 字段等于 `Always` 的所有 Pod:
|
||||
<!--
|
||||
As with [label](/docs/concepts/overview/working-with-objects/labels) and other selectors, field selectors can be chained together as a comma-separated list. This `kubectl` command selects all Pods for which the `status.phase` does not equal `Running` and the `spec.restartPolicy` field equals `Always`:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always
|
||||
```
|
||||
|
||||
## 多种资源类型
|
||||
<!--
|
||||
## Multiple resource types
|
||||
-->
|
||||
|
||||
您能够跨多种资源类型来使用字段选择器。
|
||||
下面这个 `kubectl` 命令将筛选出所有不在 `default` 命名空间中的 StatefulSet 和 Service:
|
||||
<!--
|
||||
You use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default
|
||||
```
|
|
@ -5,18 +5,25 @@ redirect_from:
|
|||
- "/docs/concepts/abstractions/overview/"
|
||||
- "/docs/concepts/abstractions/overview.html"
|
||||
content_template: templates/concept
|
||||
weight: 10
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
<!--
|
||||
This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in `.yaml` format.
|
||||
-->
|
||||
本页说明了 Kubernetes 对象在 Kubernetes API 中是如何表示的,以及如何在 `.yaml` 格式的文件中表示。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
<!--
|
||||
## Understanding Kubernetes Objects
|
||||
|
||||
*Kubernetes Objects* are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
|
||||
|
||||
|
||||
* What containerized applications are running (and on which nodes)
|
||||
* The resources available to those applications
|
||||
* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
|
||||
-->
|
||||
|
||||
## 理解 Kubernetes 对象
|
||||
|
||||
|
@ -26,14 +33,21 @@ weight: 10
|
|||
* 可以被应用使用的资源
|
||||
* 关于应用运行时表现的策略,比如重启策略、升级策略,以及容错策略
|
||||
|
||||
<!--
|
||||
A Kubernetes object is a "record of intent"--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's *desired state*.
|
||||
|
||||
To work with Kubernetes objects--whether to create, modify, or delete them--you'll need to use the [Kubernetes API](/docs/concepts/overview/kubernetes-api/). When you use the `kubectl` command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the Kubernetes API directly in your own programs using one of the [Client Libraries](/docs/reference/using-api/client-libraries/).
|
||||
-->
|
||||
|
||||
Kubernetes 对象是 “目标性记录” —— 一旦创建对象,Kubernetes 系统将持续工作以确保对象存在。通过创建对象,本质上是在告知 Kubernetes 系统,所需要的集群工作负载看起来是什么样子的,这就是 Kubernetes 集群的 **期望状态(Desired State)**。
|
||||
|
||||
操作 Kubernetes 对象 —— 无论是创建、修改,或者删除 —— 都需要使用 [Kubernetes API](/docs/concepts/overview/kubernetes-api/)。比如,当使用 `kubectl` 命令行接口时,CLI 会替您执行必要的 Kubernetes API 调用;也可以在程序中使用[客户端库](/docs/reference/using-api/client-libraries/)直接调用 Kubernetes API。
|
||||
|
||||
<!--
|
||||
### Object Spec and Status
|
||||
|
||||
Every Kubernetes object includes two nested object fields that govern the object's configuration: the object *spec* and the object *status*. The *spec*, which you must provide, describes your desired state for the object--the characteristics that you want the object to have. The *status* describes the *actual state* of the object, and is supplied and updated by the Kubernetes system. At any given time, the Kubernetes Control Plane actively manages an object's actual state to match the desired state you supplied.
|
||||
-->
|
||||
|
||||
### 对象规约(Spec)与状态(Status)
|
||||
|
||||
|
@ -41,16 +55,28 @@ Kubernetes 对象是 “目标性记录” —— 一旦创建对象,Kubernete
|
|||
*spec* 是必需的,它描述了对象的 *期望状态(Desired State)* —— 希望对象所具有的特征。
|
||||
*status* 描述了对象的 *实际状态(Actual State)* ,它是由 Kubernetes 系统提供和更新的。在任何时刻,Kubernetes 控制面一直努力地管理着对象的实际状态以与期望状态相匹配。
|
||||
|
||||
|
||||
<!--
|
||||
For example, a Kubernetes Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction--in this case, starting a replacement instance.
|
||||
-->
|
||||
|
||||
例如,Kubernetes Deployment 对象能够表示运行在集群中的应用。
|
||||
当创建 Deployment 时,可能需要设置 Deployment 的规约,以指定该应用需要有 3 个副本在运行。
|
||||
Kubernetes 系统读取 Deployment 规约,并启动我们所期望的该应用的 3 个实例 —— 更新状态以与规约相匹配。
|
||||
如果那些实例中有失败的(一种状态变更),Kubernetes 系统通过修正来响应规约和状态之间的不一致 —— 这种情况,会启动一个新的实例来替换。
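创建对象之后,可以随时读取系统维护的实际状态并与规约对照(仅为示意;假设已按下文创建了 nginx-deployment):

```shell
# spec 来自用户,status 由 Kubernetes 系统填充并持续更新
kubectl get deployment nginx-deployment -o yaml
```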
|
||||
|
||||
<!--
|
||||
For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
|
||||
-->
|
||||
|
||||
关于对象 spec、status 和 metadata 的更多信息,查看 [Kubernetes API 约定](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md)。
|
||||
|
||||
<!--
|
||||
### Describing a Kubernetes Object
|
||||
|
||||
When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via `kubectl`), that API request must include that information as JSON in the request body. **Most often, you provide the information to `kubectl` in a .yaml file.** `kubectl` converts the information to JSON when making the API request.
|
||||
|
||||
Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:
|
||||
-->
|
||||
|
||||
### 描述 Kubernetes 对象
|
||||
|
||||
|
@ -61,22 +87,41 @@ Kubernetes 系统读取 Deployment 规约,并启动我们所期望的该应用
|
|||
|
||||
这里有一个 `.yaml` 示例文件,展示了 Kubernetes Deployment 的必需字段和对象规约:
|
||||
|
||||
{{< codenew file="application/deployment.yaml" >}}
|
||||
|
||||
<!--
|
||||
One way to create a Deployment using a `.yaml` file like the one above is to use the
|
||||
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command
|
||||
in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example:
|
||||
-->
|
||||
|
||||
使用类似于上面的 `.yaml` 文件来创建 Deployment,一种方式是使用 `kubectl` 命令行接口(CLI)中的
|
||||
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) 命令,
|
||||
将 `.yaml` 文件作为参数。下面是一个示例:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment.yaml --record
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
|
||||
输出类似如下这样:
|
||||
|
||||
```shell
|
||||
deployment "nginx-deployment" created
|
||||
deployment.apps/nginx-deployment created
|
||||
```
|
||||
|
||||
<!--
|
||||
### Required Fields
|
||||
|
||||
In the `.yaml` file for the Kubernetes object you want to create, you'll need to set values for the following fields:
|
||||
|
||||
* `apiVersion` - Which version of the Kubernetes API you're using to create this object
|
||||
* `kind` - What kind of object you want to create
|
||||
* `metadata` - Data that helps uniquely identify the object, including a `name` string, `UID`, and optional `namespace`
|
||||
-->
|
||||
|
||||
### 必需字段
|
||||
|
||||
|
@ -86,13 +131,29 @@ deployment "nginx-deployment" created
|
|||
* `kind` - 想要创建的对象的类型
|
||||
* `metadata` - 帮助识别对象唯一性的数据,包括一个 `name` 字符串、UID 和可选的 `namespace`
|
||||
|
||||
<!--
|
||||
You'll also need to provide the object `spec` field. The precise format of the object `spec` is different for every Kubernetes object, and contains nested fields specific to that object. The [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) can help you find the spec format for all of the objects you can create using Kubernetes.
|
||||
For example, the `spec` format for a `Pod` can be found
|
||||
[here](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core),
|
||||
and the `spec` format for a `Deployment` can be found
|
||||
[here](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps).
|
||||
-->
|
||||
|
||||
|
||||
|
||||
也需要提供对象的 `spec` 字段。对象 `spec` 的精确格式对每个 Kubernetes 对象来说是不同的,包含了特定于该对象的嵌套字段。[Kubernetes API 参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)能够帮助我们找到任何我们想创建的对象的 spec 格式。
|
||||
例如,可以从
|
||||
[这里](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
|
||||
查看 `Pod` 的 `spec` 格式,
|
||||
并且可以从
|
||||
[这里](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#deploymentspec-v1-apps)
|
||||
查看 `Deployment` 的 `spec` 格式。
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
<!--
|
||||
* Learn about the most important basic Kubernetes objects, such as [Pod](/docs/concepts/workloads/pods/pod-overview/).
|
||||
-->
|
||||
|
||||
* 了解最重要的基本 Kubernetes 对象,例如 [Pod](/docs/concepts/workloads/pods/pod-overview/)。
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
|
|
|
@ -0,0 +1,209 @@
|
|||
---
|
||||
title: 命名空间
|
||||
content_template: templates/concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
Kubernetes 支持多个虚拟集群,它们底层依赖于同一个物理集群。
|
||||
这些虚拟集群被称为命名空间。
|
||||
<!--
|
||||
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
|
||||
These virtual clusters are called namespaces.
|
||||
-->
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## 何时使用多个命名空间
|
||||
<!--
|
||||
## When to Use Multiple Namespaces
|
||||
-->
|
||||
|
||||
命名空间适用于存在很多跨多个团队或项目的用户的场景。
|
||||
对于只有几到几十个用户的集群,根本不需要创建或考虑命名空间。当您需要命名空间提供的特性时,请开始使用它们。
|
||||
<!--
|
||||
Namespaces are intended for use in environments with many users spread across multiple
|
||||
teams, or projects. For clusters with a few to tens of users, you should not
|
||||
need to create or think about namespaces at all. Start using namespaces when you
|
||||
need the features they provide.
|
||||
-->
|
||||
|
||||
命名空间为名称提供了一个作用域。
资源的名称在同一命名空间内必须唯一,但跨命名空间时则不必。命名空间不能相互嵌套,每个 Kubernetes
资源只能属于一个命名空间。
|
||||
|
||||
<!--
|
||||
Namespaces provide a scope for names. Names of resources need to be unique within a namespace,
|
||||
but not across namespaces. Namespaces can not be nested inside one another and each Kubernetes
|
||||
resource can only be in one namespace.
|
||||
-->
|
||||
|
||||
命名空间是在多个用户之间划分集群资源的一种方法(通过[资源配额](/docs/concepts/policy/resource-quotas/))。
|
||||
<!--
|
||||
Namespaces are a way to divide cluster resources between multiple users (via [resource quota](/docs/concepts/policy/resource-quotas/)).
|
||||
-->
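下面给出一个最小示意(假设命名空间 `dev` 已存在;配额项可按需调整):

```shell
# 在命名空间 dev 上创建资源配额
kubectl create quota compute-quota --hard=pods=10,requests.cpu=4 --namespace=dev
```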
|
||||
|
||||
在 Kubernetes 未来版本中,相同命名空间中的对象默认将具有相同的访问控制策略。
|
||||
<!--
|
||||
In future versions of Kubernetes, objects in the same namespace will have the same
|
||||
access control policies by default.
|
||||
-->
|
||||
|
||||
不需要使用多个命名空间来分隔轻微不同的资源,例如同一软件的不同版本:
|
||||
使用[标签](/docs/user-guide/labels)来区分同一命名空间中的不同资源。
|
||||
<!--
|
||||
It is not necessary to use multiple namespaces just to separate slightly different
|
||||
resources, such as different versions of the same software: use [labels](/docs/user-guide/labels) to distinguish
|
||||
resources within the same namespace.
|
||||
-->
|
||||
|
||||
## 使用命名空间
|
||||
<!--
|
||||
## Working with Namespaces
|
||||
-->
|
||||
|
||||
命名空间的创建和删除已在[命名空间的管理指南文档](/docs/admin/namespaces)中进行了描述。
|
||||
<!--
|
||||
Creation and deletion of namespaces are described in the [Admin Guide documentation
|
||||
for namespaces](/docs/admin/namespaces).
|
||||
-->
|
||||
|
||||
### 查看命名空间
|
||||
<!--
|
||||
### Viewing namespaces
|
||||
-->
|
||||
|
||||
您可以使用以下命令列出集群中现存的命名空间:
|
||||
<!--
|
||||
You can list the current namespaces in a cluster using:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl get namespace
|
||||
```
|
||||
```
|
||||
NAME STATUS AGE
|
||||
default Active 1d
|
||||
kube-system Active 1d
|
||||
kube-public Active 1d
|
||||
```
|
||||
|
||||
Kubernetes 会创建三个初始命名空间:
|
||||
<!--
|
||||
Kubernetes starts with three initial namespaces:
|
||||
|
||||
* `default` The default namespace for objects with no other namespace
|
||||
* `kube-system` The namespace for objects created by the Kubernetes system
|
||||
* `kube-public` This namespace is created automatically and is readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
|
||||
-->
|
||||
|
||||
* `default` 未指定其他命名空间的对象所使用的默认命名空间
* `kube-system` Kubernetes 系统创建的对象所使用的命名空间
* `kube-public` 这个命名空间是自动创建的,所有用户(包括未经身份验证的用户)都可以读取。该命名空间主要为集群使用而预留,以应对某些资源需要在整个集群范围内公开可见、可读的情形。该命名空间的公开性只是一种约定,而非强制要求。
|
||||
|
||||
### 为请求设置命名空间
|
||||
<!--
|
||||
### Setting the namespace for a request
|
||||
-->
|
||||
|
||||
要为当前的请求设定一个命名空间,请使用 `--namespace` 参数。
|
||||
<!--
|
||||
To set the namespace for a current request, use the `--namespace` flag.
|
||||
-->
|
||||
|
||||
例如:
|
||||
<!--
|
||||
For example:
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>
|
||||
kubectl get pods --namespace=<insert-namespace-name-here>
|
||||
```
|
||||
|
||||
### 设置命名空间首选项
|
||||
<!--
|
||||
### Setting the namespace preference
|
||||
-->
|
||||
|
||||
您可以永久保存该上下文中所有后续 kubectl 命令使用的命名空间。
|
||||
<!--
|
||||
You can permanently save the namespace for all subsequent kubectl commands in that
|
||||
context.
|
||||
-->
|
||||
|
||||
```shell
|
||||
kubectl config set-context --current --namespace=<insert-namespace-name-here>
|
||||
# Validate it
|
||||
kubectl config view | grep namespace:
|
||||
```
|
||||
|
||||
## 命名空间和 DNS
|
||||
<!--
|
||||
## Namespaces and DNS
|
||||
-->
|
||||
|
||||
当您创建一个[服务](/docs/user-guide/services)时,Kubernetes 会创建一个相应的[DNS 条目](/docs/concepts/services-networking/dns-pod-service/)。
|
||||
<!--
|
||||
When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
|
||||
-->
|
||||
|
||||
该条目的形式是 `<service-name>.<namespace-name>.svc.cluster.local`,
|
||||
这意味着如果容器只使用 `<service-name>`,它将被解析到本地命名空间的服务。
|
||||
这对于在多个命名空间(如开发、预发布和生产)中使用相同的配置非常有用。
|
||||
如果您希望跨命名空间访问,则需要使用完全限定域名(FQDN)。
|
||||
<!--
|
||||
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
|
||||
that if a container just uses `<service-name>`, it will resolve to the service which
|
||||
is local to a namespace. This is useful for using the same configuration across
|
||||
multiple namespaces such as Development, Staging and Production. If you want to reach
|
||||
across namespaces, you need to use the fully qualified domain name (FQDN).
|
||||
-->
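下面的对比仅为示意(假设命名空间 `dev` 中存在名为 `my-service` 的服务,且命令在集群内的某个 Pod 中执行):

```shell
# 同命名空间内:短名称即可解析
curl http://my-service/

# 跨命名空间访问:需要使用 FQDN
curl http://my-service.dev.svc.cluster.local/
```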
|
||||
|
||||
## 并非所有对象都在命名空间中
|
||||
<!--
|
||||
## Not All Objects are in a Namespace
|
||||
-->
|
||||
|
||||
大多数 kubernetes 资源(例如 Pod、服务、副本控制器等)都位于某些命名空间中。
|
||||
但是命名空间资源本身并不在命名空间中。
|
||||
<!--
|
||||
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
|
||||
in some namespaces. However namespace resources are not themselves in a namespace.
|
||||
-->
|
||||
|
||||
而且底层资源,例如[节点](/docs/admin/node)和持久化卷不属于任何命名空间。
|
||||
<!--
|
||||
And low-level resources, such as [nodes](/docs/admin/node) and
|
||||
persistentVolumes, are not in any namespace.
|
||||
-->
|
||||
|
||||
查看哪些 Kubernetes 资源在命名空间中,哪些不在命名空间中:
|
||||
<!--
|
||||
To see which Kubernetes resources are and aren't in a namespace:
|
||||
-->
|
||||
|
||||
```shell
|
||||
# In a namespace
|
||||
kubectl api-resources --namespaced=true
|
||||
|
||||
# Not in a namespace
|
||||
kubectl api-resources --namespaced=false
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
<!--
|
||||
* Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace).
|
||||
* Learn more about [deleting a namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace).
|
||||
-->
|
||||
|
||||
* 进一步了解[创建新的命名空间](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace)。
* 进一步了解[删除命名空间](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace)。
|
||||
{{% /capture %}}
|
||||
|
|
@ -0,0 +1,11 @@
|
|||
---
|
||||
title: "策略"
|
||||
weight: 160
|
||||
---
|
||||
|
||||
<!--
|
||||
---
|
||||
title: "Policies"
|
||||
weight: 160
|
||||
---
|
||||
-->
|
|
@ -0,0 +1,19 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
replicas: 2 # tells deployment to run 2 pods matching the template
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.7.9
|
||||
ports:
|
||||
- containerPort: 80
|