merged zh master feature-gates changes to dev-1.21 feature-gates

pull/27448/head
Rey Lejano 2021-04-07 09:37:47 -07:00
commit 678f09c736
14 changed files with 1589 additions and 1043 deletions

View File

@ -639,3 +639,10 @@ body.td-documentation {
}
}
.td-content {
table code {
background-color: inherit !important;
color: inherit !important;
font-size: inherit !important;
}
}

View File

@ -0,0 +1,74 @@
---
layout: blog
title: "PodSecurityPolicy Deprecation: Past, Present, and Future"
date: 2021-04-06
slug: podsecuritypolicy-deprecation-past-present-and-future
---
**Author:** Tabitha Sable (Kubernetes SIG Security)
PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week. This starts the countdown to its removal, but doesn't change anything else. PodSecurityPolicy will continue to be fully functional for several more releases before being removed completely. In the meantime, we are developing a replacement for PSP that covers key use cases more easily and sustainably.
What are Pod Security Policies? Why did we need them? Why are they going away, and what's next? How does this affect you? These key questions come to mind as we prepare to say goodbye to PSP, so let's walk through them together. We'll start with an overview of how features get removed from Kubernetes.
## What does deprecation mean in Kubernetes?
Whenever a Kubernetes feature is set to go away, our [deprecation policy](/docs/reference/using-api/deprecation-policy/) is our guide. First the feature is marked as deprecated, then after enough time has passed, it can finally be removed.
Kubernetes 1.21 starts the deprecation process for PodSecurityPolicy. As with all feature deprecations, PodSecurityPolicy will continue to be fully functional for several more releases. The current plan is to remove PSP from Kubernetes in the 1.25 release.
Until then, PSP is still PSP. There will be at least a year during which the newest Kubernetes releases will still support PSP, and nearly two years until PSP will pass fully out of all supported Kubernetes versions.
## What is PodSecurityPolicy?
[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) is a built-in [admission controller](/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/) that allows a cluster administrator to control security-sensitive aspects of the Pod specification.
First, one or more PodSecurityPolicy resources are created in a cluster to define the requirements Pods must meet. Then, RBAC rules are created to control which PodSecurityPolicy applies to a given pod. If a pod meets the requirements of its PSP, it will be admitted to the cluster as usual. In some cases, PSP can also modify Pod fields, effectively creating new defaults for those fields. If a Pod does not meet the PSP requirements, it is rejected, and cannot run.
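As a rough sketch of that flow (the resource names here are hypothetical, not from this post), a cluster admin might define a PSP and then grant the `use` verb on it through RBAC:

```yaml
# A PodSecurityPolicy that rejects privileged Pods (hypothetical example).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-nonroot
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - projected
---
# RBAC controls which Pods the policy applies to: subjects bound to this
# ClusterRole (typically service accounts) are allowed to "use" the PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-nonroot-psp-user
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["example-nonroot"]
  verbs: ["use"]
```

A ClusterRoleBinding or RoleBinding to the relevant users or service accounts completes the picture.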
One more important thing to know about PodSecurityPolicy: it's not the same as [PodSecurityContext](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).
A part of the Pod specification, PodSecurityContext (and its per-container counterpart `SecurityContext`) is the collection of fields that specify many of the security-relevant settings for a Pod. The security context dictates to the kubelet and container runtime how the Pod should actually be run. In contrast, the PodSecurityPolicy only constrains (or defaults) the values that may be set on the security context.
The deprecation of PSP does not affect PodSecurityContext in any way.
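For reference, here is a minimal, hypothetical Pod showing where those security context fields live; none of this changes with the PSP deprecation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo        # hypothetical name
spec:
  securityContext:                   # Pod-level PodSecurityContext
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    securityContext:                 # per-container SecurityContext
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
```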
## Why did we need PodSecurityPolicy?
In Kubernetes, we define resources such as Deployments, StatefulSets, and Services that represent the building blocks of software applications. The various controllers inside a Kubernetes cluster react to these resources, creating further Kubernetes resources or configuring some software or hardware to accomplish our goals.
In most Kubernetes clusters, RBAC (Role-Based Access Control) [rules](/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) control access to these resources. `list`, `get`, `create`, `edit`, and `delete` are the sorts of API operations that RBAC cares about, but _RBAC does not consider what settings are being put into the resources it controls_. For example, a Pod can be almost anything from a simple webserver to a privileged command prompt offering full access to the underlying server node and all the data. It's all the same to RBAC: a Pod is a Pod is a Pod.
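For instance, a (hypothetical) Role like the following grants the ability to create Pods in a namespace, but says nothing about what those Pods may contain:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev        # hypothetical namespace
  name: pod-creator
rules:
- apiGroups: [""]       # "" means the core API group
  resources: ["pods"]
  verbs: ["create", "get", "list", "delete"]
```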
To control what sorts of settings are allowed in the resources defined in your cluster, you need Admission Control in addition to RBAC. Since Kubernetes 1.3, PodSecurityPolicy has been the built-in way to do that for security-related Pod fields. Using PodSecurityPolicy, you can prevent “create Pod” from automatically meaning “root on every cluster node,” without needing to deploy additional external admission controllers.
## Why is PodSecurityPolicy going away?
In the years since PodSecurityPolicy was first introduced, we have realized that PSP has some serious usability problems that can't be addressed without making breaking changes.
The way PSPs are applied to Pods has proven confusing to nearly everyone who has attempted to use them. It is easy to accidentally grant broader permissions than intended, and difficult to inspect which PSP(s) apply in a given situation. The “changing Pod defaults” feature can be handy, but it is only supported for certain Pod settings and it's not obvious when they will or will not apply to your Pod. Without a “dry run” or audit mode, it's impractical to retrofit PSP to existing clusters safely, and it's impossible for PSP to ever be enabled by default.
For more information about these and other PSP difficulties, check out SIG Auth's KubeCon NA 2019 Maintainer Track session video: {{< youtube "SFtHRmPuhEw?start=953" youtube-quote-sm >}}
Today, you're not limited to deploying PSP or writing your own custom admission controller. Several external admission controllers are available that incorporate lessons learned from PSP to provide a better user experience. [K-Rail](https://github.com/cruise-automation/k-rail), [Kyverno](https://github.com/kyverno/kyverno/), and [OPA/Gatekeeper](https://github.com/open-policy-agent/gatekeeper/) are all well-known, and each has its fans.
Although there are other good options available now, we believe there is still value in having a built-in admission controller available as a choice for users. With this in mind, we turn toward building what's next, inspired by the lessons learned from PSP.
## What's next?
Kubernetes SIG Security, SIG Auth, and a diverse collection of other community members have been working together for months to ensure that what's coming next is going to be awesome. We have developed a Kubernetes Enhancement Proposal ([KEP 2579](https://github.com/kubernetes/enhancements/issues/2579)) and a prototype for a new feature, currently being called by the temporary name “PSP Replacement Policy.” We are targeting an Alpha release in Kubernetes 1.22.
PSP Replacement Policy starts with the realization that since there is a robust ecosystem of external admission controllers already available, PSP's replacement doesn't need to be all things to all people. Simplicity of deployment and adoption is the key advantage a built-in admission controller has compared to an external webhook, so we have focused on how to best utilize that advantage.
PSP Replacement Policy is designed to be as simple as practically possible while providing enough flexibility to really be useful in production at scale. It has soft rollout features to enable retrofitting it to existing clusters, and is configurable enough that it can eventually be active by default. It can be deactivated partially or entirely, to coexist with external admission controllers for advanced use cases.
## What does this mean for you?
What this all means for you depends on your current PSP situation. If you're already using PSP, there's plenty of time to plan your next move. Please review the PSP Replacement Policy KEP and think about how well it will suit your use case.
If you're making extensive use of the flexibility of PSP with numerous PSPs and complex binding rules, you will likely find the simplicity of PSP Replacement Policy too limiting. Use the next year to evaluate the other admission controller choices in the ecosystem. There are resources available to ease this transition, such as the [Gatekeeper Policy Library](https://github.com/open-policy-agent/gatekeeper-library).
If your use of PSP is relatively simple, with a few policies and straightforward binding to service accounts in each namespace, you will likely find PSP Replacement Policy to be a good match for your needs. Evaluate your PSPs against the Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/) to get a feel for where you'll be able to use the Restricted, Baseline, and Privileged policies. Please follow along with or contribute to the KEP and subsequent development, and try out the Alpha release of PSP Replacement Policy when it becomes available.
If you're just beginning your PSP journey, you will save time and effort by keeping it simple. You can approximate the functionality of PSP Replacement Policy today by using the Pod Security Standards PSPs. If you set the cluster default by binding a Baseline or Restricted policy to the `system:serviceaccounts` group, and then make a more-permissive policy available as needed in certain Namespaces [using ServiceAccount bindings](/docs/concepts/policy/pod-security-policy/#run-another-pod), you will avoid many of the PSP pitfalls and have an easy migration to PSP Replacement Policy. If your needs are much more complex than this, your effort is probably better spent adopting one of the more fully-featured external admission controllers mentioned above.
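As a sketch of that cluster-default binding (assuming a PSP named `baseline`, as in the Pod Security Standards examples; the RBAC object names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-baseline-user
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["baseline"]   # assumes a PSP with this name exists
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-baseline-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-baseline-user
subjects:
# Binding to all service accounts makes this the effective cluster default.
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
```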
We're dedicated to making Kubernetes the best container orchestration tool we can, and sometimes that means we need to remove longstanding features to make space for better things to come. When that happens, the Kubernetes deprecation policy ensures you have plenty of time to plan your next move. In the case of PodSecurityPolicy, several options are available to suit a range of needs and use cases. Start planning ahead now for PSP's eventual removal, and please consider contributing to its replacement! Happy securing!
**Acknowledgment:** It takes a wonderful group to make wonderful software. Thanks are due to everyone who has contributed to the PSP replacement effort, especially (in alphabetical order) Tim Allclair, Ian Coldwater, and Jordan Liggitt. It's been a joy to work with y'all on this.

View File

@ -582,10 +582,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
[CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
- `CustomResourceWebhookConversion`: Enable webhook-based conversion
on resources created from [CustomResourceDefinition](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
troubleshoot a running Pod.
- `DefaultPodTopologySpread`: Enables the use of `PodTopologySpread` scheduling plugin to do
[default spreading](/docs/concepts/workloads/pods/pod-topology-spread-constraints/#internal-default-constraints).
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/cluster-administration/device-plugins/)
- `DevicePlugins`: Enable the [device-plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
based resource provisioning on nodes.
- `DisableAcceleratorUsageMetrics`:
[Disable accelerator metrics collected by the kubelet](/docs/concepts/cluster-administration/system-metrics/#disable-accelerator-metrics).
@ -786,8 +785,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
topology of the cluster. See
[ServiceTopology](/docs/concepts/services-networking/service-topology/)
for more details.
- `SizeMemoryBackedVolumes`: Enables kubelet support to size memory backed volumes.
See [volumes](/docs/concepts/storage/volumes) for more details.
- `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain
Name(FQDN) as the hostname of a pod. See
[Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).

View File

@ -75,7 +75,7 @@ kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
extraArgs:
address: 0.0.0.0
bind-address: 0.0.0.0
config: /home/johndoe/schedconfig.yaml
kubeconfig: /home/johndoe/kubeconfig.yaml
```

View File

@ -49,7 +49,6 @@ You can run an application by creating a Kubernetes Deployment object
The output is similar to this:
user@computer:~/website$ kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 30 Aug 2016 18:11:37 -0700

View File

@ -185,7 +185,7 @@ Based on the settings written in the manifest file, the Deployment
1. Query the list of Pods to verify that the three frontend replicas are running.
```shell
kubectl get pods -l app=guestbook -l tier=frontend
kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend
```
The response should look similar to this:

View File

@ -1,15 +1,12 @@
---
reviewers:
- dchen1107
- liggitt
title: Communication between Node and Master
title: Communication between Node and Control Plane
content_type: concept
weight: 20
---
<!-- overview -->
This document catalogs the communication paths between the Master (the
This document catalogs the communication paths between the control plane (the
apiserver) and the Kubernetes cluster. The intent is to allow users to
customize their installation to harden the network configuration so that
the cluster can be run on an untrusted network (or on fully public IPs on a
@ -20,10 +17,10 @@ cloud provider).
<!-- body -->
## Cluster to Master
## Node to Control Plane
All communication paths from the cluster to the Master terminate at the
apiserver (none of the other Master components are designed to expose
All communication paths from the cluster to the control plane terminate at the
apiserver (none of the other control plane components are designed to expose
remote services). In a typical deployment, the apiserver is configured to listen for
remote connections on a secure HTTPS port (443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled.
One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/)
@ -41,21 +38,21 @@ for automated provisioning of kubelet client certificates.
Pods that wish to connect to the apiserver can do so securely by leveraging a
service account, so that Kubernetes will automatically inject the public root
certificate and a valid bearer token into the pod when it is instantiated.
The `kubernetes` service (in all namespaces) is configured with a virtual IP
The `kubernetes` service (in the `default` namespace) is configured with a virtual IP
address that is redirected (via kube-proxy) to the HTTPS endpoint on the
apiserver.
The core components also communicate with the cluster apiserver over the secure port.
The control plane components also communicate with the cluster apiserver over the secure port.
As a result, the default operating mode for connections from the cluster
(nodes and pods running on the nodes) to the Master is secured by default
(nodes and pods running on the nodes) to the control plane is secured by default
and can run over untrusted and/or public networks.
## Master to Cluster
## Control Plane to Node
There are two primary communication paths from the master (apiserver) to the
cluster. The first is from the apiserver to the kubelet process running on
each Node in the cluster. The second is from the apiserver to any Node, pod,
There are two primary communication paths from the control plane (apiserver) to the nodes.
The first is from the apiserver to the kubelet process running on
each node in the cluster. The second is from the apiserver to any node, pod,
or service through the apiserver's proxy functionality.
### apiserver to kubelet
@ -63,8 +60,8 @@ or service through the apiserver's proxy functionality.
The connections from the apiserver to the kubelet are used for:
* Fetching logs for pods.
  * Attaching (through kubectl) to running pods.
  * Providing the kubelet's port-forwarding functionality.
* Attaching (through kubectl) to running pods.
* Providing the kubelet's port-forwarding functionality.
These connections terminate at the kubelet's HTTPS endpoint. By default,
the apiserver does not verify the kubelet's serving certificate,
@ -94,12 +91,18 @@ These connections **are not currently secure** to be run over untrusted networks
### SSH tunnels
Kubernetes supports SSH tunnels to protect the Master -> cluster communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node
Kubernetes supports SSH tunnels to protect the control plane to node communication paths. In this configuration, the apiserver initiates an SSH tunnel to each node
in the cluster (connecting to the ssh server listening on port 22) and passes
all traffic destined for a kubelet, node, pod, or service through the tunnel.
This tunnel ensures that the traffic is not exposed outside of the network in
which the nodes are running.
SSH tunnels are currently deprecated, so you should not opt to use them unless you know what you are doing. A replacement for this communication channel is being designed.
SSH tunnels are currently deprecated, so you should not opt to use them unless you know what you are doing. The Konnectivity service is a replacement for this communication channel.
### Konnectivity service
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
As a replacement for the SSH tunnels, the Konnectivity service provides a TCP-level proxy for control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes' network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control plane to node traffic goes through these connections.
See the [Konnectivity task](docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster.
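As a hedged sketch of what enabling this involves (the details belong to the linked task page, and the socket path is illustrative), the API server is pointed at an egress selector configuration along these lines:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# Route "cluster" (control plane to node) traffic through the Konnectivity server.
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```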

View File

@ -2,7 +2,7 @@
title: API Group
id: api-group
date: 2019-12-16
full_link: /docs/concepts/overview/kubernetes-api/#api-groups
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
A set of related paths in the Kubernetes API.
@ -18,4 +18,4 @@ A set of related paths in the Kubernetes API.
You can enable or disable each API group by changing the configuration on your API server. You can also disable or enable paths for specific resources. API groups make it easier to extend the Kubernetes API. They are specified in a REST path and in the `apiVersion` field of a serialized object.
- Read more about [API Group](/docs/concepts/overview/kubernetes-api/#api-groups).
- Read more about [API Group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning).

View File

@ -317,7 +317,7 @@ kubectl taint nodes foo dedicated=special-user:NoSchedule
### Resource types
List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):
List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [Kind](/docs/concepts/overview/working-with-objects/kubernetes-objects):
```bash
kubectl api-resources

View File

@ -129,8 +129,8 @@ a new instance.
The name of a Service object must be a valid
[DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names).
For example, suppose you have a set of Pods that each listen on TCP port 9376
and carry a label `app=MyApp`:
For example, suppose you have a set of Pods where each listens on TCP port 9376
and contains a label `app=MyApp`:
-->
## 定义 Service
@ -255,11 +255,11 @@ spec:
```
<!--
Because this Service has no selector, the corresponding Endpoint object is *not*
Because this Service has no selector, the corresponding Endpoints object is not
created automatically. You can manually map the Service to the network address and port
where it's running, by adding an Endpoint object manually:
where it's running, by adding an Endpoints object manually:
-->
由于此服务没有选择算符,因此 *不会* 自动创建相应的 Endpoint 对象。
由于此服务没有选择算符,因此不会自动创建相应的 Endpoint 对象。
你可以通过手动添加 Endpoint 对象,将服务手动映射到运行该服务的网络地址和端口:
```yaml
@ -728,7 +728,7 @@ Services by their DNS name.
For example, if you have a Service called `my-service` in a Kubernetes
namespace `my-ns`, the control plane and the DNS Service acting together
create a DNS record for `my-service.my-ns`. Pods in the `my-ns` namespace
should be able to find it by simply doing a name lookup for `my-service`
should be able to find the service by doing a name lookup for `my-service`
(`my-service.my-ns` would also work).
Pods in other Namespaces must qualify the name as `my-service.my-ns`. These names
@ -736,7 +736,7 @@ will resolve to the cluster IP assigned for the Service.
-->
例如,如果你在 Kubernetes 命名空间 `my-ns` 中有一个名为 `my-service` 的服务,
则控制平面和 DNS 服务共同为 `my-service.my-ns` 创建 DNS 记录。
`my-ns` 命名空间中的 Pod 应该能够通过简单地按名检索 `my-service` 来找到它
`my-ns` 命名空间中的 Pod 应该能够通过按名检索 `my-service` 来找到服务
`my-service.my-ns` 也可以工作)。
其他命名空间中的 Pod 必须将名称限定为 `my-service.my-ns`
@ -794,12 +794,12 @@ DNS 如何实现自动配置,依赖于 Service 是否定义了选择算符。
For headless Services that define selectors, the endpoints controller creates
`Endpoints` records in the API, and modifies the DNS configuration to return
records (addresses) that point directly to the `Pods` backing the `Service`.
A records (IP addresses) that point directly to the `Pods` backing the `Service`.
-->
### 带选择算符的服务 {#with-selectors}
对定义了选择算符的无头服务Endpoint 控制器在 API 中创建了 Endpoints 记录,
并且修改 DNS 配置返回 A 记录(地址),通过这个地址直接到达 `Service` 的后端 Pod 上。
并且修改 DNS 配置返回 A 记录(IP 地址),通过这个地址直接到达 `Service` 的后端 Pod 上。
<!--
### Without selectors
@ -889,8 +889,13 @@ allocates a port from a range specified by `--service-node-port-range` flag (def
Each node proxies that port (the same port number on every Node) into your Service.
Your Service reports the allocated port in its `.spec.ports[*].nodePort` field.
If you want to specify particular IP(s) to proxy the port, you can set the `--nodeport-addresses` flag in kube-proxy to particular IP block(s); this is supported since Kubernetes v1.10.
This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/25) to specify IP address ranges that kube-proxy should consider as local to this node.
If you want to specify particular IP(s) to proxy the port, you can set the
`--nodeport-addresses` flag for kube-proxy or the equivalent `nodePortAddresses`
field of the
[kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
to particular IP block(s).
This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`) to specify IP address ranges that kube-proxy should consider as local to this node.
-->
### NodePort 类型 {#nodeport}
@ -899,8 +904,9 @@ This flag takes a comma-delimited list of IP blocks (e.g. 10.0.0.0/8, 192.0.2.0/
每个节点将那个端口(每个节点上的相同端口号)代理到你的服务中。
你的服务在其 `.spec.ports[*].nodePort` 字段中要求分配的端口。
如果你想指定特定的 IP 代理端口,则可以将 kube-proxy 中的 `--nodeport-addresses`
标志设置为特定的 IP 块。从 Kubernetes v1.10 开始支持此功能。
如果你想指定特定的 IP 代理端口,则可以设置 kube-proxy 中的 `--nodeport-addresses` 参数
或者将[kube-proxy 配置文件](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
中的等效 `nodePortAddresses` 字段设置为特定的 IP 块。
该标志采用逗号分隔的 IP 块列表(例如,`10.0.0.0/8`、`192.0.2.0/25`)来指定
kube-proxy 应该认为是此节点本地的 IP 地址范围。
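For illustration, a minimal sketch of the equivalent kube-proxy configuration file entry (the CIDR is an example value):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Equivalent to --nodeport-addresses: only these ranges are treated as local
# to this node for NodePort traffic.
nodePortAddresses:
- "10.0.0.0/8"
```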
@ -928,10 +934,12 @@ for NodePort use.
<!--
Using a NodePort gives you the freedom to set up your own load balancing solution,
to configure environments that are not fully supported by Kubernetes, or even
to just expose one or more nodes' IPs directly.
to expose one or more nodes' IPs directly.
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
and `.spec.clusterIP:spec.ports[*].port`.
If the `--nodeport-addresses` flag for kube-proxy or the equivalent field
in the kube-proxy configuration file is set, `<NodeIP>` would be filtered node IP(s).
For example:
-->
@ -940,8 +948,9 @@ For example:
甚至直接暴露一个或多个节点的 IP。
需要注意的是Service 能够通过 `<NodeIP>:spec.ports[*].nodePort`
`spec.clusterIp:spec.ports[*].port` 而对外可见
(如果 kube-proxy 的 `--nodeport-addresses` 参数被设置了, <NodeIP>将被过滤 NodeIP。
`spec.clusterIp:spec.ports[*].port` 而对外可见。
如果设置了 kube-proxy 的 `--nodeport-addresses` 参数或 kube-proxy 配置文件中的等效字段,
`<NodeIP>` 将被过滤 NodeIP。
例如:
@ -1284,14 +1293,13 @@ TCP 和 SSL 选择第4层代理ELB 转发流量而不修改报头。
<!--
In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just
be proxied HTTP.
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would be proxied HTTP.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
-->
在上例中,如果服务包含 `80`、`443` 和 `8443` 三个端口, 那么 `443``8443` 将使用 SSL 证书,
`80` 端口将仅仅转发 HTTP 数据包。
`80` 端口将转发 HTTP 数据包。
从 Kubernetes v1.9 起可以使用
[预定义的 AWS SSL 策略](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
@ -1540,7 +1548,7 @@ groups are modified with the following IP rules:
| Rule | Protocol | Port(s) | IpRange(s) | IpRange Description |
|------|----------|---------|------------|---------------------|
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy=Local`) | VPC CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
| Health Check | TCP | NodePort(s) (`.spec.healthCheckNodePort` for `.spec.externalTrafficPolicy = Local`) | Subnet CIDR | kubernetes.io/rule/nlb/health=\<loadBalancerName\> |
| Client Traffic | TCP | NodePort(s) | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/client=\<loadBalancerName\> |
| MTU Discovery | ICMP | 3,4 | `.spec.loadBalancerSourceRanges` (defaults to `0.0.0.0/0`) | kubernetes.io/rule/nlb/mtu=\<loadBalancerName\> |
@ -1794,7 +1802,7 @@ iptables 代理不会隐藏 Kubernetes 集群内部的 IP 地址,但却要求
<!--
## Virtual IP implementation {#the-gory-details-of-virtual-ips}
The previous information should be sufficient for many people who just want to
The previous information should be sufficient for many people who want to
use Services. However, there is a lot going on behind the scenes that may be
worth understanding.
-->
@ -1889,7 +1897,7 @@ rule kicks in, and redirects the packets to the proxy's own port.
The "Service proxy" chooses a backend, and starts proxying traffic from the client to the backend.
This means that Service owners can choose any port they want without risk of
collision. Clients can simply connect to an IP and port, without being aware
collision. Clients can connect to an IP and port, without being aware
of which Pods they are actually accessing.
-->
@ -1904,7 +1912,7 @@ of which Pods they are actually accessing.
"服务代理" 选择一个后端,并将客户端的流量代理到后端上。
这意味着 Service 的所有者能够选择任何他们想使用的端口,而不存在冲突的风险。
客户端可以简单地连接到一个 IP 和端口,而不需要知道实际访问了哪些 Pod。
客户端可以连接到一个 IP 和端口,而不需要知道实际访问了哪些 Pod。
#### iptables
@ -2060,7 +2068,7 @@ You can also use {{< glossary_tooltip term_id="ingress" >}} in place of Service
to expose HTTP / HTTPS Services.
-->
{{< note >}}
你还可以使用 {{< glossary_tooltip text="Ingres" term_id="ingress" >}} 代替
你还可以使用 {{< glossary_tooltip text="Ingress" term_id="ingress" >}} 代替
Service 来公开 HTTP/HTTPS 服务。
{{< /note >}}

View File

@ -1,263 +0,0 @@
---
title: 知名标签Label、注解Annotation和 污点Taint
content_type: concept
weight: 60
---
<!-- overview -->
<!--
Kubernetes reserves all labels and annotations in the kubernetes.io namespace.
This document serves both as a reference to the values and as a coordination point for assigning values.
-->
Kubernetes 保留了 kubernetes.io 命名空间下的所有标签和注解。
本文档提供这些标签、注解和污点的参考,也可用来协调对这类标签、注解和污点的设置。
<!-- body -->
## kubernetes.io/arch
<!--
Example: `kubernetes.io/arch=amd64`
Used on: Node
The Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes.
-->
示例:`kubernetes.io/arch=amd64`
用于Node
Kubelet 用 Go 中定义的 `runtime.GOARCH` 值来填充该标签。这在诸如混用 arm 和 x86 节点的情况下很有用。
## kubernetes.io/os
<!--
Example: `kubernetes.io/os=linux`
Used on: Node
The Kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).
-->
示例:`kubernetes.io/os=linux`
用于Node
Kubelet 用该 Go 中定义的 `runtime.GOOS` 值来填充该标签。这在集群中存在不同操作系统的节点时很有用(例如:混合 Linux 和 Windows 操作系统的节点)。
<!--
## beta.kubernetes.io/arch (deprecated)
-->
## beta.kubernetes.io/arch (已弃用)
<!--
This label has been deprecated. Please use `kubernetes.io/arch` instead.
-->
该标签已被弃用。请使用 `kubernetes.io/arch`
<!--
## beta.kubernetes.io/os (deprecated)
-->
## beta.kubernetes.io/os (已弃用)
<!--
This label has been deprecated. Please use `kubernetes.io/os` instead.
-->
该标签已被弃用。请使用 `kubernetes.io/os`
## kubernetes.io/hostname
<!--
Example: `kubernetes.io/hostname=ip-172-20-114-199.ec2.internal`
Used on: Node
The Kubelet populates this label with the hostname. Note that the hostname can be changed from the "actual" hostname by passing the `--hostname-override` flag to the `kubelet`.
-->
示例:`kubernetes.io/hostname=ip-172-20-114-199.ec2.internal`
用于Node
Kubelet 用 hostname 值来填充该标签。注意:可以通过向 `kubelet` 传入 `--hostname-override`
参数对 “真正的” hostname 进行修改。
<!--
This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.
-->
此标签还用作拓扑层次结构的一部分。有关更多信息,请参见 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
<!--
## beta.kubernetes.io/instance-type (deprecated)
-->
## beta.kubernetes.io/instance-type (已弃用)
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type).
-->
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type)。
{{< /note >}}
## node.kubernetes.io/instance-type {#nodekubernetesioinstance-type}
<!--
Example: `node.kubernetes.io/instance-type=m3.medium`
Used on: Node
The Kubelet populates this with the instance type as defined by the `cloudprovider`.
This will be set only if you are using a `cloudprovider`. This setting is handy
if you want to target certain workloads to certain instance types, but typically you want
to rely on the Kubernetes scheduler to perform resource-based scheduling. You should aim to schedule based on properties rather than on instance types (for example: require a GPU, instead of requiring a `g2.2xlarge`).
-->
示例:`node.kubernetes.io/instance-type=m3.medium`
用于Node
Kubelet 用 `cloudprovider` 中定义的实例类型来填充该标签。未使用 `cloudprovider` 时不会设置该标签。
该标签在想要将某些负载定向到特定实例类型的节点上时会很有用,但通常用户更希望依赖 Kubernetes 调度器来执行基于资源的调度,
所以用户应该致力于基于属性而不是实例类型来进行调度(例如:需要一个 GPU而不是 `g2.2xlarge`)。
## failure-domain.beta.kubernetes.io/region (已弃用) {#failure-domainbetakubernetesioregion}
<!--
See [topology.kubernetes.io/region](#topologykubernetesioregion).
-->
参考 [topology.kubernetes.io/region](#topologykubernetesioregion)。
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/region](#topologykubernetesioregion).
-->
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用 [topology.kubernetes.io/region](#topologykubernetesioregion)。
{{< /note >}}
## failure-domain.beta.kubernetes.io/zone (已弃用) {#failure-domainbetakubernetesiozone}
<!--
See [topology.kubernetes.io/zone](#topologykubernetesiozone).
-->
参考 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/zone](#topologykubernetesiozone).
-->
从 kubernetes 1.17 版本开始,不推荐使用此标签,而推荐使用 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
{{< /note >}}
## topology.kubernetes.io/region {#topologykubernetesioregion}
<!--
Example:
`topology.kubernetes.io/region=us-east-1`
-->
示例:
`topology.kubernetes.io/region=us-east-1`
<!--
See [topology.kubernetes.io/zone](#topologykubernetesiozone).
-->
参考 [topology.kubernetes.io/zone](#topologykubernetesiozone)。
## topology.kubernetes.io/zone {#topologykubernetesiozone}
<!--
Example:
`topology.kubernetes.io/zone=us-east-1c`
On Node: The `kubelet` or the external `cloud-controller-manager` populates this with the information as provided by the `cloudprovider`. This will be set only if you are using a `cloudprovider`. However, you should consider setting this on nodes if it makes sense in your topology.
-->
示例:
`topology.kubernetes.io/zone=us-east-1c`
用于Node、PersistentVolume
对于 Node `Kubelet` 或外部 `cloud-controller-manager``cloudprovider` 中定义的区域信息来填充该标签。
未使用 `cloudprovider` 时不会设置该标签,但如果该标签在你的拓扑中有意义的话,应该考虑设置。
<!--
On PersistentVolume: topology-aware volume provisioners will automatically set node affinity constraints on `PersistentVolumes`.
-->
对于 PersistentVolume可感知拓扑的卷制备程序将自动在 `PersistentVolumes` 上设置节点亲和性约束。
<!--
A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not.
-->
区域代表逻辑故障域。Kubernetes 集群通常跨越多个区域以提高可用性。
虽然区域的确切定义留给基础架构实现,但是区域的常见属性包括区域内的网络延迟非常低,区域内的免费网络流量以及与其他区域的故障独立性。
例如,一个区域内的节点可能共享一个网络交换机,但不同区域内的节点则不应共享。
<!--
A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions, While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
-->
地区代表一个更大的域,由一个或多个区域组成。
Kubernetes 集群跨越多个地域是不常见的,而地域或区域的确切定义则留给基础设施实现,
地域的共同属性包括它们之间的网络延迟比它们内部更高,它们之间的网络流量成本不为零,故障独立于其他区域或域。
例如,一个地域内的节点可能共享电力基础设施(例如 UPS 或发电机),但不同地域的节点通常不会共享。
<!--
Kubernetes makes a few assumptions about the structure of zones and regions:
1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
2) zone names are unique across regions; for example region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b"
-->
Kubernetes 对区域和区域的结构做了一些假设:
1) 地域和区域是分层的:区域是地域的严格子集,任何区域都不能位于两个地域中。
2) 区域名称在地域之间是唯一的;例如,地域 “africa-east-1” 可能包含区域 “africa-east-1a” 和 “africa-east-1b”。
<!--
It should be safe to assume that topology labels do not change.
Even though labels are strictly mutable,
consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated.
-->
标签的使用者可以安全地假设拓扑标签不变。
即使标签是严格可变的,标签的使用者也可以认为节点只能通过被销毁并重建才能从一个区域迁移到另一个区域。
<!--
Kubernetes can use this information in various ways.
For example,
the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single-zone cluster (to reduce the impact of node failures,
see [kubernetes.io/hostname](#kubernetesiohostname)). With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures).
This is achieved via _SelectorSpreadPriority_.
-->
Kubernetes 可以以各种方式使用这些信息。
例如,调度器自动尝试将 ReplicaSet 中的多个 Pods 分布到单区域集群中的多个节点上(为了减少节点故障的影响,
请参阅 [kubernetesiohostname](#kubernetesiohostname))。
对于多区域集群,这种分布行为也被应用到区域上(以减少区域故障的影响)。
这是通过 _SelectorSpreadPriority_ 实现的。
<!--
_SelectorSpreadPriority_ is a best effort placement.
If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements),
this placement might prevent equal spreading of your Pods across zones.
If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-->
_SelectorSpreadPriority_ 是一种尽力而为best-effort的处理方式如果集群中的区域是异构的
(例如:不同区域之间的节点数量、节点类型或 Pod 资源需求不同),可能使得 Pod 在各区域间无法均匀分布。
如有需要,用户可以使用同质的区域(节点数量和类型相同) 来减小 Pod 分布不均的可能性。
<!--
The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-->
由于卷不能跨区域挂载Attach调度器通过 _VolumeZonePredicate_ 预选)也会保证需要特定卷的 Pod 被调度到卷所在的区域中。
<!--
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`,
the scheduler prevents Pods from mounting volumes in a different zone.
If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-->
如果 `PersistentVolumeLabel` 准入控制器不支持自动为 PersistentVolume 打标签,且用户希望防止 pod 跨区域进行卷的挂载,
应考虑手动打标签 (或增加对 `PersistentVolumeLabel` 的支持)。如果用户的基础设施没有这种约束,则不需要为卷添加区域标签。

View File

@ -0,0 +1,423 @@
---
title: 常见的标签、注解和污点
content_type: concept
weight: 20
---
<!--
---
title: Well-Known Labels, Annotations and Taints
content_type: concept
weight: 20
---
-->
<!-- overview -->
<!--
Kubernetes reserves all labels and annotations in the kubernetes.io namespace.
This document serves both as a reference to the values and as a coordination point for assigning values.
-->
Kubernetes 预留命名空间 kubernetes.io 用于所有的标签和注解。
本文档有两个作用,一是作为可用值的参考,二是作为赋值的协调点。
<!-- body -->
## kubernetes.io/arch
示例:`kubernetes.io/arch=amd64`
用于Node
<!--
The Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes.
-->
Kubelet 用 Go 定义的 `runtime.GOARCH` 生成该标签的键值。在混合使用 arm 和 x86 节点的场景中,此键值可以带来极大便利。
## kubernetes.io/os
示例:`kubernetes.io/os=linux`
用于Node
<!--
The Kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).
-->
Kubelet 用 Go 定义的 `runtime.GOOS` 生成该标签的键值。在混合使用异构操作系统场景下(例如:混合使用 Linux 和 Windows 节点),此键值可以带来极大便利。
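For example, a workload can target these labels through a `nodeSelector`; a minimal sketch (Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-amd64-only
spec:
  nodeSelector:
    kubernetes.io/os: linux
    kubernetes.io/arch: amd64
  containers:
  - name: app
    image: registry.example.com/app:1.0
```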
## beta.kubernetes.io/arch (deprecated)
<!--
This label has been deprecated. Please use `kubernetes.io/arch` instead.
-->
此标签已被弃用,取而代之的是 `kubernetes.io/arch`.
## beta.kubernetes.io/os (deprecated)
<!--
This label has been deprecated. Please use `kubernetes.io/os` instead.
-->
此标签已被弃用,取而代之的是 `kubernetes.io/os`.
## kubernetes.io/hostname {#kubernetesiohostname}
示例:`kubernetes.io/hostname=ip-172-20-114-199.ec2.internal`
用于Node
<!--
The Kubelet populates this label with the hostname. Note that the hostname can be changed from the "actual" hostname by passing the `--hostname-override` flag to the `kubelet`.
This label is also used as part of the topology hierarchy. See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.
-->
Kubelet 用主机名生成此标签。需要注意的是主机名可修改,这是把“实际的”主机名通过参数 `--hostname-override` 传给 `kubelet` 实现的。
此标签也可用做拓扑层次的一个部分。更多信息参见[topology.kubernetes.io/zone](#topologykubernetesiozone)。
## beta.kubernetes.io/instance-type (deprecated)
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type).
-->
从 v1.17 起,此标签被弃用,取而代之的是 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type).
{{< /note >}}
## node.kubernetes.io/instance-type {#nodekubernetesioinstance-type}
示例:`node.kubernetes.io/instance-type=m3.medium`
用于Node
<!--
The Kubelet populates this with the instance type as defined by the `cloudprovider`.
This will be set only if you are using a `cloudprovider`. This setting is handy
if you want to target certain workloads to certain instance types, but typically you want
to rely on the Kubernetes scheduler to perform resource-based scheduling. You should aim to schedule based on properties rather than on instance types (for example: require a GPU, instead of requiring a `g2.2xlarge`).
-->
Kubelet 用 `cloudprovider` 定义的实例类型生成此标签。
所以只有用到 `cloudprovider` 的场合,才会设置此标签。
此标签非常有用,特别是在你希望把特定工作负载打到特定实例类型的时候,但更常见的调度方法是基于 Kubernetes 调度器来执行基于资源的调度。
你应该聚焦于使用基于属性的调度方式,而尽量不要依赖实例类型(例如:应该申请一个 GPU而不是 `g2.2xlarge`)。
## failure-domain.beta.kubernetes.io/region (deprecated) {#failure-domainbetakubernetesioregion}
参见 [topology.kubernetes.io/region](#topologykubernetesioregion).
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/region](#topologykubernetesioregion).
-->
从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/region](#topologykubernetesioregion).
{{< /note >}}
## failure-domain.beta.kubernetes.io/zone (deprecated) {#failure-domainbetakubernetesiozone}
参见 [topology.kubernetes.io/zone](#topologykubernetesiozone).
{{< note >}}
<!--
Starting in v1.17, this label is deprecated in favor of [topology.kubernetes.io/zone](#topologykubernetesiozone).
-->
从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/zone](#topologykubernetesiozone).
{{< /note >}}
## topology.kubernetes.io/region {#topologykubernetesioregion}
示例
`topology.kubernetes.io/region=us-east-1`
参见 [topology.kubernetes.io/zone](#topologykubernetesiozone).
## topology.kubernetes.io/zone {#topologykubernetesiozone}
示例:
`topology.kubernetes.io/zone=us-east-1c`
用于Node, PersistentVolume
<!--
On Node: The `kubelet` or the external `cloud-controller-manager` populates this with the information as provided by the `cloudprovider`. This will be set only if you are using a `cloudprovider`. However, you should consider setting this on nodes if it makes sense in your topology.
On PersistentVolume: topology-aware volume provisioners will automatically set node affinity constraints on `PersistentVolumes`.
-->
Node 场景:`kubelet` 或外部的 `cloud-controller-manager``cloudprovider` 提供的信息生成此标签。
所以只有在用到 `cloudprovider` 的场景下,此标签才会被设置。
但如果此标签在你的拓扑中有意义,你也可以考虑在 node 上设置它。
PersistentVolume 场景:拓扑自感知的卷制备程序将在 `PersistentVolumes` 上自动设置节点亲和性限制。
<!--
A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not.
-->
一个可用区zone表示一个逻辑故障域。Kubernetes 集群通常会跨越多个可用区以提高可用性。
虽然可用区的确切定义留给基础设施来决定,但可用区常见的属性包括:可用区内的网络延迟非常低,可用区内的网络通讯没成本,独立于其他可用区的故障域。
例如,一个可用区中的节点可以共享交换机,但不同可用区则不会。
<!--
A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions, While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between them than within them, non-zero cost for network traffic between them, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
-->
一个地区region表示一个更大的域由一个到多个可用区组成。对于 Kubernetes 来说,跨越多个地区的集群很罕见。
虽然可用区和地区的确切定义留给基础设施来决定,但地区的常见属性包括:地区间比地区内更高的网络延迟,地区间网络流量更高的成本,独立于其他可用区或是地区的故障域。例如,一个地区内的节点可以共享电力基础设施(例如 UPS 或发电机),但不同地区内的节点显然不会。
<!--
Kubernetes makes a few assumptions about the structure of zones and regions:
1) regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
2) zone names are unique across regions; for example region "africa-east-1" might be comprised of zones "africa-east-1a" and "africa-east-1b"
-->
Kubernetes 对可用区和地区的结构做出一些假设:
1地区和可用区是层次化的可用区是地区的严格子集任何可用区都不能再 2 个地区中出现。
2可用区名字在地区中独一无二例如地区 "africa-east-1" 可由可用区 "africa-east-1a" 和 "africa-east-1b" 构成。
<!--
It should be safe to assume that topology labels do not change. Even though labels are strictly mutable, consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated.
-->
你可以安全的假定拓扑类的标签是固定不变的。即使标签严格来说是可变的,但使用者依然可以假定一个节点只有通过销毁、重建的方式,才能在可用区间移动。
<!--
Kubernetes can use this information in various ways. For example, the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single-zone cluster (to reduce the impact of node failures, see [kubernetes.io/hostname](#kubernetesiohostname)). With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures). This is achieved via _SelectorSpreadPriority_.
-->
Kubernetes 能以多种方式使用这些信息。
例如,调度器自动地尝试将 ReplicaSet 中的 Pod 打散在单可用区集群的不同节点上(以减少节点故障的影响,参见[kubernetes.io/hostname](#kubernetesiohostname))。
在多可用区的集群中,这类打散分布的行为也会应用到可用区(以减少可用区故障的影响)。
做到这一点靠的是 _SelectorSpreadPriority_
<!--
_SelectorSpreadPriority_ is a best effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogenous zones (same number and types of nodes) to reduce the probability of unequal spreading.
-->
_SelectorSpreadPriority_ 是一种最大能力分配方法best effort。如果集群中的可用区是异构的例如不同数量的节点不同类型的节点或不同的 Pod 资源需求),这种分配方法可以防止平均分配 Pod 到可用区。如果需要,你可以用同构的可用区(相同数量和类型的节点)来减少潜在的不平衡分布。
<!--
The scheduler (through the _VolumeZonePredicate_ predicate) also will ensure that Pods, that claim a given volume, are only placed into the same zone as that volume. Volumes cannot be attached across zones.
-->
调度器(通过 _VolumeZonePredicate_ 的预测)也会保障声明了某卷的 Pod 只能分配到该卷相同的可用区。
卷不支持跨可用区挂载。
<!--
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support for `PersistentVolumeLabel`). With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
-->
如果 `PersistentVolumeLabel` 不支持给 PersistentVolume 自动打标签,你可以考虑手动加标签(或增加 `PersistentVolumeLabel` 支持)。
有了 `PersistentVolumeLabel`,调度器可以防止 Pod 挂载不同可用区中的卷。
如果你的基础架构没有此限制,那你根本就没有必要给卷增加 zone 标签。
## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}
示例: `node.kubernetes.io/windows-build=10.0.17763`
用于Node
<!--
When the kubelet is running on Microsoft Windows, it automatically labels its node to record the version of Windows Server in use.
The label's value is in the format "MajorVersion.MinorVersion.BuildNumber".
-->
当 kubelet 运行于 Microsoft Windows它给节点自动打标签以记录 Windows Server 的版本。
标签值的格式为 "主版本.次版本.构建号"
## service.kubernetes.io/headless {#servicekubernetesioheadless}
示例:`service.kubernetes.io/headless=""`
用于Service
<!--
The control plane adds this label to an Endpoints object when the owning Service is headless.
-->
在无头headless服务的场景下控制平面为 Endpoint 对象添加此标签。
## kubernetes.io/service-name {#kubernetesioservice-name}
示例:`kubernetes.io/service-name="nginx"`
用于Service
<!--
Kubernetes uses this label to differentiate multiple Services. Used currently for `ELB`(Elastic Load Balancer) only.
-->
Kubernetes 用此标签区分多个服务。当前仅用于 `ELB`(Elastic Load Balancer)。
## endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}
示例:`endpointslice.kubernetes.io/managed-by="controller"`
用于EndpointSlices
<!--
The label is used to indicate the controller or entity that manages an EndpointSlice. This label aims to enable different EndpointSlice objects to be managed by different controllers or entities within the same cluster.
-->
此标签用来指向管理 EndpointSlice 的控制器或实体。
此标签的目的是用集群中不同的控制器或实体来管理不同的 EndpointSlice。
## endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror}
示例:`endpointslice.kubernetes.io/skip-mirror="true"`
用于Endpoints
<!--
The label can be set to `"true"` on an Endpoints resource to indicate that the EndpointSliceMirroring controller should not mirror this resource with EndpointSlices.
-->
此标签在 Endpoints 资源上设为 `"true"` 指示 EndpointSliceMirroring 控制器不要镜像此 EndpointSlices 资源。
## service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name}
示例:`service.kubernetes.io/service-proxy-name="foo-bar"`
用于Service
<!--
The kube-proxy has this label for custom proxy, which delegates service control to custom proxy.
-->
kube-proxy 把此标签用于定制代理的场景,即把服务控制委托给定制代理。
## experimental.windows.kubernetes.io/isolation-type
示例:`experimental.windows.kubernetes.io/isolation-type: "hyperv"`
用于Pod
<!--
The annotation is used to run Windows containers with Hyper-V isolation. To use Hyper-V isolation feature and create a Hyper-V isolated container, the kubelet should be started with feature gates HyperVContainer=true and the Pod should include the annotation experimental.windows.kubernetes.io/isolation-type=hyperv.
< note >
You can only set this annotation on Pods that have a single container.
< /note >
-->
此注解用于运行 Hyper-V 隔离的 Windows 容器。
要使用 Hyper-V 隔离特性,并创建 Hyper-V 隔离容器kubelet 应该用特性门控 HyperVContainer=true 来启动,并且 Pod 应该包含注解 `experimental.windows.kubernetes.io/isolation-type=hyperv`
{{< note >}}
你只能在单容器 Pod 上设置此注解。
{{< /note >}}
## ingressclass.kubernetes.io/is-default-class
示例:`ingressclass.kubernetes.io/is-default-class: "true"`
用于IngressClass
<!--
When a single IngressClass resource has this annotation set to `"true"`, new Ingress resource without a class specified will be assigned this default class.
-->
当唯一的 IngressClass 资源将此注解的值设为 "true",没有指定类型的新 Ingress 资源将使用此默认类型。
## kubernetes.io/ingress.class (deprecated)
{{< note >}}
<!--
Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassName`.
-->
从 v1.18 开始,此注解被弃用,取而代之的是 `spec.ingressClassName`
{{< /note >}}
## alpha.kubernetes.io/provided-node-ip
示例:`alpha.kubernetes.io/provided-node-ip: "10.0.0.1"`
用于Node
<!--
The kubelet can set this annotation on a Node to denote its configured IPv4 address.
When kubelet is started with the "external" cloud provider, it sets this annotation on the Node to denote an IP address set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager.
**The taints listed below are always used on Nodes**
-->
kubelet 在 Node 上设置此注解,表示它所配置的 IPv4 地址。
当 kubelet 由外部的云供应商启动时,在 Node 上设置此注解,表示由命令行标记(`--node-ip`)设置的 IP 地址。
cloud-controller-manager 向云供应商验证此 IP 是否有效。
**以下列出的污点只能用于 Node**
## node.kubernetes.io/not-ready
示例:`node.kubernetes.io/not-ready:NoExecute`
<!--
The node controller detects whether a node is ready by monitoring its health and adds or removes this taint accordingly.
-->
节点控制器通过健康监控来检测节点是否就绪,并据此添加/删除此污点。
## node.kubernetes.io/unreachable
示例:`node.kubernetes.io/unreachable:NoExecute`
<!--
The node controller adds the taint to a node corresponding to the [NodeCondition](/docs/concepts/architecture/nodes/#condition) `Ready` being `Unknown`.
-->
如果 [NodeCondition](/docs/concepts/architecture/nodes/#condition) 的 `Ready` 键值为 `Unknown`,节点控制器将添加污点到 node。
## node.kubernetes.io/unschedulable
示例:`node.kubernetes.io/unschedulable:NoSchedule`
<!--
The taint will be added to a node when initializing the node to avoid race condition.
-->
当初始化节点时,添加此污点,来避免竟态的发生。
## node.kubernetes.io/memory-pressure
示例:`node.kubernetes.io/memory-pressure:NoSchedule`
<!--
The kubelet detects memory pressure based on `memory.available` and `allocatableMemory.available` observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed.
-->
kubelet 依据节点上观测到的 `memory.available``allocatableMemory.available` 来检测内存压力。
用观测值对比 kubelet 设置的阈值,以判断节点状态和污点是否可以被添加/移除。
## node.kubernetes.io/disk-pressure
示例:`node.kubernetes.io/disk-pressure:NoSchedule`
<!--
The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`, `nodefs.available` and `nodefs.inodesFree`(Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine if the Node condition and taint should be added/removed.
-->
kubelet 依据节点上观测到的 `imagefs.available`、`imagefs.inodesFree`、`nodefs.available` 和 `nodefs.inodesFree`(仅 Linux) 来判断磁盘压力。
用观测值对比 kubelet 设置的阈值,以确定节点状态和污点是否可以被添加/移除。
## node.kubernetes.io/network-unavailable
示例:`node.kubernetes.io/network-unavailable:NoSchedule`
<!--
This is initially set by the kubelet when the cloud provider used indicates a requirement for additional network configuration. Only when the route on the cloud is configured properly will the taint be removed by the cloud provider.
-->
此污点初始由 kubelet 设置,用于所使用的云供应商指示需要额外网络配置的场合。
仅当云上的路由配置妥当后,云供应商才会移除此污点。
## node.kubernetes.io/pid-pressure
示例:`node.kubernetes.io/pid-pressure:NoSchedule`
<!--
The kubelet checks D-value of the size of `/proc/sys/kernel/pid_max` and the PIDs consumed by Kubernetes on a node to get the number of available PIDs that referred to as the `pid.available` metric. The metric is then compared to the corresponding threshold that can be set on the kubelet to determine if the node condition and taint should be added/removed.
-->
kubelet 检查 `/proc/sys/kernel/pid_max` 尺寸的 D 值D-value以及节点上 Kubernetes 消耗掉的 PID以获取可用的 PID 数量,此数量可通过指标 `pid.available` 得到。
然后用此指标对比 kubelet 设置的阈值,以确定节点状态和污点是否可以被添加/移除。
## node.cloudprovider.kubernetes.io/uninitialized
示例:`node.cloudprovider.kubernetes.io/uninitialized:NoSchedule`
<!--
Sets this taint on a node to mark it as unusable, when kubelet is started with the "external" cloud provider, until a controller from the cloud-controller-manager initializes this node, and then removes the taint.
-->
当 kubelet 由外部云供应商启动时,在节点上设置此污点以标记节点不可用,直到一个 cloud-controller-manager 控制器初始化此节点之后,才会移除此污点。
## node.cloudprovider.kubernetes.io/shutdown
示例:`node.cloudprovider.kubernetes.io/shutdown:NoSchedule`
<!--
If a Node is in a cloud provider specified shutdown state, the Node gets tainted accordingly with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`.
-->
如果一个云供应商的节点被指定为关机状态,节点被打上污点 `node.cloudprovider.kubernetes.io/shutdown`,污点的影响为 `NoSchedule`

View File

@ -377,12 +377,12 @@ exclude=kubelet kubeadm kubectl
EOF
# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
sudo systemctl enable --now kubelet
```
<!--
@ -402,8 +402,7 @@ systemctl enable --now kubelet
You have to do this until SELinux support is improved in the kubelet.
- You can leave SELinux enabled provided you know how to configure it, but this also means that some
settings are not supported by kubeadm.
- If you know how to configure SELinux you can keep it enabled, but doing so may require settings that are not supported by kubeadm
{{% /tab %}}
{{% tab name="无包管理器的情况" %}}
@ -415,7 +414,7 @@ Install CNI plugins (required for most pod network):
```bash
CNI_VERSION="v0.8.2"
mkdir -p /opt/cni/bin
sudo mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
```
@ -457,16 +456,12 @@ Install `kubeadm`, `kubelet`, `kubectl` and add a `kubelet` systemd service:
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
sudo chmod +x {kubeadm,kubelet,kubectl}
RELEASE_VERSION="v0.4.0"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
<!--
@ -522,13 +517,14 @@ cgroupDriver: <value>
```
<!--
For further details, please read [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
For further details, please read [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file)
and the [`KubeletConfiguration` reference](/docs/reference/config-api/kubelet-config.v1beta1/)
Please mind, that you **only** have to do that if the cgroup driver of your CRI
is not `cgroupfs`, because that is the default value in the kubelet already.
-->
For further details, please refer to
[Using kubeadm init with a configuration file](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
[Using kubeadm init with a configuration file](/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file) and the [`KubeletConfiguration` reference](/docs/reference/config-api/kubelet-config.v1beta1/)
Please note that you only need to do this if the cgroup driver of your CRI is not `cgroupfs`,
because that is already the default value in the kubelet.