Merge branch 'kubernetes:main' into hi-running-multiple-zone
commit 49365c25ea
@@ -1,17 +1,16 @@
aliases:
  sig-docs-blog-owners: # Approvers for blog content
    - onlydole
    - mrbobbytables
    - sftim
    - nate-double-u
    - onlydole
    - sftim
  sig-docs-blog-reviewers: # Reviewers for blog content
    - mrbobbytables
    - nate-double-u
    - onlydole
    - sftim
    - nate-double-u
  sig-docs-localization-owners: # Admins for localization content
    - a-mccarthy
    - bradtopol
    - divya-mohan0209
    - jimangel
    - kbhawkey
@@ -33,7 +32,6 @@ aliases:
    - bradtopol
    - divya-mohan0209
    - jimangel
    - jlbutler
    - kbhawkey
    - krol3
    - natalisucks
@@ -44,7 +42,6 @@ aliases:
    - tengqm
  sig-docs-en-reviews: # PR reviews for English content
    - bradtopol
    - daminisatya
    - divya-mohan0209
    - jimangel
    - kbhawkey
@@ -52,7 +49,6 @@ aliases:
    - natalisucks
    - nate-double-u
    - onlydole
    - rajeshdeshpande02
    - reylejano
    - sftim
    - shannonxtreme
@@ -11,5 +11,9 @@
# INSTRUCTIONS AT https://kubernetes.io/security/

divya-mohan0209
jimangel
reylejano
sftim
tengqm
onlydole
kbhawkey
natalisucks
@@ -4,9 +4,12 @@ title: "PodSecurityPolicy Deprecation: Past, Present, and Future"
date: 2021-04-06
slug: podsecuritypolicy-deprecation-past-present-and-future
---

**Author:** Tabitha Sable (Kubernetes SIG Security)

{{% pageinfo color="primary" %}}
**Update:** *With the release of Kubernetes v1.25, PodSecurityPolicy has been removed. You can read more information about the removal of PodSecurityPolicy in the [Kubernetes 1.25 release notes](/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes).*
{{% /pageinfo %}}

PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week. This starts the countdown to its removal, but doesn’t change anything else. PodSecurityPolicy will continue to be fully functional for several more releases before being removed completely. In the meantime, we are developing a replacement for PSP that covers key use cases more easily and sustainably.

What are Pod Security Policies? Why did we need them? Why are they going away, and what’s next? How does this affect you? These key questions come to mind as we prepare to say goodbye to PSP, so let’s walk through them together. We’ll start with an overview of how features get removed from Kubernetes.
@@ -39,6 +39,18 @@ Resource quotas work like this:
See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
for an example of how to avoid this problem.

{{< note >}}
- For `cpu` and `memory` resources, ResourceQuotas enforce that **every**
  (new) pod in that namespace sets a limit for that resource.
  If you enforce a resource quota in a namespace for either `cpu` or `memory`,
  you, and other clients, **must** specify either `requests` or `limits` for that resource,
  for every new Pod you submit. If you don't, the control plane may reject admission
  for that Pod.
- For other resources: ResourceQuota works, but ignores pods in the namespace
  that do not set a limit or request for that resource. This means that you can create
  a new pod without a limit or request for ephemeral storage, even if the resource
  quota limits the ephemeral storage of this namespace.
  You can use a [LimitRange](/docs/concepts/policy/limit-range/) to automatically set
  a default request for these resources.
{{< /note >}}

The name of a ResourceQuota object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
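The quota behavior described in the note can be illustrated with a short manifest. This is a minimal sketch, assuming an illustrative namespace `example-team` and quota name `mem-cpu-quota`:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota      # illustrative name
  namespace: example-team  # illustrative namespace
spec:
  hard:
    requests.cpu: "1"      # sum of all CPU requests in the namespace
    requests.memory: 1Gi   # sum of all memory requests
    limits.cpu: "2"        # sum of all CPU limits
    limits.memory: 2Gi     # sum of all memory limits
```

With this quota in place, every new Pod in `example-team` must specify CPU and memory requests and limits, or the control plane may reject it at admission time.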
@@ -268,7 +268,7 @@ The certificate value is in Base64-encoded format under `status.certificate`.

Export the issued certificate from the CertificateSigningRequest.

```
```shell
kubectl get csr myuser -o jsonpath='{.status.certificate}'| base64 -d > myuser.crt
```
@@ -295,20 +295,20 @@ The last step is to add this user into the kubeconfig file.

First, you need to add new credentials:

```
```shell
kubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true

```

Then, you need to add the context:

```
```shell
kubectl config set-context myuser --cluster=kubernetes --user=myuser
```

To test it, change the context to `myuser`:

```
```shell
kubectl config use-context myuser
```
@@ -266,6 +266,7 @@ kubectl expose rc nginx --port=80 --target-port=8000
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -

kubectl label pods my-pod new-label=awesome                  # Add a Label
kubectl label pods my-pod new-label-                         # Remove a label
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq   # Add an annotation
kubectl autoscale deployment foo --min=2 --max=10            # Auto scale a deployment "foo"
```
@@ -47,7 +47,30 @@ horizontal pod autoscaling.

## How does a HorizontalPodAutoscaler work?

{{< figure src="/images/docs/horizontal-pod-autoscaler.svg" caption="HorizontalPodAutoscaler controls the scale of a Deployment and its ReplicaSet" class="diagram-medium">}}
{{< mermaid >}}
graph BT

hpa[Horizontal Pod Autoscaler] --> scale[Scale]

subgraph rc[RC / Deployment]
    scale
end

scale -.-> pod1[Pod 1]
scale -.-> pod2[Pod 2]
scale -.-> pod3[Pod N]

classDef hpa fill:#D5A6BD,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
classDef rc fill:#F9CB9C,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
classDef scale fill:#B6D7A8,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
classDef pod fill:#9FC5E8,stroke:#1E1E1D,stroke-width:1px,color:#1E1E1D;
class hpa hpa;
class rc rc;
class scale scale;
class pod1,pod2,pod3 pod
{{< /mermaid >}}

Figure 1. HorizontalPodAutoscaler controls the scale of a Deployment and its ReplicaSet

Kubernetes implements horizontal pod autoscaling as a control loop that runs intermittently
(it is not a continuous process). The interval is set by the
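The relationship shown in the diagram — an HPA pointing at a scalable workload via `scaleTargetRef` — can be sketched as a minimal manifest (the Deployment name `php-apache` is illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache     # illustrative name
spec:
  scaleTargetRef:      # the workload whose /scale subresource is controlled
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache   # illustrative Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU use exceeds 50%
```

The HPA controller only ever talks to the `scale` subresource, which is why the same mechanism works for Deployments, ReplicaSets, and StatefulSets alike.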
@@ -242,7 +242,7 @@ Au lieu de cela, un volume existant est redimensionné.

#### Redimensionnement de volume CSI

{{< feature-state for_k8s_version="v1.16" state="beta" >}}
{{< feature-state for_k8s_version="v1.24" state="stable" >}}

La prise en charge du redimensionnement des volumes CSI est activée par défaut, mais elle nécessite également un pilote CSI spécifique pour prendre en charge le redimensionnement des volumes.
Reportez-vous à la documentation du pilote CSI spécifique pour plus d'informations.
@@ -0,0 +1,22 @@
---
title: Interface de Armazenamento de Contêiner
id: csi
date: 2018-06-25
full_link: /pt-br/docs/concepts/storage/volumes/#csi
short_description: >
  A Interface de Armazenamento de Contêiner (_Container Storage Interface_, CSI) define um padrão de interface para expor sistemas de armazenamento a contêineres.

aka:
tags:
- storage
---
A Interface de Armazenamento de Contêiner (_Container Storage Interface_, CSI) define um padrão de interface para expor sistemas de armazenamento a contêineres.

<!--more-->

O CSI permite que os fornecedores criem plugins personalizados de armazenamento para o Kubernetes sem adicioná-los ao repositório Kubernetes (plugins fora da árvore).
Para usar um driver CSI de um provedor de armazenamento, você deve primeiro [instalá-lo no seu cluster](https://kubernetes-csi.github.io/docs/deploying.html).
Você poderá então criar uma {{< glossary_tooltip text="Classe de Armazenamento" term_id="storage-class" >}} que use esse driver CSI.

* [CSI na documentação do Kubernetes](/pt-br/docs/concepts/storage/volumes/#csi)
* [Lista de drivers CSI disponíveis](https://kubernetes-csi.github.io/docs/drivers.html)
@@ -17,7 +17,7 @@ constrain resources that are allocated to processes.

The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
underlying container runtime need to interface with cgroups to enforce
[resource mangement for pods and containers](/docs/concepts/configuration/manage-resources-containers/) which
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/) which
includes cpu/memory requests and limits for containerized workloads.

There are two versions of cgroups in Linux: cgroup v1 and cgroup v2. cgroup v2 is
@@ -204,7 +204,7 @@ cgroup v2 使用一个与 cgroup v1 不同的 API,因此如果有任何应用
<!--
## Identify the cgroup version on Linux Nodes {#check-cgroup-version}

The cgroup version depends on on the Linux distribution being used and the
The cgroup version depends on the Linux distribution being used and the
default cgroup version configured on the OS. To check which cgroup version your
distribution uses, run the `stat -fc %T /sys/fs/cgroup/` command on
the node:
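As a quick sketch of the check described in this hunk, run the command on a Linux node:

```shell
# Print the filesystem type mounted at /sys/fs/cgroup.
# "cgroup2fs" means the node uses cgroup v2;
# "tmpfs" (or "cgroupfs") indicates cgroup v1.
stat -fc %T /sys/fs/cgroup/
```

No cluster access is needed for this check; it inspects only the node's own mount table.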
@@ -4,6 +4,12 @@ content_type: concept
weight: 30
---

<!--
title: Controllers
content_type: concept
weight: 30
-->

<!-- overview -->
<!--
In robotics and automation, a _control loop_ is
@@ -127,7 +133,7 @@ indicate that your room is now at the temperature you set).
<!--
### Direct control

By contrast with Job, some controllers need to make changes to
In contrast with Job, some controllers need to make changes to
things outside of your cluster.

For example, if you use a control loop to make sure there
@@ -158,11 +164,11 @@ that horizontally scales the nodes in your cluster.)
可以水平地扩展集群中的节点。)

<!--
The important point here is that the controller makes some change to bring about
your desired state, and then reports current state back to your cluster's API server.
The important point here is that the controller makes some changes to bring about
your desired state, and then reports the current state back to your cluster's API server.
Other control loops can observe that reported data and take their own actions.
-->
这里,很重要的一点是,控制器做出了一些变更以使得事物更接近你的期望状态,
这里的重点是,控制器做出了一些变更以使得事物更接近你的期望状态,
之后将当前状态报告给集群的 API 服务器。
其他控制回路可以观测到所汇报的数据的这种变化并采取其各自的行动。
@@ -64,7 +64,7 @@ and doesn't register as a node.
<!--
## Upgrading

When upgrading Kubernetes, then the kubelet tries to automatically select the
When upgrading Kubernetes, the kubelet tries to automatically select the
latest CRI version on restart of the component. If that fails, then the fallback
will take place as mentioned above. If a gRPC re-dial was required because the
container runtime has been upgraded, then the container runtime must also
@@ -148,7 +148,7 @@ Containers are a good way to bundle and run your applications. In a production e
如果此行为交由给系统处理,是不是会更容易一些?

<!--
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example: Kubernetes can easily manage a canary deployment for your system.
-->
这就是 Kubernetes 要来做的事情!
Kubernetes 为你提供了一个可弹性运行分布式系统的框架。
@@ -166,7 +166,7 @@ Kubernetes can expose a container using the DNS name or using their own IP addre
-->
* **服务发现和负载均衡**

  Kubernetes 可以使用 DNS 名称或自己的 IP 地址来曝露容器。
  Kubernetes 可以使用 DNS 名称或自己的 IP 地址来暴露容器。
  如果进入容器的流量很大,
  Kubernetes 可以负载均衡并分配网络流量,从而使部署稳定。
@@ -249,8 +249,7 @@ Kubernetes:
* 不提供应用程序级别的服务作为内置服务,例如中间件(例如消息中间件)、
  数据处理框架(例如 Spark)、数据库(例如 MySQL)、缓存、集群存储系统
  (例如 Ceph)。这样的组件可以在 Kubernetes 上运行,并且/或者可以由运行在
  Kubernetes 上的应用程序通过可移植机制
  (例如[开放服务代理](https://openservicebrokerapi.org/))来访问。
  Kubernetes 上的应用程序通过可移植机制(例如[开放服务代理](https://openservicebrokerapi.org/))来访问。
<!--
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
@@ -265,8 +264,8 @@ Kubernetes:
* 此外,Kubernetes 不仅仅是一个编排系统,实际上它消除了编排的需要。
  编排的技术定义是执行已定义的工作流程:首先执行 A,然后执行 B,再执行 C。
  而 Kubernetes 包含了一组独立可组合的控制过程,可以连续地将当前状态驱动到所提供的预期状态。
  你不需要在乎如何从 A 移动到 C,也不需要集中控制,这使得系统更易于使用
  且功能更强大、系统更健壮,更为弹性和可扩展。
  你不需要在乎如何从 A 移动到 C,也不需要集中控制,这使得系统更易于使用且功能更强大、
  系统更健壮,更为弹性和可扩展。

## {{% heading "whatsnext" %}}
@@ -384,13 +384,15 @@ following Pod-specific DNS policies. These policies are specified in the
  See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers)
  for more details.
- "`ClusterFirst`": Any DNS query that does not match the configured cluster
  domain suffix, such as "`www.kubernetes.io`", is forwarded to the upstream
  nameserver inherited from the node. Cluster administrators may have extra
  domain suffix, such as "`www.kubernetes.io`", is forwarded to an upstream
  nameserver by the DNS server. Cluster administrators may have extra
  stub-domain and upstream DNS servers configured.
  See [related discussion](/docs/tasks/administer-cluster/dns-custom-nameservers)
  for details on how DNS queries are handled in those cases.
- "`ClusterFirstWithHostNet`": For Pods running with hostNetwork, you should
  explicitly set its DNS policy "`ClusterFirstWithHostNet`".
  explicitly set its DNS policy to "`ClusterFirstWithHostNet`". Otherwise, Pods
  running with hostNetwork and `"ClusterFirst"` will fallback to the behavior
  of the `"Default"` policy.
  - Note: This is not supported on Windows. See [below](#dns-windows) for details.
- "`None`": It allows a Pod to ignore DNS settings from the Kubernetes
  environment. All DNS settings are supposed to be provided using the
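The hostNetwork case described above can be sketched in a Pod manifest; the Pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-hostnet          # illustrative name
spec:
  hostNetwork: true
  # Without this, a hostNetwork Pod with "ClusterFirst" falls back
  # to the "Default" (node-inherited) DNS behavior.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: busybox
      image: busybox:1.36        # illustrative image
      command: ["sleep", "3600"]
```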
@@ -405,11 +407,12 @@ DNS 策略可以逐个 Pod 来设定。目前 Kubernetes 支持以下特定 Pod
- "`Default`": Pod 从运行所在的节点继承名称解析配置。
  参考[相关讨论](/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers)获取更多信息。
- "`ClusterFirst`": 与配置的集群域后缀不匹配的任何 DNS 查询(例如 "www.kubernetes.io")
  都将转发到从节点继承的上游名称服务器。集群管理员可能配置了额外的存根域和上游 DNS 服务器。
  都会由 DNS 服务器转发到上游名称服务器。集群管理员可能配置了额外的存根域和上游 DNS 服务器。
  参阅[相关讨论](/zh-cn/docs/tasks/administer-cluster/dns-custom-nameservers)
  了解在这些场景中如何处理 DNS 查询的信息。
- "`ClusterFirstWithHostNet`":对于以 hostNetwork 方式运行的 Pod,应显式设置其 DNS 策略
  "`ClusterFirstWithHostNet`"。
- "`ClusterFirstWithHostNet`": 对于以 hostNetwork 方式运行的 Pod,应将其 DNS 策略显式设置为
  "`ClusterFirstWithHostNet`"。否则,以 hostNetwork 方式和 `"ClusterFirst"` 策略运行的
  Pod 将会做出回退至 `"Default"` 策略的行为。
  - 注意:这在 Windows 上不支持。有关详细信息,请参见[下文](#dns-windows)。
- "`None`": 此设置允许 Pod 忽略 Kubernetes 环境中的 DNS 设置。Pod 会使用其 `dnsConfig`
  字段所提供的 DNS 设置。
@@ -1,7 +1,7 @@
---
title: 使用拓扑键实现拓扑感知的流量路由
content_type: concept
weight: 10
weight: 150
---
<!--
reviewers:
@@ -9,7 +9,7 @@ reviewers:
- imroc
title: Topology-aware traffic routing with topology keys
content_type: concept
weight: 10
weight: 150
-->

<!-- overview -->
@@ -52,14 +52,12 @@ to endpoints within the same zone.
By setting `topologyKeys` on a Service, you're able to define a policy for routing
traffic based upon the Node labels for the originating and destination Nodes.
-->
## 拓扑感知的流量路由
## 拓扑感知的流量路由 {#topology-aware-traffic-routing}

默认情况下,发往 `ClusterIP` 或者 `NodePort` 服务的流量可能会被路由到
服务的任一后端的地址。Kubernetes 1.7 允许将“外部”流量路由到接收到流量的
节点上的 Pod。对于 `ClusterIP` 服务,无法完成同节点优先的路由,你也无法
配置集群优选路由到同一可用区中的端点。
通过在 Service 上配置 `topologyKeys`,你可以基于来源节点和目标节点的
标签来定义流量路由策略。
默认情况下,发往 `ClusterIP` 或者 `NodePort` 服务的流量可能会被路由到服务的任一后端的地址。
Kubernetes 1.7 允许将“外部”流量路由到接收到流量的节点上的 Pod。对于 `ClusterIP`
服务,无法完成同节点优先的路由,你也无法配置集群优选路由到同一可用区中的端点。
通过在 Service 上配置 `topologyKeys`,你可以基于来源节点和目标节点的标签来定义流量路由策略。

<!--
The label matching between the source and destination lets you, as a cluster
@@ -76,8 +74,8 @@ same top-of-rack switch for the lowest latency.
来定义节点集合。你可以基于符合自身需求的任何度量值来定义标签。
例如,在公有云上,你可能更偏向于把流量控制在同一区内,因为区间流量是有费用成本的,
而区内流量则没有。
其它常见需求还包括把流量路由到由 `DaemonSet` 管理的本地 Pod 上,或者
把将流量转发到连接在同一机架交换机的节点上,以获得低延时。
其它常见需求还包括把流量路由到由 `DaemonSet` 管理的本地 Pod
上,或者把将流量转发到连接在同一机架交换机的节点上,以获得低延时。

<!--
## Using Service Topology
@@ -183,7 +181,7 @@ traffic as follows.

The following are common examples of using the Service Topology feature.
-->
## 示例
## 示例 {#examples}

以下是使用服务拓扑功能的常见示例。
@@ -192,7 +190,7 @@ The following are common examples of using the Service Topology feature.

A Service that only routes to node local endpoints. If no endpoints exist on the node, traffic is dropped:
-->
### 仅节点本地端点
### 仅节点本地端点 {#only-node-local-endpoints}

仅路由到节点本地端点的一种服务。如果节点上不存在端点,流量则被丢弃:
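A sketch of the node-local-only Service this example describes; the Service name, selector, and ports are illustrative, and `topologyKeys` is the deprecated alpha field this page covers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # illustrative name
spec:
  selector:
    app: my-app      # illustrative selector
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"   # node-local only; no "*" fallback, so
                                 # traffic is dropped if no local endpoint exists
```

Appending `"*"` as the last entry in `topologyKeys` is what turns a strict policy like this into one with a cluster-wide fallback.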
@@ -217,7 +215,7 @@ spec:

A Service that prefers node local Endpoints but falls back to cluster wide endpoints if node local endpoints do not exist:
-->
### 首选节点本地端点
### 首选节点本地端点 {#prefer-node-local-endpoints}

首选节点本地端点,如果节点本地端点不存在,则回退到集群范围端点的一种服务:
@@ -243,7 +241,7 @@ spec:

A Service that prefers zonal then regional endpoints. If no endpoints exist in either, traffic is dropped.
-->
### 仅地域或区域端点
### 仅地域或区域端点 {#only-zonal-or-regional-endpoints}

首选地域端点而不是区域端点的一种服务。如果以上两种范围内均不存在端点,流量则被丢弃。
@@ -270,10 +268,9 @@ spec:

A Service that prefers node local, zonal, then regional endpoints but falls back to cluster wide endpoints.
-->
### 优先选择节点本地端点、地域端点,然后是区域端点
### 优先选择节点本地端点、地域端点,然后是区域端点 {#prefer-node-local-zonal-then-regional-endpoints}

优先选择节点本地端点,地域端点,然后是区域端点,最后才是集群范围端点的
一种服务。
优先选择节点本地端点,地域端点,然后是区域端点,最后才是集群范围端点的一种服务。

```yaml
apiVersion: v1
@@ -294,12 +291,11 @@ spec:
    - "*"
```

## {{% heading "whatsnext" %}}
<!--
* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)
* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/)
* Read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
-->
* 阅读关于[启用服务拓扑](/zh-cn/docs/tasks/administer-cluster/enabling-service-topology/)
* 阅读[用服务连接应用程序](/zh-cn/docs/concepts/services-networking/connect-applications-service/)
* 阅读关于[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/)
* 阅读[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)
@@ -174,7 +174,8 @@ For a reference to old feature gates that are removed, please refer to
| `JobReadyPods` | `false` | Alpha | 1.23 | 1.23 |
| `JobReadyPods` | `true` | Beta | 1.24 | |
| `JobTrackingWithFinalizers` | `false` | Alpha | 1.22 | 1.22 |
| `JobTrackingWithFinalizers` | `true` | Beta | 1.23 | |
| `JobTrackingWithFinalizers` | `false` | Beta | 1.23 | 1.24 |
| `JobTrackingWithFinalizers` | `true` | Beta | 1.25 | |
| `KubeletCredentialProviders` | `false` | Alpha | 1.20 | 1.23 |
| `KubeletCredentialProviders` | `true` | Beta | 1.24 | |
| `KubeletInUserNamespace` | `false` | Alpha | 1.22 | |
@@ -34,5 +34,7 @@ tags:

<!--
Containers decouple applications from underlying host infrastructure to make deployment easier in different cloud or OS environments, and for easier scaling.
The applications that run inside containers are called containerized applications. The process of bundling these applications and their dependencies into a container image is called containerization.
-->
容器使应用和底层的主机基础设施解耦,降低了应用在不同云环境或者操作系统上的部署难度,便于应用扩展。
容器使应用和底层的主机基础设施解耦,降低了应用在不同云环境或者操作系统上的部署难度,便于应用扩展。
在容器内运行的应用程序称为容器化应用程序。将这些应用程序及其依赖项捆绑到容器映像中的过程称为容器化。
@@ -31,6 +31,8 @@ A {{< glossary_tooltip term_id="container" >}} type that you can temporarily run

<!--
If you want to investigate a Pod that's running with problems, you can add an ephemeral container to that Pod and carry out diagnostics. Ephemeral containers have no resource or scheduling guarantees, and you should not use them to run any part of the workload itself.
Ephemeral containers are not supported by {{< glossary_tooltip text="static pods" term_id="static-pod" >}}.
-->
如果想要调查运行中有问题的 Pod,可以向该 Pod 添加一个临时容器(Ephemeral Container)并进行诊断。
临时容器没有资源或调度保证,因此不应该使用它们来运行任何部分的工作负荷本身。
{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}} 不支持临时容器。
@@ -1,86 +0,0 @@
---
title: 开启服务拓扑
content_type: task
min-kubernetes-server-version: 1.17
---
<!--
reviewers:
- andrewsykim
- johnbelamaric
- imroc
title: Enabling Service Topology
content_type: task
min-kubernetes-server-version: 1.17
-->

<!-- overview -->
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

<!--
This feature, specifically the alpha `topologyKeys` field, is deprecated since
Kubernetes v1.21.
[Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/),
introduced in Kubernetes v1.21, provide similar functionality.
-->
这项功能,特别是 Alpha 状态的 `topologyKeys` 字段,在 Kubernetes v1.21 中已经弃用。
在 Kubernetes v1.21
加入的[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/)提供了类似的功能。

<!--
_Service Topology_ enables a {{< glossary_tooltip term_id="service">}} to route traffic based upon the Node
topology of the cluster. For example, a service can specify that traffic be
preferentially routed to endpoints that are on the same Node as the client, or
in the same availability zone.
-->
**服务拓扑(Service Topology)** 使 {{< glossary_tooltip term_id="service">}}
能够根据集群中的 Node 拓扑来路由流量。
比如,服务可以指定将流量优先路由到与客户端位于同一节点或者同一可用区域的端点上。

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!--
The following prerequisites are needed in order to enable topology aware service
routing:

* Kubernetes v1.17 or later
* Configure {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} to run in iptables mode or IPVS mode
-->
需要满足下列先决条件,才能启用拓扑感知的服务路由:

* Kubernetes 1.17 或更高版本
* 配置 {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} 以 iptables 或者 IPVS 模式运行

<!-- steps -->

<!--
## Enable Service Topology
-->
## 启用服务拓扑 {#enable-service-topology}

{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}

<!--
To enable service topology, enable the `ServiceTopology`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for all Kubernetes components:
-->
要启用服务拓扑,需要为所有 Kubernetes 组件启用 `ServiceTopology`
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/):

```
--feature-gates="ServiceTopology=true"
```

## {{% heading "whatsnext" %}}

<!--
* Read about [Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/), the replacement for the `topologyKeys` field.
* Read about [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
* Read about the [Service Topology](/docs/concepts/services-networking/service-topology/) concept
* Read [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/)
-->
* 阅读[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/),该技术是用来替换 `topologyKeys` 字段的。
* 阅读[端点切片](/zh-cn/docs/concepts/services-networking/endpoint-slices)
* 阅读[服务拓扑](/zh-cn/docs/concepts/services-networking/service-topology)概念
* 阅读[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)
@@ -350,11 +350,15 @@ program to retrieve the contents of your Secret.

<!--
1. Verify the stored Secret is prefixed with `k8s:enc:aescbc:v1:` which indicates
   the `aescbc` provider has encrypted the resulting data.
   the `aescbc` provider has encrypted the resulting data. Confirm that the key name shown in `etcd`
   matches the key name specified in the `EncryptionConfiguration` mentioned above. In this example,
   you can see that the encryption key named `key1` is used in `etcd` and in `EncryptionConfiguration`.

1. Verify the Secret is correctly decrypted when retrieved via the API:
-->
3. 验证存储的密钥前缀是否为 `k8s:enc:aescbc:v1:`,这表明 `aescbc` provider 已加密结果数据。
   确认 `etcd` 中显示的密钥名称和上述 `EncryptionConfiguration` 中指定的密钥名称一致。
   在此例中,你可以看到在 `etcd` 和 `EncryptionConfiguration` 中使用了名为 `key1` 的加密密钥。

4. 通过 API 检索,验证 Secret 是否被正确解密:
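For context on the `key1` name this step refers to, an `EncryptionConfiguration` of the shape being checked might look like the following sketch (the key material shown is a placeholder, not a real key):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1   # this name is what appears in the etcd-stored prefix
              secret: <BASE64-ENCODED-32-BYTE-KEY>   # placeholder value
      - identity: {}       # fallback for reading data written before encryption
```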
@@ -67,6 +67,13 @@ The `spec` of a static Pod cannot refer to other API objects
{{< glossary_tooltip text="Secret" term_id="secret" >}} 等)。
{{< /note >}}

{{< note >}}
<!--
Static pods do not support [ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/).
-->
静态 Pod 不支持[临时容器](/zh-cn/docs/concepts/workloads/pods/ephemeral-containers/)。
{{< /note >}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}