Merge pull request #43330 from tengqm/zh-fix-links

[zh] Fix some links in the zh-cn localization site

commit 6b1a95ff54

@@ -61,7 +61,7 @@ ConfigMap 在设计上不是用来保存大量数据的。在 ConfigMap 中保
 <!--
 ## ConfigMap object

-A ConfigMap is an API [object](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
+A ConfigMap is an {{< glossary_tooltip text="API object" term_id="object" >}}
 that lets you store configuration for other objects to use. Unlike most
 Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData`
 fields. These fields accept key-value pairs as their values. Both the `data`

@@ -74,8 +74,7 @@ The name of a ConfigMap must be a valid
 -->
 ## ConfigMap 对象

-ConfigMap 是一个 API [对象](/zh-cn/docs/concepts/overview/working-with-objects/kubernetes-objects/),
-让你可以存储其他对象所需要使用的配置。
+ConfigMap 是一个让你可以存储其他对象所需要使用的配置的 {{< glossary_tooltip text="API 对象" term_id="object" >}}。
 和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 和
 `binaryData` 字段。这些字段能够接收键-值对作为其取值。`data` 和 `binaryData`
 字段都是可选的。`data` 字段设计用来保存 UTF-8 字符串,而 `binaryData`
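
For context on the `data` and `binaryData` fields that the hunks above describe, a minimal ConfigMap could look like the sketch below; the object name, keys, and values are illustrative and not taken from the patch.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config              # illustrative name
data:
  # data holds UTF-8 strings
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
binaryData:
  # binaryData holds base64-encoded bytes ("hello" encoded below)
  logo.bin: aGVsbG8=
```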

@@ -190,7 +189,7 @@ Here's an example Pod that uses values from `game-demo` to configure a Pod:

 下面是一个 Pod 的示例,它通过使用 `game-demo` 中的值来配置一个 Pod:

-{{% code file="configmap/configure-pod.yaml" %}}
+{{% code_sample file="configmap/configure-pod.yaml" %}}

 <!--
 A ConfigMap doesn't differentiate between single line property values and
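
The `configmap/configure-pod.yaml` sample referenced by the shortcode is not reproduced in this diff. As a rough sketch of the pattern that page describes, a container can pull one key of the `game-demo` ConfigMap into an environment variable; the Pod name, image, and key below are assumptions, not the actual sample file.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod        # illustrative; the real sample lives in configmap/configure-pod.yaml
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        - name: PLAYER_INITIAL_LIVES
          valueFrom:
            configMapKeyRef:
              name: game-demo              # the ConfigMap used throughout the page
              key: player_initial_lives    # the key to expose as an env var
```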

@@ -505,7 +505,7 @@ See [Configure a kubelet image credential provider](/docs/tasks/administer-clust
 你可以配置 kubelet,以调用插件可执行文件的方式来动态获取容器镜像的仓库凭据。
 这是为私有仓库获取凭据最稳健和最通用的方法,但也需要 kubelet 级别的配置才能启用。

-有关更多细节请参见[配置 kubelet 镜像凭据提供程序](/docs/tasks/administer-cluster/kubelet-credential-provider/)。
+有关更多细节请参见[配置 kubelet 镜像凭据提供程序](/zh-cn/docs/tasks/administer-cluster/kubelet-credential-provider/)。

 <!--
 ### Interpretation of config.json {#config-json}
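
The plugin mechanism described above is driven by a kubelet `CredentialProviderConfig` file passed with `--image-credential-provider-config` (plus `--image-credential-provider-bin-dir` for the plugin binaries). A rough sketch, in which the plugin name and image pattern are placeholders:

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: example-credential-plugin          # placeholder: plugin binary name on the node
    matchImages:
      - "*.registry.example.com"             # placeholder image pattern
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```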

@@ -29,7 +29,7 @@ Node-pressure eviction is not the same as
 当这些资源中的一个或者多个达到特定的消耗水平,
 kubelet 可以主动地使节点上一个或者多个 Pod 失效,以回收资源防止饥饿。

-在节点压力驱逐期间,kubelet 将所选 Pod 的[阶段](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)
+在节点压力驱逐期间,kubelet 将所选 Pod 的[阶段](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase)
 设置为 `Failed`。这将终止 Pod。

 节点压力驱逐不同于 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)。
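
The "specific consumption levels" mentioned above are the kubelet's eviction thresholds; as a sketch (the threshold values are illustrative, not mandated by this page), they are set in the kubelet configuration file:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"     # illustrative threshold
  nodefs.available: "10%"       # illustrative threshold
  nodefs.inodesFree: "5%"       # illustrative threshold
```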

@@ -390,7 +390,7 @@ kubelet 根据下表将驱逐信号映射为节点状况:
 | `DiskPressure` | `nodefs.available`、`nodefs.inodesFree`、`imagefs.available` 或 `imagefs.inodesFree` | 节点的根文件系统或镜像文件系统上的可用磁盘空间和 inode 已满足驱逐条件 |
 | `PIDPressure` | `pid.available` | (Linux) 节点上的可用进程标识符已低于驱逐条件 |

-控制平面还将这些节点状况[映射]((/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition))为其污点。
+控制平面还将这些节点状况[映射](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)为其污点。

 kubelet 根据配置的 `--node-status-update-frequency` 更新节点条件,默认为 `10s`。

@@ -110,6 +110,6 @@ kube-proxy 基于 `spec.internalTrafficPolicy` 的设置来过滤路由的目标
 * Read about [Service External Traffic Policy](/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)
 * Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial
 -->
-* 请阅读[拓扑感知路由](/zh-cn/docs/concepts/services-networking/topology-aware-hints)
+* 请阅读[拓扑感知路由](/zh-cn/docs/concepts/services-networking/topology-aware-routing/)
 * 请阅读 [Service 的外部流量策略](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip)
 * 遵循[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)教程
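
Since this hunk sits on the page about `spec.internalTrafficPolicy`, a minimal Service using that field might look like the sketch below (name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service      # illustrative name
spec:
  selector:
    app: my-app                  # illustrative selector
  ports:
    - port: 80
      targetPort: 8080
  internalTrafficPolicy: Local   # only route to endpoints on the same node as the client
```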

@@ -100,7 +100,7 @@ To submit a blog post, follow these steps:
 -->
 要提交博文,你可以遵从以下步骤:

-1. 如果你还未签署 CLA,请先[签署 CLA](https://kubernetes.io/docs/contribute/start/#sign-the-cla)。
+1. 如果你还未签署 CLA,请先[签署 CLA](/zh-cn/docs/contribute/start/#sign-the-cla)。
 2. 查阅[网站仓库](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts)中现有博文的 Markdown 格式。
 3. 在你所选的文本编辑器中撰写你的博文。
 4. 在第 2 步的同一链接上,点击 **Create new file** 按钮。

@@ -605,7 +605,7 @@ request:

 # dryRun 表示 API 请求正在以 `dryrun` 模式运行,并且将不会保留。
 # 带有副作用的 Webhook 应该避免在 dryRun 为 true 时激活这些副作用。
-# 有关更多详细信息,请参见 http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request
+# 有关更多详细信息,请参见 http://k8s.io/zh-cn/docs/reference/using-api/api-concepts/#make-a-dry-run-request
 dryRun: False
 ```
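
To connect this with the side-effect warning in the comment above: a webhook declares how it behaves under dry-run through its `sideEffects` field. A hedged sketch of a ValidatingWebhookConfiguration (names and URL are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy.example.com           # placeholder
webhooks:
  - name: example-policy.example.com
    # "None" promises no out-of-band side effects, so calling the webhook
    # for requests with dryRun: true is safe.
    sideEffects: None
    admissionReviewVersions: ["v1"]
    clientConfig:
      url: https://webhook.example.com/validate   # placeholder endpoint
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```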

@@ -1173,7 +1173,7 @@ webhook to be called.
 如果你需要细粒度地过滤请求,你可以为 Webhook 定义**匹配条件**。
 如果你发现匹配规则、`objectSelectors` 和 `namespaceSelectors` 仍然不能提供你想要的何时进行 HTTP
 调用的过滤条件,那么添加这些条件会很有用。
-匹配条件是 [CEL 表达式](/docs/reference/using-api/cel/)。
+匹配条件是 [CEL 表达式](/zh-cn/docs/reference/using-api/cel/)。
 所有匹配条件都必须为 true 才能调用 Webhook。

 <!--
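
As a sketch of the match conditions being discussed (the condition name and CEL expression are illustrative), `matchConditions` is a list of named expressions on a webhook entry; every expression must evaluate to true before the webhook is called:

```yaml
webhooks:
  - name: example.validating.example.com     # placeholder webhook name
    matchConditions:
      - name: exclude-lease-objects          # illustrative condition
        # CEL expression over the admission request
        expression: 'request.resource.group != "coordination.k8s.io"'
    # ...clientConfig, rules, sideEffects, admissionReviewVersions as usual
```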

@@ -598,7 +598,7 @@ workload to function correctly are applied.
 在 resources 和 verbs 条目中使用通配符会为敏感资源授予过多的访问权限。
 例如,如果添加了新的资源类型、新的子资源或新的自定义动词,
 通配符条目会自动授予访问权限,用户可能不希望出现这种情况。
-应该执行[最小特权原则](zh-cn/docs/concepts/security/rbac-good-practices/#least-privilege),
+应该执行[最小特权原则](/zh-cn/docs/concepts/security/rbac-good-practices/#least-privilege),
 使用具体的 resources 和 verbs 确保仅赋予工作负载正常运行所需的权限。
 {{< /caution >}}
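
A minimal illustration of the advice above, with explicit `resources` and `verbs` instead of wildcards (namespace and Role name are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team              # placeholder namespace
  name: configmap-reader           # placeholder name
rules:
  - apiGroups: [""]
    resources: ["configmaps"]           # a specific resource, not "*"
    verbs: ["get", "list", "watch"]     # only the verbs the workload needs
```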

@@ -97,7 +97,7 @@ For a reference to old feature gates that are removed, please refer to
 -->
 ### Alpha 和 Beta 状态的特性门控 {#feature-gates-for-alpha-or-beta-features}

-{{< table caption="处于 Alpha 或 Beta 状态的特性门控" >}}
+{{< table caption="处于 Alpha 或 Beta 状态的特性门控" sortable="true" >}}

 <!--
 | Feature | Default | Stage | Since | Until |

@@ -275,7 +275,7 @@ For a reference to old feature gates that are removed, please refer to
 -->
 ### 已毕业和已废弃的特性门控 {#feature-gates-for-graduated-or-deprecated-features}

-{{< table caption="已毕业或不推荐使用的特性门控" >}}
+{{< table caption="已毕业或不推荐使用的特性门控" sortable="true" >}}

 <!--
 | Feature | Default | Stage | Since | Until |

@@ -421,7 +421,7 @@ A *Beta* feature means:
 **Beta** 特性代表:

 <!--
-* Enabled by default.
+* Usually enabled by default. Beta API groups are [disabled by default](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default).
 * The feature is well tested. Enabling the feature is considered safe.
 * Support for the overall feature will not be dropped, though details may change.
 * The schema and/or semantics of objects may change in incompatible ways in a

@@ -433,7 +433,7 @@ A *Beta* feature means:
 incompatible changes in subsequent releases. If you have multiple clusters
 that can be upgraded independently, you may be able to relax this restriction.
 -->
-* 默认启用。
+* 通常默认启用。Beta API 组[默认是被禁用的](https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3136-beta-apis-off-by-default)。
 * 该特性已经经过良好测试。启用该特性是安全的。
 * 尽管详细信息可能会更改,但不会放弃对整体特性的支持。
 * 对象的架构或语义可能会在随后的 Beta 或稳定版本中以不兼容的方式更改。
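
For context, gates like the ones tabulated on this page are toggled per component, either with the `--feature-gates=SomeFeature=true` command-line flag or in a component configuration file. A sketch using the kubelet configuration (the gate names are ones mentioned elsewhere in this diff; the values are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodIndexLabel: true                   # example gate referenced later in this diff
  TopologyManagerPolicyOptions: true    # illustrative; any gate from the tables works
```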

@@ -720,8 +720,6 @@ Each feature gate is designed for enabling/disabling a specific feature:
 - `DevicePluginCDIDevices`: Enable support to CDI device IDs in the
 [Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) API.
 -->
-- `DevicePluginCDIDevices`: Enable support to CDI device IDs in the
-[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) API.
 - `DefaultHostNetworkHostPortsInPodTemplates`:更改何时分配 `PodSpec.containers[*].ports[*].hostPort` 的默认值。
 默认仅在 Pod 中设置默认值。启用此特性意味着即使在嵌套的 PodSpec(例如 Deployment 中)中也会分配默认值,
 这是以前的默认行为。

@@ -962,7 +960,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
 - `KMSv2`:启用 KMS v2 API 以实现静态加密。
 详情参见[使用 KMS 驱动进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
 - `KMSv2KDF`:启用 KMS v2 以生成一次性数据加密密钥。
-详情参见[使用 KMS 提供程序进行数据加密](/docs/tasks/administer-cluster/kms-provider)。
+详情参见[使用 KMS 提供程序进行数据加密](/zh-cn/docs/tasks/administer-cluster/kms-provider)。
 如果 `KMSv2` 特性门控在你的集群未被启用 ,则 `KMSv2KDF` 特性门控的值不会产生任何影响。
 - `KubeProxyDrainingTerminatingNodes`:为 `externalTrafficPolicy: Cluster` 服务实现正终止节点的连接排空。
 <!--
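
For background on the KMS v2 entries above, encryption at rest is wired up through an `EncryptionConfiguration` file handed to the kube-apiserver; a rough sketch in which the provider name and socket path are placeholders:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2                       # selects the KMS v2 API
          name: example-kms-plugin             # placeholder provider name
          endpoint: unix:///var/run/kms.sock   # placeholder plugin socket
      - identity: {}                           # fallback: read data stored unencrypted
```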

@@ -1332,7 +1330,7 @@
 This feature gate will never graduate to stable.
 -->
 - `TopologyAwareHints`: 在 EndpointSlices 中启用基于拓扑提示的拓扑感知路由。
-更多详细信息可参见[拓扑感知提示](/zh-cn/docs/concepts/services-networking/topology-aware-hints/)。
+更多详细信息可参见[拓扑感知路由](/zh-cn/docs/concepts/services-networking/topology-aware-routing/)。
 - `TopologyManager`:启用一种机制来协调 Kubernetes 不同组件的细粒度硬件资源分配。
 详见[控制节点上的拓扑管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/)。
 - `TopologyManagerPolicyAlphaOptions`:允许微调拓扑管理器策略的实验性的、Alpha 质量的选项。
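
The routing behavior that the `TopologyAwareHints` gate controls is requested per Service through an annotation; a sketch (Service name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend                 # illustrative name
  annotations:
    service.kubernetes.io/topology-mode: "Auto"   # ask for topology-aware routing hints
spec:
  selector:
    app: frontend                # illustrative selector
  ports:
    - port: 80
```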

@@ -1351,7 +1349,7 @@
 -->
 - `TopologyManagerPolicyOptions`:允许微调拓扑管理器策略。
 - `UnknownVersionInteroperabilityProxy`:存在多个不同版本的 kube-apiserver 时将资源请求代理到正确的对等 kube-apiserver。
-详细信息请参阅[混合版本代理](/docs/concepts/architecture/mixed-version-proxy/)。
+详细信息请参阅[混合版本代理](/zh-cn/docs/concepts/architecture/mixed-version-proxy/)。
 - `UserNamespacesSupport`:为 Pod 启用用户名字空间支持。
 在 Kubernetes v1.28 之前,此特性门控被命名为 `UserNamespacesStatelessPodsSupport`。
 <!--

@@ -501,8 +501,9 @@ When a StatefulSet controller creates a Pod for the StatefulSet, it sets this la
 The value of the label is the ordinal index of the pod being created.

 See [Pod Index Label](/docs/concepts/workloads/controllers/statefulset/#pod-index-label)
-in the StatefulSet topic for more details. Note the [PodIndexLabel](content/en/docs/reference/command-line-tools-reference/feature-gates.md) feature gate must be enabled
-for this label to be added to pods.
+in the StatefulSet topic for more details.
+Note the [PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/)
+feature gate must be enabled for this label to be added to pods.
 -->
 类别:标签

@@ -515,8 +516,8 @@ for this label to be added to pods.

 更多细节参阅 StatefulSet 主题中的
 [Pod 索引标签](/zh-cn/docs/concepts/workloads/controllers/statefulset/#pod-index-label)。
-请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md) 特性门控必须被启用,
-才能将此标签添加到 Pod 上。
+请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
+特性门控必须被启用,才能将此标签添加到 Pod 上。

 <!--
 ### cluster-autoscaler.kubernetes.io/safe-to-evict
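
As a sketch of what the hunk above refers to: with the gate enabled, a StatefulSet Pod such as `web-0` carries its ordinal in the `apps.kubernetes.io/pod-index` label, which can then be used in selectors (the Service name and app label below are illustrative):

```yaml
# The StatefulSet controller would set on Pod "web-0" (illustrative):
#   apps.kubernetes.io/pod-index: "0"
apiVersion: v1
kind: Service
metadata:
  name: web-primary              # illustrative name
spec:
  selector:
    app: web                            # illustrative app label
    apps.kubernetes.io/pod-index: "0"   # target only the first replica
  ports:
    - port: 80
```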

@@ -2113,7 +2114,7 @@ Example: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"`

 Used on: Node

-The kubelet can set this annotation on a Node to denote its configured IPv4 address.
+The kubelet can set this annotation on a Node to denote its configured IPv4 and/or IPv6 address.

 When kubelet is started with the `--cloud-provider` flag set to any value (includes both external
 and legacy in-tree cloud providers), it sets this annotation on the Node to denote an IP address

@@ -2128,7 +2129,7 @@ by the cloud-controller-manager.

 用于:Node

-kubelet 可以在 Node 上设置此注解来表示其配置的 IPv4 地址。
+kubelet 可以在 Node 上设置此注解来表示其配置的 IPv4 与/或 IPv6 地址。

 如果 kubelet 被启动时 `--cloud-provider` 标志设置为任一云驱动(包括外部云驱动和传统树内云驱动)
 kubelet 会在 Node 上设置此注解以表示从命令行标志(`--node-ip`)设置的 IP 地址。

@@ -2159,11 +2160,12 @@ kube-controller-manager 中的 Job 控制器为使用 Indexed
 设置此标签和注解。

 <!--
-Note the [PodIndexLabel](content/en/docs/reference/command-line-tools-reference/feature-gates.md) feature gate must be enabled
-for this to be added as a pod **label**, otherwise it will just be an annotation.
+Note the [PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/)
+feature gate must be enabled for this to be added as a pod **label**,
+otherwise it will just be an annotation.
 -->
-请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates.md) 特性门控必须被启用,
-才能将其添加为 Pod 的**标签**,否则它只会用作注解。
+请注意,[PodIndexLabel](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
+特性门控必须被启用,才能将其添加为 Pod 的**标签**,否则它只会用作注解。

 ### batch.kubernetes.io/cronjob-scheduled-timestamp

@@ -4,9 +4,20 @@ weight: 10
 no_list: true
 content_type: concept
 card:
-  name: reference
-  weight: 40
+  title: kubeadm 命令参考
+  name: setup
+  weight: 80
 ---
+<!--
+title: "Kubeadm"
+weight: 10
+no_list: true
+content_type: concept
+card:
+  title: kubeadm command reference
+  name: setup
+  weight: 80
+-->

 <img src="/images/kubeadm-stacked-color.png" align="right" width="150px">

@@ -71,7 +82,7 @@ To install kubeadm, see the [installation guide](/docs/setup/production-environm
 用于恢复通过 `kubeadm init` 或者 `kubeadm join` 命令对节点进行的任何变更
 * [kubeadm certs](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-certs)
 用于管理 Kubernetes 证书
-* [kubeadm kubeconfig](/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig)
+* [kubeadm kubeconfig](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig)
 用于管理 kubeconfig 文件
 * [kubeadm version](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-version)
 用于打印 kubeadm 的版本信息

@@ -555,4 +555,4 @@ kubectl delete namespace mem-example
 * [为命名空间配置内存和 CPU 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
 * [配置命名空间下 Pod 总数](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
 * [配置 API 对象配额](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)
-* [调整分配给容器的 CPU 和内存资源的大小](/docs/tasks/configure-pod-container/resize-container-resources/)
+* [调整分配给容器的 CPU 和内存资源的大小](/zh-cn/docs/tasks/configure-pod-container/resize-container-resources/)

@@ -352,7 +352,7 @@ kubectl delete namespace qos-example

 * [为容器和 Pod 分配内存资源](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/)

-* [为容器和 Pod 分配 CPU 资源](/docs/tasks/configure-pod-container/assign-cpu-resource/)
+* [为容器和 Pod 分配 CPU 资源](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/)

 <!--
 ### For cluster administrators

@@ -477,7 +477,7 @@ f427638871c35 docker.io/library/nginx@sha256:... 19 seconds ago Running
 -->
 * [为控制面组件生成静态 Pod 清单](/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifests-for-control-plane-components)
 * [为本地 etcd 生成静态 Pod 清单](/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/#generate-static-pod-manifest-for-local-etcd)
-* [使用 `crictl` 对 Kubernetes 节点进行调试](/docs/tasks/debug/debug-cluster/crictl/)
+* [使用 `crictl` 对 Kubernetes 节点进行调试](/zh-cn/docs/tasks/debug/debug-cluster/crictl/)
 * 更多细节请参阅 [`crictl`](https://github.com/kubernetes-sigs/cri-tools)
 * [从 `docker` CLI 命令映射到 `crictl`](/zh-cn/docs/reference/tools/map-crictl-dockercli/)
 * [将 etcd 实例设置为由 kubelet 管理的静态 Pod](/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)

@@ -451,5 +451,6 @@ By default truncate is disabled in both `webhook` and `log`, a cluster administr
 -->
 * 进一步了解 [Mutating webhook 审计注解](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations)。
 * 通过阅读审计配置参考,进一步了解
-[`Event`](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
-和 [`Policy`](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy) 资源的信息。
+[`Event`](/zh-cn/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
+和 [`Policy`](/zh-cn/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy) 资源的信息。
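
For context on the `Policy` resource linked above, a minimal audit policy file looks roughly like the sketch below; the rules shown are illustrative:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log Secret access at Metadata level only, so payloads never reach the audit log (illustrative)
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Everything else at Request level (illustrative default)
  - level: Request
```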

@@ -113,7 +113,7 @@ Deployment 管理一个 [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/
 ReplicaSet 再管理 Pod,从而使调度器能够免受一些故障的影响。
 以下是 Deployment 配置,将其保存为 `my-scheduler.yaml`:

-{{% code file="admin/sched/my-scheduler.yaml" %}}
+{{% code_sample file="admin/sched/my-scheduler.yaml" %}}

 <!--
 In the above manifest, you use a [KubeSchedulerConfiguration](/docs/reference/scheduling/config/)

@@ -128,24 +128,26 @@ the `kube-scheduler` during initialization with the `--config` option. The `my-s
 <!--
 In the aforementioned Scheduler Configuration, your scheduler implementation is represented via
 a [KubeSchedulerProfile](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerProfile).
+-->
+在前面提到的调度器配置中,你的调度器呈现为一个
+[KubeSchedulerProfile](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerProfile)。

 {{< note >}}
+<!--
 To determine if a scheduler is responsible for scheduling a specific Pod, the `spec.schedulerName` field in a
 PodTemplate or Pod manifest must match the `schedulerName` field of the `KubeSchedulerProfile`.
 All schedulers running in the cluster must have unique names.
-{{< /note >}}
 -->
-在前面提到的调度器配置中,你的调度器通过 [KubeSchedulerProfile](/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerProfile) 进行实现。
-{{< note >}}
 要确定一个调度器是否可以调度特定的 Pod,PodTemplate 或 Pod 清单中的 `spec.schedulerName`
 字段必须匹配 `KubeSchedulerProfile` 中的 `schedulerName` 字段。
-所有运行在集群中的调度器必须拥有唯一的名称。
+运行在集群中的所有调度器必须拥有唯一的名称。
 {{< /note >}}

 <!--
 Also, note that you create a dedicated service account `my-scheduler` and bind the ClusterRole
 `system:kube-scheduler` to it so that it can acquire the same privileges as `kube-scheduler`.
 -->
-还要注意,我们创建了一个专用服务账号 `my-scheduler` 并将集群角色 `system:kube-scheduler`
+还要注意,我们创建了一个专用的服务账号 `my-scheduler` 并将集群角色 `system:kube-scheduler`
 绑定到它,以便它可以获得与 `kube-scheduler` 相同的权限。

 <!--
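
To make the `schedulerName` matching rule in the note concrete: a Pod that should be handled by the profile named `my-scheduler` declares it in its spec (the Pod name and image are illustrative, not the task's sample file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-scheduled-by-my-scheduler    # illustrative name
spec:
  schedulerName: my-scheduler            # must equal the KubeSchedulerProfile's schedulerName
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # illustrative image
```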

@@ -155,8 +157,8 @@ detailed description of other command line arguments and
 [Scheduler Configuration reference](/docs/reference/config-api/kube-scheduler-config.v1beta3/) for
 detailed description of other customizable `kube-scheduler` configurations.
 -->
-请参阅 [kube-scheduler 文档](/docs/reference/command-line-tools-reference/kube-scheduler/)
-获取其他命令行参数以及 [Scheduler 配置参考](/docs/reference/config-api/kube-scheduler-config.v1beta3/)
+请参阅 [kube-scheduler 文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/)
+获取其他命令行参数以及 [Scheduler 配置参考](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3/)
 获取自定义 `kube-scheduler` 配置的详细说明。

 <!--

@@ -236,7 +238,7 @@ Add your scheduler name to the resourceNames of the rule applied for `endpoints`
 kubectl edit clusterrole system:kube-scheduler
 ```

-{{% code file="admin/sched/clusterrole.yaml" %}}
+{{% code_sample file="admin/sched/clusterrole.yaml" %}}

 <!--
 ## Specify schedulers for pods

@@ -257,7 +259,7 @@ scheduler in that pod spec. Let's look at three examples.
 -->
 - Pod spec 没有任何调度器名称

-{{% code file="admin/sched/pod1.yaml" %}}
+{{% code_sample file="admin/sched/pod1.yaml" %}}

 <!--
 When no scheduler name is supplied, the pod is automatically scheduled using the

@@ -279,7 +281,7 @@ scheduler in that pod spec. Let's look at three examples.
 -->
 - Pod spec 设置为 `default-scheduler`

-{{% code file="admin/sched/pod2.yaml" %}}
+{{% code_sample file="admin/sched/pod2.yaml" %}}

 <!--
 A scheduler is specified by supplying the scheduler name as a value to `spec.schedulerName`. In this case, we supply the name of the

@@ -302,7 +304,7 @@ scheduler in that pod spec. Let's look at three examples.
 -->
 - Pod spec 设置为 `my-scheduler`

-{{% code file="admin/sched/pod3.yaml" %}}
+{{% code_sample file="admin/sched/pod3.yaml" %}}

 <!--
 In this case, we specify that this pod should be scheduled using the scheduler that we

@@ -437,7 +437,7 @@ By using environment variables you can change values that are inserted into `cas
 * Learn more about the [*KubernetesSeedProvider*](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
 * See more custom [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md)
 -->
-* 了解如何[扩缩 StatefulSet](/docs/tasks/run-application/scale-stateful-set/)。
+* 了解如何[扩缩 StatefulSet](/zh-cn/docs/tasks/run-application/scale-stateful-set/)。
 * 了解有关 [**KubernetesSeedProvider**](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java) 的更多信息
 * 查看更多自定义 [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md)

@@ -770,7 +770,7 @@ Use the command below to get the logging configuration from one of Pods in the `
 ### 配置日志 {#configuring-logging}

 `zkGenConfig.sh` 脚本产生的一个文件控制了 ZooKeeper 的日志行为。
-ZooKeeper 使用了 [Log4j](http://logging.apache.org/log4j/2.x/)
+ZooKeeper 使用了 [Log4j](https://logging.apache.org/log4j/2.x/)
 并默认使用基于文件大小和时间的滚动文件追加器作为日志配置。

 从 `zk` StatefulSet 的一个 Pod 中获取日志配置。