Merge pull request #33966 from Sea-n/zh-reviewer-tasks

[zh] Remove reviewer for tasks
pull/33988/head
Kubernetes Prow Robot 2022-05-27 00:55:07 -07:00 committed by GitHub
commit 14c481137e
5 changed files with 74 additions and 89 deletions

View File

@@ -1,8 +1,4 @@
---
reviewers:
- vishh
- derekwaynecarr
- dashpole
title: 为系统守护进程预留计算资源
content_type: task
min-kubernetes-server-version: 1.8
@@ -28,7 +24,7 @@ node.
The `kubelet` exposes a feature named 'Node Allocatable' that helps to reserve
compute resources for system daemons. Kubernetes recommends cluster
administrators to configure `Node Allocatable` based on their workload density
administrators to configure 'Node Allocatable' based on their workload density
on each node.
-->
Kubernetes 的节点可以按照 `Capacity` 调度。默认情况下 pod 能够使用节点全部可用容量。
@@ -36,7 +32,7 @@ Kubernetes 的节点可以按照 `Capacity` 调度。默认情况下 pod 能够
除非为这些系统守护进程留出资源,否则它们将与 pod 争夺资源并导致节点资源短缺问题。
`kubelet` 公开了一个名为 'Node Allocatable' 的特性,有助于为系统守护进程预留计算资源。
Kubernetes 推荐集群管理员按照每个节点上的工作负载密度配置 `Node Allocatable`
Kubernetes 推荐集群管理员按照每个节点上的工作负载密度配置 “Node Allocatable”
## {{% heading "prerequisites" %}}
@@ -72,7 +68,7 @@ Resources can be reserved for two categories of system daemons in the `kubelet`.
Kubernetes 节点上的 'Allocatable' 被定义为 pod 可用计算资源量。
调度器不会超额申请 'Allocatable'。
目前支持 'CPU', 'memory' 和 'ephemeral-storage' 这几个参数。
目前支持 'CPU''memory' 和 'ephemeral-storage' 这几个参数。
可分配的节点暴露为 API 中 `v1.Node` 对象的一部分,也是 CLI 中
`kubectl describe node` 的一部分。
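For orientation (not part of this change): the `Capacity` and 'Allocatable' figures can be inspected directly. A minimal sketch, where `my-node` is a placeholder name:

```shell
# Show the Capacity and Allocatable blocks for a node;
# "my-node" is a placeholder, pick one from `kubectl get nodes`.
kubectl describe node my-node | grep -A 6 -E '^(Capacity|Allocatable):'

# Or read the API field directly:
kubectl get node my-node -o jsonpath='{.status.allocatable}'
```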
@@ -87,7 +83,7 @@ enable the new cgroup hierarchy via the `--cgroups-per-qos` flag. This flag is
enabled by default. When enabled, the `kubelet` will parent all end-user pods
under a cgroup hierarchy managed by the `kubelet`.
-->
### 启用 QoS 和 Pod 级别的 cgroups
### 启用 QoS 和 Pod 级别的 cgroups {#enabling-qos-and-pod-level-cgroups}
为了恰当的在节点范围实施节点可分配约束,你必须通过 `--cgroups-per-qos`
标志启用新的 cgroup 层次结构。这个标志是默认启用的。
@@ -110,10 +106,10 @@ transient slices for resources that are supported by that init system.
Depending on the configuration of the associated container runtime,
operators may have to choose a particular cgroup driver to ensure
proper system behavior. For example, if operators use the `systemd`
cgroup driver provided by the `docker` runtime, the `kubelet` must
cgroup driver provided by the `containerd` runtime, the `kubelet` must
be configured to use the `systemd` cgroup driver.
-->
### 配置 cgroup 驱动
### 配置 cgroup 驱动 {#configuring-a-cgroup-driver}
`kubelet` 支持在主机上使用 cgroup 驱动操作 cgroup 层次结构。
驱动通过 `--cgroup-driver` 标志配置。
@@ -127,7 +123,7 @@ be configured to use the `systemd` cgroup driver.
取决于相关容器运行时的配置,操作员可能需要选择一个特定的 cgroup 驱动
来保证系统正常运行。
例如,如果操作员使用 `docker` 运行时提供的 `systemd` cgroup 驱动时,
例如,如果操作员使用 `containerd` 运行时提供的 `systemd` cgroup 驱动时,
必须配置 `kubelet` 使用 `systemd` cgroup 驱动。
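As a sketch of the matching configuration described above, assuming containerd's bundled runc runtime (the `SystemdCgroup` key is a containerd default, not something introduced by this change):

```shell
# Point containerd's runc runtime at the systemd cgroup driver.
# Review the generated file first if you already maintain a custom config.
containerd config default \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/' \
  | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

# The kubelet must then be configured to match, e.g. in its
# KubeletConfiguration file:
#   cgroupDriver: systemd
```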
<!--
@@ -191,7 +187,6 @@ exist. Kubelet will fail if an invalid cgroup is specified.
- **Kubelet Flag**: `--system-reserved=[cpu=100m][,][memory=100Mi][,][ephemeral-storage=1Gi][,][pid=1000]`
- **Kubelet Flag**: `--system-reserved-cgroup=`
`system-reserved` is meant to capture resource reservation for OS system daemons
like `sshd`, `udev`, etc. `system-reserved` should reserve `memory` for the
`kernel` too since `kernel` memory is not accounted to pods in Kubernetes at this time.
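A minimal sketch of such a reservation; the flag values are purely illustrative, not sizing recommendations:

```shell
# Reserve resources for OS daemons (system-reserved) and for Kubernetes
# daemons such as the kubelet and container runtime (kube-reserved).
kubelet --cgroups-per-qos=true \
  --system-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi \
  --kube-reserved=cpu=100m,memory=100Mi,ephemeral-storage=1Gi
```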
@@ -237,14 +232,15 @@ exist. `kubelet` will fail if an invalid cgroup is specified.
<!--
### Explicitly Reserved CPU List
-**Kubelet Flag**: `--reserved-cpus=0-3`
-->
### 显式保留的 CPU 列表 {#explicitly-reserved-cpu-list}
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
-**Kubelet 标志**: `--reserved-cpus=0-3`
<!--
**Kubelet Flag**: `--reserved-cpus=0-3`
-->
**Kubelet 标志**`--reserved-cpus=0-3`
<!--
`reserved-cpus` is meant to define an explicit CPU set for OS system daemons and
@@ -283,7 +279,7 @@ cpuset 上,应使用 Kubernetes 之外的其他机制。
<!--
### Eviction Thresholds
- **Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`
**Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`
Memory pressure at the node level leads to System OOMs which affects the entire
node and all pods running on it. Nodes can go offline temporarily until memory
@@ -299,7 +295,7 @@ available for pods.
-->
### 驱逐阈值 {#eviction-Thresholds}
- **Kubelet 标志**: `--eviction-hard=[memory.available<500Mi]`
**Kubelet 标志**`--eviction-hard=[memory.available<500Mi]`
节点级别的内存压力将导致系统内存不足,这将影响到整个节点及其上运行的所有 Pod。
节点可以暂时离线直到内存已经回收为止。
@@ -314,7 +310,7 @@ available for pods.
<!--
### Enforcing Node Allocatable
-**Kubelet Flag**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
**Kubelet Flag**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
The scheduler treats 'Allocatable' as the available `capacity` for pods.
@@ -333,7 +329,7 @@ respectively.
-->
### 实施节点可分配约束 {#enforcing-node-allocatable}
-**Kubelet 标志**: `--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
**Kubelet 标志**`--enforce-node-allocatable=pods[,][system-reserved][,][kube-reserved]`
调度器将 'Allocatable' 视为 Pod 可用的 `capacity`(资源容量)。
@@ -344,7 +340,7 @@ respectively.
可通过设置 kubelet `--enforce-node-allocatable` 标志值为 `pods` 控制这个措施。
可选地,通过在同一标志中同时指定 `kube-reserved``system-reserved` 值,
可以使 `kubelet` 强制实施 `kube-reserved``system-reserved`约束。
可以使 `kubelet` 强制实施 `kube-reserved``system-reserved` 约束。
请注意,要想执行 `kube-reserved` 或者 `system-reserved` 约束,
需要对应设置 `--kube-reserved-cgroup` 或者 `--system-reserved-cgroup`
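A sketch combining these flags; the cgroup paths are illustrative values for a systemd host and must already exist, or the kubelet will fail to start:

```shell
# Enforce Allocatable for pods plus both reservations.
kubelet --enforce-node-allocatable=pods,kube-reserved,system-reserved \
  --kube-reserved-cgroup=/kube.slice \
  --system-reserved-cgroup=/system.slice
```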
@@ -355,7 +351,7 @@ System daemons are expected to be treated similar to
[Guaranteed pods](/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed).
System daemons can burst within their bounding control groups and this behavior needs
to be managed as part of kubernetes deployments. For example, `kubelet` should
have its own control group and share `Kube-reserved` resources with the
have its own control group and share `kube-reserved` resources with the
container runtime. However, Kubelet cannot burst and use up all available Node
resources if `kube-reserved` is enforced.
-->
@@ -366,7 +362,7 @@ resources if `kube-reserved` is enforced.
一样对待。
系统守护进程可以在与其对应的控制组中出现突发资源用量,这一行为要作为
kubernetes 部署的一部分进行管理。
例如,`kubelet` 应该有它自己的控制组并和容器运行时共享 `Kube-reserved` 资源。
例如,`kubelet` 应该有它自己的控制组并和容器运行时共享 `kube-reserved` 资源。
不过,如果执行了 `kube-reserved` 约束,则 kubelet 不可出现突发负载并用光
节点的所有可用资源。
@@ -391,7 +387,7 @@ ability to recover if any process in that group is oom-killed.
* 作为起步,可以先针对 `pods` 上执行 'Allocatable' 约束。
* 一旦用于追踪系统守护进程的监控和告警的机制到位,可尝试基于用量估计的
方式执行 `kube-reserved`策略。
方式执行 `kube-reserved` 策略。
* 随着时间推进,在绝对必要的时候可以执行 `system-reserved` 策略。
<!--
@@ -437,7 +433,7 @@ much CPU as they can, pods together cannot consume more than 14.5 CPUs.
If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
exceed their reservation, `kubelet` evicts pods whenever the overall node memory
usage is higher than 31.5Gi or `storage` is greater than 90Gi
usage is higher than 31.5Gi or `storage` is greater than 90Gi.
-->
在这个场景下,'Allocatable' 将会是 14.5 CPUs、28.5Gi 内存以及 `88Gi` 本地存储。
调度器保证这个节点上的所有 Pod 的内存 `requests` 总量不超过 28.5Gi
@@ -448,6 +444,6 @@ kubelet 将会驱逐它们。
14.5 CPUs 的资源。
当没有执行 `kube-reserved` 和/或 `system-reserved` 策略且系统守护进程
使用量超过其预留时,如果节点内存用量高于 31.5Gi 或`存储`大于 90Gi
使用量超过其预留时,如果节点内存用量高于 31.5Gi 或 `storage` 大于 90Gi
kubelet 将会驱逐 Pod。
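The arithmetic behind those figures, assuming the reservation values from the upstream example (a 16 CPU / 32Gi memory / 100Gi storage node; kube-reserved of 1 CPU, 2Gi, 1Gi; system-reserved of 500m, 1Gi, 1Gi; hard eviction at `memory.available<500Mi` and `nodefs.available<10%`):

```shell
# Allocatable CPU     = 16    - 1   (kube) - 0.5 (system)                 = 14.5
# Allocatable memory  = 32Gi  - 2Gi (kube) - 1Gi (system) - 0.5Gi (evict) = 28.5Gi
# Allocatable storage = 100Gi - 1Gi (kube) - 1Gi (system) - 10Gi  (evict) = 88Gi
# Eviction triggers when node memory use exceeds 32Gi - 0.5Gi = 31.5Gi,
# or when storage use exceeds 100Gi - 10Gi = 90Gi.
```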

View File

@@ -1,15 +1,14 @@
---
reviewers:
- cdrage
title: 将 Docker Compose 文件转换为 Kubernetes 资源
content_type: task
weight: 200
---
<!--
reviewers:
- cdrage
title: Translate a Docker Compose File to Kubernetes Resources
content_type: task
weight: 170
weight: 200
-->
<!-- overview -->
@@ -36,7 +35,7 @@ More information can be found on the Kompose website at [http://kompose.io](http
We have multiple ways to install Kompose. Our preferred method is downloading the binary from the latest GitHub release.
-->
## 安装 Kompose
## 安装 Kompose {#install-kompose}
我们有很多种方式安装 Kompose。首选方式是从最新的 GitHub 发布页面下载二进制文件。
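One such path, sketched below; the version tag and binary name are examples, so substitute the latest release for your OS and architecture:

```shell
# Download a released kompose binary and place it on the PATH.
curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 \
  -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/kompose
```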
@@ -132,7 +131,7 @@ brew install kompose
<!--
## Use Kompose
-->
## 使用 Kompose
## 使用 Kompose {#use-kompose}
<!--
In a few steps, we'll take you from Docker Compose to Kubernetes. All
@@ -141,9 +140,10 @@ you need is an existing `docker-compose.yml` file.
再需几步,我们就把你从 Docker Compose 带到 Kubernetes。
你只需要一个现有的 `docker-compose.yml` 文件。
1. <!--Go to the directory containing your `docker-compose.yml` file. If you don't
have one, test using this one.-->
进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个进行测试。
<!--
1. Go to the directory containing your `docker-compose.yml` file. If you don't have one, test using this one.
-->
1. 进入 `docker-compose.yml` 文件所在的目录。如果没有,请使用下面这个进行测试。
```yaml
version: "2"
@@ -174,10 +174,10 @@ you need is an existing `docker-compose.yml` file.
<!--
2. To convert the `docker-compose.yml` file to files that you can use with
`kubectl`, run `kompose convert` and then `kubectl create -f <output file>`.
`kubectl`, run `kompose convert` and then `kubectl apply -f <output file>`.
-->
2. 要将 `docker-compose.yml` 转换为 `kubectl` 可用的文件,请运行 `kompose convert`
命令进行转换,然后运行 `kubectl create -f <output file>` 进行创建。
命令进行转换,然后运行 `kubectl apply -f <output file>` 进行创建。
```shell
kompose convert
@@ -318,7 +318,7 @@ Kompose 支持将 V1、V2 和 V3 版本的 Docker Compose 文件转换为 Kubern
<!--
### Kubernetes `kompose convert` example
-->
### Kubernetes `kompose convert` 示例
### Kubernetes `kompose convert` 示例 {#kubernetes-kompose-convert-example}
```shell
kompose --file docker-voting.yml convert
@@ -390,7 +390,7 @@ When multiple docker-compose files are provided the configuration is merged. Any
<!--
### OpenShift `kompose convert` example
-->
### OpenShift `kompose convert` 示例
### OpenShift `kompose convert` 示例 {#openshift-kompose-convert-example}
```shell
kompose --provider openshift --file docker-voting.yml convert
@@ -584,7 +584,7 @@ For example:
- 如果使用 Kubernetes 驱动,会有一个 Ingress 资源被创建,并且假定
已经配置了相应的 Ingress 控制器。
- 如果使用 OpenShift 驱动, 则会有一个 route 被创建。
- 如果使用 OpenShift 驱动则会有一个 route 被创建。
例如:
@@ -680,10 +680,10 @@ services:
If the Docker Compose file has a volume specified for a service, the Deployment (Kubernetes) or DeploymentConfig (OpenShift) strategy is changed to "Recreate" instead of "RollingUpdate" (default). This is done to avoid multiple instances of a service from accessing a volume at the same time.
-->
### 关于 Deployment Config 的提醒
### 关于 Deployment Config 的提醒 {#warning-about-deployment-configurations}
如果 Docker Compose 文件中为服务声明了卷Deployment (Kubernetes)
DeploymentConfig (OpenShift) 策略会从 "RollingUpdate" (默认) 变为 "Recreate"
如果 Docker Compose 文件中为服务声明了卷DeploymentKubernetes
DeploymentConfigOpenShift策略会从 “RollingUpdate”默认变为 “Recreate”
这样做的目的是为了避免服务的多个实例同时访问卷。
<!--
@@ -692,7 +692,7 @@ Please note that changing service name might break some `docker-compose` files.
-->
如果 Docker Compose 文件中的服务名包含 `_`(例如 `web_service`
那么将会被替换为 `-`,服务也相应的会重命名(例如 `web-service`)。
Kompose 这样做的原因是 "Kubernetes" 不允许对象名称中包含 `_`
Kompose 这样做的原因是 “Kubernetes” 不允许对象名称中包含 `_`
请注意,更改服务名称可能会破坏一些 `docker-compose` 文件。
@@ -711,4 +711,3 @@ Kompose 支持的 Docker Compose 版本包括:1、2 和 3。
所有三个版本的兼容性列表请查看我们的
[转换文档](https://github.com/kubernetes/kompose/blob/master/docs/conversion.md)
文档中列出了所有不兼容的 Docker Compose 关键字。

View File

@@ -1,12 +1,8 @@
---
reviewers:
- Random-Liu
- feiskyer
- mrunalp
title: 使用 crictl 对 Kubernetes 节点进行调试
content_type: task
weight: 30
---
<!--
reviewers:
- Random-Liu
@@ -14,6 +10,7 @@ reviewers:
- mrunalp
title: Debugging Kubernetes nodes with crictl
content_type: task
weight: 30
-->
<!-- overview -->
@@ -50,7 +47,7 @@ different architectures. Download the version that corresponds to your version
of Kubernetes. Extract it and move it to a location on your system path, such as
`/usr/local/bin/`.
-->
## 安装 crictl
## 安装 crictl {#installing-crictl}
你可以从 cri-tools [发布页面](https://github.com/kubernetes-sigs/cri-tools/releases)
下载一个压缩的 `crictl` 归档文件,用于几种不同的架构。
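Sketched below; `VERSION` and the `linux-amd64` archive name are examples to match against your cluster version and architecture:

```shell
VERSION="v1.24.1"   # example; pick the release matching your Kubernetes version
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz" \
  | sudo tar -C /usr/local/bin -xz
```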
@@ -63,7 +60,7 @@ of Kubernetes. Extract it and move it to a location on your system path, such as
The `crictl` command has several subcommands and runtime flags. Use
`crictl help` or `crictl <subcommand> help` for more details.
-->
## 一般用法
## 一般用法 {#general-usage}
`crictl` 命令有几个子命令和运行时参数。
有关详细信息,请使用 `crictl help``crictl <subcommand> help` 获取帮助信息。
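A few representative invocations as a sketch; IDs are placeholders, and depending on your setup you may need `--runtime-endpoint` to point at the CRI socket:

```shell
crictl pods                   # list pod sandboxes
crictl images                 # list images
crictl ps -a                  # list all containers
crictl logs <container-id>    # fetch a container's logs
```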
@@ -120,7 +117,7 @@ documentation](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/cri
The following examples show some `crictl` commands and example output.
-->
## crictl 命令示例
## crictl 命令示例 {#example-crictl-commands}
{{< warning >}}
<!--
@@ -138,7 +135,7 @@ kubelet 最终将删除它们。
List all pods:
-->
### 打印 Pod 清单
### 打印 Pod 清单 {#list-pods}
打印所有 Pod 的清单:
@@ -202,7 +199,7 @@ POD ID CREATED STATE NAME
List all images:
-->
### 打印镜像清单
### 打印镜像清单 {#list-containers}
打印所有镜像清单:
@@ -268,7 +265,7 @@ sha256:cd5239a0906a6ccf0562354852fae04bc5b52d72a2aff9a871ddb6bd57553569
List all containers:
-->
### 打印容器清单
### 打印容器清单 {#list-containers}
打印所有容器清单:
@@ -313,7 +310,7 @@ CONTAINER ID IMAGE
<!--
### Execute a command in a running container
-->
### 在正在运行的容器上执行命令
### 在正在运行的容器上执行命令 {#execute-a-command-in-a-running-container}
```shell
crictl exec -i -t 1f73f2d81bf98 ls
@@ -333,7 +330,7 @@ bin dev etc home proc root sys tmp usr var
Get all container logs:
-->
### 获取容器日志
### 获取容器日志 {#get-a-container-s-logs}
获取容器的所有日志:
@@ -377,7 +374,7 @@ Using `crictl` to run a pod sandbox is useful for debugging container runtimes.
On a running Kubernetes cluster, the sandbox will eventually be stopped and
deleted by the Kubelet.
-->
### 运行 Pod 沙盒
### 运行 Pod 沙盒 {#run-a-pod-sandbox}
`crictl` 运行 Pod 沙盒对容器运行时排错很有帮助。
在运行的 Kubernetes 集群中,沙盒会随机地被 kubelet 停止和删除。
@@ -422,7 +419,7 @@ Using `crictl` to create a container is useful for debugging container runtimes.
On a running Kubernetes cluster, the sandbox will eventually be stopped and
deleted by the Kubelet.
-->
### 创建容器
### 创建容器 {#create-a-container}
`crictl` 创建容器对容器运行时排错很有帮助。
在运行的 Kubernetes 集群中,沙盒会随机的被 kubelet 停止和删除。
@@ -471,13 +468,13 @@ deleted by the Kubelet.
```json
{
"metadata": {
"name": "busybox"
"name": "busybox"
},
"image":{
"image": "busybox"
"image": "busybox"
},
"command": [
"top"
"top"
],
"log_path":"busybox.log",
"linux": {
@@ -520,7 +517,7 @@ deleted by the Kubelet.
To start a container, pass its ID to `crictl start`:
-->
### 启动容器
### 启动容器 {#start-a-container}
要启动容器,要将容器 ID 传给 `crictl start`
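For example (the container ID is a placeholder for one returned by `crictl create`):

```shell
crictl start <container-id>
```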
@@ -562,5 +559,5 @@ CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
* [Learn more about `crictl`](https://github.com/kubernetes-sigs/cri-tools).
* [Map `docker` CLI commands to `crictl`](/docs/reference/tools/map-crictl-dockercli/).
-->
* [进一步了解 `crictl`](https://github.com/kubernetes-sigs/cri-tools).
* [将 `docker` CLI 命令映射到 `crictl`](/zh/docs/reference/tools/map-crictl-dockercli/).
* [进一步了解 `crictl`](https://github.com/kubernetes-sigs/cri-tools)
* [将 `docker` CLI 命令映射到 `crictl`](/zh/docs/reference/tools/map-crictl-dockercli/)

View File

@@ -1,17 +1,14 @@
---
reviewers:
- derekwaynecarr
title: 管理巨页HugePages
content_type: task
description: 将大页配置和管理为集群中的可调度资源。
---
<!--
---
reviewers:
- derekwaynecarr
title: Manage HugePages
content_type: task
---
description: Configure and manage huge pages as a schedulable resource in a cluster.
--->
<!-- overview -->
@@ -30,13 +27,13 @@ Kubernetes 支持在 Pod 应用中使用预先分配的巨页。本文描述了
<!--
1. Kubernetes nodes must pre-allocate huge pages in order for the node to report
its huge page capacity. A node may only pre-allocate huge pages for a single
size.
its huge page capacity. A node can pre-allocate huge pages for multiple
sizes.
The nodes will automatically discover and report all huge page resources as a
schedulable resource.
The nodes will automatically discover and report all huge page resources as
schedulable resources.
--->
1. 为了使节点能够上报巨页容量Kubernetes 节点必须预先分配巨页。每个节点只能预先分配一种特定规格的巨页。
1. 为了使节点能够上报巨页容量Kubernetes 节点必须预先分配巨页。每个节点能够预先分配多种规格的巨页。
节点会自动发现全部巨页资源,并作为可供调度的资源进行上报。
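A sketch of node-level pre-allocation; the sysfs path is standard Linux, and boot-time kernel parameters such as `hugepagesz=1G hugepages=2` are generally more reliable, especially for 1GiB pages:

```shell
# Pre-allocate 1024 pages of the 2MiB size at runtime (requires root).
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# Verify what the kernel actually granted:
grep HugePages_Total /proc/meminfo
```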
@@ -59,9 +56,14 @@ A pod may consume multiple huge page sizes in a single pod spec. In this case it
must use `medium: HugePages-<hugepagesize>` notation for all volume mounts.
--->
用户可以通过在容器级别的资源需求中使用资源名称 `hugepages-<size>` 来使用巨页,其中的 size 是特定节点上支持的以整数值表示的最小二进制单位。 例如,如果一个节点支持 2048KiB 和 1048576KiB 页面大小,它将公开可调度的资源 `hugepages-2Mi``hugepages-1Gi`。与 CPU 或内存不同巨页不支持过量使用overcommit。注意在请求巨页资源时还必须请求内存或 CPU 资源。
用户可以通过在容器级别的资源需求中使用资源名称 `hugepages-<size>`
来使用巨页,其中的 size 是特定节点上支持的以整数值表示的最小二进制单位。
例如,如果一个节点支持 2048KiB 和 1048576KiB 页面大小,它将公开可调度的资源
`hugepages-2Mi``hugepages-1Gi`。与 CPU 或内存不同巨页不支持过量使用overcommit
注意,在请求巨页资源时,还必须请求内存或 CPU 资源。
同一 Pod 的 spec 中可能会消耗不同尺寸的巨页。在这种情况下,它必须对所有挂载卷使用 `medium: HugePages-<hugepagesize>` 标识。
同一 Pod 的 spec 中可能会消耗不同尺寸的巨页。在这种情况下,它必须对所有挂载卷使用
`medium: HugePages-<hugepagesize>` 标识。
```yaml
apiVersion: v1
@@ -130,7 +132,7 @@ spec:
<!--
- Huge page requests must equal the limits. This is the default if limits are
specified, but requests are not.
- Huge pages are isolated at a container scope, so each container has own
- Huge pages are isolated at a container scope, so each container has own
limit on their cgroup sandbox as requested in a container spec.
- EmptyDir volumes backed by huge pages may not consume more huge page memory
than the pod request.
@@ -139,12 +141,6 @@ spec:
- Huge page usage in a namespace is controllable via ResourceQuota similar
to other compute resources like `cpu` or `memory` using the `hugepages-<size>`
token.
- Support of multiple sizes huge pages is feature gated. It can be
enabled with the `HugePageStorageMediumSize` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) on the {{<
glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{<
glossary_tooltip text="kube-apiserver"
term_id="kube-apiserver" >}} (`--feature-gates=HugePageStorageMediumSize=false`).
--->
- 巨页的资源请求值必须等于其限制值。该条件在指定了资源限制,而没有指定请求的情况下默认成立。

View File

@@ -1,10 +1,7 @@
---
title: 使用 SC 安装服务目录
reviewers:
- chenopis
content_type: task
---
<!--
title: Install Service Catalog using SC
reviewers:
@@ -61,7 +58,7 @@ The installer runs on your local computer as a CLI tool named `sc`.
Install using `go get`:
-->
## 在本地环境中安装 `sc`
## 在本地环境中安装 `sc` {#install-sc-in-your-local-environment}
安装程序在你的本地计算机上以 CLI 工具的形式运行,名为 `sc`
@@ -81,7 +78,7 @@ go get github.com/GoogleCloudPlatform/k8s-service-catalog/installer/cmd/sc
First, verify that all dependencies have been installed. Run:
-->
## 在 Kubernetes 集群中安装服务目录
## 在 Kubernetes 集群中安装服务目录 {#install-service-catalog-in-your-kubernetes-cluster}
首先,检查是否已经安装了所有依赖项。运行:
@@ -112,7 +109,7 @@ sc install --etcd-backup-storageclass "standard"
If you would like to uninstall Service Catalog from your Kubernetes cluster using the `sc` tool, run:
-->
## 卸载服务目录
## 卸载服务目录 {#uninstall-service-catalog}
如果你想使用 `sc` 工具从 Kubernetes 集群卸载服务目录,请运行:
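The command block itself falls outside this hunk; presumably it is the tool's uninstall subcommand:

```shell
# Presumed invocation; confirm with `sc --help`.
sc uninstall
```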