[zh-cn] sync tasks/administer-cluster/*
Signed-off-by: xin.li <xin.li@daocloud.io>

parent eb1ead1185
commit 8cd9c7dd62

@@ -137,7 +137,7 @@ To limit the access to the `nginx` service so that only Pods with the label `acc
 如果想限制对 `nginx` 服务的访问,只让那些拥有标签 `access: true` 的 Pod 访问它,
 那么可以创建一个如下所示的 NetworkPolicy 对象:

-{{% code file="service/networking/nginx-policy.yaml" %}}
+{{% code_sample file="service/networking/nginx-policy.yaml" %}}

 <!--
 The name of a NetworkPolicy object must be a valid
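
Review note: the referenced `service/networking/nginx-policy.yaml` is not included in this diff. As a rough sketch only (the name and labels below are illustrative, not taken from this commit), a NetworkPolicy that admits only Pods labeled `access: true` looks roughly like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx                # illustrative name
spec:
  podSelector:
    matchLabels:
      app: nginx                    # assumed label on the nginx Pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"            # only Pods carrying access=true may connect
```
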
@@ -224,4 +224,3 @@ wget --spider --timeout=1 nginx
 Connecting to nginx (10.100.0.16:80)
 remote file exists
 ```
-

@@ -40,7 +40,7 @@ kube-dns.
 <!--
 ### Create a simple Pod to use as a test environment

-{{% code file="admin/dns/dnsutils.yaml" %}}
+{{% code_sample file="admin/dns/dnsutils.yaml" %}}

 {{< note >}}
 This example creates a pod in the `default` namespace. DNS name resolution for
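
Review note: `admin/dns/dnsutils.yaml` is not shown in this diff. A minimal sketch of such a DNS test Pod follows; the image and command are assumptions, and any image that ships `nslookup`/`dig` would do:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
  namespace: default                 # the note above says the example uses the default namespace
spec:
  containers:
  - name: dnsutils
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3   # assumed DNS-tooling image
    command: ["sleep", "infinity"]   # keep the Pod running so you can exec into it
  restartPolicy: Always
```
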
@@ -276,7 +276,7 @@ The service name is `kube-dns` for both CoreDNS and kube-dns deployments.
 -->

 {{< note >}}
-不管是 CoreDNS 还是 kube-dns,这个服务的名字都会是 `kube-dns` 。
+不管是 CoreDNS 还是 kube-dns,这个服务的名字都会是 `kube-dns`。
 {{< /note >}}

 <!--
@@ -411,6 +411,7 @@ CoreDNS 必须能够列出 {{< glossary_tooltip text="service" term_id="service"
 {{< glossary_tooltip text="endpoint" term_id="endpoint" >}} 相关的资源来正确解析服务名称。

 示例错误消息:
+
 ```
 2022-03-18T07:12:15.699431183Z [INFO] 10.96.144.227:52299 - 3686 "A IN serverproxy.contoso.net.cluster.local. udp 52 false 512" SERVFAIL qr,aa,rd 145 0.000091221s
 ```
@@ -428,6 +429,7 @@ kubectl describe clusterrole system:coredns -n kube-system
 Expected output:
 -->
 预期输出:
+
 ```
 PolicyRule:
 Resources Non-Resource URLs Resource Names Verbs
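
Review note: to make the expected `kubectl describe clusterrole system:coredns` output easier to interpret, the permissions it reports typically correspond to rules of roughly this shape (a paraphrased sketch, not text from this commit):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:coredns
rules:
- apiGroups: [""]
  resources: ["endpoints", "services", "pods", "namespaces"]   # what CoreDNS must be able to list
  verbs: ["list", "watch"]
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["list", "watch"]
```
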
@@ -482,6 +484,7 @@ This query is limited to the pod's namespace:
 如果 Pod 和服务的名字空间不相同,则 DNS 查询必须指定服务所在的名字空间。

 该查询仅限于 Pod 所在的名字空间:
+
 ```shell
 kubectl exec -i -t dnsutils -- nslookup <service-name>
 ```
@@ -490,6 +493,7 @@ kubectl exec -i -t dnsutils -- nslookup <service-name>
 This query specifies the namespace:
 -->
 指定名字空间的查询:
+
 ```shell
 kubectl exec -i -t dnsutils -- nslookup <service-name>.<namespace>
 ```

@@ -147,7 +147,7 @@ Create a file named `dns-horizontal-autoscaler.yaml` with this content:

 创建文件 `dns-horizontal-autoscaler.yaml`,内容如下所示:

-{{% code file="admin/dns/dns-horizontal-autoscaler.yaml" %}}
+{{% code_sample file="admin/dns/dns-horizontal-autoscaler.yaml" %}}

 <!--
 In the file, replace `<SCALE_TARGET>` with your scale target.
@@ -397,7 +397,7 @@ patterns: *linear* and *ladder*.
 -->
 * 扩缩参数是可以被修改的,而且不需要重建或重启 autoscaler Pod。

-* autoscaler 提供了一个控制器接口来支持两种控制模式:*linear* 和 *ladder*。
+* autoscaler 提供了一个控制器接口来支持两种控制模式:**linear** 和 **ladder**。

 ## {{% heading "whatsnext" %}}

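
Review note: the *linear* and *ladder* modes mentioned above are driven by parameters that cluster-proportional-autoscaler reads from a ConfigMap. Below is a sketch of the linear form with illustrative values; ladder mode instead uses step tables such as `coresToReplicas`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler              # illustrative; must match the autoscaler's --configmap flag
  namespace: kube-system
data:
  # linear mode: replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica))
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 1,
      "preventSinglePointFailure": true
    }
```
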
@@ -407,5 +407,5 @@ patterns: *linear* and *ladder*.
 [implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).

 -->
-* 阅读[为关键插件 Pod 提供的调度保障](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)
-* 进一步了解 [cluster-proportional-autoscaler 实现](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)
+* 阅读[为关键插件 Pod 提供的调度保障](/zh-cn/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/)。
+* 进一步了解 [cluster-proportional-autoscaler 实现](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)。

@@ -65,7 +65,7 @@ Here is a manifest for an example ResourceQuota:

 下面是 ResourceQuota 的示例清单:

-{{% code file="admin/resource/quota-mem-cpu.yaml" %}}
+{{% code_sample file="admin/resource/quota-mem-cpu.yaml" %}}

 <!--
 Create the ResourceQuota:
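
Review note: `admin/resource/quota-mem-cpu.yaml` is not part of this diff; a memory/CPU ResourceQuota of the kind this page describes looks roughly like the following sketch (name and amounts are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo                # illustrative name
spec:
  hard:
    requests.cpu: "1"               # total CPU requests allowed in the namespace
    requests.memory: 1Gi            # total memory requests allowed in the namespace
    limits.cpu: "2"
    limits.memory: 2Gi
```
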
@@ -116,7 +116,7 @@ Here is a manifest for an example Pod:

 以下是 Pod 的示例清单:

-{{% code file="admin/resource/quota-mem-cpu-pod.yaml" %}}
+{{% code_sample file="admin/resource/quota-mem-cpu-pod.yaml" %}}

 <!--
 Create the Pod:
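
Review note: the example Pod manifest is likewise omitted from the diff; in sketch form it is a Pod whose single container declares requests and limits that fit inside the quota (all values and the image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo          # illustrative name
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx                    # assumed image
    resources:
      requests:
        memory: "600Mi"
        cpu: "400m"
      limits:
        memory: "800Mi"
        cpu: "800m"
```
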
@@ -186,7 +186,7 @@ Here is a manifest for a second Pod:

 以下为第二个 Pod 的清单:

-{{% code file="admin/resource/quota-mem-cpu-pod-2.yaml" %}}
+{{% code_sample file="admin/resource/quota-mem-cpu-pod-2.yaml" %}}

 <!--
 In the manifest, you can see that the Pod has a memory request of 700 MiB.
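
Review note: per the English source line above, the second Pod requests 700 MiB of memory; a sketch of such a manifest (the other fields and the image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo-2        # illustrative name
spec:
  containers:
  - name: quota-mem-cpu-demo-2-ctr
    image: redis                    # assumed image
    resources:
      requests:
        memory: "700Mi"             # the memory request stated in the text
        cpu: "400m"
      limits:
        memory: "1Gi"
        cpu: "800m"
```
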
@@ -289,4 +289,3 @@ kubectl delete namespace quota-mem-cpu-example
 * [为容器和 Pod 分配内存资源](/zh-cn/docs/tasks/configure-pod-container/assign-memory-resource/)
 * [为容器和 Pod 分配 CPU 资源](/zh-cn/docs/tasks/configure-pod-container/assign-cpu-resource/)
 * [为 Pod 配置服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)
-

@@ -61,7 +61,7 @@ Here is an example manifest for a ResourceQuota:

 下面是 ResourceQuota 的示例清单:

-{{% code file="admin/resource/quota-pod.yaml" %}}
+{{% code_sample file="admin/resource/quota-pod.yaml" %}}

 <!--
 Create the ResourceQuota:
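
Review note: `admin/resource/quota-pod.yaml` is an object-count quota; in sketch form (name and count illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo                    # illustrative name
spec:
  hard:
    pods: "2"                       # cap on the number of Pods in the namespace
```
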
@@ -104,7 +104,7 @@ Here is an example manifest for a {{< glossary_tooltip term_id="deployment" >}}:
 -->
 下面是一个 {{< glossary_tooltip term_id="deployment" >}} 的示例清单:

-{{% code file="admin/resource/quota-pod-deployment.yaml" %}}
+{{% code_sample file="admin/resource/quota-pod-deployment.yaml" %}}

 <!--
 In that manifest, `replicas: 3` tells Kubernetes to attempt to create three new Pods, all

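
Review note: as the English source line above states, the referenced Deployment sets `replicas: 3`; a sketch of its general shape (labels and image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-quota-demo              # illustrative name
spec:
  replicas: 3                       # Kubernetes will attempt to create three Pods
  selector:
    matchLabels:
      purpose: quota-demo           # assumed label
  template:
    metadata:
      labels:
        purpose: quota-demo
    spec:
      containers:
      - name: pod-quota-demo
        image: nginx                # assumed image
```
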
@@ -52,7 +52,8 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu
 This example assumes the following:

 1. You have an [existing Kubernetes cluster](/docs/setup/).
-2. You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.
+2. You have a basic understanding of Kubernetes {{< glossary_tooltip text="Pods" term_id="pod" >}},
+   {{< glossary_tooltip term_id="service" text="Services" >}}, and {{< glossary_tooltip text="Deployments" term_id="deployment" >}}.
 -->
 ## 环境准备 {#prerequisites}

@@ -133,7 +134,7 @@ Use the file [`namespace-dev.yaml`](/examples/admin/namespace-dev.yaml) which de
 -->
 文件 [`namespace-dev.yaml`](/examples/admin/namespace-dev.yaml) 描述了 `development` 名字空间:

-{{% code language="yaml" file="admin/namespace-dev.yaml" %}}
+{{% code_sample language="yaml" file="admin/namespace-dev.yaml" %}}

 <!--
 Create the `development` namespace using kubectl.
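
Review note: `admin/namespace-dev.yaml` simply declares the `development` namespace; roughly as follows (the label is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development               # assumed convenience label
```
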
@@ -151,7 +152,7 @@ Save the following contents into file [`namespace-prod.yaml`](/examples/admin/na
 将下列的内容保存到文件 [`namespace-prod.yaml`](/examples/admin/namespace-prod.yaml) 中,
 这些内容是对 `production` 名字空间的描述:

-{{% code language="yaml" file="admin/namespace-prod.yaml" %}}
+{{% code_sample language="yaml" file="admin/namespace-prod.yaml" %}}

 <!--
 And then let's create the `production` namespace using kubectl.
@@ -239,7 +240,8 @@ lithe-cocoa-92103_kubernetes
 ```

 <!--
-The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.
+The next step is to define a context for the kubectl client to work in each namespace.
+The value of "cluster" and "user" fields are copied from the current context.
 -->
 下一步是为 kubectl 客户端定义一个上下文,以便在每个名字空间中工作。
 "cluster" 和 "user" 字段的值将从当前上下文中复制。
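
Review note: defining such contexts (usually with `kubectl config set-context`) leaves a kubeconfig fragment roughly like the sketch below. The context names `dev`/`prod` are illustrative; the cluster and user values are the ones copied from the current context, here the `lithe-cocoa-92103_kubernetes` entry seen in the output above:

```yaml
contexts:
- context:
    cluster: lithe-cocoa-92103_kubernetes   # copied from the current context
    user: lithe-cocoa-92103_kubernetes      # copied from the current context
    namespace: development
  name: dev
- context:
    cluster: lithe-cocoa-92103_kubernetes
    user: lithe-cocoa-92103_kubernetes
    namespace: production
  name: prod
```
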
@@ -339,7 +341,7 @@ Let's create some contents.
 -->
 让我们创建一些内容。

-{{% code file="admin/snowflake-deployment.yaml" %}}
+{{% code_sample file="admin/snowflake-deployment.yaml" %}}

 <!--
 Apply the manifest to create a Deployment
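
Review note: `admin/snowflake-deployment.yaml` is summarized by the text in the next hunk as a 2-replica Deployment named `snowflake` whose container serves the hostname; a sketch of that shape (labels and image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snowflake
  labels:
    app: snowflake                  # assumed label
spec:
  replicas: 2                       # "replica size is 2", per the text below
  selector:
    matchLabels:
      app: snowflake
  template:
    metadata:
      labels:
        app: snowflake
    spec:
      containers:
      - name: snowflake
        image: registry.k8s.io/serve_hostname   # assumption: a basic container that serves the hostname
```
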
@@ -351,7 +353,8 @@ kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml
 ```

 <!--
-We have created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that serves the hostname.
+We have created a deployment whose replica size is 2 that is running the pod called
+`snowflake` with a basic container that serves the hostname.
 -->
 我们创建了一个副本大小为 2 的 Deployment,该 Deployment 运行名为 `snowflake` 的 Pod,
 其中包含一个仅提供主机名服务的基本容器。
@@ -374,7 +377,8 @@ snowflake-3968820950-vgc4n 1/1 Running 0 2m
 ```

 <!--
-And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace.
+And this is great, developers are able to do what they want, and they do not have
+to worry about affecting content in the `production` namespace.
 -->
 这很棒,开发人员可以做他们想要的事情,而不必担心影响 `production` 名字空间中的内容。

@@ -22,7 +22,8 @@ object.
 -->
 本文讨论如何为 API 对象配置配额,包括 PersistentVolumeClaim 和 Service。
 配额限制了可以在命名空间中创建的特定类型对象的数量。
-你可以在 [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core) 对象中指定配额。
+你可以在 [ResourceQuota](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcequota-v1-core)
+对象中指定配额。

 ## {{% heading "prerequisites" %}}

@@ -53,7 +54,7 @@ Here is the configuration file for a ResourceQuota object:

 下面是一个 ResourceQuota 对象的配置文件:

-{{% code file="admin/resource/quota-objects.yaml" %}}
+{{% code_sample file="admin/resource/quota-objects.yaml" %}}

 <!--
 Create the ResourceQuota:
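
Review note: `admin/resource/quota-objects.yaml` counts API objects; the page's intro mentions PersistentVolumeClaim and Service. A sketch of such an object-count quota (names and counts illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota-demo           # illustrative name
spec:
  hard:
    persistentvolumeclaims: "1"     # at most one PVC in the namespace
    services.loadbalancers: "2"     # at most two LoadBalancer Services
    services.nodeports: "0"         # no NodePort Services allowed
```
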
@@ -102,7 +103,7 @@ Here is the configuration file for a PersistentVolumeClaim object:

 下面是一个 PersistentVolumeClaim 对象的配置文件:

-{{% code file="admin/resource/quota-objects-pvc.yaml" %}}
+{{% code_sample file="admin/resource/quota-objects-pvc.yaml" %}}

 <!--
 Create the PersistentVolumeClaim:
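
Review note: the PersistentVolumeClaim used to exercise that quota is, in sketch form, as follows; the storage class and size are illustrative, since the object-count quota only cares that a PVC exists:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-quota-demo              # illustrative name
spec:
  storageClassName: manual          # assumption
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi                  # illustrative size
```
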
@@ -141,7 +142,7 @@ Here is the configuration file for a second PersistentVolumeClaim:

 下面是第二个 PersistentVolumeClaim 的配置文件:

-{{% code file="admin/resource/quota-objects-pvc-2.yaml" %}}
+{{% code_sample file="admin/resource/quota-objects-pvc-2.yaml" %}}

 <!--
 Attempt to create the second PersistentVolumeClaim:

@@ -176,7 +176,7 @@ manager as a Daemonset in your cluster, use the following as a guideline:
 -->
 对于已经存在于 Kubernetes 内核中的提供商,你可以在集群中将 in-tree 云管理控制器作为守护进程运行。请使用如下指南:

-{{% code file="admin/cloud/ccm-example.yaml" %}}
+{{% code_sample file="admin/cloud/ccm-example.yaml" %}}

 <!--
 ## Limitations

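
Review note: `admin/cloud/ccm-example.yaml` is a fairly long DaemonSet manifest that is not part of this diff. The skeleton below is only a heavily trimmed sketch of the general shape; the image, flags, tolerations, and selectors are assumptions, not the file's contents:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager     # assumed service account with the required RBAC
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""      # assumption: run only on control-plane nodes
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: cloud-controller-manager
        image: registry.k8s.io/cloud-controller-manager:v1.28.0   # hypothetical image and tag
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=<YOUR_CLOUD_PROVIDER>       # placeholder for the in-tree provider name
        - --leader-elect=true
```
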
@@ -182,3 +182,49 @@ Here are some helpful resources to get started with `policy-controller`:

 - [安装](https://github.com/sigstore/helm-charts/tree/main/charts/policy-controller)
 - [配置选项](https://github.com/sigstore/policy-controller/tree/main/config)
+
+<!--
+## Verify the Software Bill Of Materials
+
+You can verify the Kubernetes Software Bill of Materials (SBOM) by using the
+sigstore certificate and signature, or the corresponding SHA files:
+-->
+## 验证软件物料清单 {#verify-the-software-bill-of-materials}
+
+你可以使用 sigstore 证书和签名或相应的 SHA 文件来验证 Kubernetes 软件物料清单(SBOM):
+
+<!--
+# Retrieve the latest available Kubernetes release version
+
+# Verify the SHA512 sum
+
+# Verify the SHA256 sum
+
+# Retrieve sigstore signature and certificate
+
+# Verify the sigstore signature
+-->
+
+```shell
+# 检索最新可用的 Kubernetes 发行版本
+VERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)
+
+# 验证 SHA512 sum
+curl -Ls "https://sbom.k8s.io/$VERSION/release" -o "$VERSION.spdx"
+echo "$(curl -Ls "https://sbom.k8s.io/$VERSION/release.sha512") $VERSION.spdx" | sha512sum --check
+
+# 验证 SHA256 sum
+echo "$(curl -Ls "https://sbom.k8s.io/$VERSION/release.sha256") $VERSION.spdx" | sha256sum --check
+
+# 检索 sigstore 签名和证书
+curl -Ls "https://sbom.k8s.io/$VERSION/release.sig" -o "$VERSION.spdx.sig"
+curl -Ls "https://sbom.k8s.io/$VERSION/release.cert" -o "$VERSION.spdx.cert"
+
+# 验证 sigstore 签名
+cosign verify-blob \
+  --certificate "$VERSION.spdx.cert" \
+  --signature "$VERSION.spdx.sig" \
+  --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \
+  --certificate-oidc-issuer https://accounts.google.com \
+  "$VERSION.spdx"
+```