Merge pull request #42977 from my-git9/patch-125
[zh-cn] sync access-application-cluster/* debug/* access-authn-authz/*

commit a8b2cdc758
@@ -151,7 +151,7 @@ For example:
 For example:

-{{< codenew file="access/certificate-signing-request/clusterrole-create.yaml" >}}
+{{< code_sample file="access/certificate-signing-request/clusterrole-create.yaml" >}}

 <!--
 To allow approving a CertificateSigningRequest:
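The referenced `clusterrole-create.yaml` is not inlined in this diff. A minimal sketch of a ClusterRole that grants permission to create and read CertificateSigningRequests could look like the following (the name `csr-creator` is illustrative):

```yaml
# Sketch: a ClusterRole letting a subject create and read CSRs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-creator          # illustrative name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["create", "get", "list", "watch"]
```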
@@ -177,7 +177,7 @@ For example:
 For example:

-{{< codenew file="access/certificate-signing-request/clusterrole-approve.yaml" >}}
+{{< code_sample file="access/certificate-signing-request/clusterrole-approve.yaml" >}}

 <!--
 To allow signing a CertificateSigningRequest:
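The approval manifest itself is not part of the diff. Approving a CSR requires `update` on the `certificatesigningrequests/approval` subresource plus the `approve` verb on the relevant signer; a sketch, with illustrative names:

```yaml
# Sketch: a ClusterRole permitting approval of CSRs for one signer.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-approver         # illustrative name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/approval"]
  verbs: ["update"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["signers"]
  resourceNames: ["example.com/my-signer-name"]   # or example.com/* for the whole domain
  verbs: ["approve"]
```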
@@ -199,7 +199,7 @@ To allow signing a CertificateSigningRequest:
 resource: `signers`,
 resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`

-{{< codenew file="access/certificate-signing-request/clusterrole-sign.yaml" >}}
+{{< code_sample file="access/certificate-signing-request/clusterrole-sign.yaml" >}}

 <!--
 ## Signers
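Following the pattern described above, a sketch of a signing ClusterRole (illustrative name, one assumed signer name) might be:

```yaml
# Sketch: a ClusterRole permitting signing of CSRs for one signer.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-signer           # illustrative name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/status"]
  verbs: ["update"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["signers"]
  resourceNames: ["example.com/my-signer-name"]   # <signerNameDomain>/<signerNamePath> or <signerNameDomain>/*
  verbs: ["sign"]
```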
@@ -2303,7 +2303,7 @@ you can create the following ClusterRole:
 If you want to preserve this access in the aggregated roles of a new cluster, you can create the following ClusterRole:

-{{< codenew file="access/endpoints-aggregated.yaml" >}}
+{{< code_sample file="access/endpoints-aggregated.yaml" >}}

 <!--
 ## Upgrading from ABAC
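The `endpoints-aggregated.yaml` file is not shown in the diff. The usual pattern is a ClusterRole that carries an aggregation label so the built-in `edit`/`admin` roles pick up write access to Endpoints; a sketch with an assumed name:

```yaml
# Sketch: a ClusterRole aggregated into the default "edit" role,
# preserving write access to Endpoints.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom:aggregate-to-edit:endpoints   # assumed name
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "delete", "deletecollection", "patch", "update"]
```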
@@ -462,7 +462,7 @@ Here is a sample manifest for such a Secret:
 Here is a sample manifest for such a Secret:

-{{< codenew file="secret/serviceaccount/mysecretname.yaml" >}}
+{{% code_sample file="secret/serviceaccount/mysecretname.yaml" %}}

 <!--
 To create a Secret based on this example, run:
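The referenced manifest is a Secret of type `kubernetes.io/service-account-token`. Roughly, with an assumed ServiceAccount name:

```yaml
# Sketch: a long-lived ServiceAccount token Secret.
# The ServiceAccount named in the annotation must already exist.
apiVersion: v1
kind: Secret
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: myserviceaccount   # assumed ServiceAccount name
type: kubernetes.io/service-account-token
```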
@@ -115,24 +115,32 @@ The following is an example of a ValidatingAdmissionPolicy.
 The following is an example of a ValidatingAdmissionPolicy:

-{{% codenew language="yaml" file="validatingadmissionpolicy/basic-example-policy.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/basic-example-policy.yaml" %}}

 <!--
 `spec.validations` contains CEL expressions which use the [Common Expression Language (CEL)](https://github.com/google/cel-spec)
 to validate the request. If an expression evaluates to false, the validation check is enforced
 according to the `spec.failurePolicy` field.
-
-To configure a validating admission policy for use in a cluster, a binding is required.
-The following is an example of a ValidatingAdmissionPolicyBinding.:
 -->
 `spec.validations` contains the CEL expressions, written in the [Common Expression Language (CEL)](https://github.com/google/cel-spec),
 that validate the request.
 If an expression evaluates to false, the validation check is enforced according to the `spec.failurePolicy` field.

+{{< note >}}
+<!--
+You can quickly test CEL expressions in [CEL Playground](https://playcel.undistro.io).
+-->
+You can quickly test CEL expressions in [CEL Playground](https://playcel.undistro.io).
+{{< /note >}}
+
+<!--
+To configure a validating admission policy for use in a cluster, a binding is required.
+The following is an example of a ValidatingAdmissionPolicyBinding:
+-->
 To configure a validating admission policy for use in a cluster, a binding is required.
 The following is an example of a ValidatingAdmissionPolicyBinding:

-{{% codenew language="yaml" file="validatingadmissionpolicy/basic-example-binding.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/basic-example-binding.yaml" %}}

 <!--
 When trying to create a deployment with replicas set not satisfying the validation expression, an
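Neither referenced YAML file is inlined in the diff. As a sketch of what a basic policy and its binding typically look like (the names, the replica limit, and the `environment: test` selector are assumptions; older clusters expose this API as `admissionregistration.k8s.io/v1beta1`):

```yaml
# Sketch: a minimal ValidatingAdmissionPolicy plus a binding that enforces it
# on Deployments in namespaces labeled environment: test.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-policy.example.com
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: demo-binding-test.example.com
spec:
  policyName: demo-policy.example.com
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
```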
@@ -226,7 +234,7 @@ with parameter configuration.
 If parameter configuration is needed, here is an example of a ValidatingAdmissionPolicy with parameter configuration:

-{{% codenew language="yaml" file="validatingadmissionpolicy/policy-with-param.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/policy-with-param.yaml" %}}

 <!--
 The `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used
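A hedged sketch of a parameterized policy: `ReplicaLimit` and its API group are a hypothetical CustomResourceDefinition kind assumed for illustration.

```yaml
# Sketch: a policy whose expression reads its limit from a parameter object.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replicalimit-policy.example.com
spec:
  failurePolicy: Fail
  paramKind:
    apiVersion: rules.example.com/v1   # hypothetical parameter CRD group/version
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
    reason: Invalid
```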
@@ -261,7 +269,7 @@ every resource request that matches the binding:
 To configure a validating admission policy for use in a cluster, a binding and a parameter resource need to be created.
 The following is an example of a ValidatingAdmissionPolicyBinding with a **cluster-wide** param - the same param will be used to validate every resource request that matches the binding:

-{{% codenew language="yaml" file="validatingadmissionpolicy/binding-with-param.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/binding-with-param.yaml" %}}

 <!--
 Notice this binding applies a parameter to the policy for all resources which
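A sketch of such a binding, continuing the hypothetical `ReplicaLimit` parameter above (object names and the `environment: test` selector are assumptions):

```yaml
# Sketch: a binding that pins the parameterized policy to one parameter object
# for namespaces labeled environment: test.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replicalimit-binding-test.example.com
spec:
  policyName: replicalimit-policy.example.com
  validationActions: [Deny]
  paramRef:
    name: replica-limit-test.example.com       # name of the parameter object
    parameterNotFoundAction: Deny
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
```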
@@ -274,7 +282,7 @@ The parameter resource could be as following:
 -->
 The parameter resource could be as follows:

-{{% codenew language="yaml" file="validatingadmissionpolicy/replicalimit-param.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/replicalimit-param.yaml" %}}

 <!--
 This policy parameter resource limits deployments to a max of 3 replicas.
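A sketch of the parameter object itself, assuming the hypothetical `ReplicaLimit` CRD whose schema puts `maxReplicas` at the top level:

```yaml
# Sketch: a parameter object capping Deployments at 3 replicas.
apiVersion: rules.example.com/v1     # hypothetical group/version
kind: ReplicaLimit
metadata:
  name: replica-limit-test.example.com
maxReplicas: 3
```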
@@ -285,7 +293,7 @@ to have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBind
 An admission policy can have multiple bindings.
 To bind all other environments to a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding:

-{{% codenew language="yaml" file="validatingadmissionpolicy/binding-with-param-prod.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/binding-with-param-prod.yaml" %}}

 <!--
 Notice this binding applies a different parameter to resources which
@@ -298,7 +306,7 @@ And have a parameter resource:
 -->
 And have a parameter resource:

-{{% codenew language="yaml" file="validatingadmissionpolicy/replicalimit-param-prod.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/replicalimit-param-prod.yaml" %}}

 <!--
 For each admission request, the API server evaluates CEL expressions of each
@@ -415,7 +423,7 @@ searches for parameters in that namespace.
 As the author of a ValidatingAdmissionPolicy and its ValidatingAdmissionPolicyBinding,
 you can choose whether they are cluster-wide or namespaced. If you specify `namespace` for the binding's `paramRef`,
-the control plane only searches for parameters in that namespace(名字空间).
+the control plane only searches for parameters in that namespace(命名空间).

 <!--
 However, if `namespace` is not specified in the ValidatingAdmissionPolicyBinding, the
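A sketch of a binding whose `paramRef` is restricted to a single namespace, so that parameters are looked up only there (the names and namespace are assumptions):

```yaml
# Sketch: a binding with a namespaced parameter lookup.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replicalimit-binding-namespaced.example.com   # assumed name
spec:
  policyName: replicalimit-policy.example.com
  validationActions: [Deny]
  paramRef:
    name: replica-limit-test.example.com
    namespace: test-namespace                         # parameters resolved only here
    parameterNotFoundAction: Deny
```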
@@ -504,7 +512,7 @@ Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:
 Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:

-{{% codenew language="yaml" file="validatingadmissionpolicy/failure-policy-ignore.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/failure-policy-ignore.yaml" %}}

 <!--
 ### Validation Expression
|
|||
-->
|
||||
以下示例说明了匹配条件的几个不同用法:
|
||||
|
||||
{{% codenew file="access/validating-admission-policy-match-conditions.yaml" %}}
|
||||
{{% code_sample file="access/validating-admission-policy-match-conditions.yaml" %}}
|
||||
|
||||
<!--
|
||||
Match conditions have access to the same CEL variables as validation expressions.
|
||||
|
@ -698,7 +706,7 @@ For example, here is an admission policy with an audit annotation:
|
|||
|
||||
例如,以下是带有审计注解的准入策略:
|
||||
|
||||
{{% codenew file="access/validating-admission-policy-audit-annotation.yaml" %}}
|
||||
{{% code_sample file="access/validating-admission-policy-audit-annotation.yaml" %}}
|
||||
|
||||
<!--
|
||||
When an API request is validated with this admission policy, the resulting audit event will look like:
|
||||
|
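A sketch of the audit-annotation mechanism: when `valueExpression` evaluates to a string it is recorded on the audit event, and a `null` result omits the annotation (all names and thresholds here are assumptions):

```yaml
# Sketch: a policy that denies very large Deployments and records an audit
# annotation whenever the replica count is merely high.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-audit-annotation.example.com   # assumed name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 100"
    message: "replicas may not exceed 100"
  auditAnnotations:
  - key: high-replica-count
    valueExpression: "object.spec.replicas > 50 ? 'replicas=' + string(object.spec.replicas) : null"
```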
@@ -772,7 +780,7 @@ we can have the following validation:
 For example, to better inform the user of the reason for denial when the policy refers to a parameter, we can have the following validation:

-{{% codenew file="access/deployment-replicas-policy.yaml" %}}
+{{% code_sample file="access/deployment-replicas-policy.yaml" %}}

 <!--
 After creating a params object that limits the replicas to 3 and setting up the binding,
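A sketch of a validation whose `messageExpression` builds the denial message from the parameter object so users see the limit that actually applied (names and the hypothetical `ReplicaLimit` parameter kind are assumptions):

```yaml
# Sketch: messageExpression composes the denial message from params.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deploy-replica-policy.example.com   # assumed name
spec:
  paramKind:
    apiVersion: rules.example.com/v1        # hypothetical parameter CRD
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
    messageExpression: "'object.spec.replicas must be no greater than ' + string(params.maxReplicas)"
    reason: Invalid
```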
@@ -825,7 +833,7 @@ For example, given the following policy definition:
 For example, given the following policy definition:

-{{% codenew language="yaml" file="validatingadmissionpolicy/typechecking.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/typechecking.yaml" %}}

 <!--
 The status will yield the following information:
@@ -850,7 +858,7 @@ For example, the following policy definition
 If multiple resources are matched in `spec.matchConstraints`, all of the matched resources are checked.
 For example, the following policy definition:

-{{% codenew language="yaml" file="validatingadmissionpolicy/typechecking-multiple-match.yaml" %}}
+{{% code_sample language="yaml" file="validatingadmissionpolicy/typechecking-multiple-match.yaml" %}}

 <!--
 will have multiple types and type checking result of each type in the warning message.
@@ -932,7 +940,7 @@ The following is a more complex example of enforcing that image repo names match
 The following is a more complex example that enforces that image repo names match the environment defined in their namespace.

-{{< codenew file="access/image-matches-namespace-environment.policy.yaml" >}}
+{{< code_sample file="access/image-matches-namespace-environment.policy.yaml" >}}

 <!--
 With the policy bound to the namespace `default`, which is labeled `environment: prod`,
@@ -38,7 +38,7 @@ for the Pod:
 In this exercise, you create a Pod that runs two containers. The two containers share a volume that they use to communicate with each other.
 Here is the configuration file for the Pod:

-{{% code file="pods/two-container-pod.yaml" %}}
+{{% code_sample file="pods/two-container-pod.yaml" %}}

 <!--
 In the configuration file, you can see that the Pod has a Volume named
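The referenced `two-container-pod.yaml` is not inlined; a sketch of the pattern, with assumed container and volume names, is a Pod where one container writes into a shared `emptyDir` volume and nginx serves the result:

```yaml
# Sketch: two containers communicating through a shared emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers          # assumed name
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: writer-container
    image: busybox:1.36
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh", "-c"]
    args: ["echo Hello from the writer container > /pod-data/index.html"]
```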
@@ -215,4 +215,3 @@ the shared Volume is lost.
 * See [Share Process Namespace between Containers in a Pod](/zh-cn/docs/tasks/configure-pod-container/share-process-namespace/)
 * See [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
 * See [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
-
@@ -65,7 +65,7 @@ file for the backend Deployment:
 The backend is a simple hello greeter microservice. Here is the configuration file for the backend Deployment:

-{{% code file="service/access/backend-deployment.yaml" %}}
+{{% code_sample file="service/access/backend-deployment.yaml" %}}

 <!--
 Create the backend Deployment:
@@ -145,7 +145,7 @@ First, explore the Service configuration file:
 First, explore the Service configuration file:

-{{% code file="service/access/backend-service.yaml" %}}
+{{% code_sample file="service/access/backend-service.yaml" %}}

 <!--
 In the configuration file, you can see that the Service, named `hello` routes
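The `backend-service.yaml` contents are not shown here. A Service named `hello` that selects the backend Pods and forwards port 80 would look roughly like this; the selector labels and the named target port are assumptions that must match the backend Deployment's Pod template:

```yaml
# Sketch: the `hello` backend Service (selector labels assumed).
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http     # assumes the backend container names its port "http"
```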
@@ -197,7 +197,7 @@ to proxy requests to the `hello` backend Service. Here is the nginx configuratio
 The Pods in the frontend Deployment run an nginx image that is configured
 to proxy requests to the `hello` backend Service. Here is the nginx configuration file:

-{{% code file="service/access/frontend-nginx.conf" %}}
+{{% code_sample file="service/access/frontend-nginx.conf" %}}

 <!--
 Similar to the backend, the frontend has a Deployment and a Service. An important
@@ -210,9 +210,9 @@ accessible from outside the cluster.
 An important difference is that the configuration file for the frontend Service has `type: LoadBalancer`, which means that the Service
 uses the default load balancer of your cloud provider and is therefore accessible from outside the cluster.

-{{% code file="service/access/frontend-service.yaml" %}}
+{{% code_sample file="service/access/frontend-service.yaml" %}}

-{{% code file="service/access/frontend-deployment.yaml" %}}
+{{% code_sample file="service/access/frontend-deployment.yaml" %}}

 <!--
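The frontend Service file is not inlined; a sketch of a `type: LoadBalancer` Service, with assumed selector labels matching the frontend Pods:

```yaml
# Sketch: a frontend Service exposed through the cloud provider's load balancer.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: hello
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```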
@@ -346,4 +346,4 @@ kubectl delete deployment frontend backend
 -->
 * Learn more about [Service](/zh-cn/docs/concepts/services-networking/service/)
 * Learn more about [ConfigMap](/zh-cn/docs/tasks/configure-pod-container/configure-pod-configmap/)
-* Learn more about [DNS for Service and Pods](/zh-cn/docs/concepts/services-networking/dns-pod-service/)
+* Learn more about [DNS for Service and Pod](/zh-cn/docs/concepts/services-networking/dns-pod-service/)
|
@ -207,7 +207,7 @@ The following manifest defines an Ingress that sends traffic to your Service via
|
|||
|
||||
1. 根据下面的 YAML 创建文件 `example-ingress.yaml`:
|
||||
|
||||
{{% code file="service/networking/example-ingress.yaml" %}}
|
||||
{{% code_sample file="service/networking/example-ingress.yaml" %}}
|
||||
|
||||
<!--
|
||||
1. Create the Ingress object by running the following command:
|
||||
|
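The `example-ingress.yaml` body is not part of the diff. A sketch of an Ingress that routes host-based traffic to a Service; the host name, Service name, port, and ingress class are assumptions:

```yaml
# Sketch: an Ingress routing hello-world.example to Service "web" on port 8080.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # assumes an nginx ingress controller is installed
  rules:
  - host: hello-world.example      # assumed host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
```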
@@ -456,4 +456,4 @@ The following manifest defines an Ingress that sends traffic to your Service via
 -->
 * Learn more about [Ingress](/zh-cn/docs/concepts/services-networking/ingress/)
 * Learn more about [Ingress controllers](/zh-cn/docs/concepts/services-networking/ingress-controllers/)
-* Learn more about [Services (服务)](/zh-cn/docs/concepts/services-networking/service/)
+* Learn more about [Service](/zh-cn/docs/concepts/services-networking/service/)
|
@ -45,7 +45,7 @@ Here is the configuration file for the application Deployment:
|
|||
|
||||
这是应用程序部署的配置文件:
|
||||
|
||||
{{% code file="service/access/hello-application.yaml" %}}
|
||||
{{% code_sample file="service/access/hello-application.yaml" %}}
|
||||
|
||||
<!--
|
||||
1. Run a Hello World application in your cluster:
|
||||
|
|
|
@ -43,7 +43,7 @@ For this example we'll use a Deployment to create two pods, similar to the earli
|
|||
-->
|
||||
与之前的例子类似,我们使用一个 Deployment 来创建两个 Pod。
|
||||
|
||||
{{< codenew file="application/nginx-with-request.yaml" >}}
|
||||
{{% code_sample file="application/nginx-with-request.yaml" %}}
|
||||
|
||||
<!--
|
||||
Create deployment by running following command:
|
||||
|
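The referenced `nginx-with-request.yaml` is not shown; in the same spirit, a sketch of a Deployment whose containers declare resource requests (the name, replica count, and request values are illustrative):

```yaml
# Sketch: a two-replica nginx Deployment with CPU and memory requests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment       # assumed name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 100m          # illustrative request values
            memory: 128Mi
        ports:
        - containerPort: 80
```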
|
|
@ -47,7 +47,7 @@ The manifest for that Pod specifies a command that runs when the container start
|
|||
在本练习中,你将创建运行一个容器的 Pod。
|
||||
配置文件指定在容器启动时要运行的命令。
|
||||
|
||||
{{< codenew file="debug/termination.yaml" >}}
|
||||
{{% code_sample file="debug/termination.yaml" %}}
|
||||
|
||||
<!--
|
||||
1. Create a Pod based on the YAML configuration file:
|
||||
|
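A sketch of what such a manifest typically contains: a container whose command writes its termination message to `/dev/termination-log` before exiting (the Pod name and image are assumptions):

```yaml
# Sketch: a Pod whose container records a termination message on exit.
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo          # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: termination-demo-container
    image: busybox:1.36
    command: ["/bin/sh"]
    args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]
```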
@@ -210,4 +210,3 @@ is empty and the container exited with an error. The log output is limited to
 the `terminationMessagePath` field of the resource.
 * Learn about [retrieving logs](/zh-cn/docs/concepts/cluster-administration/logging/).
 * Learn about [Go templates](https://pkg.go.dev/text/template).
-
|
@ -28,7 +28,7 @@ runs the nginx image. Here is the configuration file for the Pod:
|
|||
-->
|
||||
在本练习中,你将创建包含一个容器的 Pod。容器运行 nginx 镜像。下面是 Pod 的配置文件:
|
||||
|
||||
{{< codenew file="application/shell-demo.yaml" >}}
|
||||
{{% code_sample file="application/shell-demo.yaml" %}}
|
||||
|
||||
<!--
|
||||
Create the Pod:
|
||||
|
|
|
@ -148,7 +148,7 @@ Below is an example audit policy file:
|
|||
|
||||
以下是一个审计策略文件的示例:
|
||||
|
||||
{{< codenew file="audit/audit-policy.yaml" >}}
|
||||
{{% code_sample file="audit/audit-policy.yaml" %}}
|
||||
|
||||
<!--
|
||||
You can use a minimal audit policy file to log all requests at the `Metadata` level:
|
||||
|
|
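The full `audit-policy.yaml` is not inlined here. A much shorter sketch of the same kind of policy, with rules chosen for illustration:

```yaml
# Sketch: record pod request/response bodies, keep only metadata for Secrets
# and ConfigMaps, skip the RequestReceived stage, and default to Metadata.
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["pods"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets", "configmaps"]
  # Catch-all: everything else is logged at the Metadata level.
  - level: Metadata
```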
|
@ -24,7 +24,7 @@ To learn how to install and use Node Problem Detector, see
|
|||
[Node Problem Detector project documentation](https://github.com/kubernetes/node-problem-detector).
|
||||
-->
|
||||
|
||||
*节点问题检测器(Node Problem Detector)* 是一个守护程序,用于监视和报告节点的健康状况。
|
||||
**节点问题检测器(Node Problem Detector)** 是一个守护程序,用于监视和报告节点的健康状况。
|
||||
你可以将节点问题探测器以 `DaemonSet` 或独立守护程序运行。
|
||||
节点问题检测器从各种守护进程收集节点问题,并以节点
|
||||
[Condition](/zh-cn/docs/concepts/architecture/nodes/#condition) 和
|
||||
|
@@ -77,7 +77,7 @@ to detect customized node problems. For example:
 <!--
 1. Create a Node Problem Detector configuration similar to `node-problem-detector.yaml`:

-{{< codenew file="debug/node-problem-detector.yaml" >}}
+{{% code_sample file="debug/node-problem-detector.yaml" %}}

 {{< note >}}
 You should verify that the system log directory is right for your operating system distribution.
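The `node-problem-detector.yaml` file is not shown in the diff. A minimal sketch of a DaemonSet that runs the detector and mounts the host log directory; the image tag and log path are assumptions you should verify for your distribution:

```yaml
# Sketch: a DaemonSet running Node Problem Detector with host logs mounted.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-problem-detector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-problem-detector
  template:
    metadata:
      labels:
        app: node-problem-detector
    spec:
      containers:
      - name: node-problem-detector
        image: registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.12   # assumed tag
        securityContext:
          privileged: true
        volumeMounts:
        - name: log
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: log
        hostPath:
          path: /var/log/       # verify this matches your OS distribution
```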
@@ -90,7 +90,7 @@ to detect customized node problems. For example:
 ```
 -->
 1. Create a Node Problem Detector configuration similar to `node-problem-detector.yaml`:

-{{< codenew file="debug/node-problem-detector.yaml" >}}
+{{% code_sample file="debug/node-problem-detector.yaml" %}}

 {{< note >}}
 You should verify that the system log directory is right for your operating system distribution.
@@ -148,7 +148,7 @@ to overwrite the configuration:
 1. Change the `node-problem-detector.yaml` to use the `ConfigMap`:

-{{< codenew file="debug/node-problem-detector-configmap.yaml" >}}
+{{% code_sample file="debug/node-problem-detector-configmap.yaml" %}}

 1. Recreate the Node Problem Detector with the new configuration file:
@@ -165,9 +165,9 @@ to overwrite the configuration:
 kubectl create configmap node-problem-detector-config --from-file=config/
 ```

 1. Change the `node-problem-detector.yaml` to use the ConfigMap:

-{{< codenew file="debug/node-problem-detector-configmap.yaml" >}}
+{{% code_sample file="debug/node-problem-detector-configmap.yaml" %}}

 1. Recreate the Node Problem Detector with the new configuration file:
@@ -303,13 +303,13 @@ The following exporters are supported:
 An exporter reports node problems and/or metrics to certain backends.
 The following exporters are supported:

-- **Kubernetes exporter**: This exporter reports node problems to the Kubernetes API server.
+- **Kubernetes exporter**:This exporter reports node problems to the Kubernetes API server.
  Temporary problems are reported as Events, and permanent problems are reported as Node conditions.

-- **Prometheus exporter**: This exporter reports node problems and metrics locally as Prometheus (or OpenMetrics) metrics.
+- **Prometheus exporter**:This exporter reports node problems and metrics locally as Prometheus (or OpenMetrics) metrics.
  You can specify the IP address and port for the exporter using command line arguments.

-- **Stackdriver exporter**: This exporter reports node problems and metrics to the Stackdriver Monitoring API.
+- **Stackdriver exporter**:This exporter reports node problems and metrics to the Stackdriver Monitoring API.
  The exporting behavior can be customized using a [configuration file](https://github.com/kubernetes/node-problem-detector/blob/v0.8.12/config/exporter/stackdriver-exporter.json).

 <!-- discussion -->