Merge pull request #44394 from my-git9/kms-932

[zh-cn] sync kms-provider configure-multiple-schedulers deprecation-guide
pull/44456/head
Kubernetes Prow Robot 2023-12-18 14:16:52 +01:00 committed by GitHub
commit c1a799d808
GPG Key ID: 4AEE18F83AFDEB23
3 changed files with 56 additions and 48 deletions


@ -33,6 +33,36 @@ deprecated API versions to newer and more stable API versions.
-->
## 各发行版本中移除的 API {#removed-apis-by-release}
### v1.32
<!--
The **v1.32** release will stop serving the following deprecated API versions:
-->
**v1.32** 发行版本将停止提供以下已弃用的 API 版本:
<!--
#### Flow control resources {#flowcontrol-resources-v132}
The **flowcontrol.apiserver.k8s.io/v1beta3** API version of FlowSchema and PriorityLevelConfiguration will no longer be served in v1.32.
-->
#### 流控制资源 {#flowcontrol-resources-v132}
FlowSchema 和 PriorityLevelConfiguration 的
**flowcontrol.apiserver.k8s.io/v1beta3** API 版本将不再在 v1.32 中提供。
<!--
* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1** API version, available since v1.29.
* All existing persisted objects are accessible via the new API
* Notable changes in **flowcontrol.apiserver.k8s.io/v1**:
* The PriorityLevelConfiguration `spec.limited.nominalConcurrencyShares` field
only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.
-->
* 迁移清单和 API 客户端以使用 **flowcontrol.apiserver.k8s.io/v1** API 版本(自 v1.29 起可用)。
* 所有现有的持久对象都可以通过新的 API 访问。
* **flowcontrol.apiserver.k8s.io/v1** 中的显著变化:
* PriorityLevelConfiguration 的 `spec.limited.nominalConcurrencyShares`
字段仅在未指定时默认为 30并且显式值 0 不会被更改为 30。
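<!--
As an illustrative sketch (the object name here is hypothetical), a manifest already migrated to the **flowcontrol.apiserver.k8s.io/v1** API version might look like:
-->
作为示意(此处的对象名称仅为假设示例),一个已迁移到
**flowcontrol.apiserver.k8s.io/v1** API 版本的清单大致如下:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level   # 假设的名称,仅作演示
spec:
  type: Limited
  limited:
    # 在 v1 中,此字段仅在未指定时才默认为 30显式设置的 0 会被保留
    nominalConcurrencyShares: 30
    limitResponse:
      type: Reject
```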
### v1.29
<!--
@ -48,20 +78,28 @@ The **v1.29** release will stop serving the following deprecated API versions:
<!--
The **flowcontrol.apiserver.k8s.io/v1beta2** API version of FlowSchema and PriorityLevelConfiguration will no longer be served in v1.29.
* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io/v1** API version, available since v1.29, or the **flowcontrol.apiserver.k8s.io/v1beta3** API version, available since v1.26.
* All existing persisted objects are accessible via the new API
* Notable changes in **flowcontrol.apiserver.k8s.io/v1**:
* The PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field
is renamed to `spec.limited.nominalConcurrencyShares` and only defaults to 30 when unspecified,
and an explicit value of 0 is not changed to 30.
* Notable changes in **flowcontrol.apiserver.k8s.io/v1beta3**:
* The PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field is renamed to `spec.limited.nominalConcurrencyShares`
-->
**flowcontrol.apiserver.k8s.io/v1beta2** API 版本的 FlowSchema
和 PriorityLevelConfiguration 将不会在 v1.29 中提供。
* 迁移清单和 API 客户端使用 **flowcontrol.apiserver.k8s.io/v1** API 版本(自 v1.29 版本开始可用)
**flowcontrol.apiserver.k8s.io/v1beta3** API 版本(自 v1.26 起可用)
* 所有现有的持久对象都可以通过新的 API 访问;
* **flowcontrol.apiserver.k8s.io/v1** 中的显著变化:
* PriorityLevelConfiguration 的 `spec.limited.assuredConcurrencyShares`
字段已被重命名为 `spec.limited.nominalConcurrencyShares`,仅在未指定时默认为 30
并且显式值 0 不会更改为 30。
* **flowcontrol.apiserver.k8s.io/v1beta3** 中需要额外注意的变更:
* PriorityLevelConfiguration 的 `spec.limited.assuredConcurrencyShares`
字段已被更名为 `spec.limited.nominalConcurrencyShares`。
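<!--
For example (the object name is hypothetical), a v1beta2 manifest that set `spec.limited.assuredConcurrencyShares` would, after migrating to **flowcontrol.apiserver.k8s.io/v1beta3**, set the renamed field instead:
-->
举例来说(对象名称仅为假设),原先在 v1beta2 中设置
`spec.limited.assuredConcurrencyShares` 的清单,迁移到
**flowcontrol.apiserver.k8s.io/v1beta3** 后应改为设置更名后的字段:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level   # 假设的名称,仅作演示
spec:
  type: Limited
  limited:
    # 原 assuredConcurrencyShares 字段在 v1beta3 中更名为 nominalConcurrencyShares
    nominalConcurrencyShares: 10
    limitResponse:
      type: Reject
```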
### v1.27


@ -179,13 +179,13 @@ KMS v2 does not support the `cachesize` property. All data encryption keys (DEKs
the clear once the server has unwrapped them via a call to the KMS. Once cached, DEKs can be used
to perform decryption indefinitely without making a call to the KMS.
-->
KMS v2 不支持 `cachesize` 属性。一旦服务器通过调用 KMS 解密了数据加密密钥DEK
所有的 DEK 将会以明文形式被缓存。一旦被缓存DEK 可以无限期地用于解密操作,而无需再次调用 KMS。
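<!--
As a sketch (the plugin name and socket path below are placeholders), a KMS v2 provider entry in the `EncryptionConfiguration` therefore omits `cachesize` entirely:
-->
作为示意(下面的插件名称和套接字路径均为占位符),`EncryptionConfiguration`
中的 KMS v2 驱动条目完全不需要 `cachesize` 字段:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: myKmsPlugin                      # 占位名称
          endpoint: unix:///tmp/socketfile.sock  # 占位套接字路径
          # 注意KMS v2 不支持 cachesize 字段
```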
<!--
See [Understanding the encryption at rest configuration.](/docs/tasks/administer-cluster/encrypt-data)
-->
参见[理解静态配置加密](/zh-cn/docs/tasks/administer-cluster/encrypt-data)
<!--
## Implementing a KMS plugin
@ -623,7 +623,7 @@ For details about the `EncryptionConfiguration` format, please check the
[API server encryption API reference](/docs/reference/config-api/apiserver-encryption.v1/).
-->
有关 `EncryptionConfiguration` 格式的更多详细信息,请参阅
[kube-apiserver 加密 API 参考v1](/zh-cn/docs/reference/config-api/apiserver-encryption.v1/)。
<!--
## Verifying that the data is encrypted
@ -751,43 +751,14 @@ To switch from a local encryption provider to the `kms` provider and re-encrypt
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```
<!--
## {{% heading "whatsnext" %}}
-->
<!-- preserve legacy hyperlinks -->
<a id="disabling-encryption-at-rest" />
<!--
If you no longer want to use encryption for data persisted in the Kubernetes API, read
[decrypt data that are already stored at rest](/docs/tasks/administer-cluster/decrypt-data/).
-->
如果你不想再对 Kubernetes API 中保存的数据加密,
请阅读[解密已静态存储的数据](/zh-cn/docs/tasks/administer-cluster/decrypt-data/)。


@ -137,11 +137,10 @@ the `kube-scheduler` during initialization with the `--config` option. The `my-s
<!--
In the aforementioned Scheduler Configuration, your scheduler implementation is represented via
a [KubeSchedulerProfile](/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile).
-->
在前面提到的调度器配置中,你的调度器呈现为一个
[KubeSchedulerProfile](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1/#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile)。
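<!--
A minimal sketch of such a scheduler configuration (the profile shown is illustrative) could be:
-->
这样的调度器配置的一个最小示意(所示 profile 仅作演示)如下:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  # Pod 通过 spec.schedulerName 字段选择此调度器
  - schedulerName: my-scheduler
```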
{{< note >}}
<!--
To determine if a scheduler is responsible for scheduling a specific Pod, the `spec.schedulerName` field in a
@ -164,11 +163,11 @@ Also, note that you create a dedicated service account `my-scheduler` and bind t
Please see the
[kube-scheduler documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for
detailed description of other command line arguments and
[Scheduler Configuration reference](/docs/reference/config-api/kube-scheduler-config.v1/) for
detailed description of other customizable `kube-scheduler` configurations.
-->
请参阅 [kube-scheduler 文档](/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/)
获取其他命令行参数以及 [Scheduler 配置参考](/zh-cn/docs/reference/config-api/kube-scheduler-config.v1/)
获取自定义 `kube-scheduler` 配置的详细说明。
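<!--
For instance (the Pod name and image below are illustrative), a Pod opts into this scheduler via the `spec.schedulerName` field:
-->
例如(下面的 Pod 名称和镜像仅作演示Pod 通过 `spec.schedulerName`
字段选择由该调度器调度:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # 假设的名称
spec:
  schedulerName: my-scheduler
  containers:
    - name: app
      image: nginx         # 示例镜像
```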
<!--