Merge remote-tracking branch 'upstream/main' into dev-1.29
commit 2d9fbc1c7e
|
@ -4,7 +4,7 @@ noedit: true
|
|||
cid: docsHome
|
||||
layout: docsportal_home
|
||||
class: gridPage gridPageHome
|
||||
linkTitle: "Home"
|
||||
linkTitle: "Dokumentation"
|
||||
main_menu: true
|
||||
weight: 10
|
||||
hide_feedback: true
|
||||
|
|
|
@ -12,18 +12,13 @@ Ein Tutorial zeigt, wie Sie ein Ziel erreichen, das größer ist als eine einzel
|
|||
Ein Tutorial besteht normalerweise aus mehreren Abschnitten, die jeweils eine Abfolge von Schritten haben.
|
||||
Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise ein Lesezeichen zur Seite mit dem [Standardisierten Glossar](/docs/reference/glossary/) setzen, um später Informationen nachzuschlagen.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Grundlagen
|
||||
|
||||
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) ist ein ausführliches interaktives Lernprogramm, das Ihnen hilft, das Kubernetes-System zu verstehen und einige grundlegende Kubernetes-Funktionen auszuprobieren.
|
||||
|
||||
* [Scalable Microservices mit Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) (Englisch)
|
||||
|
||||
* [Einführung in Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) (Englisch)
|
||||
|
||||
* [Hello Minikube](/docs/tutorials/hello-minikube/)
|
||||
|
||||
## Konfiguration
|
||||
|
@ -33,36 +28,26 @@ Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise e
|
|||
## Stateless Anwendungen
|
||||
|
||||
* [Freigeben einer externen IP-Adresse für den Zugriff auf eine Anwendung in einem Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [Beispiel: Bereitstellung der PHP-Gästebuchanwendung mit Redis](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## Stateful Anwendungen
|
||||
|
||||
* [StatefulSet Grundlagen](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [Beispiel: WordPress und MySQL mit persistenten Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
|
||||
* [Beispiel: Bereitstellen von Cassandra mit Stateful-Sets](/docs/tutorials/stateful-application/cassandra/)
|
||||
|
||||
* [ZooKeeper, ein verteiltes CP-System](/docs/tutorials/stateful-application/zookeeper/)
|
||||
|
||||
## Clusters
|
||||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/)
|
||||
|
||||
* [Seccomp](/docs/tutorials/clusters/seccomp/)
|
||||
|
||||
## Services
|
||||
|
||||
* [Source IP verwenden](/docs/tutorials/services/source-ip/)
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
Wenn Sie ein Tutorial schreiben möchten, lesen Sie
|
||||
[Seitenvorlagen verwenden](/docs/home/contribute/page-templates/)
|
||||
für weitere Informationen zum Typ der Tutorial-Seite und zur Tutorial-Vorlage.
|
||||
|
||||
|
||||
|
|
|
@ -677,6 +677,13 @@ There may be Secrets for several Pods on the same node. However, only the
|
|||
Secrets that a Pod requests are potentially visible within its containers.
|
||||
Therefore, one Pod does not have access to the Secrets of another Pod.
|
||||
|
||||
### Configure least-privilege access to Secrets
|
||||
|
||||
To enhance the security measures around Secrets, Kubernetes provides a mechanism: you can
|
||||
annotate a ServiceAccount with `kubernetes.io/enforce-mountable-secrets: "true"`.
|
||||
|
||||
For more information, you can refer to the [documentation about this annotation](/docs/concepts/security/service-accounts/#enforce-mountable-secrets).
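For illustration, a minimal ServiceAccount manifest carrying this annotation might look like the following (the name and namespace are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa        # placeholder name
  namespace: example-ns   # placeholder namespace
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
```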
|
||||
|
||||
{{< warning >}}
|
||||
Any containers that run with `privileged: true` on a node can access all
|
||||
Secrets used on that node.
|
||||
|
|
|
@ -86,7 +86,8 @@ like `free -m`. This is important because `free -m` does not work in a
|
|||
container, and if users use the [node allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable)
|
||||
feature, out-of-resource decisions
|
||||
are made local to the end user Pod part of the cgroup hierarchy as well as the
|
||||
root node. This [script](/examples/admin/resource/memory-available.sh)
|
||||
root node. This [script](/examples/admin/resource/memory-available.sh) or
|
||||
[cgroupv2 script](/examples/admin/resource/memory-available-cgroupv2.sh)
|
||||
reproduces the same set of steps that the kubelet performs to calculate
|
||||
`memory.available`. The kubelet excludes inactive_file (the number of bytes of
|
||||
file-backed memory on the inactive LRU list) from its calculation, as it assumes that
|
||||
|
|
|
@ -43,15 +43,17 @@ A scheduling or binding cycle can be aborted if the Pod is determined to
|
|||
be unschedulable or if there is an internal error. The Pod will be returned to
|
||||
the queue and retried.
|
||||
|
||||
## Extension points
|
||||
## Interfaces
|
||||
|
||||
The following picture shows the scheduling context of a Pod and the extension
|
||||
points that the scheduling framework exposes. In this picture "Filter" is
|
||||
equivalent to "Predicate" and "Scoring" is equivalent to "Priority function".
|
||||
The following picture shows the scheduling context of a Pod and the interfaces
|
||||
that the scheduling framework exposes.
|
||||
|
||||
One plugin may register at multiple extension points to perform more complex or
|
||||
One plugin may implement multiple interfaces to perform more complex or
|
||||
stateful tasks.
|
||||
|
||||
Some interfaces match the scheduler extension points which can be configured through
|
||||
[Scheduler Configuration](/docs/reference/scheduling/config/#extension-points).
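As a sketch of how that configuration looks (the plugin names here are illustrative; an out-of-tree plugin also has to be compiled into your scheduler binary), a profile can enable or disable plugins per extension point:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation   # switch one default score plugin off
      permit:
        enabled:
          - name: MyCustomPermitPlugin              # hypothetical out-of-tree plugin
```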
|
||||
|
||||
{{< figure src="/images/docs/scheduling-framework-extensions.png" title="Scheduling framework extension points" class="diagram-large">}}
|
||||
|
||||
### PreEnqueue {#pre-enqueue}
|
||||
|
@ -65,6 +67,28 @@ Otherwise, it's placed in the internal unschedulable Pods list, and doesn't get
|
|||
For more details about how internal scheduler queues work, read
|
||||
[Scheduling queue in kube-scheduler](https://github.com/kubernetes/community/blob/f03b6d5692bd979f07dd472e7b6836b2dad0fd9b/contributors/devel/sig-scheduling/scheduler_queues.md).
|
||||
|
||||
### EnqueueExtension
|
||||
|
||||
EnqueueExtension is the interface where the plugin can control
|
||||
whether to retry scheduling of Pods rejected by this plugin, based on changes in the cluster.
|
||||
Plugins that implement PreEnqueue, PreFilter, Filter, Reserve or Permit should implement this interface.
|
||||
|
||||
#### QueueingHint
|
||||
|
||||
{{< feature-state for_k8s_version="v1.28" state="beta" >}}
|
||||
|
||||
QueueingHint is a callback function for deciding whether a Pod can be requeued to the active queue or backoff queue.
|
||||
It's executed every time a certain kind of event or change happens in the cluster.
|
||||
When the QueueingHint finds that the event might make the Pod schedulable,
|
||||
the Pod is put into the active queue or the backoff queue
|
||||
so that the scheduler will retry the scheduling of the Pod.
|
||||
|
||||
{{< note >}}
|
||||
QueueingHint evaluation during scheduling is a beta-level feature and is enabled by default in 1.28.
|
||||
You can disable it via the
|
||||
`SchedulerQueueingHints` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
{{< /note >}}
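For example, assuming you run kube-scheduler directly with command-line flags (static Pod manifests or other deployment methods differ), the gate can be switched off like this:

```shell
kube-scheduler --feature-gates=SchedulerQueueingHints=false
```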
|
||||
|
||||
### QueueSort {#queue-sort}
|
||||
|
||||
These plugins are used to sort Pods in the scheduling queue. A queue sort plugin
|
||||
|
@ -148,7 +172,7 @@ NormalizeScore extension point.
|
|||
|
||||
### Reserve {#reserve}
|
||||
|
||||
A plugin that implements the Reserve extension has two methods, namely `Reserve`
|
||||
A plugin that implements the Reserve interface has two methods, namely `Reserve`
|
||||
and `Unreserve`, that back two informational scheduling phases called Reserve
|
||||
and Unreserve, respectively. Plugins which maintain runtime state (aka "stateful
|
||||
plugins") should use these phases to be notified by the scheduler when resources
|
||||
|
@ -218,7 +242,7 @@ skipped**.
|
|||
|
||||
### PostBind {#post-bind}
|
||||
|
||||
This is an informational extension point. Post-bind plugins are called after a
|
||||
This is an informational interface. Post-bind plugins are called after a
|
||||
Pod is successfully bound. This is the end of a binding cycle, and can be used
|
||||
to clean up associated resources.
|
||||
|
||||
|
|
|
@ -62,6 +62,12 @@ recommendations include:
|
|||
* Implement audit rules that alert on specific events, such as concurrent
|
||||
reading of multiple Secrets by a single user
|
||||
|
||||
#### Additional ServiceAccount annotations for Secret management
|
||||
|
||||
You can also use the `kubernetes.io/enforce-mountable-secrets` annotation on
|
||||
a ServiceAccount to enforce specific rules on how Secrets are used in a Pod.
|
||||
For more details, see the [documentation on this annotation](/docs/reference/labels-annotations-taints/#enforce-mountable-secrets).
|
||||
|
||||
### Improve etcd management policies
|
||||
|
||||
Consider wiping or shredding the durable storage used by `etcd` once it is
|
||||
|
|
|
@ -196,6 +196,36 @@ or using a custom mechanism such as an [authentication webhook](/docs/reference/
|
|||
You can also use TokenRequest to obtain short-lived tokens for your external application.
|
||||
{{< /note >}}
|
||||
|
||||
### Restricting access to Secrets {#enforce-mountable-secrets}
|
||||
|
||||
Kubernetes provides an annotation called `kubernetes.io/enforce-mountable-secrets`
|
||||
that you can add to your ServiceAccounts. When this annotation is applied,
|
||||
the ServiceAccount's secrets can only be mounted on specified types of resources,
|
||||
enhancing the security posture of your cluster.
|
||||
|
||||
You can add the annotation to a ServiceAccount using a manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
annotations:
|
||||
kubernetes.io/enforce-mountable-secrets: "true"
|
||||
name: my-serviceaccount
|
||||
namespace: my-namespace
|
||||
```
|
||||
When this annotation is set to "true", the Kubernetes control plane ensures that
|
||||
the Secrets from this ServiceAccount are subject to certain mounting restrictions.
|
||||
|
||||
1. The name of each Secret that is mounted as a volume in a Pod must appear in the `secrets` field of the
|
||||
Pod's ServiceAccount.
|
||||
1. The name of each Secret referenced using `envFrom` in a Pod must also appear in the `secrets`
|
||||
field of the Pod's ServiceAccount.
|
||||
1. The name of each Secret referenced using `imagePullSecrets` in a Pod must also appear in the `imagePullSecrets`
|
||||
field of the Pod's ServiceAccount.
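As a sketch (the names are placeholders), a ServiceAccount that lists an allowed Secret, together with a Pod that mounts it, might look like this; referencing any Secret that is not listed in `secrets` would prevent the Pod from starting:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
  namespace: my-namespace
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
secrets:
- name: app-config-secret        # placeholder Secret name listed as mountable
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: my-namespace
spec:
  serviceAccountName: my-serviceaccount
  containers:
  - name: app
    image: registry.example/app:1.0   # placeholder image
    volumeMounts:
    - name: config
      mountPath: /etc/app
  volumes:
  - name: config
    secret:
      secretName: app-config-secret   # allowed: listed in the ServiceAccount's `secrets` field
```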
|
||||
|
||||
By understanding and enforcing these restrictions, cluster administrators can maintain a tighter security profile and ensure that secrets are accessed only by the appropriate resources.
|
||||
|
||||
## Authenticating service account credentials {#authenticating-credentials}
|
||||
|
||||
ServiceAccounts use signed
|
||||
|
|
|
@ -8,7 +8,7 @@ weight: 20
|
|||
|
||||
Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers), formal approvers,
|
||||
reviewers and members of SIG Docs take week-long shifts
|
||||
[triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues)
|
||||
[triaging and categorising issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
|
||||
for the repository.
|
||||
|
||||
<!-- body -->
|
||||
|
@ -18,7 +18,7 @@ for the repository.
|
|||
Each day in a week-long shift the Issue Wrangler will be responsible for:
|
||||
|
||||
- Triaging and tagging incoming issues daily. See
|
||||
[Triage and categorize issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues)
|
||||
[Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues)
|
||||
for guidelines on how SIG Docs uses metadata.
|
||||
- Keeping an eye on stale & rotten issues within the kubernetes/website repository.
|
||||
- Maintenance of the [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1).
|
||||
|
|
|
@ -845,6 +845,10 @@ The Kubernetes project strongly recommends enabling this admission controller.
|
|||
You should enable this admission controller if you intend to make any use of Kubernetes
|
||||
`ServiceAccount` objects.
|
||||
|
||||
Regarding the annotation `kubernetes.io/enforce-mountable-secrets`: While the annotation's name suggests it only concerns the mounting of Secrets,
|
||||
its enforcement also extends to other ways Secrets are used in the context of a Pod.
|
||||
Therefore, it is crucial to ensure that all the referenced secrets are correctly specified in the ServiceAccount.
|
||||
|
||||
### StorageObjectInUseProtection
|
||||
|
||||
**Type**: Mutating.
|
||||
|
|
|
@ -548,8 +548,23 @@ Example: `kubernetes.io/enforce-mountable-secrets: "true"`
|
|||
Used on: ServiceAccount
|
||||
|
||||
The value for this annotation must be **true** to take effect.
|
||||
This annotation indicates that Pods running as this ServiceAccount may only reference
|
||||
Secret API objects specified in the ServiceAccount's `secrets` field.
|
||||
When you set this annotation to "true", Kubernetes enforces the following rules for
|
||||
Pods running as this ServiceAccount:
|
||||
|
||||
1. Secrets mounted as volumes must be listed in the ServiceAccount's `secrets` field.
|
||||
1. Secrets referenced in `envFrom` for containers (including sidecar containers and init containers)
|
||||
must also be listed in the ServiceAccount's `secrets` field.
|
||||
If any container in a Pod references a Secret not listed in the ServiceAccount's `secrets` field
|
||||
(and even if the reference is marked as `optional`), then the Pod will fail to start,
|
||||
and an error indicating the non-compliant secret reference will be generated.
|
||||
1. Secrets referenced in a Pod's `imagePullSecrets` must be present in the
|
||||
ServiceAccount's `imagePullSecrets` field; otherwise, the Pod will fail to start,
|
||||
and an error indicating the non-compliant image pull secret reference will be generated.
|
||||
|
||||
When you create or update a Pod, these rules are checked. If a Pod doesn't follow them, it won't start and you'll see an error message.
|
||||
If a Pod is already running and you change the `kubernetes.io/enforce-mountable-secrets` annotation
|
||||
to true, or you edit the associated ServiceAccount to remove the reference to a Secret
|
||||
that the Pod is already using, the Pod continues to run.
|
||||
|
||||
### node.kubernetes.io/exclude-from-external-load-balancers
|
||||
|
||||
|
|
|
@ -54,7 +54,6 @@ their authors, not the Kubernetes team.
|
|||
| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) |
|
||||
| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) |
|
||||
| Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) |
|
||||
| Go | [github.com/ericchiang/k8s](https://github.com/ericchiang/k8s) |
|
||||
| Java (OSGi) | [bitbucket.org/amdatulabs/amdatu-kubernetes](https://bitbucket.org/amdatulabs/amdatu-kubernetes) |
|
||||
| Java (Fabric8, OSGi) | [github.com/fabric8io/kubernetes-client](https://github.com/fabric8io/kubernetes-client) |
|
||||
| Java | [github.com/manusa/yakc](https://github.com/manusa/yakc) |
|
||||
|
|
|
@ -180,7 +180,7 @@ Managers identify distinct workflows that are modifying the object (especially
|
|||
useful on conflicts!), and can be specified through the
|
||||
[`fieldManager`](/docs/reference/kubernetes-api/common-parameters/common-parameters/#fieldManager)
|
||||
query parameter as part of a modifying request. When you Apply to a resource,
|
||||
the `fieldManager` parameter is required
|
||||
the `fieldManager` parameter is required.
|
||||
For other updates, the API server infers a field manager identity from the
|
||||
"User-Agent:" HTTP header (if present).
|
||||
|
||||
|
|
|
@ -72,6 +72,8 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
|
|||
|
||||
### Preparing the hosts
|
||||
|
||||
#### Component installation
|
||||
|
||||
Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
|
||||
For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
|
||||
|
||||
|
@ -84,6 +86,61 @@ kubeadm to tell it what to do. This crashloop is expected and normal.
|
|||
After you initialize your control-plane, the kubelet runs normally.
|
||||
{{< /note >}}
|
||||
|
||||
#### Network setup
|
||||
|
||||
kubeadm, similarly to other Kubernetes components, tries to find a usable IP on
|
||||
the network interface associated with the default gateway on a host. Such
|
||||
an IP is then used for the advertising and/or listening performed by a component.
|
||||
|
||||
To find out what this IP is on a Linux host you can use:
|
||||
|
||||
```shell
|
||||
ip route show # Look for a line starting with "default via"
|
||||
```
|
||||
|
||||
Kubernetes components do not accept a custom network interface as an option;
|
||||
therefore, a custom IP address must be passed as a flag to all component instances
|
||||
that need such a custom configuration.
|
||||
|
||||
To configure the API server advertise address for control plane nodes created with both
|
||||
`init` and `join`, the flag `--apiserver-advertise-address` can be used.
|
||||
Preferably, this option can be set in the [kubeadm API](/docs/reference/config-api/kubeadm-config.v1beta3)
|
||||
as `InitConfiguration.localAPIEndpoint` and `JoinConfiguration.controlPlane.localAPIEndpoint`.
|
||||
|
||||
For kubelets on all nodes, the `--node-ip` option can be passed in
|
||||
`.nodeRegistration.kubeletExtraArgs` inside a kubeadm configuration file
|
||||
(`InitConfiguration` or `JoinConfiguration`).
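For illustration, both settings can be combined in one kubeadm configuration file; the address below is a placeholder for the IP you want this node to use:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.0.10"   # placeholder: API server advertise address for this node
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "192.168.0.10"          # placeholder: address passed to the kubelet
```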
|
||||
|
||||
For dual-stack see
|
||||
[Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support).
|
||||
|
||||
{{< note >}}
|
||||
IP addresses become part of certificates SAN fields. Changing these IP addresses would require
|
||||
signing new certificates and restarting the affected components, so that the change in
|
||||
certificate files is reflected. See
|
||||
[Manual certificate renewal](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal)
|
||||
for more details on this topic.
|
||||
{{</ note >}}
|
||||
|
||||
{{< warning >}}
|
||||
The Kubernetes project recommends against this approach (configuring all component instances
|
||||
with custom IP addresses). Instead, the Kubernetes maintainers recommend setting up the host network
|
||||
so that the default gateway IP is the one that Kubernetes components auto-detect and use.
|
||||
On Linux nodes, you can use commands such as `ip route` to configure networking; your operating
|
||||
system might also provide higher level network management tools. If your node's default gateway
|
||||
is a public IP address, you should configure packet filtering or other security measures that
|
||||
protect the nodes and your cluster.
|
||||
{{< /warning >}}
|
||||
|
||||
{{< note >}}
|
||||
If the host does not have a default gateway, it is recommended to set one up. Otherwise,
|
||||
without passing a custom IP address to a Kubernetes component, the component
|
||||
will exit with an error. If two or more default gateways are present on the host,
|
||||
a Kubernetes component will try to use the first one it encounters that has a suitable
|
||||
global unicast IP address. While making this choice, the exact ordering of gateways
|
||||
might vary between different operating systems and kernel versions.
|
||||
{{< /note >}}
|
||||
|
||||
### Preparing the required container images
|
||||
|
||||
This step is optional and only applies in case you wish `kubeadm init` and `kubeadm join`
|
||||
|
@ -117,11 +174,6 @@ a provider-specific value. See [Installing a Pod network add-on](#pod-network).
|
|||
known endpoints. To use a different container runtime or if there is more than one installed
|
||||
on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See
|
||||
[Installing a runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
|
||||
1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
|
||||
with the default gateway to set the advertise address for this particular control-plane node's API server.
|
||||
To use a different network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
|
||||
to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
|
||||
must specify an IPv6 address, for example `--apiserver-advertise-address=2001:db8::101`
|
||||
|
||||
To initialize the control-plane node run:
|
||||
|
||||
|
|
|
@ -16,7 +16,7 @@ When accessing the Kubernetes API for the first time, we suggest using the
|
|||
Kubernetes CLI, `kubectl`.
|
||||
|
||||
To access a cluster, you need to know the location of the cluster and have credentials
|
||||
to access it. Typically, this is automatically set-up when you work through
|
||||
to access it. Typically, this is automatically set up when you work through
|
||||
a [Getting started guide](/docs/setup/),
|
||||
or someone else set up the cluster and provided you with credentials and a location.
|
||||
|
||||
|
@ -36,20 +36,20 @@ Kubectl handles locating and authenticating to the apiserver.
|
|||
If you want to directly access the REST API with an http client like
|
||||
curl or wget, or a browser, there are several ways to locate and authenticate:
|
||||
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
- Run kubectl in proxy mode.
|
||||
- Recommended approach.
|
||||
- Uses stored apiserver location.
|
||||
- Verifies identity of apiserver using self-signed cert. No MITM possible.
|
||||
- Authenticates to apiserver.
|
||||
- In future, may do intelligent client-side load-balancing and failover.
|
||||
- Provide the location and credentials directly to the http client.
|
||||
- Alternate approach.
|
||||
- Works with some types of client code that are confused by using a proxy.
|
||||
- Need to import a root cert into your browser to protect against MITM.
|
||||
|
||||
### Using kubectl proxy
|
||||
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
The following command runs kubectl in a mode where it acts as a reverse proxy. It handles
|
||||
locating the apiserver and authenticating.
|
||||
Run it like this:
|
||||
|
||||
|
@ -83,7 +83,6 @@ The output is similar to this:
|
|||
}
|
||||
```
|
||||
|
||||
|
||||
### Without kubectl proxy
|
||||
|
||||
Use `kubectl apply` and `kubectl describe secret...` to create a token for the default service account with grep/cut:
|
||||
|
@ -163,16 +162,16 @@ The output is similar to this:
|
|||
}
|
||||
```
|
||||
|
||||
The above examples use the `--insecure` flag. This leaves it subject to MITM
|
||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||
and client certificates to access the server. (These are installed in the
|
||||
`~/.kube` directory). Since cluster certificates are typically self-signed, it
|
||||
The above examples use the `--insecure` flag. This leaves it subject to MITM
|
||||
attacks. When kubectl accesses the cluster it uses a stored root certificate
|
||||
and client certificates to access the server. (These are installed in the
|
||||
`~/.kube` directory). Since cluster certificates are typically self-signed, it
|
||||
may take special configuration to get your http client to use the root
|
||||
certificate.
|
||||
|
||||
On some clusters, the apiserver does not require authentication; it may serve
|
||||
on localhost, or be protected by a firewall. There is not a standard
|
||||
for this. [Controlling Access to the API](/docs/concepts/security/controlling-access)
|
||||
on localhost, or be protected by a firewall. There is not a standard
|
||||
for this. [Controlling Access to the API](/docs/concepts/security/controlling-access)
|
||||
describes how a cluster admin can configure this.
|
||||
|
||||
## Programmatic access to the API
|
||||
|
@ -182,20 +181,30 @@ client libraries.
|
|||
|
||||
### Go client
|
||||
|
||||
* To get the library, run the following command: `go get k8s.io/client-go@kubernetes-<kubernetes-version-number>`, see [INSTALL.md](https://github.com/kubernetes/client-go/blob/master/INSTALL.md#for-the-casual-user) for detailed installation instructions. See [https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go#compatibility-matrix) to see which versions are supported.
|
||||
* Write an application atop of the client-go clients. Note that client-go defines its own API objects, so if needed, please import API definitions from client-go rather than from the main repository, e.g., `import "k8s.io/client-go/kubernetes"` is correct.
|
||||
* To get the library, run the following command: `go get k8s.io/client-go@kubernetes-<kubernetes-version-number>`,
|
||||
see [INSTALL.md](https://github.com/kubernetes/client-go/blob/master/INSTALL.md#for-the-casual-user)
|
||||
for detailed installation instructions. See
|
||||
[https://github.com/kubernetes/client-go](https://github.com/kubernetes/client-go#compatibility-matrix)
|
||||
to see which versions are supported.
|
||||
* Write an application atop of the client-go clients. Note that client-go defines its own API objects,
|
||||
so if needed, please import API definitions from client-go rather than from the main repository,
|
||||
e.g., `import "k8s.io/client-go/kubernetes"` is correct.
|
||||
|
||||
The Go client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this
|
||||
[example](https://git.k8s.io/client-go/examples/out-of-cluster-client-configuration/main.go).
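A minimal out-of-cluster sketch, assuming the default kubeconfig location and the `default` namespace (adapt both to your environment), that simply lists Pods:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client configuration from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List Pods in the "default" namespace as a smoke test.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("There are %d pods in the default namespace\n", len(pods.Items))
}
```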
|
||||
|
||||
If the application is deployed as a Pod in the cluster, please refer to the [next section](#accessing-the-api-from-a-pod).
|
||||
|
||||
### Python client
|
||||
|
||||
To use [Python client](https://github.com/kubernetes-client/python), run the following command: `pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-client/python) for more installation options.
|
||||
To use [Python client](https://github.com/kubernetes-client/python), run the following command:
|
||||
`pip install kubernetes`. See [Python Client Library page](https://github.com/kubernetes-client/python)
|
||||
for more installation options.
|
||||
|
||||
The Python client can use the same [kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this [example](https://github.com/kubernetes-client/python/tree/master/examples).
|
||||
as the kubectl CLI does to locate and authenticate to the apiserver. See this
|
||||
[example](https://github.com/kubernetes-client/python/tree/master/examples).
|
||||
|
||||
### Other languages
|
||||
|
||||
|
@ -218,52 +227,51 @@ For information about connecting to other services running on a Kubernetes clust
|
|||
|
||||
## Requesting redirects
|
||||
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
The redirect capabilities have been deprecated and removed. Please use a proxy (see below) instead.
|
||||
|
||||
## So Many Proxies
|
||||
## So many proxies
|
||||
|
||||
There are several different proxies you may encounter when using Kubernetes:
|
||||
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
1. The [kubectl proxy](#directly-accessing-the-rest-api):
|
||||
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
- runs on a user's desktop or in a pod
|
||||
- proxies from a localhost address to the Kubernetes apiserver
|
||||
- client to proxy uses HTTP
|
||||
- proxy to apiserver uses HTTPS
|
||||
- locates apiserver
|
||||
- adds authentication headers
|
||||
|
||||
1. The [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services):
|
||||
1. The [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster-services/#discovering-builtin-services):
|
||||
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
- is a bastion built into the apiserver
|
||||
- connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
|
||||
- runs in the apiserver processes
|
||||
- client to proxy uses HTTPS (or http if apiserver so configured)
|
||||
- proxy to target may use HTTP or HTTPS as chosen by proxy using available information
|
||||
- can be used to reach a Node, Pod, or Service
|
||||
- does load balancing when used to reach a Service
|
||||
|
||||
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):
|
||||
1. The [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):
|
||||
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is only used to reach services
|
||||
- runs on each node
|
||||
- proxies UDP and TCP
|
||||
- does not understand HTTP
|
||||
- provides load balancing
|
||||
- is only used to reach services
|
||||
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
1. A Proxy/Load-balancer in front of apiserver(s):
|
||||
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
- existence and implementation varies from cluster to cluster (e.g. nginx)
|
||||
- sits between all clients and one or more apiservers
|
||||
- acts as load balancer if there are several apiservers.
|
||||
|
||||
1. Cloud Load Balancers on external services:
|
||||
1. Cloud Load Balancers on external services:
|
||||
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
- are provided by some cloud providers (e.g. AWS ELB, Google Cloud Load Balancer)
|
||||
- are created automatically when the Kubernetes service has type `LoadBalancer`
|
||||
- use UDP/TCP only
|
||||
- implementation varies by cloud provider.
|
||||
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
|
||||
Kubernetes users will typically not need to worry about anything other than the first two types. The cluster admin
|
||||
will typically ensure that the latter types are set up correctly.
|
||||
|
||||
|
|
|
@ -110,13 +110,13 @@ description: |-
|
|||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<h3>Scaling a Deployment</h3>
|
||||
<p>To list your Deployments use the <code>get deployments</code> subcommand:
|
||||
<code><b>kubectl get deployments</b></code></p>
|
||||
<p>To list your Deployments, use the <code>get deployments</code> subcommand:</p>
|
||||
<p><code><b>kubectl get deployments</b></code></p>
|
||||
<p>The output should be similar to:</p>
|
||||
<pre>
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
kubernetes-bootcamp 1/1 1 1 11m
|
||||
</pre>
|
||||
<pre>
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
kubernetes-bootcamp 1/1 1 1 11m
|
||||
</pre>
|
||||
<p>We should have 1 Pod. If not, run the command again. This shows:</p>
|
||||
<ul>
|
||||
<li><em>NAME</em> lists the names of the Deployments in the cluster.</li>
|
||||
|
@ -125,8 +125,8 @@ description: |-
|
|||
<li><em>AVAILABLE</em> displays how many replicas of the application are available to your users.</li>
|
||||
<li><em>AGE</em> displays the amount of time that the application has been running.</li>
|
||||
</ul>
|
||||
<p>To see the ReplicaSet created by the Deployment, run
|
||||
<code><b>kubectl get rs</b></code></p>
|
||||
<p>To see the ReplicaSet created by the Deployment, run:</p>
|
||||
<p><code><b>kubectl get rs</b></code></p>
|
||||
<p>Notice that the name of the ReplicaSet is always formatted as <tt>[DEPLOYMENT-NAME]-[RANDOM-STRING]</tt>. The random string is randomly generated and uses the <em>pod-template-hash</em> as a seed.</p>
|
||||
<p>Two important columns of this output are:</p>
|
||||
<ul>
|
||||
|
|
|
@ -0,0 +1,30 @@
|
|||
#!/bin/bash
|
||||
|
||||
# This script reproduces what the kubelet does
|
||||
# to calculate memory.available relative to kubepods cgroup.
|
||||
|
||||
# current memory usage
|
||||
memory_capacity_in_kb=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
|
||||
memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))
|
||||
memory_usage_in_bytes=$(cat /sys/fs/cgroup/kubepods.slice/memory.current)
|
||||
memory_total_inactive_file=$(cat /sys/fs/cgroup/kubepods.slice/memory.stat | grep inactive_file | awk '{print $2}')
|
||||
|
||||
memory_working_set=${memory_usage_in_bytes}
|
||||
if [ "$memory_working_set" -lt "$memory_total_inactive_file" ];
|
||||
then
|
||||
memory_working_set=0
|
||||
else
|
||||
memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
|
||||
fi
|
||||
|
||||
memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))
|
||||
memory_available_in_kb=$((memory_available_in_bytes / 1024))
|
||||
memory_available_in_mb=$((memory_available_in_kb / 1024))
|
||||
|
||||
echo "memory.capacity_in_bytes $memory_capacity_in_bytes"
|
||||
echo "memory.usage_in_bytes $memory_usage_in_bytes"
|
||||
echo "memory.total_inactive_file $memory_total_inactive_file"
|
||||
echo "memory.working_set $memory_working_set"
|
||||
echo "memory.available_in_bytes $memory_available_in_bytes"
|
||||
echo "memory.available_in_kb $memory_available_in_kb"
|
||||
echo "memory.available_in_mb $memory_available_in_mb"
|
|
@ -0,0 +1,60 @@
|
|||
---
|
||||
reviewers:
|
||||
- ramrodo
|
||||
- krol3
|
||||
- electrocucaracha
|
||||
title: Almacenamiento en Windows
|
||||
content_type: concept
|
||||
weight: 110
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Esta página proporciona una descripción general del almacenamiento específico para el sistema operativo Windows.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Almacenamiento persistente {#storage}
|
||||
|
||||
Windows tiene un controlador de sistema de archivos en capas para montar las capas del contenedor y crear un sistema de archivos de copia basado en NTFS. Todas las rutas de archivos en el contenedor se resuelven únicamente dentro del contexto de ese contenedor.
|
||||
|
||||
- Con Docker, los montajes de volumen solo pueden apuntar a un directorio en el contenedor y no a un archivo individual. Esta limitación no se aplica a containerd.
|
||||
|
||||
- Los montajes de volumen no pueden proyectar archivos o directorios de vuelta al sistema de archivos del host.
|
||||
|
||||
- No se admiten sistemas de archivos de solo lectura debido a que siempre se requiere acceso de escritura para el registro de Windows y la base de datos SAM. Sin embargo, se admiten volúmenes de solo lectura.
|
||||
|
||||
- Las máscaras y permisos de usuario en los volúmenes no están disponibles. Debido a que la base de datos SAM no se comparte entre el host y el contenedor, no hay un mapeo entre ellos. Todos los permisos se resuelven dentro del contexto del contenedor.
|
||||
|
||||
Como resultado, las siguientes funcionalidades de almacenamiento no son compatibles en nodos de Windows:
|
||||
|
||||
- Montajes de subruta de volumen: solo es posible montar el volumen completo en un contenedor de Windows
|
||||
- Montaje de subruta de volumen para secretos
|
||||
- Proyección de montaje en el host
|
||||
- Sistema de archivos raíz de solo lectura (los volúmenes mapeados todavía admiten `readOnly`)
|
||||
- Mapeo de dispositivos de bloque
|
||||
- Memoria como medio de almacenamiento (por ejemplo, `emptyDir.medium` configurado como `Memory`)
|
||||
- Características del sistema de archivos como uid/gid; permisos de sistema de archivos de Linux por usuario
|
||||
- Configuración de [permisos de secretos con DefaultMode](/docs/tasks/inject-data-application/distribute-credentials-secure/#set-posix-permissions-for-secret-keys) (debido a la dependencia de UID/GID)
|
||||
- Soporte de almacenamiento/volumen basado en NFS
|
||||
- Ampliación del volumen montado (resizefs)
|
||||
|
||||
Los {{< glossary_tooltip text="volúmenes" term_id="volume" >}} de Kubernetes habilitan la implementación de aplicaciones complejas, con requisitos de persistencia de datos y uso compartido de volúmenes de Pod, en Kubernetes.
|
||||
La gestión de volúmenes persistentes asociados a un backend o protocolo de almacenamiento específico incluye acciones como la provisión/desprovisión/redimensión de volúmenes, la conexión/desconexión de un volumen de/para un nodo de Kubernetes, y el montaje/desmontaje de un volumen de/para contenedores individuales en un Pod que necesita persistir datos.
|
||||
|
||||
Los componentes de gestión de volúmenes se envían como [plugin](/docs/concepts/storage/volumes/#volume-types) de volumen de Kubernetes.
|
||||
Las siguientes clases de plugins de volumen de Kubernetes son compatibles en Windows:
|
||||
|
||||
- [`FlexVolume plugins`](/docs/concepts/storage/volumes/#flexvolume)
|
||||
|
||||
- Ten en cuenta que los FlexVolumes han sido descontinuados a partir de la versión 1.23.
|
||||
|
||||
- [`CSI Plugins`](/docs/concepts/storage/volumes/#csi)
|
||||
|
||||
##### Plugins de volumen incorporados
|
||||
|
||||
Los siguientes plugins incorporados admiten almacenamiento persistente en nodos de Windows:
|
||||
|
||||
- [`azureFile`](/docs/concepts/storage/volumes/#azurefile)
|
||||
- [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk)
|
||||
- [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume)
|
|
@ -4,6 +4,8 @@ abstract: "自動化されたコンテナのデプロイ・スケール・管理
|
|||
cid: home
|
||||
---
|
||||
|
||||
{{< site-searchbar >}}
|
||||
|
||||
{{< blocks/section id="oceanNodes" >}}
|
||||
{{% blocks/feature image="flower" %}}
|
||||
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/" >}})は、デプロイやスケーリングを自動化したり、コンテナ化されたアプリケーションを管理したりするための、オープンソースのシステムです。
|
||||
|
|
|
@ -53,7 +53,7 @@ Kubernetesスケジューラーには、ノードに接続できるボリュー
|
|||
|
||||
* Azureでは、ノードの種類に応じて、最大64個のディスクをノードに接続できます。詳細については、[Azureの仮想マシンのサイズ](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes)を参照してください。
|
||||
|
||||
* CSIストレージドライバーが(`NodeGetInfo`を使用して)ノードの最大ボリューム数をアドバタイズする場合、{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}はその制限を尊重します。詳細については、[CSIの仕様](https://github.com/ontainer-storage-interface/spec/blob/master/spec.md#nodegetinfo)を参照してください。
|
||||
* CSIストレージドライバーが(`NodeGetInfo`を使用して)ノードの最大ボリューム数をアドバタイズする場合、{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}はその制限を尊重します。詳細については、[CSIの仕様](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo)を参照してください。
|
||||
|
||||
* CSIドライバーに移行されたツリー内プラグインによって管理されるボリュームの場合、ボリュームの最大数はCSIドライバーによって報告される数になります。
|
||||
|
||||
|
|
|
@ -332,7 +332,7 @@ Pod失敗ポリシーまたはPod失敗のバックオフポリシーのいず
|
|||
これらはAPIの要件と機能です:
|
||||
- `.spec.podFailurePolicy`フィールドをJobに使いたい場合は、`.spec.restartPolicy`を`Never`に設定してそのJobのPodテンプレートも定義する必要があります。
|
||||
- `spec.podFailurePolicy.rules`で指定したPod失敗ポリシーのルールが順番に評価されます。あるPodの失敗がルールに一致すると、残りのルールは無視されます。Pod失敗に一致するルールがない場合は、デフォルトの処理が適用されます。
|
||||
- `spec.podFailurePolicy.rules[*].containerName`を指定することで、ルールを特定のコンテナに制限することができます。指定しない場合、ルールはすべてのコンテナに適用されます。指定する場合は、Pod テンプレート内のコンテナ名または`initContainer`名のいずれかに一致する必要があります。
|
||||
- `spec.podFailurePolicy.rules[*].onExitCodes.containerName`を指定することで、ルールを特定のコンテナに制限することができます。指定しない場合、ルールはすべてのコンテナに適用されます。指定する場合は、Pod テンプレート内のコンテナ名または`initContainer`名のいずれかに一致する必要があります。
|
||||
- Pod失敗ポリシーが`spec.podFailurePolicy.rules[*].action`にマッチしたときに実行されるアクションを指定できます。指定可能な値は以下のとおりです。
|
||||
- `FailJob`: PodのJobを`Failed`としてマークし、実行中の Pod をすべて終了させる必要があることを示します。
|
||||
- `Ignore`: `.spec.backoffLimit`のカウンターは加算されず、代替のPodが作成すべきであることを示します。
|
||||
|
|
|
@ -142,7 +142,7 @@ Cluster Domain | Service (ns/name) | StatefulSet (ns/name) | StatefulSet Domain
|
|||
|
||||
### 安定したストレージ
|
||||
|
||||
Kubernetesは各VolumeClaimTemplateに対して、1つの[PersistentVolume](/docs/concepts/storage/persistent-volumes/)を作成します。上記のnginxの例において、各Podは`my-storage-class`というStorageClassをもち、1Gibのストレージ容量を持った単一のPersistentVolumeを受け取ります。もしStorageClassが指定されていない場合、デフォルトのStorageClassが使用されます。PodがNode上にスケジュール(もしくは再スケジュール)されたとき、その`volumeMounts`はPersistentVolume Claimに関連したPersistentVolumeをマウントします。
|
||||
Kubernetesは各VolumeClaimTemplateに対して、1つの[PersistentVolume](/docs/concepts/storage/persistent-volumes/)を作成します。上記のnginxの例において、各Podは`my-storage-class`というStorageClassをもち、1GiBのストレージ容量を持った単一のPersistentVolumeを受け取ります。もしStorageClassが指定されていない場合、デフォルトのStorageClassが使用されます。PodがNode上にスケジュール(もしくは再スケジュール)されたとき、その`volumeMounts`はPersistentVolume Claimに関連したPersistentVolumeをマウントします。
|
||||
注意点として、PodのPersistentVolume Claimと関連したPersistentVolumeは、PodやStatefulSetが削除されたときに削除されません。
|
||||
削除する場合は手動で行わなければなりません。
|
||||
|
||||
|
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: Helmチャート
|
||||
id: helm-chart
|
||||
date: 2018-04-12
|
||||
full_link: https://helm.sh/docs/topics/charts/
|
||||
short_description: >
|
||||
Helmツールで管理できる、事前構成されたKubernetesリソースのパッケージ。
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- tool
|
||||
---
|
||||
Helmツールで管理できる、事前構成されたKubernetesリソースのパッケージ。
|
||||
|
||||
<!--more-->
|
||||
|
||||
チャートは、Kubernetesアプリケーションを作成および共有する再現可能な方法を提供します。
|
||||
単一のチャートを使用して、memcached Podなどの単純なもの、またはHTTPサーバー、データベース、キャッシュなどを含む完全なWebアプリスタックなどの複雑なものをデプロイできます。
|
||||
|
|
@ -111,7 +111,7 @@ exit # コンテナ内のシェルを終了する
|
|||
シェルではない通常のコマンドウインドウ内で、実行中のコンテナの環境変数の一覧を表示します:
|
||||
|
||||
```shell
|
||||
kubectl exec shell-demo env
|
||||
kubectl exec shell-demo -- env
|
||||
```
|
||||
|
||||
他のコマンドを試します。以下がいくつかの例です:
|
||||
|
|
|
@ -54,7 +54,7 @@ content_type: concept
|
|||
* [クラスターレベルのPod Securityの標準の適用](/docs/tutorials/security/cluster-level-pss/)
|
||||
* [NamespaceレベルのPod Securityの標準の適用](/docs/tutorials/security/ns-level-pss/)
|
||||
* [AppArmor](/docs/tutorials/security/apparmor/)
|
||||
* [seccomp](/docs/tutorials/security/seccomp/)
|
||||
* [Seccomp](/docs/tutorials/security/seccomp/)
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
|
|
@ -28,11 +28,8 @@ Before walking through each tutorial, you may want to bookmark the
|
|||
<!--* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.
|
||||
-->
|
||||
* [Основи Kubernetes](/docs/tutorials/kubernetes-basics/) - детальний навчальний матеріал з інтерактивними уроками, що допоможе вам зрозуміти Kubernetes і спробувати його базову функціональність.
|
||||
|
||||
* [Масштабовані мікросервіси з Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
|
||||
|
||||
* [Вступ до Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
|
||||
|
||||
* [Привіт Minikube](/docs/tutorials/hello-minikube/)
|
||||
|
||||
<!--## Configuration
|
||||
|
@ -40,37 +37,29 @@ Before walking through each tutorial, you may want to bookmark the
|
|||
## Конфігурація
|
||||
|
||||
* [Приклад: Конфігурування Java мікросервісу](/docs/tutorials/configuration/configure-java-microservice/)
|
||||
|
||||
* [Конфігурування Redis використовуючи ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/)
|
||||
|
||||
## Застосунки без стану (Stateless Applications) {#застосунки-без-стану}
|
||||
|
||||
* [Відкриття зовнішньої IP-адреси для доступу до програми в кластері](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [Приклад: Розгортання застосунку PHP Guestbook з Redis](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## Застосунки зі станом (Stateful Applications) {#застосунки-зі-станом}
|
||||
|
||||
* [Основи StatefulSet](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [Приклад: WordPress та MySQL із постійними томами](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
|
||||
* [Приклад: Розгортання Cassandra зі Stateful Sets](/docs/tutorials/stateful-application/cassandra/)
|
||||
|
||||
* [Запуск ZooKeeper, координатора розподіленої системи](/docs/tutorials/stateful-application/zookeeper/)
|
||||
|
||||
## Кластери
|
||||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/)
|
||||
|
||||
* [seccomp](/docs/tutorials/clusters/seccomp/)
|
||||
* [Seccomp](/docs/tutorials/clusters/seccomp/)
|
||||
|
||||
## Сервіси
|
||||
|
||||
* [Використання Source IP](/docs/tutorials/services/source-ip/)
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
|
@ -81,5 +70,3 @@ for information about the tutorial page type and the tutorial template.
|
|||
Якщо ви хочете написати навчальний матеріал, у статті
|
||||
[Використання шаблонів сторінок](/docs/home/contribute/page-templates/)
|
||||
ви знайдете інформацію про тип навчальної сторінки і шаблон.
|
||||
|
||||
|
||||
|
|
|
@ -34,7 +34,7 @@ Google 每周运行数十亿个容器,Kubernetes 基于与之相同的原则
|
|||
{{% blocks/feature image="blocks" %}}
|
||||
|
||||
<!-- #### Never Outgrow -->
|
||||
#### 处处适用
|
||||
#### 永不过时
|
||||
|
||||
<!-- Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications
|
||||
consistently and easily no matter how complex your need is. -->
|
||||
|
@ -45,8 +45,8 @@ consistently and easily no matter how complex your need is. -->
|
|||
|
||||
{{% blocks/feature image="suitcase" %}}
|
||||
|
||||
<!-- #### Run Anywhere -->
|
||||
#### 永不过时
|
||||
<!-- #### Run K8s Anywhere -->
|
||||
#### 处处适用
|
||||
|
||||
<!-- Kubernetes is open source giving you the freedom to take advantage of on-premises,
|
||||
hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.
|
||||
|
@ -70,14 +70,14 @@ Kubernetes 是开源系统,可以自由地部署在企业内部,私有云、
|
|||
<button id="desktopShowVideoButton" onclick="kub.showVideo()">观看视频</button>
|
||||
<br>
|
||||
<br>
|
||||
<!-- <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on April 18-21, 2023</a> -->
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">参加 2023 年 4 月 18-21 日的欧洲 KubeCon + CloudNativeCon</a>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<!-- <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 6-9, 2023</a> -->
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">参加 2023 年 11 月 6-9 日的北美 KubeCon + CloudNativeCon</a>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<!-- <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a> -->
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">参加 2024 年 3 月 19-22 日的欧洲 KubeCon + CloudNativeCon</a>
|
||||
</div>
|
||||
<div id="videoPlayer">
|
||||
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
|
||||
|
|
|
@ -326,7 +326,7 @@ please join us in the #sig-network-gateway-api channel on Kubernetes Slack or ou
|
|||
请通过 Kubernetes Slack 的 #sig-network-gateway-api 频道或我们每周的
|
||||
[社区电话会议](https://gateway-api.sigs.k8s.io/contributing/community/#meetings)加入我们。
|
||||
|
||||
[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md
|
||||
[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/main/geps/gep-1016.md
|
||||
[grpc]:https://grpc.io/
|
||||
[pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085
|
||||
[tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go
|
||||
|
|
|
@ -55,7 +55,7 @@ apiserver 中;因此,它会收到 404("Not Found")的响应报错,这
|
|||
-->
|
||||
## 如何解决此问题?
|
||||
|
||||
{{< figure src="/images/blog/2023-08-28-a-new-alpha-mechanism-for-safer-cluster-upgrades/mvp-flow-diagram.svg" class="diagram-large" >}}
|
||||
{{< figure src="/images/blog/2023-08-28-a-new-alpha-mechanism-for-safer-cluster-upgrades/mvp-flow-diagram_zh.svg" class="diagram-large" >}}
|
||||
|
||||
<!--
|
||||
The new feature “Mixed Version Proxy” provides the kube-apiserver with the capability to proxy a request to a peer kube-apiserver which is aware of the requested resource and hence can serve the request. To do this, a new filter has been added to the handler chain in the API server's aggregation layer.
|
||||
|
|
|
@ -0,0 +1,181 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'Kubernetes 1.29 中的移除、弃用和主要变更'
|
||||
date: 2023-11-16
|
||||
slug: kubernetes-1-29-upcoming-changes
|
||||
---
|
||||
<!--
|
||||
layout: blog
|
||||
title: 'Kubernetes Removals, Deprecations, and Major Changes in Kubernetes 1.29'
|
||||
date: 2023-11-16
|
||||
slug: kubernetes-1-29-upcoming-changes
|
||||
-->
|
||||
|
||||
<!--
|
||||
**Authors:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel
|
||||
-->
|
||||
**作者:** Carol Valencia, Kristin Martin, Abigail McCarthy, James Quigley, Hosam Kamel
|
||||
|
||||
**译者:** [Michael Yao](https://github.com/windsonsea) (DaoCloud)
|
||||
|
||||
<!--
|
||||
As with every release, Kubernetes v1.29 will introduce feature deprecations and removals. Our continued ability to produce high-quality releases is a testament to our robust development cycle and healthy community. The following are some of the deprecations and removals coming in the Kubernetes 1.29 release.
|
||||
-->
|
||||
和其他每次发布一样,Kubernetes v1.29 将弃用和移除一些特性。
|
||||
一贯以来生成高质量发布版本的能力是开发周期稳健和社区健康的证明。
|
||||
下文列举即将发布的 Kubernetes 1.29 中的一些弃用和移除事项。
|
||||
|
||||
<!--
|
||||
## The Kubernetes API removal and deprecation process
|
||||
|
||||
The Kubernetes project has a well-documented deprecation policy for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API is one that has been marked for removal in a future Kubernetes release; it will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
|
||||
-->
|
||||
## Kubernetes API 移除和弃用流程
|
||||
|
||||
Kubernetes 项目对特性有一个文档完备的弃用策略。此策略规定,只有当同一 API 有了较新的、稳定的版本可用时,
|
||||
原有的稳定 API 才可以被弃用,各个不同稳定级别的 API 都有一个最短的生命周期。
|
||||
弃用的 API 指的是已标记为将在后续某个 Kubernetes 发行版本中被移除的 API;
|
||||
移除之前该 API 将继续发挥作用(从被弃用起至少一年时间),但使用时会显示一条警告。
|
||||
被移除的 API 将在当前版本中不再可用,此时你必须转为使用替代的 API。
|
||||
|
||||
<!--
|
||||
* Generally available (GA) or stable API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes.
|
||||
* Beta or pre-release API versions must be supported for 3 releases after deprecation.
|
||||
* Alpha or experimental API versions may be removed in any release without prior deprecation notice.
|
||||
-->
|
||||
- 正式发布(GA)或稳定的 API 版本可能被标记为已弃用,但只有在 Kubernetes 主版本变化时才会被移除。
|
||||
- 测试版(Beta)或预发布 API 版本在弃用后必须在后续 3 个版本中继续支持。
|
||||
- Alpha 或实验性 API 版本可以在任何版本中被移除,不另行通知。
|
||||
|
||||
<!--
|
||||
Whether an API is removed as a result of a feature graduating from beta to stable or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the documentation.
|
||||
-->
|
||||
无论一个 API 是因为某特性从 Beta 进阶至稳定阶段而被移除,还是因为该 API 根本没有成功,
|
||||
所有移除均遵从上述弃用策略。无论何时移除一个 API,文档中都会列出迁移选项。
|
||||
|
||||
<!--
|
||||
## A note about the k8s.gcr.io redirect to registry.k8s.io
|
||||
|
||||
To host its container images, the Kubernetes project uses a community-owned image registry called registry.k8s.io. Starting last March traffic to the old k8s.gcr.io registry began being redirected to registry.k8s.io. The deprecated k8s.gcr.io registry will eventually be phased out. For more details on this change or to see if you are impacted, please read [k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know](/blog/2023/03/10/image-registry-redirect/).
|
||||
-->
|
||||
## k8s.gcr.io 重定向到 registry.k8s.io 相关说明
|
||||
|
||||
Kubernetes 项目为了托管其容器镜像,使用社区自治的一个名为 registry.k8s.io 的镜像仓库。
|
||||
从最近的 3 月份起,所有流向 k8s.gcr.io 旧仓库的请求开始被重定向到 registry.k8s.io。
|
||||
已弃用的 k8s.gcr.io 仓库最终将被淘汰。有关这一变更的细节或若想查看你是否受到影响,参阅
|
||||
[k8s.gcr.io 重定向到 registry.k8s.io - 用户须知](/zh-cn/blog/2023/03/10/image-registry-redirect/)。
|
||||
|
||||
<!--
|
||||
## A note about the Kubernetes community-owned package repositories
|
||||
|
||||
Earlier in 2023, the Kubernetes project [introduced](/blog/2023/08/15/pkgs-k8s-io-introduction/) `pkgs.k8s.io`, community-owned software repositories for Debian and RPM packages. The community-owned repositories replaced the legacy Google-owned repositories (`apt.kubernetes.io` and `yum.kubernetes.io`).
|
||||
On September 13, 2023, those legacy repositories were formally deprecated and their contents frozen.
|
||||
-->
|
||||
## Kubernetes 社区自治软件包仓库相关说明
|
||||
|
||||
在 2023 年年初,Kubernetes 项目[引入了](/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/) `pkgs.k8s.io`,
|
||||
这是 Debian 和 RPM 软件包所用的社区自治软件包仓库。这些社区自治的软件包仓库取代了先前由 Google 管理的仓库
|
||||
(`apt.kubernetes.io` 和 `yum.kubernetes.io`)。在 2023 年 9 月 13 日,这些老旧的仓库被正式弃用,其内容被冻结。
|
||||
|
||||
<!--
|
||||
For more information on this change or to see if you are impacted, please read the [deprecation announcement](/blog/2023/08/31/legacy-package-repository-deprecation/).
|
||||
-->
|
||||
有关这一变更的细节或你若想查看是否受到影响,
|
||||
请参阅[弃用公告](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)。
|
||||
|
||||
<!--
## Deprecations and removals for Kubernetes v1.29

See the official list of [API removals](/docs/reference/using-api/deprecation-guide/#v1-29) for a full list of planned deprecations for Kubernetes v1.29.
-->
## Kubernetes v1.29 的弃用和移除说明

有关 Kubernetes v1.29 计划弃用的完整列表,
参见官方 [API 移除](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-29)列表。

<!--
### Removal of in-tree integrations with cloud providers ([KEP-2395](https://kep.k8s.io/2395))

The [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `DisableCloudProviders` and `DisableKubeletCloudCredentialProviders` will both be set to `true` by default for Kubernetes v1.29. This change will require that users who are currently using in-tree cloud provider integrations (Azure, GCE, or vSphere) enable external cloud controller managers, or opt in to the legacy integration by setting the associated feature gates to `false`.
-->
### 移除与云驱动的内部集成([KEP-2395](https://kep.k8s.io/2395))

在 Kubernetes v1.29 中,特性门控 `DisableCloudProviders` 和 `DisableKubeletCloudCredentialProviders`
默认都将被设置为 `true`。这个变更将要求当前正在使用内部云驱动集成(Azure、GCE 或 vSphere)的用户启用外部云控制器管理器,
或者将关联的特性门控设置为 `false` 以选择传统的集成方式。

<!--
Enabling external cloud controller managers means you must run a suitable cloud controller manager within your cluster's control plane; it also requires setting the command line argument `--cloud-provider=external` for the kubelet (on every relevant node), and across the control plane (kube-apiserver and kube-controller-manager).
-->
启用外部云控制器管理器意味着你必须在集群的控制平面中运行一个合适的云控制器管理器;
同时还需要为 kubelet(在每个相关节点上)及整个控制平面(kube-apiserver 和 kube-controller-manager)
设置命令行参数 `--cloud-provider=external`。
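
下面是一个高度简化的示意,仅用来说明上述标志出现的位置;实际设置方式取决于你的部署工具
(例如 kubeadm、安装脚本或 systemd 单元文件),其余必需参数在此省略:

```bash
# 每个相关节点上的 kubelet(示意)
kubelet --cloud-provider=external ...

# 控制平面组件(示意)
kube-apiserver --cloud-provider=external ...
kube-controller-manager --cloud-provider=external ...

# 同时需要在控制平面中部署与你的云平台对应的外部云控制器管理器,
# 具体部署清单请参阅下文链接的官方任务文档。
```
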
<!--
For more information about how to enable and run external cloud controller managers, read [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) and [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/).

For general information about cloud controller managers, please see
[Cloud Controller Manager](/docs/concepts/architecture/cloud-controller/) in the Kubernetes documentation.
-->
有关如何启用和运行外部云控制器管理器的细节,
参阅[管理云控制器管理器](/zh-cn/docs/tasks/administer-cluster/running-cloud-controller/)和
[迁移多副本的控制面以使用云控制器管理器](/zh-cn/docs/tasks/administer-cluster/controller-manager-leader-migration/)。

有关云控制器管理器的常规信息,请参阅 Kubernetes
文档中的[云控制器管理器](/zh-cn/docs/concepts/architecture/cloud-controller/)。

<!--
### Removal of the `v1beta2` flow control API group

The _flowcontrol.apiserver.k8s.io/v1beta2_ API version of FlowSchema and PriorityLevelConfiguration will [no longer be served](/docs/reference/using-api/deprecation-guide/#v1-29) in Kubernetes v1.29.

To prepare for this, you can edit your existing manifests and rewrite client software to use the `flowcontrol.apiserver.k8s.io/v1beta3` API version, available since v1.26. All existing persisted objects are accessible via the new API. Notable changes in `flowcontrol.apiserver.k8s.io/v1beta3` include
that the PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field was renamed to `spec.limited.nominalConcurrencyShares`.
-->
### 移除 `v1beta2` 流量控制 API 组

在 Kubernetes v1.29 中,将[不再提供](/zh-cn/docs/reference/using-api/deprecation-guide/#v1-29)
FlowSchema 和 PriorityLevelConfiguration 的 **flowcontrol.apiserver.k8s.io/v1beta2** API 版本。

为了做好准备,你可以编辑现有的清单(Manifest)并重写客户端软件,以使用自 v1.26 起可用的
`flowcontrol.apiserver.k8s.io/v1beta3` API 版本。所有现有的持久化对象都可以通过新的 API 访问。
`flowcontrol.apiserver.k8s.io/v1beta3` 中的显著变化包括将 PriorityLevelConfiguration 的
`spec.limited.assuredConcurrencyShares` 字段更名为 `spec.limited.nominalConcurrencyShares`。
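
如果不确定集群中是否仍有依赖旧版本 API 的对象或客户端,可以先通过 v1beta3 版本读取现有对象,
并检查更名后的字段。下面是一个示意(以默认创建的 workload-low 为例,假设它未被删除):

```bash
# 通过 v1beta3 API 版本列出现有的流量控制对象
kubectl get flowschemas.v1beta3.flowcontrol.apiserver.k8s.io
kubectl get prioritylevelconfigurations.v1beta3.flowcontrol.apiserver.k8s.io

# 查看某个 PriorityLevelConfiguration 中更名后的字段
kubectl get prioritylevelconfigurations.v1beta3.flowcontrol.apiserver.k8s.io workload-low \
  -o jsonpath='{.spec.limited.nominalConcurrencyShares}{"\n"}'
```
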
<!--
### Deprecation of the `status.nodeInfo.kubeProxyVersion` field for Node

The `.status.kubeProxyVersion` field for Node objects will be [marked as deprecated](https://github.com/kubernetes/enhancements/issues/4004) in v1.29 in preparation for its removal in a future release. This field is not accurate and is set by kubelet, which does not actually know the kube-proxy version, or even if kube-proxy is running.
-->
### 弃用针对 Node 的 `status.nodeInfo.kubeProxyVersion` 字段

在 v1.29 中,针对 Node 对象的 `.status.kubeProxyVersion` 字段将被
[标记为弃用](https://github.com/kubernetes/enhancements/issues/4004),
准备在未来某个发行版本中移除。这是因为此字段并不准确,它由 kubelet 设置,
而 kubelet 实际上并不知道 kube-proxy 版本,甚至不知道 kube-proxy 是否在运行。
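
如果你的脚本或监控依赖此字段,可以先检查自己是否在读取它。下面是一个示意:

```bash
# 查看各节点上报的 kubeProxyVersion(该字段即将弃用,请勿在自动化中依赖)
kubectl get nodes -o custom-columns='NAME:.metadata.name,KUBE-PROXY:.status.nodeInfo.kubeProxyVersion'
```
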
<!--
## Want to know more?

Deprecations are announced in the Kubernetes release notes. You can see the announcements of pending deprecations in the release notes for:
-->
## 了解更多

弃用信息是在 Kubernetes 发布说明(Release Notes)中公布的。你可以在以下版本的发布说明中看到待弃用的公告:

* [Kubernetes v1.25](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md#deprecation)
* [Kubernetes v1.26](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md#deprecation)
* [Kubernetes v1.27](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#deprecation)
* [Kubernetes v1.28](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.28.md#deprecation)

<!--
We will formally announce the deprecations that come with [Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation) as part of the CHANGELOG for that release.

For information on the deprecation and removal process, refer to the official Kubernetes [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api) document.
-->
我们将在
[Kubernetes v1.29](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.29.md#deprecation)
的 CHANGELOG 中正式宣布与该版本相关的弃用信息。

有关弃用和移除流程的细节,参阅 Kubernetes
官方[弃用策略](/zh-cn/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api)文档。

@ -209,7 +209,7 @@ clients that access it.

<!-- image source: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->

{{< figure src="/docs/concepts/extend-kubernetes/extension-points.svg"
{{< figure src="/docs/concepts/extend-kubernetes/extension-points.png"
alt="用符号表示的七个编号的 Kubernetes 扩展点"
class="diagram-large" caption="Kubernetes 扩展点" >}}

@ -296,7 +296,8 @@ several types of extensions.
注意,某些方案可能需要同时采用几种类型的扩展。

<!-- image source for flowchart: https://docs.google.com/drawings/d/1sdviU6lDz4BpnzJNHfNpQrqI9F19QZ07KnhnxVrp2yg/edit -->
{{< figure src="/zh-cn/docs/concepts/extend-kubernetes/flowchart.png"

{{< figure src="/zh-cn/docs/concepts/extend-kubernetes/flowchart.svg"
alt="附带使用场景问题和实现指南的流程图。绿圈表示是;红圈表示否。"
class="diagram-large" caption="选择一个扩展方式的流程图指导" >}}

@ -0,0 +1,556 @@
（新增文件:zh-cn 版本的 flowchart.svg,由 draw.io 导出,共 556 行、约 19 KiB,画布 809×638 像素。
此处省略 SVG 源码,仅保留图中可恢复的文字信息:
流程图依次提问"你是否想要在 Kubernetes API 中添加全新的类别?"、
"你是否想要限制或自动编辑某些或所有 API 类别的字段?"、
"你是否想要更改内置 API 类别的底层实现?"、
"你是否想要更改 Volume、Service、Ingress、PersistentVolume?",
并按"是/否"分支分别指向"API Extensions"、"API Access Extensions"、"Infrastructure"、"Automation"等章节。）

@ -181,7 +181,7 @@ Eviction signal value is calculated periodically and does NOT enforce the limit.

PID limiting - per Pod and per Node sets the hard limit.
Once the limit is hit, workload will start experiencing failures when trying to get a new PID.
It may or may not lead to rescheduling of a Pod,
depending on how workload reacts on these failures and how liveleness and readiness
depending on how workload reacts on these failures and how liveness and readiness
probes are configured for the Pod. However, if limits were set correctly,
you can guarantee that other Pods workload and system processes will not run out of PIDs
when one Pod is misbehaving.
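
若想确认节点是否已经因 PID 耗尽而出现压力,可以查看节点的 PIDPressure 状况。下面是一个示意:

```bash
# 列出各节点的 PIDPressure 状况(True 表示节点可用 PID 不足)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="PIDPressure")].status}{"\n"}{end}'
```
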
@ -918,7 +918,7 @@ The two options are discussed in more detail in the following sections.
<!--
As previously mentioned, you should consider isolating each workload in its own namespace, even if
you are using dedicated clusters or virtualized control planes. This ensures that each workload
only has access to its own resources, such as Config Maps and Secrets, and allows you to tailor
only has access to its own resources, such as ConfigMaps and Secrets, and allows you to tailor
dedicated security policies for each workload. In addition, it is a best practice to give each
namespace names that are unique across your entire fleet (that is, even if they are in separate
clusters), as this gives you the flexibility to switch between dedicated and shared clusters in
@ -894,9 +894,9 @@ spec:
```

<!--
#### Reserve Nodeport Ranges to avoid collisions when port assigning
#### Reserve Nodeport ranges to avoid collisions {#avoid-nodeport-collisions}
-->
#### 预留 NodePort 端口范围以避免分配端口时发生冲突
#### 预留 NodePort 端口范围以避免发生冲突 {#avoid-nodeport-collisions}

{{< feature-state for_k8s_version="v1.28" state="beta" >}}
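
补充说明:该特性会把 NodePort 范围起始的一小段优先保留给显式指定的端口,动态分配则倾向于使用范围的其余部分,
从而降低冲突概率。下面是一个示意(假设集群使用默认范围 30000-32767,且所选端口落在起始段内):

```bash
# 创建 Service 时显式指定一个位于范围起始段的 nodePort
kubectl create service nodeport my-svc --tcp=80:80 --node-port=30007
```
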
@ -1613,7 +1613,7 @@ the Job status, allowing the Pod to be removed by other controllers or users.

{{< note >}}
<!--
See [My pod stays terminating](/docs/tasks/debug-application/debug-pods) if you
See [My pod stays terminating](/docs/tasks/debug/debug-application/debug-pods/) if you
observe that pods from a Job are stuck with the tracking finalizer.
-->
如果你发现来自 Job 的某些 Pod 因存在负责跟踪的 Finalizer 而无法正常终止,
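
要确认这些 Pod 是否仍带有 Job 跟踪 Finalizer,可以直接查看它们的 metadata.finalizers。
下面是一个示意(假设 Job 名为 myjob,Job 创建的 Pod 带有 job-name 标签):

```bash
# 列出该 Job 的 Pod 及其 finalizers,关注 batch.kubernetes.io/job-tracking
kubectl get pods -l job-name=myjob \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.finalizers}{"\n"}{end}'
```
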
@ -231,7 +231,7 @@ flowchart LR
subgraph changes[你的变更]
direction TB
S[ ] -.-
3[创建一个分支<br>例如: my_new_branch] --> 3a[使用文本编辑器<br>进行修改] --> 4["使用 Hugo 在本地<br>预览你的变更<br>(localhost:1313)<br>或构建容器镜像"]
3[创建一个分支<br>例如:my_new_branch] --> 3a[使用文本编辑器<br>进行修改] --> 4["使用 Hugo 在本地<br>预览你的变更<br>(localhost:1313)<br>或构建容器镜像"]
end
subgraph changes2[提交 / 推送]
direction TB
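
流程图中"使用 Hugo 在本地预览你的变更"这一步,在 kubernetes/website 仓库根目录通常可以这样执行(示意,二者取其一):

```bash
# 使用本地安装的 Hugo 预览,默认监听 localhost:1313
make serve

# 或者使用容器镜像构建并预览(无需在本地安装 Hugo)
make container-serve
```
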
@ -592,11 +592,15 @@ Alternately, install and use the `hugo` command on your computer:
### 从你的克隆副本向 kubernetes/website 发起拉取请求(PR) {#open-a-pr}

<!--
Figure 3 shows the steps to open a PR from your fork to the kubernetes/website. The details follow.
Figure 3 shows the steps to open a PR from your fork to the [kubernetes/website](https://github.com/kubernetes/website). The details follow.

Please, note that contributors can mention `kubernetes/website` as `k/website`.
-->
图 3 显示了从你的克隆副本向 kubernetes/website 发起 PR 的步骤。
详细信息如下。

请注意,贡献者可以将 `kubernetes/website` 称为 `k/website`。

<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->

@ -1055,4 +1059,3 @@ possible when you file issues or PRs.

- Read [Reviewing](/docs/contribute/review/reviewing-prs) to learn more about the review process.
-->
- 阅读[评阅](/zh-cn/docs/contribute/review/reviewing-prs)节,学习评阅过程。

@ -12,7 +12,10 @@ weight: 20
|
|||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers),formal approvers, and reviewers, members of SIG Docs take week long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository.
|
||||
Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers),formal approvers,
|
||||
and reviewers, members of SIG Docs take week long shifts
|
||||
[triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues)
|
||||
for the repository.
|
||||
-->
|
||||
除了承担 [PR 管理者](/zh-cn/docs/contribute/participate/pr-wranglers)的职责外,
|
||||
SIG Docs 正式的批准人(Approver)、评审人(Reviewer)和成员(Member)
|
||||
|
@ -25,7 +28,9 @@ SIG Docs 正式的批准人(Approver)、评审人(Reviewer)和成员(M
|
|||
|
||||
Each day in a week-long shift the Issue Wrangler will be responsible for:
|
||||
|
||||
- Triaging and tagging incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/content/en/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata.
|
||||
- Triaging and tagging incoming issues daily. See
|
||||
[Triage and categorize issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues)
|
||||
for guidelines on how SIG Docs uses metadata.
|
||||
- Keeping an eye on stale & rotten issues within the kubernetes/website repository.
|
||||
- Maintenance of the [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1).
|
||||
-->
|
||||
|
@ -34,18 +39,19 @@ Each day in a week-long shift the Issue Wrangler will be responsible for:
|
|||
在为期一周的轮值期内,Issue 管理者每天负责:
|
||||
|
||||
- 对收到的 Issue 进行日常分类和标记。有关 SIG Docs 如何使用元数据的指导说明,
|
||||
参阅[归类 Issue](https://github.com/kubernetes/website/blob/main/content/en/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues)。
|
||||
参阅[归类 Issue](/zh-cn/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues)。
|
||||
- 密切关注 kubernetes/website 代码仓库中陈旧和过期的 Issue。
|
||||
- 维护 [Issues 看板](https://github.com/orgs/kubernetes/projects/72/views/1)。
|
||||
|
||||
<!--
|
||||
### Requirements
|
||||
## Requirements
|
||||
|
||||
- Must be an active member of the Kubernetes organization.
|
||||
- A minimum of 15 [non-trivial](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits) contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website).
|
||||
- Performing the role in an informal capacity already
|
||||
- A minimum of 15 [non-trivial](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits)
|
||||
contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website).
|
||||
- Performing the role in an informal capacity already.
|
||||
-->
|
||||
### 要求 {#requirements}
|
||||
## 要求 {#requirements}
|
||||
|
||||
- 必须是 Kubernetes 组织的活跃成员。
|
||||
- 至少为 Kubernetes 做了 15
|
||||
|
@ -54,12 +60,16 @@ Each day in a week-long shift the Issue Wrangler will be responsible for:
|
|||
- 已经以非正式身份履行该职责。
|
||||
|
||||
<!--
|
||||
### Helpful [Prow commands](https://prow.k8s.io/command-help) for wranglers
|
||||
## Helpful Prow commands for wranglers
|
||||
|
||||
Below are some commonly used commands for Issue Wranglers:
|
||||
-->
|
||||
### 对管理者有帮助的 [Prow 命令](https://prow.k8s.io/command-help)
|
||||
## 对管理者有帮助的 Prow 命令
|
||||
|
||||
以下是 Issue 管理者的一些常用命令:
|
||||
|
||||
<!--
|
||||
```
|
||||
```bash
|
||||
# reopen an issue
|
||||
/reopen
|
||||
|
||||
|
@ -94,7 +104,7 @@ Each day in a week-long shift the Issue Wrangler will be responsible for:
|
|||
/close not-planned
|
||||
```
|
||||
-->
|
||||
```
|
||||
```bash
|
||||
# 重新打开 Issue
|
||||
/reopen
|
||||
|
||||
|
@ -130,11 +140,18 @@ Each day in a week-long shift the Issue Wrangler will be responsible for:
|
|||
```
|
||||
|
||||
<!--
|
||||
### When to close Issues
|
||||
|
||||
For an open source project to succeed, good issue management is crucial. But it is also critical to resolve issues in order to maintain the repository and communicate clearly with contributors and users.
|
||||
To find more Prow commands, refer to the [Command Help](https://prow.k8s.io/command-help) documentation.
|
||||
-->
|
||||
### 何时关闭 Issue {#when-to-close-issues}
|
||||
要查找更多 Prow 命令,请参阅[命令帮助](https://prow.k8s.io/command-help)文档。
|
||||
|
||||
<!--
|
||||
## When to close Issues
|
||||
|
||||
For an open source project to succeed, good issue management is crucial.
|
||||
But it is also critical to resolve issues in order to maintain the repository
|
||||
and communicate clearly with contributors and users.
|
||||
-->
|
||||
## 何时关闭 Issue {#when-to-close-issues}
|
||||
|
||||
一个开源项目想要成功,良好的 Issue 管理非常关键。
|
||||
但解决 Issue 也很重要,这样才能维护代码仓库,并与贡献者和用户进行清晰明确的交流。
|
||||
|
@ -142,7 +159,8 @@ For an open source project to succeed, good issue management is crucial. But it
|
|||
<!--
|
||||
Close issues when:
|
||||
|
||||
- A similar issue is reported more than once.You will first need to tag it as /triage duplicate; link it to the main issue & then close it. It is also advisable to direct the users to the original issue.
|
||||
- A similar issue is reported more than once.You will first need to tag it as `/triage duplicate`;
|
||||
link it to the main issue & then close it. It is also advisable to direct the users to the original issue.
|
||||
- It is very difficult to understand and address the issue presented by the author with the information provided.
|
||||
However, encourage the user to provide more details or reopen the issue if they can reproduce it later.
|
||||
- The same functionality is implemented elsewhere. One can close this issue and direct user to the appropriate place.
|
||||
|
@ -152,7 +170,7 @@ Close issues when:
|
|||
-->
|
||||
关闭 Issue 的时机包括:
|
||||
|
||||
- 类似的 Issue 被多次报告。你首先需要将其标记为 /triage duplicate;
|
||||
- 类似的 Issue 被多次报告。你首先需要将其标记为 `/triage duplicate`;
|
||||
将其链接到主要 Issue 然后关闭它。还建议将用户引导至最初的 Issue。
|
||||
- 通过所提供的信息很难理解和解决作者提出的 Issue。
|
||||
但要鼓励用户提供更多细节,或者在以后可以重现 Issue 时重新打开此 Issue 。
|
||||
|
|
|
@ -420,11 +420,15 @@ and Netlify previews to verify the diagram is properly rendered.
|
|||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
The Mermaid live editor feature set may not support the kubernetes/website Mermaid feature set.
|
||||
The Mermaid live editor feature set may not support the [kubernetes/website](https://github.com/kubernetes/website) Mermaid feature set.
|
||||
And please, note that contributors can mention `kubernetes/website` as `k/website`.
|
||||
You might see a syntax error or a blank screen after the Hugo build.
|
||||
If that is the case, consider using the Mermaid+SVG method.
|
||||
-->
|
||||
Mermaid 在线编辑器的功能特性可能不支持 K8s 网站的 Mermaid 特性。
|
||||
Mermaid 在线编辑器的功能特性可能不支持
|
||||
[kubernetes/website](https://github.com/kubernetes/website)
|
||||
的 Mermaid 特性。
|
||||
请注意,贡献者可以将 `kubernetes/website` 称为 `k/website`。
|
||||
你可能在 Hugo 构建过程中看到语法错误提示或者空白屏幕。
|
||||
如果发生这类情况,可以考虑使用 Mermaid+SVG 方法。
|
||||
{{< /caution >}}
|
||||
|
@ -548,14 +552,15 @@ The following lists advantages of the Mermaid+SVG method:
|
|||
|
||||
* Live editor tool.
|
||||
* Live editor tool supports the most current Mermaid feature set.
|
||||
* Employ existing kubernetes/website methods for handling `.svg` image files.
|
||||
* Employ existing [kubernetes/website](https://github.com/kubernetes/website) methods for handling `.svg` image files.
|
||||
* Environment doesn't require Mermaid support.
|
||||
-->
|
||||
使用 Mermaid+SVG 方法的好处有:
|
||||
|
||||
* 可以直接使用在线编辑器工具
|
||||
* 在线编辑器支持的 Mermaid 特性集合最新
|
||||
* 可以利用 K8s 网站用来处理 `.svg` 图片文件的现有方法
|
||||
* 可以利用 [kubernetes/website](https://github.com/kubernetes/website)
|
||||
用来处理 `.svg` 图片文件的现有方法
|
||||
* 工作环境不需要 Mermaid 支持
|
||||
|
||||
<!--
|
||||
|
|
|
@ -16,7 +16,7 @@ weight: 10
|
|||
|
||||
<!-- overview -->
|
||||
<!--
|
||||
This page provides an overview of authenticating.
|
||||
This page provides an overview of authentication.
|
||||
-->
|
||||
本页提供身份认证有关的概述。
|
||||
|
||||
|
@ -106,7 +106,8 @@ Kubernetes 通过身份认证插件利用客户端证书、持有者令牌(Bea
|
|||
<!--
|
||||
* Username: a string which identifies the end user. Common values might be `kube-admin` or `jane@example.com`.
|
||||
* UID: a string which identifies the end user and attempts to be more consistent and unique than username.
|
||||
* Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users. Common values might be `system:masters` or `devops-team`.
|
||||
* Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users.
|
||||
Common values might be `system:masters` or `devops-team`.
|
||||
* Extra fields: a map of strings to list of strings which holds additional information authorizers may find useful.
|
||||
-->
|
||||
* 用户名:用来辩识最终用户的字符串。常见的值可以是 `kube-admin` 或 `jane@example.com`。
|
||||
|
@ -154,7 +155,7 @@ API 服务器并不保证身份认证模块的运行顺序。
来实现。

<!--
### X509 Client Certs
### X509 Client certificates

Client certificate authentication is enabled by passing the `--client-ca-file=SOMEFILE`
option to API server. The referenced file must contain one or more certificate authorities

@ -192,9 +193,10 @@ See [Managing Certificates](/docs/tasks/administer-cluster/certificates/) for ho
参阅[管理证书](/zh-cn/docs/tasks/administer-cluster/certificates/)了解如何生成客户端证书。
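
客户端证书中的主体字段会被映射为用户信息:Common Name(CN)作为用户名,Organization(O)作为组。
下面是生成私钥和证书签名请求(CSR)的一个示意(假设用户名为 jane、组为 devops-team,之后仍需由集群信任的 CA 签发证书):

```bash
# 生成私钥与 CSR:CN 为用户名,O 为用户组
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=devops-team"
```
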
<!--
|
||||
### Static Token File
|
||||
### Static token file
|
||||
|
||||
The API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option on the command line. Currently, tokens last indefinitely, and the token list cannot be
|
||||
The API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option
|
||||
on the command line. Currently, tokens last indefinitely, and the token list cannot be
|
||||
changed without restarting the API server.
|
||||
|
||||
The token file is a csv file with a minimum of 3 columns: token, user name, user uid,
|
||||
|
@ -220,7 +222,7 @@ token,user,uid,"group1,group2,group3"
|
|||
{{< /note >}}
|
||||
|
||||
<!--
#### Putting a Bearer Token in a Request
#### Putting a bearer token in a request

When using bearer token authentication from an http client, the API
server expects an `Authorization` header with a value of `Bearer

@ -242,7 +244,7 @@ Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
```
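
下面是用 curl 直接携带持有者令牌访问 API 服务器的一个示意(令牌取自上面的示例值,`<apiserver>` 为占位符,
`--cacert` 指向集群的 CA 证书):

```bash
curl --cacert /path/to/ca.crt \
  --header "Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269" \
  https://<apiserver>:6443/api
```
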
<!--
### Bootstrap Tokens
### Bootstrap tokens
-->
### 启动引导令牌 {#bootstrap-tokens}

@ -308,15 +310,15 @@ how to manage these tokens with `kubeadm`.
`kubeadm` 来管理这些令牌。
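
如果集群是用 kubeadm 搭建的,可以这样查看和创建启动引导令牌(示意):

```bash
# 列出现有的启动引导令牌
kubeadm token list

# 创建一个新的令牌,并打印可直接使用的 kubeadm join 命令
kubeadm token create --print-join-command
```
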
<!--
### Service Account Tokens
### Service account tokens

A service account is an automatically enabled authenticator that uses signed
bearer tokens to verify requests. The plugin takes two optional flags:

* `--service-account-key-file` File containing PEM-encoded x509 RSA or ECDSA
private or public keys, used to verify ServiceAccount tokens. The specified file
can contain multiple keys, and the flag can be specified multiple times with
different files. If unspecified, --tls-private-key-file is used.
* `--service-account-lookup` If enabled, tokens which are deleted from the API will be revoked.
-->
### 服务账号令牌 {#service-account-tokens}
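
要为某个服务账号获取一个由 TokenRequest API 签发的短期令牌,可以使用 kubectl(示意,假设使用当前名字空间中名为 default 的服务账号):

```bash
# 为 default 服务账号签发一个有时限的令牌
kubectl create token default --duration=1h
```
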
@ -457,7 +459,7 @@ ID 令牌是一种由服务器签名的 JWT 令牌,其中包含一些可预知
|
|||
<!--
|
||||
To identify the user, the authenticator uses the `id_token` (not the `access_token`)
|
||||
from the OAuth2 [token response](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse)
|
||||
as a bearer token. See [above](#putting-a-bearer-token-in-a-request) for how the token
|
||||
as a bearer token. See [above](#putting-a-bearer-token-in-a-request) for how the token
|
||||
is included in a request.
|
||||
-->
|
||||
要识别用户,身份认证组件使用 OAuth2
|
||||
|
@ -495,14 +497,14 @@ sequenceDiagram
|
|||
|
||||
<!--
|
||||
1. Login to your identity provider
|
||||
2. Your identity provider will provide you with an `access_token`, `id_token` and a `refresh_token`
|
||||
3. When using `kubectl`, use your `id_token` with the `--token` flag or add it directly to your `kubeconfig`
|
||||
4. `kubectl` sends your `id_token` in a header called Authorization to the API server
|
||||
5. The API server will make sure the JWT signature is valid by checking against the certificate named in the configuration
|
||||
6. Check to make sure the `id_token` hasn't expired
|
||||
7. Make sure the user is authorized
|
||||
8. Once authorized the API server returns a response to `kubectl`
|
||||
9. `kubectl` provides feedback to the user
|
||||
1. Your identity provider will provide you with an `access_token`, `id_token` and a `refresh_token`
|
||||
1. When using `kubectl`, use your `id_token` with the `--token` flag or add it directly to your `kubeconfig`
|
||||
1. `kubectl` sends your `id_token` in a header called Authorization to the API server
|
||||
1. The API server will make sure the JWT signature is valid by checking against the certificate named in the configuration
|
||||
1. Check to make sure the `id_token` hasn't expired
|
||||
1. Make sure the user is authorized
|
||||
1. Once authorized the API server returns a response to `kubectl`
|
||||
1. `kubectl` provides feedback to the user
|
||||
-->
|
||||
1. 登录到你的身份服务(Identity Provider)
|
||||
2. 你的身份服务将为你提供 `access_token`、`id_token` 和 `refresh_token`
|
||||
|
@ -517,16 +519,20 @@ sequenceDiagram
|
|||
|
||||
<!--
|
||||
Since all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to
|
||||
"phone home" to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication. It does offer a few challenges:
|
||||
"phone home" to the identity provider. In a model where every request is stateless this provides a
|
||||
very scalable solution for authentication. It does offer a few challenges:
|
||||
-->
|
||||
由于用来验证你是谁的所有数据都在 `id_token` 中,Kubernetes 不需要再去联系身份服务。
|
||||
在一个所有请求都是无状态请求的模型中,这一工作方式可以使得身份认证的解决方案更容易处理大规模请求。
|
||||
不过,此访问也有一些挑战:
|
||||
|
||||
<!--
|
||||
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
|
||||
2. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes) so it can be very annoying to have to get a new token every few minutes.
|
||||
3. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy that injects the `id_token`.
|
||||
1. Kubernetes has no "web interface" to trigger the authentication process. There is no browser or
|
||||
interface to collect credentials which is why you need to authenticate to your identity provider first.
|
||||
1. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes)
|
||||
so it can be very annoying to have to get a new token every few minutes.
|
||||
1. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy
|
||||
that injects the `id_token`.
|
||||
-->
|
||||
1. Kubernetes 没有提供用来触发身份认证过程的 "Web 界面"。
|
||||
因为不存在用来收集用户凭据的浏览器或用户接口,你必须自己先行完成对身份服务的认证过程。
|
||||
|
@ -604,20 +610,27 @@ Tremolo Security 的 [OpenUnison](https://openunison.github.io/)。
|
|||
<!--
|
||||
For an identity provider to work with Kubernetes it must:
|
||||
|
||||
1. Support [OpenID connect discovery](https://openid.net/specs/openid-connect-discovery-1_0.html); not all do.
|
||||
2. Run in TLS with non-obsolete ciphers
|
||||
3. Have a CA signed certificate (even if the CA is not a commercial CA or is self signed)
|
||||
1. Support [OpenID connect discovery](https://openid.net/specs/openid-connect-discovery-1_0.html); not all do.
|
||||
1. Run in TLS with non-obsolete ciphers
|
||||
1. Have a CA signed certificate (even if the CA is not a commercial CA or is self signed)
|
||||
-->
|
||||
要在 Kubernetes 环境中使用某身份服务,该服务必须:
|
||||
|
||||
1. 支持 [OpenID connect 发现](https://openid.net/specs/openid-connect-discovery-1_0.html);
|
||||
但事实上并非所有服务都具备此能力
|
||||
2. 运行 TLS 协议且所使用的加密组件都未过时
|
||||
3. 拥有由 CA 签名的证书(即使 CA 不是商业 CA 或者是自签名的 CA 也可以)
|
||||
1. 支持 [OpenID connect 发现](https://openid.net/specs/openid-connect-discovery-1_0.html);
|
||||
但事实上并非所有服务都具备此能力
|
||||
2. 运行 TLS 协议且所使用的加密组件都未过时
|
||||
3. 拥有由 CA 签名的证书(即使 CA 不是商业 CA 或者是自签名的 CA 也可以)
|
||||
|
||||
<!--
|
||||
A note about requirement #3 above, requiring a CA signed certificate. If you deploy your own identity provider (as opposed to one of the cloud providers like Google or Microsoft) you MUST have your identity provider's web server certificate signed by a certificate with the `CA` flag set to `TRUE`, even if it is self signed. This is due to GoLang's TLS client implementation being very strict to the standards around certificate validation. If you don't have a CA handy, you can use [this script](https://github.com/dexidp/dex/blob/master/examples/k8s/gencert.sh) from the Dex team to create a simple CA and a signed certificate and key pair.
|
||||
Or you can use [this similar script](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh) that generates SHA256 certs with a longer life and larger key size.
|
||||
A note about requirement #3 above, requiring a CA signed certificate. If you deploy your own
|
||||
identity provider (as opposed to one of the cloud providers like Google or Microsoft) you MUST
|
||||
have your identity provider's web server certificate signed by a certificate with the `CA` flag
|
||||
set to `TRUE`, even if it is self signed. This is due to GoLang's TLS client implementation
|
||||
being very strict to the standards around certificate validation. If you don't have a CA handy,
|
||||
you can use [this script](https://github.com/dexidp/dex/blob/master/examples/k8s/gencert.sh)
|
||||
from the Dex team to create a simple CA and a signed certificate and key pair. Or you can use
|
||||
[this similar script](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh)
|
||||
that generates SHA256 certs with a longer life and larger key size.
|
||||
-->
|
||||
关于上述第三条需求,即要求具备 CA 签名的证书,有一些额外的注意事项。
|
||||
如果你部署了自己的身份服务,而不是使用云厂商(如 Google 或 Microsoft)所提供的服务,
|
||||
|
@ -642,9 +655,12 @@ Setup instructions for specific systems:
|
|||
|
||||
##### Option 1 - OIDC Authenticator
|
||||
|
||||
The first option is to use the kubectl `oidc` authenticator, which sets the `id_token` as a bearer token for all requests and refreshes the token once it expires. After you've logged into your provider, use kubectl to add your `id_token`, `refresh_token`, `client_id`, and `client_secret` to configure the plugin.
|
||||
The first option is to use the kubectl `oidc` authenticator, which sets the `id_token` as a bearer token
|
||||
for all requests and refreshes the token once it expires. After you've logged into your provider, use
|
||||
kubectl to add your `id_token`, `refresh_token`, `client_id`, and `client_secret` to configure the plugin.
|
||||
|
||||
Providers that don't return an `id_token` as part of their refresh token response aren't supported by this plugin and should use "Option 2" below.
|
||||
Providers that don't return an `id_token` as part of their refresh token response aren't supported
|
||||
by this plugin and should use "Option 2" below.
|
||||
-->
|
||||
#### 使用 kubectl {#using-kubectl}
|
||||
|
||||
|
@ -706,7 +722,8 @@ users:
|
|||
```
|
||||
|
||||
<!--
|
||||
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
|
||||
Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token`
|
||||
and `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.
|
||||
-->
|
||||
当你的 `id_token` 过期时,`kubectl` 会尝试使用你的 `refresh_token` 来刷新你的
|
||||
`id_token`,并且在 `.kube/config` 文件的 `client_secret` 中存放 `refresh_token`
|
||||
|
@ -816,8 +833,9 @@ contexts:
|
|||
```
|
||||
|
||||
<!--
|
||||
When a client attempts to authenticate with the API server using a bearer token as discussed [above](#putting-a-bearer-token-in-a-request),
|
||||
the authentication webhook POSTs a JSON-serialized `TokenReview` object containing the token to the remote service.
|
||||
When a client attempts to authenticate with the API server using a bearer token as discussed
|
||||
[above](#putting-a-bearer-token-in-a-request), the authentication webhook POSTs a JSON-serialized
|
||||
`TokenReview` object containing the token to the remote service.
|
||||
-->
|
||||
当客户端尝试在 API 服务器上使用持有者令牌完成身份认证
|
||||
(如[前](#putting-a-bearer-token-in-a-request)所述)时,
|
||||
|
@ -826,8 +844,8 @@ the authentication webhook POSTs a JSON-serialized `TokenReview` object containi
|
|||
Kubernetes 不会强制请求提供此 HTTP 头部。
|
||||
|
||||
<!--
|
||||
Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/) as other Kubernetes API objects.
|
||||
Implementers should check the `apiVersion` field of the request to ensure correct deserialization,
|
||||
Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/)
|
||||
as other Kubernetes API objects. Implementers should check the `apiVersion` field of the request to ensure correct deserialization,
|
||||
and **must** respond with a `TokenReview` object of the same version as the request.
|
||||
-->
|
||||
要注意的是,Webhook API 对象和其他 Kubernetes API 对象一样,
|
||||
|
@ -1015,9 +1033,15 @@ API 服务器可以配置成从请求的头部字段值(如 `X-Remote-User`)
|
|||
这一设计是用来与某身份认证代理一起使用 API 服务器,代理负责设置请求的头部字段值。
|
||||
|
||||
<!--
|
||||
* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order, for the user identity. The first header containing a value is used as the username.
|
||||
* `--requestheader-group-headers` 1.6+. Optional, case-insensitive. "X-Remote-Group" is suggested. Header names to check, in order, for the user's groups. All values in all specified headers are used as group names.
|
||||
* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested. Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin). Any headers beginning with any of the specified prefixes have the prefix removed. The remainder of the header name is lowercased and [percent-decoded](https://tools.ietf.org/html/rfc3986#section-2.1) and becomes the extra key, and the header value is the extra value.
|
||||
* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order,
|
||||
for the user identity. The first header containing a value is used as the username.
|
||||
* `--requestheader-group-headers` 1.6+. Optional, case-insensitive. "X-Remote-Group" is suggested.
|
||||
Header names to check, in order, for the user's groups. All values in all specified headers are used as group names.
|
||||
* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested.
|
||||
Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin).
|
||||
Any headers beginning with any of the specified prefixes have the prefix removed.
|
||||
The remainder of the header name is lowercased and [percent-decoded](https://tools.ietf.org/html/rfc3986#section-2.1)
|
||||
and becomes the extra key, and the header value is the extra value.
|
||||
-->
|
||||
* `--requestheader-username-headers` 必需字段,大小写不敏感。
|
||||
用来设置要获得用户身份所要检查的头部字段名称列表(有序)。
|
||||
|
@ -1034,7 +1058,8 @@ API 服务器可以配置成从请求的头部字段值(如 `X-Remote-User`)
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Prior to 1.11.3 (and 1.10.7, 1.9.11), the extra key could only contain characters which were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
|
||||
Prior to 1.11.3 (and 1.10.7, 1.9.11), the extra key could only contain characters which
|
||||
were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
|
||||
-->
|
||||
在 1.11.3 版本之前(以及 1.10.7、1.9.11),附加字段的键名只能包含
|
||||
[HTTP 头部标签的合法字符](https://tools.ietf.org/html/rfc7230#section-3.2.6)。
|
||||
|
@ -1090,8 +1115,12 @@ certificate to the API server for validation against the specified CA before the
|
|||
checked. WARNING: do **not** reuse a CA that is used in a different context unless you understand
|
||||
the risks and the mechanisms to protect the CA's usage.
|
||||
|
||||
* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names.
|
||||
* `--requestheader-allowed-names` Optional. List of Common Name values (CNs). If set, a valid client certificate with a CN in the specified list must be presented before the request headers are checked for user names. If empty, any CN is allowed.
|
||||
* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate
|
||||
must be presented and validated against the certificate authorities in the specified file before the
|
||||
request headers are checked for user names.
|
||||
* `--requestheader-allowed-names` Optional. List of Common Name values (CNs). If set, a valid client
|
||||
certificate with a CN in the specified list must be presented before the request headers are checked
|
||||
for user names. If empty, any CN is allowed.
|
||||
-->
|
||||
为了防范头部信息侦听,在请求中的头部字段被检视之前,
|
||||
身份认证代理需要向 API 服务器提供一份合法的客户端证书,供后者使用所给的 CA 来执行验证。
|
||||
|
@ -1182,9 +1211,14 @@ to the impersonated user info.
|
|||
The following HTTP headers can be used to performing an impersonation request:
|
||||
|
||||
* `Impersonate-User`: The username to act as.
|
||||
* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires "Impersonate-User".
|
||||
* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )` must be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
|
||||
* `Impersonate-Uid`: A unique identifier that represents the user being impersonated. Optional. Requires "Impersonate-User". Kubernetes does not impose any format requirements on this string.
|
||||
* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups.
|
||||
Optional. Requires "Impersonate-User".
|
||||
* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user.
|
||||
Optional. Requires "Impersonate-User". In order to be preserved consistently, `( extra name )`
|
||||
must be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6)
|
||||
MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
|
||||
* `Impersonate-Uid`: A unique identifier that represents the user being impersonated. Optional.
|
||||
Requires "Impersonate-User". Kubernetes does not impose any format requirements on this string.
|
||||
-->
|
||||
以下 HTTP 头部字段可用来执行伪装请求:
|
||||
|
||||
|
@ -1201,7 +1235,8 @@ The following HTTP headers can be used to performing an impersonation request:
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Prior to 1.11.3 (and 1.10.7, 1.9.11), `( extra name )` could only contain characters which were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
|
||||
Prior to 1.11.3 (and 1.10.7, 1.9.11), `( extra name )` could only contain characters which
|
||||
were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).
|
||||
-->
|
||||
在 1.11.3 版本之前(以及 1.10.7、1.9.11),`<附加名称>` 只能包含合法的 HTTP 标签字符。
|
||||
{{< /note >}}
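As an illustrative sketch only (the user, group, scope and UID values below are invented placeholders), a request that uses the impersonation headers described above could look like:

```
GET /api/v1/namespaces/default/pods HTTP/1.1
Authorization: Bearer <token of the impersonating user>
Impersonate-User: jane.doe@example.com
Impersonate-Group: developers
Impersonate-Group: admins
Impersonate-Extra-scopes: view
Impersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b
```

The authenticated (impersonating) user must additionally be authorized, via the `impersonate` verb, for every user, group, UID and extra field that it sets in this way.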
|
||||
|
@ -1999,9 +2034,13 @@ The following `ExecCredential` manifest describes a cluster information sample.
|
|||
{{< feature-state for_k8s_version="v1.28" state="stable" >}}
|
||||
|
||||
<!--
|
||||
If your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether you are authenticating as a user (typically representing a real person) or as a ServiceAccount.
|
||||
If your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out
|
||||
how your Kubernetes cluster maps your authentication information to identify you as a client.
|
||||
This works whether you are authenticating as a user (typically representing
|
||||
a real person) or as a ServiceAccount.
|
||||
|
||||
`SelfSubjectReview` objects do not have any configurable fields. On receiving a request, the Kubernetes API server fills the status with the user attributes and returns it to the user.
|
||||
`SelfSubjectReview` objects do not have any configurable fields. On receiving a request,
|
||||
the Kubernetes API server fills the status with the user attributes and returns it to the user.
|
||||
|
||||
Request example (the body would be a `SelfSubjectReview`):
|
||||
-->
|
||||
|
@ -2052,7 +2091,8 @@ Response example:
|
|||
```
|
||||
|
||||
<!--
|
||||
For convenience, the `kubectl auth whoami` command is present. Executing this command will produce the following output (yet different user attributes will be shown):
|
||||
For convenience, the `kubectl auth whoami` command is present. Executing this command will
|
||||
produce the following output (yet different user attributes will be shown):
|
||||
|
||||
* Simple output example
|
||||
-->
|
||||
|
@ -2164,8 +2204,8 @@ Kubernetes API 服务器在所有身份验证机制
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview` feature is enabled.
|
||||
It is allowed by the `system:basic-user` cluster role.
|
||||
By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview`
|
||||
feature is enabled. It is allowed by the `system:basic-user` cluster role.
|
||||
-->
|
||||
默认情况下,所有经过身份验证的用户都可以在 `APISelfSubjectReview` 特性被启用时创建 `SelfSubjectReview` 对象。
|
||||
这是 `system:basic-user` 集群角色允许的操作。
|
||||
|
@ -2173,6 +2213,7 @@ It is allowed by the `system:basic-user` cluster role.
|
|||
{{< note >}}
|
||||
<!--
|
||||
You can only make `SelfSubjectReview` requests if:
|
||||
|
||||
* the `APISelfSubjectReview`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
is enabled for your cluster (not needed for Kubernetes {{< skew currentVersion >}}, but older
|
||||
|
|
|
@ -397,7 +397,7 @@ For a reference to old feature gates that are removed, please refer to
|
|||
A feature can be in *Alpha*, *Beta* or *GA* stage.
|
||||
An *Alpha* feature means:
|
||||
-->
|
||||
处于 **Alpha** 、**Beta** 、 **GA** 阶段的特性。
|
||||
处于 **Alpha**、**Beta**、**GA** 阶段的特性。
|
||||
|
||||
**Alpha** 特性代表:
|
||||
|
||||
|
@ -454,7 +454,7 @@ After they exit beta, it may not be practical for us to make more changes.
|
|||
<!--
|
||||
A *General Availability* (GA) feature is also referred to as a *stable* feature. It means:
|
||||
-->
|
||||
**General Availability** (GA) 特性也称为 **稳定** 特性,**GA** 特性代表着:
|
||||
**General Availability**(GA)特性也称为**稳定**特性,**GA** 特性代表着:
|
||||
|
||||
<!--
|
||||
* The feature is always enabled; you cannot disable it.
|
||||
|
@ -479,7 +479,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
- `AdmissionWebhookMatchConditions`: Enable [match conditions](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchconditions)
|
||||
on mutating & validating admission webhooks.
|
||||
-->
|
||||
- `AdmissionWebhookMatchConditions`: 在转换和验证准入 Webhook
|
||||
- `AdmissionWebhookMatchConditions`:在转换和验证准入 Webhook
|
||||
上启用[匹配条件](/zh-cn/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchconditions)
|
||||
<!--
|
||||
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`)
|
||||
|
@ -623,7 +623,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
on behalf of a CronJob.
|
||||
- `CronJobTimeZone`: Allow the use of the `timeZone` optional field in [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/).
|
||||
-->
|
||||
- `ComponentSLIs`: 在 kubelet、kube-scheduler、kube-proxy、kube-controller-manager、cloud-controller-manager
|
||||
- `ComponentSLIs`:在 kubelet、kube-scheduler、kube-proxy、kube-controller-manager、cloud-controller-manager
|
||||
等 Kubernetes 组件上启用 `/metrics/slis` 端点,从而允许你抓取健康检查指标。
|
||||
- `ConsistentHTTPGetHandlers`:使用探测器为生命周期处理程序规范化 HTTP get URL 和标头传递。
|
||||
- `ConsistentListFromCache`:允许 API 服务器从缓存中提供一致的列表。
|
||||
|
@ -1132,7 +1132,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
允许 kube-proxy 来处理终止过程中的端点。
|
||||
- `QOSReserved`:允许在 QoS 级别进行资源预留,以防止处于较低 QoS 级别的 Pod
|
||||
突发进入处于较高 QoS 级别的请求资源(目前仅适用于内存)。
|
||||
- `ReadWriteOncePod`: 允许使用 `ReadWriteOncePod` 访问模式的 PersistentVolume。
|
||||
- `ReadWriteOncePod`:允许使用 `ReadWriteOncePod` 访问模式的 PersistentVolume。
|
||||
<!--
|
||||
- `RecoverVolumeExpansionFailure`: Enables users to edit their PVCs to smaller
|
||||
sizes so as they can recover from previously issued volume expansion failures.
|
||||
|
@ -1196,16 +1196,23 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
-->
|
||||
- `SeccompDefault`:启用 `RuntimeDefault` 作为所有工作负载的默认 seccomp 配置文件。
|
||||
此 seccomp 配置文件在 Pod 和/或 Container 的 `securityContext` 中被指定。
|
||||
- `SecurityContextDeny`: 此门控表示 `SecurityContextDeny` 准入控制器已弃用。
|
||||
- `SecurityContextDeny`:此门控表示 `SecurityContextDeny` 准入控制器已弃用。
|
||||
- `ServerSideApply`:在 API 服务器上启用[服务器端应用(SSA)](/zh-cn/docs/reference/using-api/server-side-apply/)。
|
||||
- `ServerSideFieldValidation`:启用服务器端字段验证。
|
||||
这意味着验证资源模式在 API 服务器端而不是客户端执行
|
||||
(例如,`kubectl create` 或 `kubectl apply` 命令行)。
|
||||
<!--
|
||||
- `ServiceNodePortStaticSubrange`: Enables the use of different port allocation
|
||||
strategies for NodePort Services. For more details, see
|
||||
[reserve NodePort ranges to avoid collisions](/docs/concepts/services-networking/service/#avoid-nodeport-collisions).
|
||||
-->
|
||||
- `ServiceNodePortStaticSubrange`:允许对 NodePort Service
|
||||
使用不同的端口分配策略。有关更多详细信息,
|
||||
请参阅[保留 NodePort 范围以避免冲突](/zh-cn/docs/concepts/services-networking/service/#avoid-nodeport-collisions)。
|
||||
<!--
|
||||
- `SidecarContainers`: Allow setting the `restartPolicy` of an init container to
|
||||
`Always` so that the container becomes a sidecar container (restartable init containers).
|
||||
See
|
||||
[Sidecar containers and restartPolicy](/docs/concepts/workloads/pods/init-containers/#sidecar-containers-and-restartpolicy)
|
||||
See [Sidecar containers and restartPolicy](/docs/concepts/workloads/pods/init-containers/#sidecar-containers-and-restartpolicy)
|
||||
for more details.
|
||||
-->
|
||||
- `SidecarContainers`:允许将 Init 容器的 `restartPolicy` 设置为 `Always`,
|
||||
|
@ -1225,7 +1232,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
-->
|
||||
- `SizeMemoryBackedVolumes`:允许 kubelet 检查基于内存制备的卷的尺寸约束(目前主要针对 `emptyDir` 卷)。
|
||||
- `SkipReadOnlyValidationGCE`:跳过对 GCE 的验证,将在下个版本中启用。
|
||||
- `StableLoadBalancerNodeSet`: 允许服务控制器(KCCM)根据节点状态变化来减少负载均衡器的重新配置。
|
||||
- `StableLoadBalancerNodeSet`:允许服务控制器(KCCM)根据节点状态变化来减少负载均衡器的重新配置。
|
||||
- `StatefulSetStartOrdinal`:允许在 StatefulSet 中配置起始序号。
|
||||
更多细节请参阅[起始序号](/zh-cn/docs/concepts/workloads/controllers/statefulset/#start-ordinal)。
|
||||
<!--
|
||||
|
@ -1258,10 +1265,10 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
- `TopologyManager`:启用一种机制来协调 Kubernetes 不同组件的细粒度硬件资源分配。
|
||||
详见[控制节点上的拓扑管理策略](/zh-cn/docs/tasks/administer-cluster/topology-manager/)。
|
||||
- `TopologyManagerPolicyAlphaOptions`:允许微调拓扑管理器策略的实验性的、Alpha 质量的选项。
|
||||
此特性门控守护 **一组** 质量级别为 Alpha 的拓扑管理器选项。
|
||||
此特性门控守护**一组**质量级别为 Alpha 的拓扑管理器选项。
|
||||
此特性门控绝对不会进阶至 Beta 或稳定版。
|
||||
- `TopologyManagerPolicyBetaOptions`:允许微调拓扑管理器策略的实验性的、Beta 质量的选项。
|
||||
此特性门控守护 **一组** 质量级别为 Beta 的拓扑管理器选项。
|
||||
此特性门控守护**一组**质量级别为 Beta 的拓扑管理器选项。
|
||||
此特性门控绝对不会进阶至稳定版。
|
||||
<!--
|
||||
- `TopologyManagerPolicyOptions`: Allow fine-tuning of topology manager policies,
|
||||
|
@ -1283,7 +1290,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
topologies based on available PV capacity.
|
||||
-->
|
||||
- `ValidatingAdmissionPolicy`:启用准入控制中所用的对 CEL 校验的 [ValidatingAdmissionPolicy](/zh-cn/docs/reference/access-authn-authz/validating-admission-policy/) 支持。
|
||||
- `VolumeCapacityPriority`: 基于可用 PV 容量的拓扑,启用对不同节点的优先级支持。
|
||||
- `VolumeCapacityPriority`:基于可用 PV 容量的拓扑,启用对不同节点的优先级支持。
|
||||
<!--
|
||||
- `WatchBookmark`: Enable support for watch bookmark events.
|
||||
- `WatchList` : Enable support for [streaming initial state of objects in watch requests](/docs/reference/using-api/api-concepts/#streaming-lists).
|
||||
|
@ -1291,7 +1298,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.
|
||||
-->
|
||||
- `WatchBookmark`:启用对 watch 操作中 bookmark 事件的支持。
|
||||
- `WatchList` : 启用对
|
||||
- `WatchList`:启用对
|
||||
[在 watch 请求中流式传输对象的初始状态](/zh-cn/docs/reference/using-api/api-concepts/#streaming-lists)的支持。
|
||||
- `WinDSR`:允许 kube-proxy 为 Windows 创建 DSR 负载均衡。
|
||||
- `WinOverlay`:允许在 Windows 的覆盖网络模式下运行 kube-proxy。
|
||||
|
|
|
@ -19,6 +19,386 @@ auto_generated: true
|
|||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
- [SerializedNodeConfigSource](#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource)
|
||||
|
||||
## `FormatOptions` {#FormatOptions}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<p>
|
||||
<!--
|
||||
FormatOptions contains options for the different logging formats.
|
||||
-->
|
||||
FormatOptions 包含为不同日志格式提供的选项。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>json</code> <B>[必需]</B><br/>
|
||||
<a href="#JSONOptions"><code>JSONOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
[Alpha] JSON contains options for logging format "json".
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
<p>[Alpha] json 包含 "json" 日志格式的选项。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `JSONOptions` {#JSONOptions}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [FormatOptions](#FormatOptions)
|
||||
|
||||
<p>
|
||||
<!--
|
||||
JSONOptions contains options for logging format "json".
|
||||
-->
|
||||
JSONOptions 包含为 "json" 日志格式提供的选项。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
<tr><td><code>splitStream</code> <B>[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--[Alpha] SplitStream redirects error messages to stderr while
|
||||
info messages go to stdout, with buffering. The default is to write
|
||||
both to stdout, without buffering. Only available when
|
||||
the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
[Alpha] <code>splitStream</code> 将错误信息重定向到标准错误输出(stderr),
|
||||
而将提示信息重定向到标准输出(stdout),并为二者提供缓存。
|
||||
默认设置是将二者都写出到标准输出,并且不提供缓存。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>infoBufferSize</code> <B>[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
[Alpha] InfoBufferSize sets the size of the info stream when
|
||||
using split streams. The default is zero, which disables buffering.
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
[Alpha] <code>infoBufferSize</code> 在分离数据流时用来设置 info 数据流的大小。
|
||||
默认值为 0,相当于禁止缓存。只有 LoggingAlphaOptions 特性门控被启用时才可用。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LogFormatFactory` {#LogFormatFactory}
|
||||
|
||||
<!--
|
||||
LogFormatFactory provides support for a certain additional,
|
||||
non-default log format.
|
||||
-->
|
||||
<p>LogFormatFactory 提供了对某些附加的、非默认的日志格式的支持。</p>
|
||||
|
||||
## `LoggingConfiguration` {#LoggingConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
<!--
|
||||
LoggingConfiguration contains logging options.
|
||||
-->
|
||||
LoggingConfiguration 包含日志选项。
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>format</code> <B>[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Format Flag specifies the structure of log messages.
|
||||
default value of format is `text`
|
||||
-->
|
||||
<code>format</code> 设置日志消息的结构。默认的格式取值为 <code>text</code>。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>flushFrequency</code> <B>[必需]</B><br/>
|
||||
<a href="#TimeOrMetaDuration"><code>TimeOrMetaDuration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Maximum time between log flushes.
|
||||
If a string, parsed as a duration (i.e. "1s")
|
||||
If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000).
|
||||
Ignored if the selected logging backend writes log messages without buffering.
|
||||
-->
|
||||
日志刷新之间的最大时间间隔。
|
||||
如果是字符串,则解析为持续时间(例如 "1s")。
|
||||
如果是整数,则表示为最大纳秒数(例如 1s = 1000000000)。
|
||||
如果所选的日志后端在写入日志消息时未缓冲,则被忽略。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>verbosity</code> <B>[必需]</B><br/>
|
||||
<a href="#VerbosityLevel"><code>VerbosityLevel</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Verbosity is the threshold that determines which log messages are
|
||||
logged. Default is zero which logs only the most important
|
||||
messages. Higher values enable additional messages. Error messages
|
||||
are always logged.
|
||||
-->
|
||||
<code>verbosity</code> 用来确定日志消息记录的详细程度阈值。默认值为 0,
|
||||
意味着仅记录最重要的消息。数值越大,额外的消息越多。出错消息总是会被记录下来。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>vmodule</code> <B>[必需]</B><br/>
|
||||
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
VModule overrides the verbosity threshold for individual files.
|
||||
Only supported for "text" log format.
|
||||
-->
|
||||
<code>vmodule</code> 会在单个文件层面重载 verbosity 阈值的设置。
|
||||
这一选项仅支持 "text" 日志格式。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>options</code> <B>[必需]</B><br/>
|
||||
<a href="#FormatOptions"><code>FormatOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
[Alpha] Options holds additional parameters that are specific
|
||||
to the different logging formats. Only the options for the selected
|
||||
format get used, but all of them get validated.
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
[Alpha] <code>options</code> 中包含特定于不同日志格式的附加参数。
|
||||
只有针对所选格式的选项会被使用,但是合法性检查时会查看所有参数。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
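Purely as a hedged sketch (not part of this generated reference; the field values are arbitrary and the surrounding file layout is assumed), the logging options documented above would typically sit under the `logging` field of a kubelet configuration file. The body below is written as JSON, which is also valid YAML:

```
{
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "kind": "KubeletConfiguration",
  "logging": {
    "format": "text",
    "verbosity": 3,
    "flushFrequency": "5s"
  }
}
```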
|
||||
|
||||
## `LoggingOptions` {#LoggingOptions}
|
||||
|
||||
<p>
|
||||
<!--
|
||||
LoggingOptions can be used with ValidateAndApplyWithOptions to override
|
||||
certain global defaults.
|
||||
-->
|
||||
<code>LoggingOptions</code> 可以与 <code>ValidateAndApplyWithOptions</code> 一起使用,以覆盖某些全局默认值。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>ErrorStream</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/io#Writer"><code>io.Writer</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
ErrorStream can be used to override the os.Stderr default.
|
||||
-->
|
||||
<code>ErrorStream</code> 可用于覆盖默认值 <code>os.Stderr</code>。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>InfoStream</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/io#Writer"><code>io.Writer</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
InfoStream can be used to override the os.Stdout default.
|
||||
-->
|
||||
<code>InfoStream</code> 可用于覆盖默认值 <code>os.Stdout</code>。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `TimeOrMetaDuration` {#TimeOrMetaDuration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<p>
|
||||
<!--
|
||||
TimeOrMetaDuration is present only for backwards compatibility for the
|
||||
flushFrequency field, and new fields should use metav1.Duration.
|
||||
-->
|
||||
<code>TimeOrMetaDuration</code> 仅出于向后兼容 <code>flushFrequency</code> 字段而存在,
|
||||
新字段应使用 <code>metav1.Duration</code>。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>Duration</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Duration holds the duration
|
||||
-->
|
||||
<code>Duration</code> 保存持续时间。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>-</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
SerializeAsString controls whether the value is serialized as a string or an integer
|
||||
-->
|
||||
<code>SerializeAsString</code> 控制此值是以字符串还是以整数进行序列化。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `TracingConfiguration` {#TracingConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
<!--
|
||||
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
|
||||
-->
|
||||
<p>TracingConfiguration 为 OpenTelemetry 追踪客户端提供版本化的配置信息。</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">字段</th><th>描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>endpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
Endpoint of the collector this component will report traces to.
|
||||
The connection is insecure, and does not currently support TLS.
|
||||
Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.
|
||||
-->
|
||||
<p>采集器的端点,此组件将向其报告追踪链路。
|
||||
此连接不安全,目前不支持 TLS。推荐不设置,端点是 otlp grpc 默认值 localhost:4317。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>samplingRatePerMillion</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
SamplingRatePerMillion is the number of samples to collect per million spans.
|
||||
Recommended is unset. If unset, sampler respects its parent span's sampling
|
||||
rate, but otherwise never samples.
|
||||
-->
|
||||
<p>samplingRatePerMillion 是每百万 span 要采集的样本数。推荐不设置。
|
||||
如果不设置,则采样器优先使用其父级 span 的采样率,否则不采样。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `VModuleConfiguration` {#VModuleConfiguration}
|
||||
|
||||
<!--
|
||||
(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`)
|
||||
-->
|
||||
(`[]k8s.io/component-base/logs/api/v1.VModuleItem` 的别名)
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<!--
|
||||
VModuleConfiguration is a collection of individual file names or patterns
|
||||
and the corresponding verbosity threshold.
|
||||
-->
|
||||
VModuleConfiguration 是一个集合,其中包含一个个文件名(或文件名模式)
|
||||
及其对应的详细程度阈值。
|
||||
|
||||
## `VerbosityLevel` {#VerbosityLevel}
|
||||
|
||||
<!--
|
||||
(Alias of `uint32`)
|
||||
-->
|
||||
(`uint32` 的别名)
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<!--
|
||||
VerbosityLevel represents a klog or logr verbosity threshold.
|
||||
-->
|
||||
<p>VerbosityLevel 表示 klog 或 logr 的详细程度(verbosity)阈值。</p>
|
||||
|
||||
## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig}
|
||||
|
||||
<!--
|
||||
|
@ -2001,7 +2381,7 @@ Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'
|
|||
-->
|
||||
<p>containerRuntimeEndpoint 是容器运行时的端点。
|
||||
Linux 支持 UNIX 域套接字,而 Windows 支持命名管道和 TCP 端点。
|
||||
示例:'unix://path/to/runtime.sock', 'npipe:////./pipe/runtime'。</p>
|
||||
示例:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>imageServiceEndpoint</code><br/>
|
||||
|
@ -2231,7 +2611,7 @@ ExecEnvVar 用来在执行基于 exec 的凭据插件时设置环境变量。
|
|||
<!--
|
||||
No description provided.
|
||||
-->
|
||||
无描述。
|
||||
环境变量的名称。
|
||||
</span>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -2243,7 +2623,7 @@ ExecEnvVar 用来在执行基于 exec 的凭据插件时设置环境变量。
|
|||
<!--
|
||||
No description provided.
|
||||
-->
|
||||
无描述。
|
||||
环境变量的取值。
|
||||
</span>
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -2489,7 +2869,7 @@ presenting a client certificate signed by one of the authorities in the bundle
|
|||
is authenticated with a username corresponding to the CommonName,
|
||||
and groups corresponding to the Organization in the client certificate.
|
||||
-->
|
||||
<p><code>clientCAFile</code> 是一个指向 PEM 编发的证书包的路径。
|
||||
<p><code>clientCAFile</code> 是一个指向 PEM 编码的证书包的路径。
|
||||
如果设置了此字段,则能够提供由此证书包中机构之一所签名的客户端证书的请求会被成功认证,
|
||||
并且其用户名对应于客户端证书的 <code>CommonName</code>、组名对应于客户端证书的
|
||||
<code>Organization</code>。</p>
|
||||
|
@ -2586,7 +2966,7 @@ MemoryReservation 为每个 NUMA 节点设置不同类型的内存预留。
|
|||
ResourceChangeDetectionStrategy denotes a mode in which internal
|
||||
managers (secret, configmap) are discovering object changes.
|
||||
-->
|
||||
ResourceChangeDetectionStrategy 给出的是内部管理器(ConfigMap、Secret)
|
||||
ResourceChangeDetectionStrategy 给出的是内部管理器(Secret、ConfigMap)
|
||||
用来发现对象变化的模式。
|
||||
|
||||
## `ShutdownGracePeriodByPodPriority` {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority}
|
||||
|
@ -2630,383 +3010,3 @@ ShutdownGracePeriodByPodPriority 基于 Pod 关联的优先级类数值来为其
|
|||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `FormatOptions` {#FormatOptions}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<p>
|
||||
<!--
|
||||
FormatOptions contains options for the different logging formats.
|
||||
-->
|
||||
FormatOptions 包含为不同日志格式提供的选项。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>json</code> <B>[必需]</B><br/>
|
||||
<a href="#JSONOptions"><code>JSONOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
[Alpha] JSON contains options for logging format "json".
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
<p>[Alpha] JSON 包含记录 "json" 格式日志的选项。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `JSONOptions` {#JSONOptions}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [FormatOptions](#FormatOptions)
|
||||
|
||||
<p>
|
||||
<!--
|
||||
JSONOptions contains options for logging format "json".
|
||||
-->
|
||||
JSONOptions 包含为 "json" 日志格式提供的选项。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
<tr><td><code>splitStream</code> <B>[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--[Alpha] SplitStream redirects error messages to stderr while
|
||||
info messages go to stdout, with buffering. The default is to write
|
||||
both to stdout, without buffering. Only available when
|
||||
the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
[Alpha] <code>splitStream</code> 将错误信息重定向到标准错误输出(stderr),
|
||||
而将提示信息重定向到标准输出(stdout),并为二者提供缓存。
|
||||
默认设置是将二者都写出到标准输出,并且不提供缓存。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>infoBufferSize</code> <B>[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
[Alpha] InfoBufferSize sets the size of the info stream when
|
||||
using split streams. The default is zero, which disables buffering.
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
[Alpha] <code>infoBufferSize</code> 在分离数据流时用来设置提示数据流的大小。
|
||||
默认值为 0,相当于禁止缓存。只有 LoggingAlphaOptions 特性门控被启用时才可用。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LogFormatFactory` {#LogFormatFactory}
|
||||
|
||||
<!--
|
||||
LogFormatFactory provides support for a certain additional,
|
||||
non-default log format.
|
||||
-->
|
||||
<p>LogFormatFactory 提供了对某些附加的、非默认的日志格式的支持。</p>
|
||||
|
||||
## `LoggingConfiguration` {#LoggingConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
<!--
|
||||
LoggingConfiguration contains logging options.
|
||||
-->
|
||||
LoggingConfiguration 包含日志选项。
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>format</code> <B>[必需]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Format Flag specifies the structure of log messages.
|
||||
default value of format is `text`
|
||||
-->
|
||||
<code>format<code> 设置日志消息的结构。默认的格式取值为 <code>text</code>。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>flushFrequency</code> <B>[必需]</B><br/>
|
||||
<a href="#TimeOrMetaDuration"><code>TimeOrMetaDuration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Maximum time between log flushes.
|
||||
If a string, parsed as a duration (i.e. "1s")
|
||||
If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000).
|
||||
Ignored if the selected logging backend writes log messages without buffering.
|
||||
-->
|
||||
日志清洗之间的最大时间间隔。
|
||||
如果是字符串,则解析为持续时间(例如 "1s")。
|
||||
如果是整数,则表示为最大纳秒数(例如 1s = 1000000000)。
|
||||
如果所选的日志后端在写入日志消息时未缓冲,则被忽略。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>verbosity</code> <B>[必需]</B><br/>
|
||||
<a href="#VerbosityLevel"><code>VerbosityLevel</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Verbosity is the threshold that determines which log messages are
|
||||
logged. Default is zero which logs only the most important
|
||||
messages. Higher values enable additional messages. Error messages
|
||||
are always logged.
|
||||
-->
|
||||
<code>verbosity</code> 用来确定日志消息记录的详细程度阈值。默认值为 0,
|
||||
意味着仅记录最重要的消息。数值越大,额外的消息越多。出错消息总是会被记录下来。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>vmodule</code> <B>[必需]</B><br/>
|
||||
<a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
VModule overrides the verbosity threshold for individual files.
|
||||
Only supported for "text" log format.
|
||||
-->
|
||||
<code>vmodule</code> 会在单个文件层面重载 verbosity 阈值的设置。
|
||||
这一选项仅支持 "text" 日志格式。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
|
||||
<tr><td><code>options</code> <B>[必需]</B><br/>
|
||||
<a href="#FormatOptions"><code>FormatOptions</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
[Alpha] Options holds additional parameters that are specific
|
||||
to the different logging formats. Only the options for the selected
|
||||
format get used, but all of them get validated.
|
||||
Only available when the LoggingAlphaOptions feature gate is enabled.
|
||||
-->
|
||||
[Alpha] <code>options</code> 中包含特定于不同日志格式的附加参数。
|
||||
只有针对所选格式的选项会被使用,但是合法性检查时会查看所有参数。
|
||||
只有 LoggingAlphaOptions 特性门控被启用时才可用。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `LoggingOptions` {#LoggingOptions}
|
||||
|
||||
<p>
|
||||
<!--
|
||||
LoggingOptions can be used with ValidateAndApplyWithOptions to override
|
||||
certain global defaults.
|
||||
-->
|
||||
<code>LoggingOptions</code> 可以与 <code>ValidateAndApplyWithOptions</code> 一起使用,以覆盖某些全局默认值。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>ErrorStream</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/io#Writer"><code>io.Writer</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
ErrorStream can be used to override the os.Stderr default.
|
||||
-->
|
||||
<code>ErrorStream</code> 可用于覆盖默认值 <code>os.Stderr</code>。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>InfoStream</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/io#Writer"><code>io.Writer</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
InfoStream can be used to override the os.Stdout default.
|
||||
-->
|
||||
<code>InfoStream</code> 可用于覆盖默认值 <code>os.Stdout</code>。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `TimeOrMetaDuration` {#TimeOrMetaDuration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<p>
|
||||
<!--
|
||||
TimeOrMetaDuration is present only for backwards compatibility for the
|
||||
flushFrequency field, and new fields should use metav1.Duration.
|
||||
-->
|
||||
<code>TimeOrMetaDuration</code> 仅出于向后兼容 <code>flushFrequency<code> 字段而存在,
|
||||
新字段应使用 <code>metav1.Duration<code>。
|
||||
</p>
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%"><!--Field-->字段</th><th><!--Description-->描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>Duration</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
Duration holds the duration
|
||||
-->
|
||||
<code>Duration<code> 保存持续时间。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>-</code> <B><!--[Required]-->[必需]</B><br/>
|
||||
<code>bool</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>
|
||||
<!--
|
||||
SerializeAsString controls whether the value is serialized as a string or an integer
|
||||
-->
|
||||
<code>SerializeAsString</code> 控制此值是以字符串还是以整数进行序列化。
|
||||
</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `TracingConfiguration` {#TracingConfiguration}
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
|
||||
<!--
|
||||
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
|
||||
-->
|
||||
<p>TracingConfiguration 为 OpenTelemetry 追踪客户端提供版本化的配置信息。</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">字段</th><th>描述</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>endpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
Endpoint of the collector this component will report traces to.
|
||||
The connection is insecure, and does not currently support TLS.
|
||||
Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.
|
||||
-->
|
||||
<p>采集器的 endpoint,此组件将向其报告追踪链路。
|
||||
此连接不安全,目前不支持 TLS。推荐不设置,endpoint 是 otlp grpc 默认值,localhost:4317。</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>samplingRatePerMillion</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<!--
|
||||
SamplingRatePerMillion is the number of samples to collect per million spans.
|
||||
Recommended is unset. If unset, sampler respects its parent span's sampling
|
||||
rate, but otherwise never samples.
|
||||
-->
|
||||
<p>samplingRatePerMillion 是每百万 span 要采集的样本数。推荐不设置。
|
||||
如果不设置,则采样器优先使用其父级 span 的采样率,否则不采样。</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `VModuleConfiguration` {#VModuleConfiguration}
|
||||
|
||||
<!--
|
||||
(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`)
|
||||
-->
|
||||
(`[]k8s.io/component-base/logs/api/v1.VModuleItem` 的别名)
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<!--
|
||||
VModuleConfiguration is a collection of individual file names or patterns
|
||||
and the corresponding verbosity threshold.
|
||||
-->
|
||||
VModuleConfiguration 是一个集合,其中包含一个个文件名(或文件名模式)
|
||||
及其对应的详细程度阈值。
|
||||
|
||||
## `VerbosityLevel` {#VerbosityLevel}
|
||||
|
||||
<!--
|
||||
(Alias of `uint32`)
|
||||
-->
|
||||
(`uint32` 的别名)
|
||||
|
||||
<!--
|
||||
**Appears in:**
|
||||
-->
|
||||
**出现在:**
|
||||
|
||||
- [LoggingConfiguration](#LoggingConfiguration)
|
||||
|
||||
<!--
|
||||
VerbosityLevel represents a klog or logr verbosity threshold.
|
||||
-->
|
||||
<p>VerbosityLevel 表示 klog 或 logr 的详细程度(verbosity)阈值。</p>
|
||||
|
|
|
@ -55,7 +55,7 @@ API concepts:
|
|||
|
||||
* A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`)
|
||||
* All resource types have a concrete representation (their object schema) which is called a *kind*
|
||||
* A list of instances of a resource is known as a *collection*
|
||||
* A list of instances of a resource type is known as a *collection*
|
||||
* A single instance of a resource type is called a *resource*, and also usually represents an *object*
|
||||
* For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource
|
||||
-->
|
||||
|
@ -64,7 +64,7 @@ API concepts:
|
|||
Kubernetes 通常使用常见的 RESTful 术语来描述 API 概念:
|
||||
* **资源类型(Resource Type)** 是 URL 中使用的名称(`pods`、`namespaces`、`services`)
|
||||
* 所有资源类型都有一个具体的表示(它们的对象模式),称为 **类别(Kind)**
|
||||
* 资源实例的列表称为 **集合(Collection)**
|
||||
* 资源类型的实例的列表称为 **集合(Collection)**
|
||||
* 资源类型的单个实例称为 **资源(Resource)**,通常也表示一个 **对象(Object)**
|
||||
* 对于某些资源类型,API 包含一个或多个 **子资源(sub-resources)**,这些子资源表示为资源下的 URI 路径
|
||||
|
||||
|
@ -269,7 +269,7 @@ For example:
|
|||
-->
|
||||
1. 列举给定名字空间中的所有 Pod:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/namespaces/test/pods
|
||||
---
|
||||
200 OK
|
||||
|
@ -285,16 +285,16 @@ For example:
|
|||
|
||||
<!--
|
||||
2. Starting from resource version 10245, receive notifications of any API operations
|
||||
(such as **create**, **delete**, **apply** or **update**) that affect Pods in the
|
||||
(such as **create**, **delete**, **patch** or **update**) that affect Pods in the
|
||||
_test_ namespace. Each change notification is a JSON document. The HTTP response body
|
||||
(served as `application/json`) consists a series of JSON documents.
|
||||
-->
|
||||
2. 从资源版本 10245 开始,接收影响 _test_ 名字空间中 Pod 的所有 API 操作
|
||||
(例如 **create**、**delete**、**apply** 或 **update**)的通知。
|
||||
(例如 **create**、**delete**、**patch** 或 **update**)的通知。
|
||||
每个更改通知都是一个 JSON 文档。
|
||||
HTTP 响应正文(用作 `application/json`)由一系列 JSON 文档组成。
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
|
||||
---
|
||||
200 OK
|
||||
|
@ -349,7 +349,7 @@ but only includes a `.metadata.resourceVersion` field. For example:
|
|||
这是一种特殊的事件,用于标记客户端请求的给定 `resourceVersion` 的所有更改都已发送。
|
||||
代表 `BOOKMARK` 事件的文档属于请求所请求的类型,但仅包含一个 `.metadata.resourceVersion` 字段。例如:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
|
||||
---
|
||||
200 OK
|
||||
|
@ -446,7 +446,7 @@ in the following sequence of events:
|
|||
接下来你发送了以下请求(通过使用 `resourceVersion=` 设置空的资源版本来明确请求 **一致性读**),
|
||||
这样做的结果是可能收到如下事件序列:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan
|
||||
---
|
||||
200 OK
|
||||
|
@ -504,7 +504,7 @@ API server with an `Accept-Encoding` header, and check the response size and hea
|
|||
要验证 `APIResponseCompression` 是否正常工作,你可以使用一个 `Accept-Encoding`
|
||||
头向 API 服务器发送一个 **get** 或 **list** 请求,并检查响应大小和头信息。例如:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/pods
|
||||
Accept-Encoding: gzip
|
||||
---
|
||||
|
@ -596,7 +596,7 @@ of 500 pods at a time, request those chunks as follows:
|
|||
-->
|
||||
1. 列举集群中所有 Pod,每次接收至多 500 个 Pod:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/pods?limit=500
|
||||
---
|
||||
200 OK
|
||||
|
@ -620,7 +620,7 @@ of 500 pods at a time, request those chunks as follows:
|
|||
-->
|
||||
2. 继续前面的调用,返回下一组 500 个 Pod:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN
|
||||
---
|
||||
200 OK
|
||||
|
@ -644,7 +644,7 @@ of 500 pods at a time, request those chunks as follows:
|
|||
-->
|
||||
3. 继续前面的调用,返回最后 253 个 Pod:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2
|
||||
---
|
||||
200 OK
|
||||
|
@ -847,7 +847,7 @@ Kubernetes API 实现标准的 HTTP 内容类型(Content Type)协商:为 `
|
|||
|
||||
例如,以 Table 格式列举集群中所有 Pod:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/pods
|
||||
Accept: application/json;as=Table;g=meta.k8s.io;v=v1
|
||||
---
|
||||
|
@ -872,7 +872,7 @@ plane, the API server returns a default Table response that consists of the reso
|
|||
对于在控制平面上不存在定制的 Table 定义的 API 资源类型而言,服务器会返回一个默认的
|
||||
Table 响应,其中包含资源的 `name` 和 `creationTimestamp` 字段。
|
||||
|
||||
```console
|
||||
```
|
||||
GET /apis/crd.example.com/v1alpha1/namespaces/default/resources
|
||||
---
|
||||
200 OK
|
||||
|
@ -915,7 +915,7 @@ extensions, you should make requests that specify multiple content types in the
|
|||
如果你正在实现使用 Table 信息并且必须针对所有资源类型(包括扩展)工作的客户端,
|
||||
你应该在 `Accept` 请求头中指定多种内容类型的请求。例如:
|
||||
|
||||
```console
|
||||
```
|
||||
Accept: application/json;as=Table;g=meta.k8s.io;v=v1, application/json
|
||||
```
|
||||
|
||||
|
@ -966,7 +966,7 @@ For example:
|
|||
-->
|
||||
1. 以 Protobuf 格式列举集群上的所有 Pod:
|
||||
|
||||
```console
|
||||
```
|
||||
GET /api/v1/pods
|
||||
Accept: application/vnd.kubernetes.protobuf
|
||||
---
|
||||
|
@ -981,7 +981,7 @@ For example:
|
|||
-->
|
||||
2. 通过向服务器发送 Protobuf 编码的数据创建 Pod,但请求以 JSON 形式接收响应:
|
||||
|
||||
```console
|
||||
```
|
||||
POST /api/v1/namespaces/test/pods
|
||||
Content-Type: application/vnd.kubernetes.protobuf
|
||||
Accept: application/json
|
||||
|
@ -1013,7 +1013,7 @@ Protobuf 不适用于定义为 {{< glossary_tooltip term_id="CustomResourceDefin
|
|||
作为客户端,如果你可能需要使用扩展类型,则应在请求 `Accept` 请求头中指定多种内容类型以支持回退到 JSON。
|
||||
例如:
|
||||
|
||||
```console
|
||||
```
|
||||
Accept: application/vnd.kubernetes.protobuf, application/json
|
||||
```
|
||||
|
||||
|
@ -1038,7 +1038,7 @@ Kubernetes 使用封套形式来对 Protobuf 响应进行编码。
|
|||
封套格式如下:
|
||||
|
||||
<!--
|
||||
```console
|
||||
```
|
||||
A four byte magic number prefix:
|
||||
Bytes 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00]
|
||||
|
||||
|
@ -1066,7 +1066,7 @@ An encoded Protobuf message with the following IDL:
|
|||
}
|
||||
```
|
||||
-->
|
||||
```console
|
||||
```
|
||||
四个字节的特殊数字前缀:
|
||||
字节 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00]
|
||||
|
||||
|
@ -1171,7 +1171,7 @@ Once the last finalizer is removed, the resource is actually removed from etcd.
|
|||
<!--
|
||||
## Single resource API
|
||||
|
||||
The Kubernetes API verbs **get**, **create**, **apply**, **update**, **patch**,
|
||||
The Kubernetes API verbs **get**, **create**, **update**, **patch**,
|
||||
**delete** and **proxy** support single resources only.
|
||||
These verbs with single resource support have no support for submitting multiple
|
||||
resources together in an ordered or unordered list or transaction.
|
||||
|
@ -1184,7 +1184,7 @@ resources, and **deletecollection** allows deleting multiple resources.
|
|||
-->
|
||||
## 单个资源 API {#single-resource-api}
|
||||
|
||||
Kubernetes API 动词 **get**、**create**、**apply**、**update**、**patch**、**delete** 和 **proxy** 仅支持单一资源。
|
||||
Kubernetes API 动词 **get**、**create**、**update**、**patch**、**delete** 和 **proxy** 仅支持单一资源。
|
||||
这些具有单一资源支持的动词不支持在有序或无序列表或事务中一起提交多个资源。
|
||||
|
||||
当客户端(包括 kubectl)对一组资源进行操作时,客户端会发出一系列单资源 API 请求,
|
||||
|
@ -1249,11 +1249,11 @@ These situations are:
|
|||
2. 字段在对象中重复出现。
|
||||
|
||||
<!--
|
||||
### Validation for unrecognized or duplicate fields (#setting-the-field-validation-level)
|
||||
### Validation for unrecognized or duplicate fields {#setting-the-field-validation-level}
|
||||
-->
|
||||
### 检查无法识别或重复的字段 {#setting-the-field-validation-level}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
{{< feature-state for_k8s_version="v1.27" state="stable" >}}
|
||||
|
||||
<!--
|
||||
From 1.25 onward, unrecognized or duplicate fields in an object are detected via
|
||||
|
@ -1439,7 +1439,7 @@ Here is an example dry-run request that uses `?dryRun=All`:
|
|||
-->
|
||||
这是一个使用 `?dryRun=All` 的试运行请求的示例:
|
||||
|
||||
```console
|
||||
```
|
||||
POST /api/v1/namespaces/test/pods?dryRun=All
|
||||
Content-Type: application/json
|
||||
Accept: application/json
|
||||
|
@ -1449,7 +1449,6 @@ Accept: application/json
|
|||
The response would look the same as for non-dry-run request, but the values of some
|
||||
generated fields may differ.
|
||||
-->
|
||||
|
||||
响应会与非试运行模式请求的响应看起来相同,只是某些生成字段的值可能会不同。
|
||||
|
||||
<!--
|
||||
|
@ -1512,26 +1511,266 @@ See [Authorization Overview](/docs/reference/access-authn-authz/authorization/).
|
|||
参阅[鉴权概述](/zh-cn/docs/reference/access-authn-authz/authorization/)以了解鉴权细节。
|
||||
|
||||
<!--
|
||||
## Server Side Apply
|
||||
## Updates to existing resources {#patch-and-apply}
|
||||
|
||||
Kubernetes provides several ways to update existing objects.
|
||||
You can read [choosing an update mechanism](#update-mechanism-choose) to
|
||||
learn about which approach might be best for your use case.
|
||||
-->
|
||||
## 服务器端应用 {#server-side-apply}
|
||||
## 更新现有资源 {#patch-and-apply}
|
||||
|
||||
Kubernetes 提供了多种更新现有对象的方式。
|
||||
你可以阅读[选择更新机制](#update-mechanism-choose)以了解哪种方法可能最适合你的用例。
|
||||
|
||||
<!--
|
||||
You can overwrite (**update**) an existing resource - for example, a ConfigMap -
|
||||
using an HTTP PUT. For a PUT request, it is the client's responsibility to specify
|
||||
the `resourceVersion` (taking this from the object being updated). Kubernetes uses
|
||||
that `resourceVersion` information so that the API server can detect lost updates
|
||||
and reject requests made by a client that is out of date with the cluster.
|
||||
In the event that the resource has changed (the `resourceVersion` the client
|
||||
provided is stale), the API server returns a `409 Conflict` error response.
|
||||
-->
|
||||
你可以使用 HTTP PUT 覆盖(**update**)ConfigMap 等现有资源。
|
||||
对于 PUT 请求,客户端需要指定 `resourceVersion`(从要更新的对象中获取此项)。
|
||||
Kubernetes 使用该 `resourceVersion` 信息,这样 API 服务器可以检测丢失的更新并拒绝对集群来说过期的客户端所发出的请求。
|
||||
如果资源已发生变化(即客户端提供的 `resourceVersion` 已过期),API 服务器将返回 `409 Conflict` 错误响应。
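A minimal sketch of that flow, assuming a ConfigMap named `my-config` in the `test` namespace (the names, values and `resourceVersion` are placeholders): the client reads the object, modifies it, and writes it back with the `resourceVersion` it read.

```
GET /api/v1/namespaces/test/configmaps/my-config
---
200 OK
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": { "name": "my-config", "resourceVersion": "7331" },
  "data": { "player_initial_lives": "3" }
}

PUT /api/v1/namespaces/test/configmaps/my-config
Content-Type: application/json
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": { "name": "my-config", "resourceVersion": "7331" },
  "data": { "player_initial_lives": "5" }
}
---
200 OK if the write succeeds, or 409 Conflict if resourceVersion "7331" is no longer current
```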
|
||||
|
||||
<!--
|
||||
Instead of sending a PUT request, the client can send an instruction to the API
|
||||
server to **patch** an existing resource. A **patch** is typically appropriate
|
||||
if the change that the client wants to make isn't conditional on the existing data. Clients that need effective detection of lost updates should consider
|
||||
making their request conditional on the existing `resourceVersion` (either HTTP PUT or HTTP PATCH),
|
||||
and then handle any retries that are needed in case there is a conflict.
|
||||
|
||||
The Kubernetes API supports four different PATCH operations, determined by their
|
||||
corresponding HTTP `Content-Type` header:
|
||||
-->
|
||||
客户端除了发送 PUT 请求之外,还可以发送指令给 API 服务器对现有资源执行 **patch** 操作。
|
||||
**patch** 通常适用于客户端希望进行的更改并不依赖于现有数据的场景。
|
||||
需要有效检测丢失更新的客户端应该考虑根据现有 `resourceVersion` 来进行有条件的请求
|
||||
(HTTP PUT 或 HTTP PATCH),并在存在冲突时作必要的重试。
|
||||
|
||||
Kubernetes API 支持四种不同的 PATCH 操作,具体取决于它们所对应的 HTTP `Content-Type` 标头:
|
||||
|
||||
<!--
|
||||
`application/apply-patch+yaml`
|
||||
: Server Side Apply YAML (a Kubernetes-specific extension, based on YAML).
|
||||
All JSON documents are valid YAML, so you can also submit JSON using this
|
||||
media type. See [Server Side Apply serialization](/docs/reference/using-api/server-side-apply/#serialization)
|
||||
for more details.
|
||||
To Kubernetes, this is a **create** operation if the object does not exist,
|
||||
or a **patch** operation if the object already exists.
|
||||
-->
|
||||
`application/apply-patch+yaml`
|
||||
: Server Side Apply YAML(基于 YAML 的 Kubernetes 扩展)。
|
||||
所有 JSON 文档都是有效的 YAML,因此你也可以使用此媒体类型提交 JSON。
|
||||
更多细节参阅[服务器端应用序列化](/zh-cn/docs/reference/using-api/server-side-apply/#serialization)。
|
||||
对于 Kubernetes,这一 PATCH 请求在对象不存在时成为 **create** 操作;在对象已存在时成为 **patch** 操作。
|
||||
|
||||
<!--
|
||||
`application/json-patch+json`
|
||||
: JSON Patch, as defined in [RFC6902](https://tools.ietf.org/html/rfc6902).
|
||||
A JSON patch is a sequence of operations that are executed on the resource;
|
||||
for example `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`.
|
||||
To Kubernetes, this is a **patch** operation.
|
||||
|
||||
A **patch** using `application/json-patch+json` can include conditions to
|
||||
validate consistency, allowing the operation to fail if those conditions
|
||||
are not met (for example, to avoid a lost update).
|
||||
-->
|
||||
`application/json-patch+json`
|
||||
: JSON Patch,如 [RFC6902](https://tools.ietf.org/html/rfc6902) 中定义。
|
||||
JSON Patch 是对资源执行的一个操作序列;例如 `{"op": "add", "path": "/a/b/c", "value": [ "foo", "bar" ]}`。
|
||||
对于 Kubernetes,这一 PATCH 请求即是一个 **patch** 操作。
|
||||
|
||||
使用 `application/json-patch+json` 的 **patch** 可以包含用于验证一致性的条件,
|
||||
如果这些条件不满足,则允许此操作失败(例如避免丢失更新)。
|
||||
|
||||
<!--
|
||||
`application/merge-patch+json`
|
||||
: JSON Merge Patch, as defined in [RFC7386](https://tools.ietf.org/html/rfc7386).
|
||||
A JSON Merge Patch is essentially a partial representation of the resource.
|
||||
The submitted JSON is combined with the current resource to create a new one,
|
||||
then the new one is saved.
|
||||
To Kubernetes, this is a **patch** operation.
|
||||
-->
|
||||
`application/merge-patch+json`
|
||||
: JSON Merge Patch,如 [RFC7386](https://tools.ietf.org/html/rfc7386) 中定义。
|
||||
JSON Merge Patch 实质上是资源的部分表示。提交的 JSON 与当前资源合并以创建一个新资源,然后将其保存。
|
||||
对于 Kubernetes,这个 PATCH 请求是一个 **patch** 操作。
|
||||
|
||||
<!--
|
||||
`application/strategic-merge-patch+json`
|
||||
: Strategic Merge Patch (a Kubernetes-specific extension based on JSON).
|
||||
Strategic Merge Patch is a custom implementation of JSON Merge Patch.
|
||||
You can only use Strategic Merge Patch with built-in APIs, or with aggregated
|
||||
API servers that have special support for it. You cannot use
|
||||
`application/strategic-merge-patch+json` with any API
|
||||
defined using a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}.
|
||||
-->
|
||||
`application/strategic-merge-patch+json`
|
||||
: Strategic Merge Patch(基于 JSON 的 Kubernetes 扩展)。
|
||||
Strategic Merge Patch 是 JSON Merge Patch 的自定义实现。
|
||||
你只能在内置 API 或具有特殊支持的聚合 API 服务器中使用 Strategic Merge Patch。
|
||||
你不能针对任何使用 {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}
|
||||
定义的 API 来使用 `application/strategic-merge-patch+json`。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The Kubernetes _server side apply_ mechanism has superseded Strategic Merge
|
||||
Patch.
|
||||
-->
|
||||
Kubernetes **服务器端应用**机制已取代 Strategic Merge Patch。
|
||||
{{< /note >}}
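For example (a hypothetical Deployment, purely for illustration), a JSON Merge Patch using the `application/merge-patch+json` type listed above, changing only the replica count, could be sent as:

```
PATCH /apis/apps/v1/namespaces/test/deployments/my-deployment
Content-Type: application/merge-patch+json

{
  "spec": {
    "replicas": 2
  }
}
```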
|
||||
|
||||
<!--
|
||||
Kubernetes' [Server Side Apply](/docs/reference/using-api/server-side-apply/)
|
||||
feature allows the control plane to track managed fields for newly created objects.
|
||||
Server Side Apply provides a clear pattern for managing field conflicts,
|
||||
offers server-side `Apply` and `Update` operations, and replaces the
|
||||
offers server-side **apply** and **update** operations, and replaces the
|
||||
client-side functionality of `kubectl apply`.
|
||||
|
||||
The API verb for Server-Side Apply is **apply**.
|
||||
For Server-Side Apply, Kubernetes treats the request as a **create** if the object
|
||||
does not yet exist, and a **patch** otherwise. For other requests that use PATCH
|
||||
at the HTTP level, the logical Kubernetes operation is always **patch**.
|
||||
|
||||
See [Server Side Apply](/docs/reference/using-api/server-side-apply/) for more details.
|
||||
-->
|
||||
Kubernetes 的[服务器端应用](/zh-cn/docs/reference/using-api/server-side-apply/)功能允许控制平面跟踪新创建对象的托管字段。
|
||||
服务端应用为管理字段冲突提供了清晰的模式,提供了服务器端 `Apply` 和 `Update` 操作,
|
||||
服务端应用为管理字段冲突提供了清晰的模式,提供了服务器端 **apply** 和 **update** 操作,
|
||||
并替换了 `kubectl apply` 的客户端功能。
|
||||
|
||||
服务端应用的 API 动词是 **apply**。有关详细信息,
|
||||
请参阅[服务器端应用](/zh-cn/docs/reference/using-api/server-side-apply/)。
|
||||
对于服务器端应用,Kubernetes 在对象尚不存在时将请求视为 **create**,否则视为 **patch**。
|
||||
对于其他在 HTTP 层面使用 PATCH 的请求,逻辑上的 Kubernetes 操作始终是 **patch**。
|
||||
|
||||
更多细节参阅[服务器端应用](/zh-cn/docs/reference/using-api/server-side-apply/)。
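As a hedged sketch (the ConfigMap and field manager names are invented), an **apply** request at the HTTP level is a PATCH that uses the `application/apply-patch+yaml` content type and names a field manager; the body here is JSON, which, as noted above, is also valid YAML:

```
PATCH /api/v1/namespaces/test/configmaps/my-config?fieldManager=my-controller
Content-Type: application/apply-patch+yaml

{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "metadata": { "name": "my-config" },
  "data": { "key": "value" }
}
```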
|
||||
|
||||
<!--
|
||||
### Choosing an update mechanism {#update-mechanism-choose}
|
||||
|
||||
#### HTTP PUT to replace existing resource {#update-mechanism-update}
|
||||
|
||||
The **update** (HTTP `PUT`) operation is simple to implement and flexible,
|
||||
but has drawbacks:
|
||||
-->
|
||||
### 选择更新机制 {#update-mechanism-choose}
|
||||
|
||||
#### HTTP PUT 替换现有资源 {#update-mechanism-update}
|
||||
|
||||
**update** (HTTP `PUT`) 操作实现简单且灵活,但也存在一些缺点:
|
||||
|
||||
<!--
|
||||
* You need to handle conflicts where the `resourceVersion` of the object changes
|
||||
between your client reading it and trying to write it back. Kubernetes always
|
||||
detects the conflict, but you as the client author need to implement retries.
|
||||
* You might accidentally drop fields if you decode an object locally (for example,
|
||||
using client-go, you could receive fields that your client does not know how to
|
||||
handle - and then drop them as part of your update).
|
||||
* If there's a lot of contention on the object (even on a field, or set of fields,
|
||||
that you're not trying to edit), you might have trouble sending the update.
|
||||
The problem is worse for larger objects and for objects with many fields.
|
||||
-->
|
||||
* 你需要处理对象的 `resourceVersion` 在客户端读取和写回之间发生变化所造成的冲突。
|
||||
Kubernetes 总是会检测到此冲突,但你作为客户端开发者需要实现重试机制。
|
||||
* 如果你在本地解码对象,可能会意外丢失字段。例如你在使用 client-go 时,
|
||||
可能会收到客户端不知道如何处理的一些字段,而客户端在构造更新时会将这些字段丢弃。
|
||||
* 如果对象上存在大量争用(即使是在你不打算编辑的某字段或字段集上),你可能会难以发送更新。
|
||||
对于体量较大或字段较多的对象,这个问题会更为严重。
|
||||
|
||||
<!--
|
||||
#### HTTP PATCH using JSON Patch {#update-mechanism-json-patch}
|
||||
|
||||
A **patch** update is helpful, because:
|
||||
-->
|
||||
#### 使用 JSON Patch 的 HTTP PATCH {#update-mechanism-json-patch}
|
||||
|
||||
**patch** 更新很有帮助,因为:
|
||||
|
||||
<!--
|
||||
* As you're only sending differences, you have less data to send in the `PATCH`
|
||||
request.
|
||||
* You can make changes that rely on existing values, such as copying the
|
||||
value of a particular field into an annotation.
|
||||
-->
|
||||
* 由于你只发送差异,所以你在 `PATCH` 请求中需要发送的数据较少。
|
||||
* 你可以依赖于现有值进行更改,例如将特定字段的值复制到注解中。
|
||||
<!--
|
||||
* Unlike with an **update** (HTTP `PUT`), making your change can happen right away
|
||||
(even if there are frequent changes to unrelated fields): you usually would
|
||||
not need to retry.
|
||||
* You might still need to specify the `resourceVersion` (to match an existing object)
|
||||
if you want to be extra careful to avoid lost updates
|
||||
* It's still good practice to write in some retry logic in case of errors.
|
||||
-->
|
||||
* 与 **update**(HTTP `PUT`)不同,即使存在对无关字段的频繁更改,你的更改也可以立即生效:
|
||||
你通常无需重试。
|
||||
* 如果你要特别小心避免丢失更新,仍然可能需要指定 `resourceVersion`(以匹配现有对象)。
|
||||
* 编写一些重试逻辑以处理错误仍然是一个良好的实践。
|
||||
<!--
|
||||
* You can use test conditions to carefully craft specific update conditions.
|
||||
For example, you can increment a counter without reading it if the existing
|
||||
value matches what you expect. You can do this with no lost update risk,
|
||||
even if the object has changed in other ways since you last wrote to it.
|
||||
(If the test condition fails, you can fall back to reading the current value
|
||||
and then write back the changed number).
|
||||
-->
|
||||
* 你可以通过测试条件来精确地构造特定的更新条件。
|
||||
例如,如果现有值与你期望的值匹配,你可以递增计数器而无需读取它。
|
||||
即使自上次写入以来对象以其他方式发生了更改,你也可以做到这一点而不会遇到丢失更新的风险。
|
||||
(如果测试条件失败,你可以回退为读取当前值,然后写回更改的数字)。
|
||||
|
||||
<!--
|
||||
However:
|
||||
|
||||
* you need more local (client) logic to build the patch; it helps a lot if you have
|
||||
a library implementation of JSON Patch, or even for making a JSON Patch specifically against Kubernetes
|
||||
* as the author of client software, you need to be careful when building the patch
|
||||
(the HTTP request body) not to drop fields (the order of operations matters)
|
||||
-->
|
||||
然而:
|
||||
|
||||
* 你需要更多本地(客户端)逻辑来构建补丁;如果你拥有实现了 JSON Patch 的库,
|
||||
或者针对 Kubernetes 生成特定的 JSON Patch 的库,将非常有帮助。
|
||||
* 作为客户端软件的开发者,你在构建补丁(HTTP 请求体)时需要小心,避免丢弃字段(操作顺序很重要)。
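Pulling the test-condition idea from the list above into a concrete shape, here is a hedged sketch (the annotation key and counter values are placeholders) of a conditional JSON Patch that increments a counter only if it still holds the value the client last saw:

```
PATCH /api/v1/namespaces/test/configmaps/my-config
Content-Type: application/json-patch+json

[
  { "op": "test", "path": "/metadata/annotations/example.com~1counter", "value": "3" },
  { "op": "replace", "path": "/metadata/annotations/example.com~1counter", "value": "4" }
]
```

If the `test` operation fails, the API server rejects the whole patch, and the client can fall back to reading the current value, as described above.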
|
||||
|
||||
<!--
|
||||
#### HTTP PATCH using Server-Side Apply {#update-mechanism-server-side-apply}
|
||||
|
||||
Server-Side Apply has some clear benefits:
|
||||
-->
|
||||
#### 使用服务器端应用的 HTTP PATCH {#update-mechanism-server-side-apply}
|
||||
|
||||
服务器端应用(Server-Side Apply)具有一些明显的优势:
|
||||
|
||||
<!--
|
||||
* A single round trip: it rarely requires making a `GET` request first.
|
||||
  * and you can still detect conflicts for unexpected changes
|
||||
  * you have the option to force override a conflict, if appropriate
|
||||
* Client implementations are easy to make
|
||||
* You get an atomic create-or-update operation without extra effort
|
||||
(similar to `UPSERT` in some SQL dialects)
|
||||
-->
|
||||
* 仅需一次轮询:通常无需先执行 `GET` 请求。
|
||||
* 并且你仍然可以检测到意外更改造成的冲突
|
||||
* 合适的时候,你可以选择强制覆盖冲突
|
||||
* 客户端实现简单
|
||||
* 你可以轻松获得原子级别的 create 或 update 操作,无需额外工作
|
||||
(类似于某些 SQL 语句中的 `UPSERT`)
|
||||
|
||||
<!--
|
||||
However:
|
||||
|
||||
* Server-Side Apply does not work at all for field changes that depend on a current value of the object
|
||||
* You can only apply updates to objects. Some resources in the Kubernetes HTTP API are
|
||||
not objects (they do not have a `.metadata` field), and Server-Side Apply
|
||||
is only relevant for Kubernetes objects.
|
||||
-->
|
||||
然而:
|
||||
|
||||
* 服务器端应用不适合依赖对象当前值的字段更改
|
||||
* 你只能更新对象。Kubernetes HTTP API 中的某些资源不是对象(它们没有 `.metadata` 字段),
|
||||
并且服务器端应用只能用于 Kubernetes 对象。
|
||||
|
||||
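For instance, from the command line a Server-Side Apply looks roughly like this; the ConfigMap and the field manager name are illustrative:

```shell
# Apply (create-or-update) a manifest with Server-Side Apply; the API server
# records "my-controller" as the manager of the applied fields.
kubectl apply --server-side --field-manager=my-controller -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  greeting: "hello"
EOF

# If another manager has since changed one of these fields, the server reports
# a conflict; when appropriate, it can be overridden with --force-conflicts.
```
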
<!--
## Resource versions

@ -99,7 +99,6 @@ their authors, not the Kubernetes team.
| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) |
| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) |
| Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) |
| Go | [github.com/ericchiang/k8s](https://github.com/ericchiang/k8s) |
| Java (OSGi) | [bitbucket.org/amdatulabs/amdatu-kubernetes](https://bitbucket.org/amdatulabs/amdatu-kubernetes) |
| Java (Fabric8, OSGi) | [github.com/fabric8io/kubernetes-client](https://github.com/fabric8io/kubernetes-client) |
| Java | [github.com/manusa/yakc](https://github.com/manusa/yakc) |

@ -137,7 +136,6 @@ their authors, not the Kubernetes team.
| DotNet (RestSharp) | [github.com/masroorhasan/Kubernetes.DotNet](https://github.com/masroorhasan/Kubernetes.DotNet) |
| Elixir | [github.com/obmarg/kazan](https://github.com/obmarg/kazan/) |
| Elixir | [github.com/coryodaniel/k8s](https://github.com/coryodaniel/k8s) |
| Go | [github.com/ericchiang/k8s](https://github.com/ericchiang/k8s) |
| Java (OSGi) | [bitbucket.org/amdatulabs/amdatu-kubernetes](https://bitbucket.org/amdatulabs/amdatu-kubernetes) |
| Java (Fabric8, OSGi) | [github.com/fabric8io/kubernetes-client](https://github.com/fabric8io/kubernetes-client) |
| Java | [github.com/manusa/yakc](https://github.com/manusa/yakc) |

@ -488,7 +488,7 @@ in the kubeadm config file.

- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node.
-->
### Set up the etcd cluster

1. Follow the instructions [here](/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.

@ -12,16 +12,46 @@ weight: 120
<!-- overview -->

This page explains how to enable a package repository for the desired
Kubernetes minor release upon upgrading a cluster. This is only needed
for users of the community-owned package repositories hosted at `pkgs.k8s.io`.
Unlike the legacy package repositories, the community-owned package
repositories are structured in a way that there's a dedicated package
repository for each Kubernetes minor version.

{{< note >}}
This guide only covers a part of the Kubernetes upgrade process. Please see the
[upgrade guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for
more information about upgrading Kubernetes clusters.
{{</ note >}}

{{< note >}}
This step is only needed upon upgrading a cluster to another **minor** release.
If you're upgrading to another patch release within the same minor release (e.g.
v{{< skew currentVersion >}}.5 to v{{< skew currentVersion >}}.7), you don't
need to follow this guide. However, if you're still using the legacy package
repositories, you'll need to migrate to the new community-owned package
repositories before upgrading (see the next section for more details on how to
do this).
{{</ note >}}

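As a rough illustration for Debian/Ubuntu hosts, switching to the repository for another minor release usually comes down to rewriting the repository definition and refreshing the package index. The version below is a placeholder, and the file and keyring paths follow the commonly used layout, so adjust them to your setup:

```shell
# Point apt at the community-owned repository for the desired minor release
# (replace v1.30 with the minor version you are upgrading to).
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Refresh the package index so the new minor release's packages become visible.
sudo apt-get update
```
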
## {{% heading "prerequisites" %}}

<!--
@ -579,12 +579,12 @@ Readiness probes run on the container during its whole lifecycle.

{{< caution >}}
The readiness and liveness probes do not depend on each other to succeed.
If you want to wait before executing a readiness probe, you should use
`initialDelaySeconds` or a `startupProbe`.
{{< /caution >}}

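A minimal sketch of how this fits together in a Pod spec (the image, paths and timings are illustrative): the `startupProbe` holds the other probes back until it succeeds, after which the readiness and liveness probes run independently of each other.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-ordering-demo
spec:
  containers:
  - name: web
    image: nginx:1.25            # illustrative image
    ports:
    - containerPort: 80
    startupProbe:                # gates the readiness and liveness probes
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 2
    readinessProbe:              # runs on its own schedule afterwards
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:               # does not wait for the readiness probe
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
EOF
```
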
<!--
@ -92,7 +92,7 @@ ConfigMap above as `/redis-master/redis.conf` inside the Pod.

* A volume named `config` is created by `spec.volumes[1]`.
* The `key` and `path` under `spec.volumes[1].items[0]` expose the `redis-config` key from
  the `example-redis-config` ConfigMap as a file named `redis.conf` on the `config` volume.
* The `config` volume is then mounted at `/redis-master` by `spec.containers[0].volumeMounts[1]`.

The net effect of this is to expose the data in `data.redis-config` from the `example-redis-config`
ConfigMap above as `/redis-master/redis.conf` inside the Pod.

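Putting those pieces together, a self-contained sketch of the wiring described above could look like the following; it assumes the `example-redis-config` ConfigMap with a `redis-config` key already exists, and the image tag is illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:7               # illustrative image tag
    volumeMounts:
    - name: data
      mountPath: /data
    - name: config               # spec.containers[0].volumeMounts[1]
      mountPath: /redis-master
  volumes:
  - name: data
    emptyDir: {}
  - name: config                 # spec.volumes[1]
    configMap:
      name: example-redis-config
      items:
      - key: redis-config        # ConfigMap key ...
        path: redis.conf         # ... exposed as /redis-master/redis.conf
EOF
```
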
@ -1,56 +1,77 @@
---
title: Running Multiple Instances of Your App
weight: 10
description: |-
  Scale an existing app manually using kubectl.
---

<!DOCTYPE html>

<html lang="zh">

<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">

<div class="layout" id="top">

    <main class="content">

        <div class="row">

            <div class="col-md-8">
                <h3>Objectives</h3>
                <ul>
                    <li>Scale an app using kubectl.</li>
                </ul>
            </div>

            <div class="col-md-8">
                <h3>Scaling an application</h3>

                <p>Previously we created a <a href="/docs/concepts/workloads/controllers/deployment/">Deployment</a>, and then exposed it publicly via a <a href="/docs/concepts/services-networking/service/">Service</a>. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.</p>
                <p>If you haven't worked through the earlier sections, start from <a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/">Using minikube to create a cluster</a>.</p>

                <p><em>Scaling</em> is accomplished by changing the number of replicas in a Deployment.</p>
                <p><b>NOTE</b> If you are trying this after <a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/">the previous section</a>, you may need to start from <a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/">creating a cluster</a>, as the services may have been deleted.</p>

            </div>
            <div class="col-md-4">
                <div class="content__box content__box_lined">
                    <h3>Summary:</h3>
                    <ul>
                        <li>Scaling a Deployment</li>
                    </ul>
                </div>
                <div class="content__box content__box_fill">
                    <p><i>You can create from the start a Deployment with multiple instances using the --replicas parameter for the kubectl create deployment command.</i></p>
                </div>
            </div>
        </div>

@ -58,8 +79,8 @@ weight: 10

        <div class="row">
            <div class="col-md-8">
                <h2 style="color: #3771e3;">Scaling overview</h2>
            </div>
        </div>

@ -97,19 +118,28 @@ weight: 10
        <div class="row">
            <div class="col-md-8">

                <p>Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with available resources. Scaling will increase the number of Pods to the new desired state. Kubernetes also supports <a href="/docs/tasks/run-application/horizontal-pod-autoscale/">autoscaling</a> of Pods, but it is outside of the scope of this tutorial. Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.</p>

                <p>Running multiple instances of an application will require a way to distribute the traffic to all of them. Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment. Services will monitor continuously the running Pods using endpoints, to ensure the traffic is sent only to available Pods.</p>

            </div>
            <div class="col-md-4">
                <div class="content__box content__box_fill">
                    <p><i>Scaling is accomplished by changing the number of replicas in a Deployment.</i></p>
                </div>
            </div>
        </div>

@ -118,19 +148,167 @@ weight: 10

        <div class="row">
            <div class="col-md-8">
                <p>Once you have multiple instances of an application running, you would be able to do Rolling updates without downtime. We'll cover that in the next section of the tutorial. Now, let's go to the terminal and scale our application.</p>
            </div>
        </div>
        <br>

        <div class="row">
            <div class="col-md-12">
                <h3>Scaling a Deployment</h3>
                <p>To list your Deployments, use the <code>get deployments</code> subcommand:</p>
                <p><code><b>kubectl get deployments</b></code></p>
                <p>The output should be similar to:</p>
                <pre>
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           11m
                </pre>
                <p>We should have 1 Pod. If not, run the command again. This shows:</p>
                <ul>
                    <li><em>NAME</em> lists the names of the Deployments in the cluster.</li>
                    <li><em>READY</em> shows the ratio of CURRENT/DESIRED replicas.</li>
                    <li><em>UP-TO-DATE</em> displays the number of replicas that have been updated to achieve the desired state.</li>
                    <li><em>AVAILABLE</em> displays how many replicas of the application are available to your users.</li>
                    <li><em>AGE</em> displays the amount of time that the application has been running.</li>
                </ul>
                <p>To see the ReplicaSet created by the Deployment, run:</p>
                <p><code><b>kubectl get rs</b></code></p>
                <p>Notice that the name of the ReplicaSet is always formatted as <tt>[DEPLOYMENT-NAME]-[RANDOM-STRING]</tt>. The random string is randomly generated and uses the <em>pod-template-hash</em> as a seed.</p>
                <p>Two important columns of this output are:</p>
                <ul>
                    <li><em>DESIRED</em> displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.</li>
                    <li><em>CURRENT</em> displays how many replicas are currently running.</li>
                </ul>
                <p>Next, let's scale the Deployment to 4 replicas. We'll use the <code>kubectl scale</code> command, followed by the Deployment type, name and desired number of instances:</p>
                <p><code><b>kubectl scale deployments/kubernetes-bootcamp --replicas=4</b></code></p>
                <p>To list your Deployments once again, use <code>get deployments</code>:</p>
                <p><code><b>kubectl get deployments</b></code></p>
                <p>The change was applied, and we have 4 instances of the application available. Next, let's check if the number of Pods changed:</p>
                <p><code><b>kubectl get pods -o wide</b></code></p>
                <p>There are 4 Pods now, with different IP addresses. The change was registered in the Deployment events log. To check that, use the <code>describe</code> subcommand:</p>
                <p><code><b>kubectl describe deployments/kubernetes-bootcamp</b></code></p>
                <p>You can also view in the output of this command that there are 4 replicas now.</p>
            </div>
        </div>

        <div class="row">
            <div class="col-md-12">
                <h3>Load Balancing</h3>
                <p>Let's check that the Service is load-balancing the traffic. To find out the exposed IP and Port we can use the describe service as we learned in the previous part of the tutorial:</p>
                <p><code><b>kubectl describe services/kubernetes-bootcamp</b></code></p>
                <p>Create an environment variable called <tt>NODE_PORT</tt> that has a value as the Node port:</p>
                <p><code><b>export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')"</b></code><br />
                <code><b>echo NODE_PORT=$NODE_PORT</b></code></p>
                <p>Next, we'll do a <code>curl</code> to the exposed IP address and port. Execute the command multiple times:</p>
                <p><code><b>curl http://"$(minikube ip):$NODE_PORT"</b></code></p>
                <p>We hit a different Pod with every request. This demonstrates that the load-balancing is working.</p>
            </div>
        </div>

        <div class="row">
            <div class="col-md-12">
                <h3>Scale Down</h3>
                <p>To scale down the Deployment to 2 replicas, run again the <code>scale</code> subcommand:</p>
                <p><code><b>kubectl scale deployments/kubernetes-bootcamp --replicas=2</b></code></p>
                <p>List the Deployments to check if the change was applied with the <code>get deployments</code> subcommand:</p>
                <p><code><b>kubectl get deployments</b></code></p>
                <p>The number of replicas decreased to 2. List the number of Pods, with <code>get pods</code>:</p>
                <p><code><b>kubectl get pods -o wide</b></code></p>
                <p>This confirms that 2 Pods were terminated.</p>
            </div>
        </div>

        <div class="row">
            <p>
                Once you're ready, move on to <a href="/docs/tutorials/kubernetes-basics/update/update-intro/" title="Performing a Rolling Update">Performing a Rolling Update</a>.
            </p>
        </div>

    </main>

</div>

@ -35,6 +35,8 @@ spec:
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # A higher priority class may need to be set so that the DaemonSet Pods
      # can preempt running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
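If you uncomment `priorityClassName` above, the referenced PriorityClass has to exist first. A hedged sketch of creating it (the name `important` matches the comment; the numeric value is an arbitrary example):

```shell
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: important
value: 1000000
globalDefault: false
description: "Priority class for node-level DaemonSet Pods such as log collectors."
EOF
```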