Merge branch 'kubernetes:main' into feat/42380/add-128-release-blog
commit c4e4d4b6b1
@@ -673,18 +673,19 @@ section#cncf {
  width: 100%;
  overflow: hidden;
  clear: both;
  display: flex;
  justify-content: space-evenly;
  flex-wrap: wrap;

  h4 {
    line-height: normal;
    margin-bottom: 15px;
  }

  & > div:first-child {
    float: left;
  }

  & > div:last-child {
    float: right;

    & > div {
      background-color: #daeaf9;
      border-radius: 20px;
      padding: 25px;
    }
  }

@@ -354,12 +354,48 @@ main {
    word-break: break-word;
  }

  /* SCSS related to the metrics table */

  @media (max-width: 767px) { // for mobile devices, display the names, stability levels & types
    table.metrics {
      th:nth-child(n + 4),
      td:nth-child(n + 4) {
        display: none;
      }

      td.metric_type {
        min-width: 7em;
      }
      td.metric_stability_level {
        min-width: 6em;
      }
    }
  }

  table.metrics tbody { // tested dimensions to improve the overall aesthetic of the table
    tr {
      td {
        font-size: smaller;
      }
      td.metric_labels_varying {
        min-width: 9em;
      }
      td.metric_type {
        min-width: 9em;
      }
      td.metric_description {
        min-width: 10em;
      }
    }
  }

  table.no-word-break td,
  table.no-word-break code {
    word-break: normal;
  }
  }

}

// blockquotes and callouts

@@ -1,7 +1,7 @@
---
layout: blog
title: "Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA"
-date: 2023-08-15T10:00:00-08:00
+date: 2023-08-16T10:00:00-08:00
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
---

@@ -216,8 +216,6 @@ data has the following advantages:
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
  closing watches for ConfigMaps marked as immutable.

-This feature is controlled by the `ImmutableEphemeralVolumes`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
You can create an immutable ConfigMap by setting the `immutable` field to `true`.
For example:
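
The example itself is cut off at the hunk boundary. A minimal sketch of what it would look like — the ConfigMap name and data key here are hypothetical; the top-level `immutable: true` field is the point:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # hypothetical name
data:
  setting: "value"       # illustrative payload
immutable: true          # once set, the data cannot be changed; delete and recreate to update
```
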
@@ -55,18 +55,75 @@ See [Information security for Secrets](#information-security-for-secrets) for more details.

## Uses for Secrets

-There are three main ways for a Pod to use a Secret:
+You can use Secrets for purposes such as the following:

-- As [files](#using-secrets-as-files-from-a-pod) in a
-  {{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
-  its containers.
-- As [container environment variable](#using-secrets-as-environment-variables).
-- By the [kubelet when pulling images](#using-imagepullsecrets) for the Pod.
+- [Set environment variables for a container](/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data).
+- [Provide credentials such as SSH keys or passwords to Pods](/docs/tasks/inject-data-application/distribute-credentials-secure/#provide-prod-test-creds).
+- [Allow the kubelet to pull container images from private registries](/docs/tasks/configure-pod-container/pull-image-private-registry/).

The Kubernetes control plane also uses Secrets; for example,
[bootstrap token Secrets](#bootstrap-token-secrets) are a mechanism to
help automate node registration.

### Use case: dotfiles in a secret volume

You can make your data "hidden" by defining a key that begins with a dot.
This key represents a dotfile or "hidden" file. For example, when the following Secret
is mounted into a volume, `secret-volume`, the volume will contain a single file,
called `.secret-file`, and the `dotfile-test-container` will have this file
present at the path `/etc/secret-volume/.secret-file`.

{{< note >}}
Files beginning with dot characters are hidden from the output of `ls -l`;
you must use `ls -la` to see them when listing directory contents.
{{< /note >}}

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
data:
  .secret-file: dmFsdWUtMg0KDQo=
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-dotfiles-pod
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret
  containers:
  - name: dotfile-test-container
    image: registry.k8s.io/busybox
    command:
    - ls
    - "-l"
    - "/etc/secret-volume"
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
```

### Use case: Secret visible to one container in a Pod

Consider a program that needs to handle HTTP requests, do some complex business
logic, and then sign some messages with an HMAC. Because it has complex
application logic, there might be an unnoticed remote file reading exploit in
the server, which could expose the private key to an attacker.

This could be divided into two processes in two containers: a frontend container
which handles user interaction and business logic, but which cannot see the
private key; and a signer container that can see the private key, and responds
to simple signing requests from the frontend (for example, over localhost networking).

With this partitioned approach, an attacker now has to trick the application
server into doing something rather arbitrary, which may be harder than getting
it to read a file.
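
For illustration, the partitioned Pod described above could be sketched like this; every name and image is hypothetical, and the placement of the volume mount on only the signer container is the point:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hmac-signer-demo          # hypothetical name
spec:
  volumes:
  - name: signing-key
    secret:
      secretName: hmac-key        # hypothetical Secret holding the private key
  containers:
  - name: frontend                # handles HTTP requests; no volumeMounts, so it cannot read the key
    image: example.com/frontend:1.0   # hypothetical image
  - name: signer                  # the only container that mounts the Secret
    image: example.com/signer:1.0     # hypothetical image
    volumeMounts:
    - name: signing-key
      readOnly: true
      mountPath: "/etc/signing-key"
```
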
### Alternatives to Secrets

Rather than using a Secret to protect confidential data, you can pick from alternatives.

@@ -108,8 +165,8 @@ These types vary in terms of the validations performed and the constraints
Kubernetes imposes on them.

| Built-in Type                         | Usage                                    |
| ------------------------------------- | ---------------------------------------- |
| `Opaque`                              | arbitrary user-defined data               |
| `kubernetes.io/service-account-token` | ServiceAccount token                      |
| `kubernetes.io/dockercfg`             | serialized `~/.dockercfg` file            |
| `kubernetes.io/dockerconfigjson`      | serialized `~/.docker/config.json` file   |

@@ -576,17 +633,17 @@ metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      optional: true
```

By default, Secrets are required. None of a Pod's containers will start until

@@ -697,269 +754,6 @@ for a detailed explanation of that process.

You cannot use ConfigMaps or Secrets with {{< glossary_tooltip text="static Pods" term_id="static-pod" >}}.

## Use cases

### Use case: As container environment variables {#use-case-as-container-environment-variables}

You can create a Secret and use it to
[set environment variables for a container](/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data).

### Use case: Pod with SSH keys

Create a Secret containing some SSH keys:

```shell
kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub
```

The output is similar to:

```
secret "ssh-key-secret" created
```

You can also create a `kustomization.yaml` with a `secretGenerator` field containing SSH keys.

{{< caution >}}
Think carefully before sending your own SSH keys: other users of the cluster may have access
to the Secret.

You could instead create an SSH private key representing a service identity that you want to be
accessible to all the users with whom you share the Kubernetes cluster, and that you can revoke
if the credentials are compromised.
{{< /caution >}}

Now you can create a Pod which references the secret with the SSH key and
consumes it in a volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret
  containers:
  - name: ssh-test-container
    image: mySshImage
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
```

When the container's command runs, the pieces of the key will be available in:

```
/etc/secret-volume/ssh-publickey
/etc/secret-volume/ssh-privatekey
```

The container is then free to use the secret data to establish an SSH connection.

### Use case: Pods with prod / test credentials

This example illustrates a Pod which consumes a secret containing production credentials and
another Pod which consumes a secret with test environment credentials.

You can create a `kustomization.yaml` with a `secretGenerator` field or run
`kubectl create secret`.

```shell
kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
```

The output is similar to:

```
secret "prod-db-secret" created
```

You can also create a secret for test environment credentials.

```shell
kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
```

The output is similar to:

```
secret "test-db-secret" created
```

{{< note >}}
Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your
[shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping.

In most shells, the easiest way to escape the password is to surround it with single quotes (`'`).
For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command this way:

```shell
kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
```

You do not need to escape special characters in passwords from files (`--from-file`).
{{< /note >}}

Now make the Pods:

```shell
cat <<EOF > pod.yaml
apiVersion: v1
kind: List
items:
- kind: Pod
  apiVersion: v1
  metadata:
    name: prod-db-client-pod
    labels:
      name: prod-db-client
  spec:
    volumes:
    - name: secret-volume
      secret:
        secretName: prod-db-secret
    containers:
    - name: db-client-container
      image: myClientImage
      volumeMounts:
      - name: secret-volume
        readOnly: true
        mountPath: "/etc/secret-volume"
- kind: Pod
  apiVersion: v1
  metadata:
    name: test-db-client-pod
    labels:
      name: test-db-client
  spec:
    volumes:
    - name: secret-volume
      secret:
        secretName: test-db-secret
    containers:
    - name: db-client-container
      image: myClientImage
      volumeMounts:
      - name: secret-volume
        readOnly: true
        mountPath: "/etc/secret-volume"
EOF
```

Add the pods to the same `kustomization.yaml`:

```shell
cat <<EOF >> kustomization.yaml
resources:
- pod.yaml
EOF
```

Apply all those objects on the API server by running:

```shell
kubectl apply -k .
```

Both containers will have the following files present on their filesystems with the values
for each container's environment:

```
/etc/secret-volume/username
/etc/secret-volume/password
```

Note how the specs for the two Pods differ only in one field; this facilitates
creating Pods with different capabilities from a common Pod template.

You could further simplify the base Pod specification by using two service accounts:

1. `prod-user` with the `prod-db-secret`
1. `test-user` with the `test-db-secret`

The Pod specification is shortened to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-db-client-pod
  labels:
    name: prod-db-client
spec:
  serviceAccount: prod-db-client
  containers:
  - name: db-client-container
    image: myClientImage
```

### Use case: dotfiles in a secret volume

You can make your data "hidden" by defining a key that begins with a dot.
This key represents a dotfile or "hidden" file. For example, when the following secret
is mounted into a volume, `secret-volume`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
data:
  .secret-file: dmFsdWUtMg0KDQo=
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-dotfiles-pod
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret
  containers:
  - name: dotfile-test-container
    image: registry.k8s.io/busybox
    command:
    - ls
    - "-l"
    - "/etc/secret-volume"
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
```

The volume will contain a single file, called `.secret-file`, and
the `dotfile-test-container` will have this file present at the path
`/etc/secret-volume/.secret-file`.

{{< note >}}
Files beginning with dot characters are hidden from the output of `ls -l`;
you must use `ls -la` to see them when listing directory contents.
{{< /note >}}

### Use case: Secret visible to one container in a Pod

Consider a program that needs to handle HTTP requests, do some complex business
logic, and then sign some messages with an HMAC. Because it has complex
application logic, there might be an unnoticed remote file reading exploit in
the server, which could expose the private key to an attacker.

This could be divided into two processes in two containers: a frontend container
which handles user interaction and business logic, but which cannot see the
private key; and a signer container that can see the private key, and responds
to simple signing requests from the frontend (for example, over localhost networking).

With this partitioned approach, an attacker now has to trick the application
server into doing something rather arbitrary, which may be harder than getting
it to read a file.

## Immutable Secrets {#secret-immutable}

{{< feature-state for_k8s_version="v1.21" state="stable" >}}

@@ -340,12 +340,8 @@ namespaces based on their labels.

## Targeting a Namespace by its name

{{< feature-state for_k8s_version="1.22" state="stable" >}}

The Kubernetes control plane sets an immutable label `kubernetes.io/metadata.name` on all
-namespaces, provided that the `NamespaceDefaultLabelName`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
-The value of the label is the namespace name.
+namespaces; the value of the label is the namespace name.

While NetworkPolicy cannot target a namespace by its name with some object field, you can use the
standardized label to target a specific namespace.
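
A sketch of what that looks like in practice; the policy and namespace names are hypothetical, while the `kubernetes.io/metadata.name` label key is the standardized one described above:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-my-namespace   # hypothetical name
  namespace: db                   # hypothetical target namespace
spec:
  podSelector: {}                 # applies to every Pod in the "db" namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-namespace   # selects the source namespace by its name
```
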
@@ -248,11 +248,10 @@ same namespace, so that these conflicts can't occur.

### Security

-Enabling the GenericEphemeralVolume feature allows users to create
-PVCs indirectly if they can create Pods, even if they do not have
-permission to create PVCs directly. Cluster administrators must be
-aware of this. If this does not fit their security model, they should
-use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
+Using generic ephemeral volumes allows users to create PVCs indirectly
+if they can create Pods, even if they do not have permission to create PVCs directly.
+Cluster administrators must be aware of this. If this does not fit their security model,
+they should use an [admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/)
that rejects objects like Pods that have a generic ephemeral volume.

The normal [namespace quota for PVCs](/docs/concepts/policy/resource-quotas/#storage-resource-quota)
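
For context, this is the shape of Pod such a webhook would reject — a minimal sketch of a generic ephemeral volume; the names and storage size are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-ephemeral-volume   # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox
    volumeMounts:
    - name: scratch
      mountPath: "/scratch"
  volumes:
  - name: scratch
    ephemeral:                      # the generic ephemeral volume; a PVC is created on the Pod's behalf
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```
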
@@ -74,10 +74,10 @@ used for provisioning VolumeSnapshots. This field must be specified.

### DeletionPolicy

-Volume snapshot classes have a deletionPolicy. It enables you to configure what
-happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to
-is to be deleted. The deletionPolicy of a volume snapshot class can either be
-`Retain` or `Delete`. This field must be specified.
+Volume snapshot classes have a [deletionPolicy](/docs/concepts/storage/volume-snapshots/#delete).
+It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot
+object it is bound to is to be deleted. The deletionPolicy of a volume snapshot class can
+either be `Retain` or `Delete`. This field must be specified.

If the deletionPolicy is `Delete`, then the underlying storage snapshot will be
deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`,
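
As a quick illustration, a class with its deletionPolicy set might look like this sketch; the class name is hypothetical and the driver assumes the CSI hostpath example driver:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass   # hypothetical name
driver: hostpath.csi.k8s.io      # assumed driver; substitute your CSI driver's name
deletionPolicy: Delete           # the underlying snapshot is deleted with the VolumeSnapshotContent
```
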
@@ -1061,7 +1061,7 @@ persistent volume:
  `ControllerPublishVolume` and `ControllerUnpublishVolume` calls. This field is
  optional, and may be empty if no secret is required. If the Secret
  contains more than one secret, all secrets are passed.
-`nodeExpandSecretRef`: A reference to the secret containing sensitive
+* `nodeExpandSecretRef`: A reference to the secret containing sensitive
  information to pass to the CSI driver to complete the CSI
  `NodeExpandVolume` call. This field is optional, and may be empty if no
  secret is required. If the object contains more than one secret, all

@@ -1116,8 +1116,8 @@ For more information on how to develop a CSI driver, refer to the

CSI node plugins need to perform various privileged
operations like scanning of disk devices and mounting of file systems. These operations
-differ for each host operating system. For Linux worker nodes, containerized CSI node
-node plugins are typically deployed as privileged containers. For Windows worker nodes,
+differ for each host operating system. For Linux worker nodes, containerized CSI node
+plugins are typically deployed as privileged containers. For Windows worker nodes,
privileged operations for containerized CSI node plugins are supported using
[csi-proxy](https://github.com/kubernetes-csi/csi-proxy), a community-managed,
stand-alone binary that needs to be pre-installed on each Windows node.

@@ -831,32 +831,12 @@ mismatch.

{{< feature-state for_k8s_version="v1.26" state="stable" >}}

-{{< note >}}
-The control plane doesn't track Jobs using finalizers, if the Jobs were created
-when the feature gate `JobTrackingWithFinalizers` was disabled, even after you
-upgrade the control plane to 1.26.
-{{< /note >}}

The control plane keeps track of the Pods that belong to any Job and notices if
any such Pod is removed from the API server. To do that, the Job controller
creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The
controller removes the finalizer only after the Pod has been accounted for in
the Job status, allowing the Pod to be removed by other controllers or users.

-Jobs created before upgrading to Kubernetes 1.26 or before the feature gate
-`JobTrackingWithFinalizers` is enabled are tracked without the use of Pod
-finalizers.
+The Job {{< glossary_tooltip term_id="controller" text="controller" >}} updates
+the status counters for `succeeded` and `failed` Pods based only on the Pods
+that exist in the cluster. The control plane can lose track of the progress of
+the Job if Pods are deleted from the cluster.

-You can determine if the control plane is tracking a Job using Pod finalizers by
-checking if the Job has the annotation
-`batch.kubernetes.io/job-tracking`. You should **not** manually add or remove
-this annotation from Jobs. Instead, you can recreate the Jobs to ensure they
-are tracked using Pod finalizers.

### Elastic Indexed Jobs

{{< feature-state for_k8s_version="v1.27" state="beta" >}}

@@ -651,6 +651,12 @@ You can remove ... | You can easily remove ...
These steps ... | These simple steps ...
{{< /table >}}

### EditorConfig file

The Kubernetes project maintains an EditorConfig file that sets common style preferences in text editors
such as VS Code. You can use this file if you want to ensure that your contributions are consistent with
the rest of the project. To view the file, refer to
[`.editorconfig`](https://github.com/kubernetes/website/blob/main/.editorconfig) in the repository root.

## {{% heading "whatsnext" %}}

* Learn about [writing a new topic](/docs/contribute/style/write-new-topic/).

@@ -218,7 +218,7 @@ configuration types to be used during a `kubeadm init` run.
    pathType: File
scheduler:
  extraArgs:
-   address: "10.100.0.1"
+   bind-address: "10.100.0.1"
  extraVolumes:
  - name: "some-volume"
    hostPath: "/etc/some-path"

@@ -1094,17 +1094,10 @@ Example: `batch.kubernetes.io/job-tracking: ""`

Used on: Jobs

-The presence of this annotation on a Job indicates that the control plane is
+The presence of this annotation on a Job used to indicate that the control plane is
[tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
-The control plane uses this annotation to safely transition to tracking Jobs
-using finalizers, while the feature is in development.
-You should **not** manually add or remove this annotation.

-{{< note >}}
-Starting from Kubernetes 1.26, this annotation is deprecated.
-Kubernetes 1.27 and newer will ignore this annotation and always track Jobs
-using finalizers.
-{{< /note >}}
+Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later).
+All Jobs are tracked with finalizers.

### job-name (deprecated) {#job-name}

@@ -112,12 +112,14 @@ Here is a configuration file you can use to create a Pod:
   ```

   Output:

   ```
   NAME              READY   STATUS    RESTARTS   AGE
   secret-test-pod   1/1     Running   0          42m
   ```

1. Get a shell into the Container that is running in your Pod:

   ```shell
   kubectl exec -i -t secret-test-pod -- /bin/bash
   ```

@@ -126,22 +128,28 @@ Here is a configuration file you can use to create a Pod:
   `/etc/secret-volume`.

   In your shell, list the files in the `/etc/secret-volume` directory:

   ```shell
   # Run this in the shell inside the container
   ls /etc/secret-volume
   ```

   The output shows two files, one for each piece of secret data:

   ```
   password username
   ```

1. In your shell, display the contents of the `username` and `password` files:

   ```shell
   # Run this in the shell inside the container
   echo "$( cat /etc/secret-volume/username )"
   echo "$( cat /etc/secret-volume/password )"
   ```

   The output is your username and password:

   ```
   my-app
   39528$vdg7Jb
   ```

@@ -153,8 +161,8 @@ in this directory.

### Project Secret keys to specific file paths

-You can also control the paths within the volume where Secret keys are projected. Use the `.spec.volumes[].secret.items` field to change the target
-path of each key:
+You can also control the paths within the volume where Secret keys are projected. Use the
+`.spec.volumes[].secret.items` field to change the target path of each key:

```yaml
apiVersion: v1
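
The example is cut off at the hunk boundary; it continues along these lines — a sketch of the `items` field, with the Secret, key, and path names used by the Secrets documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      items:
      - key: username                # project only this key...
        path: my-group/my-username   # ...to this relative path inside the mount
```
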
@@ -260,13 +268,14 @@ secrets change.
  kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml
  ```

-- In your shell, display the content of `SECRET_USERNAME` container environment variable
+- In your shell, display the content of the `SECRET_USERNAME` container environment variable.

  ```shell
  kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'
  ```

-  The output is
+  The output is similar to:

  ```
  backend-admin
  ```

@@ -290,12 +299,14 @@ secrets change.
  kubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml
  ```

-- In your shell, display the container environment variables
+- In your shell, display the container environment variables.

  ```shell
  kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'
  ```

-  The output is
+  The output is similar to:

  ```
  DB_USERNAME=db-admin
  BACKEND_USERNAME=backend-admin
  ```

@@ -313,7 +324,8 @@ This functionality is available in Kubernetes v1.6 and later.
  kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
  ```

-- Use envFrom to define all of the Secret's data as container environment variables. The key from the Secret becomes the environment variable name in the Pod.
+- Use envFrom to define all of the Secret's data as container environment variables.
+  The key from the Secret becomes the environment variable name in the Pod.

  {{% code file="pods/inject/pod-secret-envFrom.yaml" %}}
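
The referenced `pod-secret-envFrom.yaml` is not shown in the diff; it is roughly this shape — a sketch using `envFrom` with the `test-secret` created above (the container name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-secret
spec:
  containers:
  - name: envars-test-container   # illustrative name
    image: nginx                  # illustrative image
    envFrom:
    - secretRef:
        name: test-secret         # every key in this Secret becomes an environment variable
```
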
@@ -323,18 +335,148 @@ This functionality is available in Kubernetes v1.6 and later.
  kubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml
  ```

-- In your shell, display `username` and `password` container environment variables
+- In your shell, display the `username` and `password` container environment variables.

  ```shell
  kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
  ```

-  The output is
+  The output is similar to:

  ```
  username: my-app
  password: 39528$vdg7Jb
  ```

## Example: Provide prod/test credentials to Pods using Secrets {#provide-prod-test-creds}

This example illustrates a Pod which consumes a secret containing production credentials and
another Pod which consumes a secret with test environment credentials.

1. Create a secret for prod environment credentials:

   ```shell
   kubectl create secret generic prod-db-secret --from-literal=username=produser --from-literal=password=Y4nys7f11
   ```

   The output is similar to:

   ```
   secret "prod-db-secret" created
   ```

1. Create a secret for test environment credentials:

   ```shell
   kubectl create secret generic test-db-secret --from-literal=username=testuser --from-literal=password=iluvtests
   ```

   The output is similar to:

   ```
   secret "test-db-secret" created
   ```

   {{< note >}}
   Special characters such as `$`, `\`, `*`, `=`, and `!` will be interpreted by your
   [shell](https://en.wikipedia.org/wiki/Shell_(computing)) and require escaping.

   In most shells, the easiest way to escape the password is to surround it with single quotes (`'`).
   For example, if your actual password is `S!B\*d$zDsb=`, you should execute the command as follows:

   ```shell
   kubectl create secret generic dev-db-secret --from-literal=username=devuser --from-literal=password='S!B\*d$zDsb='
   ```

   You do not need to escape special characters in passwords from files (`--from-file`).
   {{< /note >}}

1. Create the Pod manifests:

   ```shell
   cat <<EOF > pod.yaml
   apiVersion: v1
   kind: List
   items:
   - kind: Pod
     apiVersion: v1
     metadata:
       name: prod-db-client-pod
       labels:
         name: prod-db-client
     spec:
       volumes:
       - name: secret-volume
         secret:
           secretName: prod-db-secret
       containers:
       - name: db-client-container
         image: myClientImage
         volumeMounts:
         - name: secret-volume
           readOnly: true
           mountPath: "/etc/secret-volume"
   - kind: Pod
     apiVersion: v1
     metadata:
       name: test-db-client-pod
       labels:
         name: test-db-client
     spec:
       volumes:
       - name: secret-volume
         secret:
           secretName: test-db-secret
       containers:
       - name: db-client-container
         image: myClientImage
         volumeMounts:
         - name: secret-volume
           readOnly: true
           mountPath: "/etc/secret-volume"
   EOF
   ```

   {{< note >}}
   Note how the specs for the two Pods differ only in one field; this facilitates creating Pods
   with different capabilities from a common Pod template.
   {{< /note >}}

1. Apply all those objects on the API server by running:

   ```shell
   kubectl create -f pod.yaml
   ```

   Both containers will have the following files present on their filesystems with the values
   for each container's environment:

   ```
   /etc/secret-volume/username
   /etc/secret-volume/password
   ```

You could further simplify the base Pod specification by using two service accounts:

1. `prod-user` with the `prod-db-secret`
1. `test-user` with the `test-db-secret`

The Pod specification is shortened to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-db-client-pod
  labels:
    name: prod-db-client
spec:
  serviceAccount: prod-db-client
  containers:
  - name: db-client-container
    image: myClientImage
```

### References

- [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)

@@ -154,7 +154,7 @@ description: |-
    <p><code><b>export POD_NAME="$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')"</b></code><br />
    <code><b>echo Name of the Pod: $POD_NAME</b></code></p>
    <p>To see the output of our application, run a <code>curl</code> request:</p>
-   <p><code><b>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/</b></code></p>
+   <p><code><b>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/</b></code></p>
    <p>The URL is the route to the API of the Pod.</p>
    </div>
    </div>

@@ -38,53 +38,13 @@ Let's say you have a Deployment containing a single `nginx` replica

+{{% code file="service/pod-with-graceful-termination.yaml" %}}
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx-deployment
-  labels:
-    app: nginx
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      terminationGracePeriodSeconds: 120 # extra long grace period
-      containers:
-      - name: nginx
-        image: nginx:latest
-        ports:
-        - containerPort: 80
-        lifecycle:
-          preStop:
-            exec:
-              # Real life termination may take any time up to terminationGracePeriodSeconds.
-              # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,
-              # at 120 seconds container will be forcibly terminated.
-              # Note, all this time nginx will keep processing requests.
-              command: [
-                "/bin/sh", "-c", "sleep 180"
-              ]
+{{% code file="service/explore-graceful-termination-nginx.yaml" %}}

----
-
-apiVersion: v1
-kind: Service
-metadata:
-  name: nginx-service
-spec:
-  selector:
-    app: nginx
-  ports:
-  - protocol: TCP
-    port: 80
-    targetPort: 80
+Now create the Deployment Pod and Service using the above files:

+```shell
+kubectl apply -f pod-with-graceful-termination.yaml
+kubectl apply -f explore-graceful-termination-nginx.yaml
+```

Once the Pod and Service are running, you can get the name of any associated EndpointSlices:

@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  selector:
+    app: nginx
+  ports:
+  - protocol: TCP
+    port: 80
+    targetPort: 80

@@ -78,8 +78,15 @@ releases may also occur in between these.

| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
-| August 2023           | 2023-08-04           | 2023-08-09  |
+| August 2023           | 2023-08-04           | 2023-08-23  |
| September 2023        | 2023-09-08           | 2023-09-13  |
| October 2023          | 2023-10-13           | 2023-10-18  |
+| November 2023         | N/A                  | N/A         |
+| December 2023         | 2023-12-01           | 2023-12-06  |

+**Note:** Due to overlap with KubeCon NA 2023 and lack of availability of
+Release Managers and Google Build Admins, it has been decided to skip patch
+releases in November. Instead, we'll have patch releases early in December.

## Detailed Release History for Active Branches

@@ -50,7 +50,7 @@ of events with the standard output stream. This demonstration uses a
manifest for a Pod with a single container that writes text to the standard
output stream every second.

-{{< codenew file="debug/counter-pod.yaml" >}}
+{{% codenew file="debug/counter-pod.yaml" %}}

To start this Pod, use the following command:

@@ -243,7 +243,7 @@ A Pod runs a single container, and the container writes to two different log
files, using two different formats. Here is the manifest for the Pod:

-{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}}

It would be very messy to have events with different formats in the same log
by redirecting the events into the standard output stream

@@ -253,8 +253,7 @@ tails one of the files and relays the events to its own `stdout`.

Below is the manifest for a Pod with two sidecar containers.

-{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml"
->}}
+{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}

When this Pod runs, each log can be streamed separately using
the following commands:

@@ -323,7 +322,7 @@ The first file contains a
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to
configure fluentd.

-{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
+{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}}

{{< note >}}
Configuring fluentd is beyond the scope of this article. You will find

@@ -335,7 +334,7 @@ The second file is a manifest for a Pod with a sidecar container that
runs fluentd. The Pod mounts a volume where fluentd can pick up its
configuration.

-{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}

After a few minutes, events will appear in the Stackdriver interface.

@@ -330,7 +330,7 @@ type: kubernetes.io/tls

Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a fully qualified domain name (FQDN), for `https-example.foo.com`.

-{{< codenew file="service/networking/tls-example-ingress.yaml" >}}
+{{% codenew file="service/networking/tls-example-ingress.yaml" %}}

{{< note >}}

|
@ -49,7 +49,7 @@ Voici des cas d'utilisation typiques pour les déploiements:
|
|||
Voici un exemple de déploiement.
|
||||
Il crée un ReplicaSet pour faire apparaître trois pods `nginx`:
|
||||
|
||||
{{< codenew file="controllers/nginx-deployment.yaml" >}}
|
||||
{{% codenew file="controllers/nginx-deployment.yaml" %}}
|
||||
|
||||
Dans cet exemple:
|
||||
|
||||
|
|
|
@@ -41,7 +41,7 @@ use a Deployment instead, and define your application in the spec section

## Example

-{{< codenew file="controllers/frontend.yaml" >}}
+{{% codenew file="controllers/frontend.yaml" %}}

Saving this manifest as `frontend.yaml` and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods it manages.

@@ -145,7 +145,7 @@ labels matching the selector of one of your ReplicaSets. Because a ReplicaSet

Take the previous ReplicaSet example, together with the Pods specified in the following manifest:

-{{< codenew file="pods/pod-rs.yaml" >}}
+{{% codenew file="pods/pod-rs.yaml" %}}

As those Pods do not have a controller (or any object) as their owner reference, and they match the selector of the frontend ReplicaSet, they will immediately be acquired by that ReplicaSet.

@@ -291,7 +291,7 @@ A ReplicaSet can also be a target for
A ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting
the ReplicaSet we created in the previous example.

-{{< codenew file="controllers/hpa-rs.yaml" >}}
+{{% codenew file="controllers/hpa-rs.yaml" %}}

Saving this manifest as `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
create the defined HPA, which autoscales the target ReplicaSet based on CPU usage

@@ -95,7 +95,7 @@ Suppose you have a 4-node cluster where 3 Pods labeled `foo:bar`

If we want an incoming Pod to be evenly spread with the existing Pods across the zones, the spec can be:

-{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}}

`topologyKey: zone` implies that the even distribution is only applied to nodes that have the label "zone:<any value>" present. `whenUnsatisfiable: DoNotSchedule` tells the scheduler to leave the Pod in the Pending state if the incoming Pod cannot satisfy the constraint.

@@ -133,7 +133,7 @@ This builds on the previous example. Suppose you have a 4-node cluster

You can use 2 TopologySpreadConstraints to control how the Pods are spread across both zones and nodes:

-{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}}

In this case, to satisfy the first constraint, the incoming Pod can only be placed in "zoneB"; while to satisfy the second constraint, the incoming Pod can only be placed on "node4". The result is the intersection of the results of the two constraints, so the only viable option is to place the incoming Pod on "node4".

@@ -182,7 +182,7 @@ There are some implicit conventions worth noting here:

and you know that "zoneC" must be excluded. In this case, you can write the yaml below, so that "mypod" is placed in "zoneB" rather than "zoneC". `spec.nodeSelector` is handled in the same way.

-{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}

### Cluster-level default constraints

@@ -108,13 +108,14 @@ Use this method to include example YAML files when writing

When adding a new standalone example file, such as a YAML file, place the code in one of the `<LANG>/examples/` subdirectories, where `<LANG>` is the language used on your page.
In your file, use the `codenew` shortcode:

-<pre>{{< codenew file="<RELPATH>/my-example-yaml>" >}}</pre>
+```none
+{{%/* codenew file="<RELPATH>/my-example-yaml>" */%}}
+```

where `<RELPATH>` is the path to the file to include, relative to the `examples` directory.
The following Hugo shortcode references a YAML file located at `/content/en/examples/pods/storage/gce-volume.yaml`.

```none
-{{</* codenew file="pods/storage/gce-volume.yaml" */>}}
+{{%/* codenew file="pods/storage/gce-volume.yaml" */%}}
```

{{< note >}}

@@ -1228,7 +1228,7 @@ includes recommendations for restricting this access in existing clusters.

If you want new clusters to retain this level of access in the aggregated roles,
you can create the following ClusterRole:

-{{< codenew file="access/endpoints-aggregated.yaml" >}}
+{{% codenew file="access/endpoints-aggregated.yaml" %}}

## Upgrading from ABAC

@@ -33,7 +33,7 @@ If your environment does not support this feature, you can use

The backend is a simple greeting microservice.
Here is the configuration file for the backend Deployment:

-{{< codenew file="service/access/backend-deployment.yaml" >}}
+{{% codenew file="service/access/backend-deployment.yaml" %}}

Create the backend Deployment:

@@ -91,7 +91,7 @@ A Service uses {{< glossary_tooltip text="selectors" term_id="selector" >}}

First, explore the Service configuration file:

-{{< codenew file="service/access/backend-service.yaml" >}}
+{{% codenew file="service/access/backend-service.yaml" %}}

In the configuration file, you can see that the Service,
named `hello`, routes traffic to the Pods that have the labels `app: hello` and `tier: backend`.

@@ -120,16 +120,16 @@ The Pods of the frontend Deployment run an nginx image
configured to route requests to the `hello` backend Service.
Here is the nginx configuration file:

-{{< codenew file="service/access/frontend-nginx.conf" >}}
+{{% codenew file="service/access/frontend-nginx.conf" %}}

As with the backend, the frontend has a Deployment and a Service.
An important difference to note between the backend and frontend Services is that
the frontend Service is configured with `type: LoadBalancer`, which means the Service uses
a load balancer provisioned by your cloud provider and will be reachable from outside the cluster.

-{{< codenew file="service/access/frontend-service.yaml" >}}
+{{% codenew file="service/access/frontend-service.yaml" %}}

-{{< codenew file="service/access/frontend-deployment.yaml" >}}
+{{% codenew file="service/access/frontend-deployment.yaml" %}}

Create the frontend Deployment and Service:

@@ -27,7 +27,7 @@ with two running instances.

Here is the configuration file for the application Deployment:

-{{< codenew file="service/access/hello-application.yaml" >}}
+{{% codenew file="service/access/hello-application.yaml" %}}

1. Run a Hello World application in your cluster:
   Create the application Deployment using the file above:

@@ -74,7 +74,7 @@ For cloud-controller-managers not in the Kubernetes core, you can

For providers that are already in Kubernetes, you can run the in-tree cloud-controller-manager as a DaemonSet in your cluster.
Use the following as a guide:

-{{< codenew file="admin/cloud/ccm-example.yaml" >}}
+{{% codenew file="admin/cloud/ccm-example.yaml" %}}

## Limitations

@@ -64,7 +64,7 @@ in the container's resources manifest. To specify a CPU limit,

In this exercise, you create a Pod that has a single container. The container has a request of 0.5 CPU and a limit of 1 CPU. Here is the Pod's configuration file:

-{{< codenew file="pods/resource/cpu-request-limit.yaml" >}}
+{{% codenew file="pods/resource/cpu-request-limit.yaml" %}}

The `args` section of the configuration file provides arguments for the container when it starts. The `-cpus "2"` argument tells the container to attempt to use 2 CPUs.
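
For reference, the `cpu-request-limit.yaml` file referenced above is roughly this sketch; the names, namespace, and stress image follow the docs example, but treat the details as illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress    # a small CPU stress image; illustrative
    resources:
      limits:
        cpu: "1"          # the limit described above
      requests:
        cpu: "0.5"        # the request described above
    args:
    - -cpus
    - "2"                 # the container tries to use 2 CPUs and is throttled at 1
```
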
@@ -147,7 +147,7 @@ Pod scheduling is based on requests. A Pod is scheduled to run

In this exercise, you create a Pod that has a CPU request so large that it exceeds the capacity of any node in your cluster. Here is the configuration file for a Pod
that has a single container. The container requests 100 CPU, which is likely to exceed the capacity of every node in your cluster.

-{{< codenew file="pods/resource/cpu-request-limit-2.yaml" >}}
+{{% codenew file="pods/resource/cpu-request-limit-2.yaml" %}}

Create the Pod:

@@ -60,7 +60,7 @@ in the container's resources manifest. To specify a memory limit,

In this exercise, you create a Pod that has a single container. The container has a memory request of 100 MiB and a memory limit of 200 MiB. Here is the configuration file
for the Pod:

-{{< codenew file="pods/resource/memory-request-limit.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit.yaml" %}}

The `args` section of your configuration file provides arguments for the container when it starts.
The `"--vm-bytes", "150M"` arguments tell the container to allocate 150 MiB of memory.
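
For reference, `memory-request-limit.yaml` looks roughly like this sketch; the `polinux/stress` image and argument values follow the docs example, but treat the details as illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress   # a small memory stress image; illustrative
    resources:
      requests:
        memory: "100Mi"     # the request described above
      limits:
        memory: "200Mi"     # the limit described above
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]   # allocate 150 MiB
```
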
@@ -123,7 +123,7 @@ If a terminated container can be restarted, the kubelet restarts it, as

In this exercise, you create a Pod that attempts to allocate more memory than its limit.
Here is the configuration file for a Pod that has one container with a memory request of 50 MiB and a memory limit of 100 MiB:

-{{< codenew file="pods/resource/memory-request-limit-2.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit-2.yaml" %}}

In the `args` section of the configuration file, you can see that the container
will attempt to allocate 250 MiB of memory, which is well above the 100 MiB limit.

@@ -226,7 +226,7 @@ Pod scheduling is based on requests. A Pod is scheduled to run

In this exercise, you create a Pod whose memory request is so large that it exceeds the memory capacity of any node in your cluster. Here is the configuration file for a Pod that has a single container with a request for 1000 GiB of memory, which likely exceeds the capacity of every node in your cluster.

-{{< codenew file="pods/resource/memory-request-limit-3.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit-3.yaml" %}}

Create the Pod:

@@ -62,7 +62,7 @@ This page shows how to assign a Pod to a particular node in a cluster

The Pod configuration file describes a Pod that has a node selector of type `disktype: ssd`. This means that the Pod will be scheduled on a node that has the label `disktype=ssd`.

-{{< codenew file="pods/pod-nginx.yaml" >}}
+{{% codenew file="pods/pod-nginx.yaml" %}}

1. Use the configuration file to create a Pod that will be scheduled on your chosen node:
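
The referenced `pods/pod-nginx.yaml` is roughly this sketch; the `nodeSelector` field is the point:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # schedule only onto nodes labeled disktype=ssd
```
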
@@ -86,7 +86,7 @@ The Pod configuration file describes a Pod that has a node selector

You can also schedule a Pod onto a specific node by using the `nodeName` field.

-{{< codenew file="pods/pod-nginx-specific-node.yaml" >}}
+{{% codenew file="pods/pod-nginx-specific-node.yaml" %}}

Use the configuration file to create a Pod that will be scheduled onto `foo-node` only.

@@ -29,7 +29,7 @@ Many applications running for long periods of time eventually

In this exercise, you create a Pod that runs a container based on the `registry.k8s.io/busybox` image. Here is the configuration file for the Pod:

-{{< codenew file="pods/probe/exec-liveness.yaml" >}}
+{{% codenew file="pods/probe/exec-liveness.yaml" %}}

In the configuration file, you can see that the Pod has a single container.
The `periodSeconds` field specifies that the kubelet should perform a liveness check every 5 seconds. The `initialDelaySeconds` field tells the kubelet that it should wait 5 seconds before performing the first probe. To perform a probe, the kubelet executes the command `cat /tmp/healthy` in the container. If the command succeeds, it returns 0, and the kubelet considers the container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the container and restarts it.
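
For reference, `exec-liveness.yaml` is roughly this sketch, matching the fields described above; the touch/sleep/rm sequence makes the probe succeed for 30 seconds and then fail:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5   # wait 5 seconds before the first probe
      periodSeconds: 5         # probe every 5 seconds
```
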
@@ -104,7 +104,7 @@ liveness-exec   1/1     Running   1          1m

Another kind of liveness probe uses an HTTP GET request. Here is the configuration
for a Pod that runs a container based on the `registry.k8s.io/liveness` image.

-{{< codenew file="pods/probe/http-liveness.yaml" >}}
+{{% codenew file="pods/probe/http-liveness.yaml" %}}

In the configuration file, you can see that the Pod has a single container.
The `periodSeconds` field specifies that the kubelet should perform a liveness probe every 3 seconds. The `initialDelaySeconds` field tells the kubelet that it should wait 3 seconds before performing the first probe. To perform a probe, the kubelet sends an HTTP GET request to the server that is running in the container and listening on port 8080. If the handler for the server's `/healthz` path returns a success code, the kubelet considers the container to be alive and healthy. If the handler returns a failure code, the kubelet kills the container and restarts it.

@@ -152,7 +152,7 @@ In versions after v1.13, the environment variable settings

A third type of liveness probe uses a TCP socket. With this configuration, the kubelet will attempt to open a socket to your container on the specified port.
If it can establish a connection, the container is considered healthy; if it cannot, it is a failure.

-{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
+{{% codenew file="pods/probe/tcp-liveness-readiness.yaml" %}}

As you can see, the configuration for a TCP check is quite similar to an HTTP check.
This example uses both readiness and liveness probes. The kubelet will send the first readiness probe 5 seconds after the container starts. It will attempt to connect to the `goproxy` container on port 8080. If the probe succeeds, the container will be marked as ready. The kubelet will continue to run this check every 10 seconds.

@@ -75,7 +75,7 @@ to set up
[dynamic provisioning](/docs/concepts/storage/dynamic-provisioning/).

Here is the configuration file for the hostPath PersistentVolume:
-{{< codenew file="pods/storage/pv-volume.yaml" >}}
+{{% codenew file="pods/storage/pv-volume.yaml" %}}

The configuration file specifies that the volume's path on the node is `/mnt/data`. It also specifies a size of 10 gibibytes and an access mode of `ReadWriteOnce`, meaning the volume can be mounted read-write by only a single node. The file defines a [StorageClass name](/docs/concepts/storage/persistent-volumes/#class) of `manual`, which will be used to bind a PersistentVolumeClaim to this PersistentVolume.
@ -103,7 +103,7 @@ La prochaine étape est de créer un PersistentVolumeClaim (demande de stockage)
|
|||
Dans cet exercice, vous créez un PersistentVolumeClaim qui demande un volume d'au moins 3 GB, et qui peut être monté en lecture et écriture sur au moins un noeud.
|
||||
|
||||
Voici le fichier de configuration du PersistentVolumeClaim:
|
||||
{{< codenew file="pods/storage/pv-claim.yaml" >}}
|
||||
{{% codenew file="pods/storage/pv-claim.yaml" %}}
|
||||
|
||||
Créez le PersistentVolumeClaim:
|
||||
|
||||
|
@ -137,7 +137,7 @@ La prochaine étape est de créer un Pod qui utilise le PersistentVolumeClaim co
|
|||
|
||||
Voici le fichier de configuration du Pod:
|
||||
|
||||
{{< codenew file="pods/storage/pv-pod.yaml" >}}
|
||||
{{% codenew file="pods/storage/pv-pod.yaml" %}}
|
||||
|
||||
Notez que le fichier de configuration du Pod spécifie un PersistentVolumeClaim et non un PersistentVolume. Du point de vue du Pod, la demande est un volume de stockage.
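La section `volumes` du Pod référence donc la demande par son nom, schématiquement (noms hypothétiques) :

```yaml
volumes:
  - name: task-pv-storage          # nom hypothétique
    persistentVolumeClaim:
      claimName: task-pv-claim     # nom du PersistentVolumeClaim créé précédemment
```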
|
||||
|
||||
|
@ -200,7 +200,7 @@ Vous pouvez maintenant arrêter la session shell vers votre noeud.
|
|||
Vous pouvez monter plusieurs fois un même PersistentVolume
|
||||
à plusieurs endroits différents dans votre container nginx:
|
||||
|
||||
{{< codenew file="pods/storage/pv-duplicate.yaml" >}}
|
||||
{{% codenew file="pods/storage/pv-duplicate.yaml" %}}
|
||||
|
||||
- `/usr/share/nginx/html` pour le site statique
|
||||
- `/etc/nginx/nginx.conf` pour la configuration par défaut
|
||||
|
|
|
@ -453,7 +453,7 @@ configmap/special-config-2-c92b5mmcf2 created
|
|||
|
||||
1. Attribuez la valeur de `special.how`, définie dans le ConfigMap, à la variable d'environnement `SPECIAL_LEVEL_KEY` dans la spécification du Pod.
|
||||
|
||||
{{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}}
|
||||
{{% codenew file="pods/pod-single-configmap-env-variable.yaml" %}}
|
||||
|
||||
Créez le pod:
|
||||
|
||||
|
@ -467,7 +467,7 @@ configmap/special-config-2-c92b5mmcf2 created
|
|||
|
||||
* Comme avec l'exemple précédent, créez d'abord les ConfigMaps.
|
||||
|
||||
{{< codenew file="configmap/configmaps.yaml" >}}
|
||||
{{% codenew file="configmap/configmaps.yaml" %}}
|
||||
|
||||
Créez le ConfigMap:
|
||||
|
||||
|
@ -477,7 +477,7 @@ configmap/special-config-2-c92b5mmcf2 created
|
|||
|
||||
* Définissez les variables d'environnement dans la spécification Pod.
|
||||
|
||||
{{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}}
|
||||
{{% codenew file="pods/pod-multiple-configmap-env-variable.yaml" %}}
|
||||
|
||||
Créez le pod:
|
||||
|
||||
|
@ -495,7 +495,7 @@ Cette fonctionnalité est disponible dans Kubernetes v1.6 et versions ultérieur
|
|||
|
||||
* Créez un ConfigMap contenant plusieurs paires clé-valeur.
|
||||
|
||||
{{< codenew file="configmap/configmap-multikeys.yaml" >}}
|
||||
{{% codenew file="configmap/configmap-multikeys.yaml" %}}
|
||||
|
||||
Créez le ConfigMap:
|
||||
|
||||
|
@ -506,7 +506,7 @@ Cette fonctionnalité est disponible dans Kubernetes v1.6 et versions ultérieur
|
|||
* Utilisez `envFrom` pour définir toutes les données du ConfigMap en tant que variables d'environnement du conteneur.
|
||||
La clé de ConfigMap devient le nom de la variable d'environnement dans le pod.
|
||||
|
||||
{{< codenew file="pods/pod-configmap-envFrom.yaml" >}}
|
||||
{{% codenew file="pods/pod-configmap-envFrom.yaml" %}}
|
||||
|
||||
Créez le pod:
|
||||
|
||||
|
@ -522,7 +522,7 @@ Vous pouvez utiliser des variables d'environnement définies par ConfigMap dans
|
|||
|
||||
Par exemple, la spécification de pod suivante
|
||||
|
||||
{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}}
|
||||
{{% codenew file="pods/pod-configmap-env-var-valueFrom.yaml" %}}
|
||||
|
||||
créé en exécutant
|
||||
|
||||
|
@ -543,7 +543,7 @@ Le contenu du fichier devient la valeur de la clé.
|
|||
|
||||
Les exemples de cette section se réfèrent à un ConfigMap nommé special-config, illustré ci-dessous.
|
||||
|
||||
{{< codenew file="configmap/configmap-multikeys.yaml" >}}
|
||||
{{% codenew file="configmap/configmap-multikeys.yaml" %}}
|
||||
|
||||
Créez le ConfigMap:
|
||||
|
||||
|
@ -557,7 +557,7 @@ Ajoutez le nom ConfigMap sous la section `volumes` de la spécification Pod.
|
|||
Ceci ajoute les données ConfigMap au répertoire spécifié comme `volumeMounts.mountPath` (dans ce cas, `/etc/config`).
|
||||
La section `command` répertorie les fichiers du répertoire dont les noms correspondent aux clés du ConfigMap.
|
||||
|
||||
{{< codenew file="pods/pod-configmap-volume.yaml" >}}
|
||||
{{% codenew file="pods/pod-configmap-volume.yaml" %}}
|
||||
|
||||
Créez le pod:
|
||||
|
||||
|
@ -581,7 +581,7 @@ S'il y a des fichiers dans le dossier `/etc/config/`, ils seront supprimés.
|
|||
Utilisez le champ `path` pour spécifier le chemin de fichier souhaité pour les éléments de configmap spécifiques.
|
||||
Dans ce cas, le `SPECIAL_LEVEL` sera monté dans le volume `config-volume` au chemin `/etc/config/keys`.
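Schématiquement, c'est la liste `items` du volume qui réalise cette projection (extrait simplifié) :

```yaml
volumes:
  - name: config-volume
    configMap:
      name: special-config
      items:
        - key: SPECIAL_LEVEL
          path: keys        # la clé sera montée sous /etc/config/keys
```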
|
||||
|
||||
{{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}}
|
||||
{{% codenew file="pods/pod-configmap-volume-specific-key.yaml" %}}
|
||||
|
||||
Créez le Pod :
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ Dans cet exercice, vous allez créer un Pod qui a un conteneur d'application et
|
|||
|
||||
Voici le fichier de configuration du Pod :
|
||||
|
||||
{{< codenew file="pods/init-containers.yaml" >}}
|
||||
{{% codenew file="pods/init-containers.yaml" %}}
|
||||
|
||||
Dans le fichier de configuration, vous pouvez voir que le Pod a un Volume que le conteneur d'initialisation et le conteneur d'application partagent.
|
||||
|
||||
|
|
|
@ -269,7 +269,7 @@ Ce comportement est configuré sur un PodSpec utilisant un type de ProjectedVolu
|
|||
[ServiceAccountToken](/docs/concepts/storage/volumes/#projected). Pour fournir à un
|
||||
Pod un token ayant une audience de "vault" et une durée de validité de deux heures, vous devriez configurer ce qui suit dans votre PodSpec :
|
||||
|
||||
{{< codenew file="pods/pod-projected-svc-token.yaml" >}}
|
||||
{{% codenew file="pods/pod-projected-svc-token.yaml" %}}
|
||||
|
||||
Créez le Pod
|
||||
|
||||
|
|
|
@ -29,7 +29,7 @@ Dans cet exercice, vous créez un pod qui contient un seul conteneur. Ce Pod a u
|
|||
[emptyDir](/fr/docs/concepts/storage/volumes/#emptydir) qui dure toute la vie du Pod, même si le conteneur se termine et redémarre.
|
||||
Voici le fichier de configuration du Pod :
|
||||
|
||||
{{< codenew file="pods/storage/redis.yaml" >}}
|
||||
{{% codenew file="pods/storage/redis.yaml" %}}
|
||||
|
||||
1. Créez le Pod :
|
||||
|
||||
|
|
|
@ -34,7 +34,7 @@ Les noms de ressources supplémentaires valides ont la forme `example.com/foo` o
|
|||
|
||||
Voici le fichier de configuration d'un Pod qui a un seul conteneur :
|
||||
|
||||
{{< codenew file="pods/resource/extended-resource-pod.yaml" >}}
|
||||
{{% codenew file="pods/resource/extended-resource-pod.yaml" %}}
|
||||
|
||||
Dans le fichier de configuration, vous pouvez constater que le conteneur demande 3 dongles.
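Schématiquement, la section `resources` correspondante est la suivante (en supposant que la ressource étendue s'appelle `example.com/dongle`) :

```yaml
resources:
  requests:
    example.com/dongle: 3
  limits:
    example.com/dongle: 3
```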
|
||||
|
||||
|
@ -70,7 +70,7 @@ Requests:
|
|||
Voici le fichier de configuration d'un Pod qui a un seul conteneur. Le conteneur demande
|
||||
deux dongles.
|
||||
|
||||
{{< codenew file="pods/resource/extended-resource-pod-2.yaml" >}}
|
||||
{{% codenew file="pods/resource/extended-resource-pod-2.yaml" %}}
|
||||
|
||||
Kubernetes ne pourra pas satisfaire la demande de deux dongles, parce que le premier Pod
|
||||
a utilisé trois des quatre dongles disponibles.
|
||||
|
|
|
@ -170,7 +170,7 @@ Vous avez réussi à définir vos identifiants de Docker comme un Secret appelé
|
|||
|
||||
Voici un fichier de configuration pour un Pod qui a besoin d'accéder à vos identifiants Docker dans `regcred` :
|
||||
|
||||
{{< codenew file="pods/private-reg-pod.yaml" >}}
|
||||
{{% codenew file="pods/private-reg-pod.yaml" %}}
|
||||
|
||||
Téléchargez le fichier ci-dessus :
|
||||
|
||||
|
|
|
@ -48,7 +48,7 @@ Pour qu'un Pod reçoive une classe de QoS Guaranteed :
|
|||
Ci-dessous le fichier de configuration d'un Pod qui a un seul conteneur.
|
||||
Le conteneur dispose d'une limite de mémoire et d'une demande de mémoire, tous deux égaux à 200 MiB. Le conteneur a également une limite CPU et une demande CPU, toutes deux égales à 700 milliCPU :
|
||||
|
||||
{{< codenew file="pods/qos/qos-pod.yaml" >}}
|
||||
{{% codenew file="pods/qos/qos-pod.yaml" %}}
|
||||
|
||||
Créez le Pod :
|
||||
|
||||
|
@ -99,7 +99,7 @@ Un Pod reçoit une classe QoS de Burstable si :
|
|||
|
||||
Voici le fichier de configuration d'un pod qui a un seul conteneur. Le conteneur a une limite de mémoire de 200 MiB et une demande de mémoire de 100 MiB.
|
||||
|
||||
{{< codenew file="pods/qos/qos-pod-2.yaml" >}}
|
||||
{{% codenew file="pods/qos/qos-pod-2.yaml" %}}
|
||||
|
||||
Créez le Pod :
|
||||
|
||||
|
@ -143,7 +143,7 @@ avoir des limites ou des demandes de mémoire ou de CPU.
|
|||
|
||||
Voici le fichier de configuration d'un Pod qui a un seul conteneur. Le conteneur n'a ni limites ni demandes de mémoire ou de CPU :
|
||||
|
||||
{{< codenew file="pods/qos/qos-pod-3.yaml" >}}
|
||||
{{% codenew file="pods/qos/qos-pod-3.yaml" %}}
|
||||
|
||||
Créez le Pod :
|
||||
|
||||
|
@ -181,7 +181,7 @@ kubectl delete pod qos-demo-3 --namespace=qos-example
|
|||
Voici le fichier de configuration d'un Pod qui a deux conteneurs. Un conteneur spécifie une
|
||||
demande de mémoire de 200 MiB. L'autre conteneur ne spécifie aucune demande ou limite.
|
||||
|
||||
{{< codenew file="pods/qos/qos-pod-4.yaml" >}}
|
||||
{{% codenew file="pods/qos/qos-pod-4.yaml" %}}
|
||||
|
||||
Notez que le pod répond aux critères de la classe QoS Burstable. En d'autres termes, il ne répond pas aux exigences de la classe de qualité de service Guaranteed, et l'un de ses conteneurs dispose d'une demande de mémoire.
|
||||
|
||||
|
|
|
@ -23,7 +23,7 @@ Vous pouvez utiliser cette fonctionnalité pour configurer les conteneurs coopé
|
|||
|
||||
Le partage de l'espace de nommage du processus est activé en utilisant le champ `shareProcessNamespace` de `v1.PodSpec`. Par exemple:
|
||||
|
||||
{{< codenew file="pods/share-process-namespace.yaml" >}}
|
||||
{{% codenew file="pods/share-process-namespace.yaml" %}}
|
||||
|
||||
1. Créez le pod `nginx` sur votre cluster :
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ Dans cet exercice, vous allez créer un pod contenant un conteneur.
|
|||
Le conteneur exécute une image nginx.
|
||||
Voici le fichier de configuration du Pod:
|
||||
|
||||
{{< codenew file="application/shell-demo.yaml" >}}
|
||||
{{% codenew file="application/shell-demo.yaml" %}}
|
||||
|
||||
Créez le Pod:
|
||||
|
||||
|
|
|
@ -38,7 +38,7 @@ Le champ `command` correspond à `entrypoint` dans certains runtimes de containe
|
|||
|
||||
Dans cet exercice, vous allez créer un Pod qui exécute un container.
|
||||
Le fichier de configuration pour le Pod définit une commande ainsi que deux arguments:
|
||||
{{< codenew file="pods/commands.yaml" >}}
|
||||
{{% codenew file="pods/commands.yaml" %}}
|
||||
|
||||
1. Créez un Pod en utilisant le fichier YAML de configuration suivant:
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ dans le fichier de configuration.
|
|||
|
||||
Dans cet exercice, vous allez créer un Pod qui exécute un container. Le fichier de configuration pour ce Pod contient une variable d'environnement appelée `DEMO_GREETING`, dont la valeur est `"Hello from the environment"`. Voici le fichier de configuration du Pod:
|
||||
|
||||
{{< codenew file="pods/inject/envars.yaml" >}}
|
||||
{{% codenew file="pods/inject/envars.yaml" %}}
|
||||
|
||||
1. Créez un Pod à partir de ce fichier:
|
||||
|
||||
|
|
|
@ -25,7 +25,7 @@ Pour définir une variable d'environnement dépendante, vous pouvez utiliser le
|
|||
|
||||
Dans cet exercice, vous allez créer un Pod qui exécute un container. Le fichier de configuration de ce Pod définit des variables d'environnement interdépendantes, avec une réutilisation entre les différentes variables. Voici le fichier de configuration de ce Pod:
|
||||
|
||||
{{< codenew file="pods/inject/dependent-envars.yaml" >}}
|
||||
{{% codenew file="pods/inject/dependent-envars.yaml" %}}
|
||||
|
||||
1. Créez un Pod en utilisant ce fichier de configuration:
|
||||
|
||||
|
|
|
@ -39,7 +39,7 @@ afin de réduire les risques de sécurité liés à l'utilisation d'un outil ext
|
|||
Voici un fichier de configuration que vous pouvez utiliser pour créer un Secret
|
||||
qui contiendra votre identifiant et mot de passe:
|
||||
|
||||
{{< codenew file="pods/inject/secret.yaml" >}}
|
||||
{{% codenew file="pods/inject/secret.yaml" %}}
|
||||
|
||||
1. Créez le Secret:
|
||||
|
||||
|
@ -99,7 +99,7 @@ montrée précédemment permet de démontrer et comprendre le fonctionnement des
|
|||
|
||||
Voici un fichier de configuration qui permet de créer un Pod:
|
||||
|
||||
{{< codenew file="pods/inject/secret-pod.yaml" >}}
|
||||
{{% codenew file="pods/inject/secret-pod.yaml" %}}
|
||||
|
||||
1. Créez le Pod:
|
||||
|
||||
|
@ -255,7 +255,7 @@ permettant de redémarrer les containers lors d'une mise à jour du Secret.
|
|||
* Assignez la valeur de `backend-username` définie dans le Secret
|
||||
à la variable d'environnement `SECRET_USERNAME` dans la configuration du Pod.
|
||||
|
||||
{{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}
|
||||
{{% codenew file="pods/inject/pod-single-secret-env-variable.yaml" %}}
|
||||
|
||||
* Créez le Pod:
|
||||
|
||||
|
@ -286,7 +286,7 @@ permettant de redémarrer les containers lors d'une mise à jour du Secret.
|
|||
|
||||
* Définissez les variables d'environnement dans la configuration du Pod.
|
||||
|
||||
{{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}
|
||||
{{% codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" %}}
|
||||
|
||||
* Créez le Pod:
|
||||
|
||||
|
@ -323,7 +323,7 @@ Cette fonctionnalité n'est disponible que dans les versions de Kubernetes
|
|||
d'environnement. Les clés du Secret deviendront les noms des variables
|
||||
d'environnement à l'intérieur du Pod.
|
||||
|
||||
{{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}
|
||||
{{% codenew file="pods/inject/pod-secret-envFrom.yaml" %}}
|
||||
|
||||
* Créez le Pod:
|
||||
|
||||
|
|
|
@ -35,7 +35,7 @@ et vous allez projeter les champs d'informations du Pod à l'intérieur du
|
|||
container via des fichiers dans le container.
|
||||
Voici le fichier de configuration du Pod:
|
||||
|
||||
{{< codenew file="pods/inject/dapi-volume.yaml" >}}
|
||||
{{% codenew file="pods/inject/dapi-volume.yaml" %}}
|
||||
|
||||
Dans la configuration, on peut voir que le Pod a un volume de type `downward API`, et que le container monte ce volume sur le chemin `/etc/podinfo`.
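Schématiquement, un tel volume se déclare ainsi (l'élément `labels` est donné à titre d'exemple) :

```yaml
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "labels"          # exemple : projeter les labels du Pod
          fieldRef:
            fieldPath: metadata.labels
```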
|
||||
|
||||
|
@ -154,7 +154,7 @@ qui appartiennent au
|
|||
[container](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container), plutôt qu'au Pod.
|
||||
Voici un fichier de configuration pour un Pod qui n'a qu'un seul container:
|
||||
|
||||
{{< codenew file="pods/inject/dapi-volume-resources.yaml" >}}
|
||||
{{% codenew file="pods/inject/dapi-volume-resources.yaml" %}}
|
||||
|
||||
Dans cette configuration, on peut voir que le Pod a un volume de type
|
||||
[`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi),
|
||||
|
|
|
@ -32,7 +32,7 @@ Dans cette partie de l'exercice, vous allez créer un Pod qui a un container,
|
|||
et vous allez projeter les champs d'informations du Pod à l'intérieur du
|
||||
container comme variables d'environnement.
|
||||
|
||||
{{< codenew file="pods/inject/dapi-envars-pod.yaml" >}}
|
||||
{{% codenew file="pods/inject/dapi-envars-pod.yaml" %}}
|
||||
|
||||
Dans ce fichier de configuration, on trouve cinq variables d'environnement.
|
||||
Le champ `env` est une liste de variables d'environnement.
|
||||
|
@ -113,7 +113,7 @@ qui est exécuté à l'intérieur du Pod.
|
|||
Voici un fichier de configuration pour un autre Pod qui ne contient qu'un seul
|
||||
container:
|
||||
|
||||
{{< codenew file="pods/inject/dapi-envars-container.yaml" >}}
|
||||
{{% codenew file="pods/inject/dapi-envars-container.yaml" %}}
|
||||
|
||||
Dans ce fichier, vous pouvez voir 4 variables d'environnement.
|
||||
Le champ `env` est une liste de variables d'environnement.
|
||||
|
|
|
@ -53,7 +53,9 @@ Pour apprendre comment déployer le Metrics Server, consultez la [documentation
|
|||
Pour démontrer un HorizontalPodAutoscaler, vous commencerez par démarrer un
|
||||
Deployment qui exécute un conteneur utilisant l'image `hpa-example`
|
||||
et l'expose en tant que {{< glossary_tooltip term_id="service">}} en utilisant le
|
||||
manifeste suivant: {{< codenew file="application/php-apache.yaml" >}}
|
||||
manifeste suivant:
|
||||
|
||||
{{% codenew file="application/php-apache.yaml" %}}
|
||||
|
||||
Pour créer les ressources, exécutez la commande suivante:
|
||||
```shell
|
||||
|
@ -505,7 +507,7 @@ Cela signifie que vous pouvez voir la valeur de votre métrique fluctuer entre `
|
|||
Au lieu d'utiliser la commande `kubectl autoscale` pour créer un HorizontalPodAutoscaler de manière impérative,
|
||||
nous pouvons utiliser le manifeste suivant pour le créer de manière déclarative :
|
||||
|
||||
{{< codenew file=application/hpa/php-apache.yaml >}}
|
||||
{{% codenew file=application/hpa/php-apache.yaml %}}
|
||||
|
||||
Ensuite, créez l'autoscaler en exécutant la commande suivante :
|
||||
|
||||
|
|
|
@ -37,8 +37,8 @@ Remarque: le mot de passe MySQL est défini dans le fichier de configuration YAM
|
|||
Voir les [secrets Kubernetes](/docs/concepts/configuration/secret/)
|
||||
pour une approche sécurisée.
|
||||
|
||||
{{< codenew file="application/mysql/mysql-deployment.yaml" >}}
|
||||
{{< codenew file="application/mysql/mysql-pv.yaml" >}}
|
||||
{{% codenew file="application/mysql/mysql-deployment.yaml" %}}
|
||||
{{% codenew file="application/mysql/mysql-pv.yaml" %}}
|
||||
|
||||
1. Déployez le PV et le PVC du fichier YAML:
|
||||
|
||||
|
|
|
@ -28,7 +28,7 @@ déploiement Kubernetes, et vous pouvez décrire un
|
|||
déploiement dans un fichier YAML. Par exemple,
|
||||
ce fichier YAML décrit un déploiement qui exécute l'image Docker nginx:1.14.2 :
|
||||
|
||||
{{< codenew file="application/deployment.yaml" >}}
|
||||
{{% codenew file="application/deployment.yaml" %}}
|
||||
|
||||
1. Créez un déploiement basé sur ce fichier YAML:
|
||||
|
||||
|
@ -102,7 +102,7 @@ Vous pouvez mettre à jour le déploiement en appliquant un nouveau fichier YAML
|
|||
Ce fichier YAML indique que le déploiement doit être mis à jour
|
||||
pour utiliser nginx 1.16.1.
|
||||
|
||||
{{< codenew file="application/deployment-update.yaml" >}}
|
||||
{{% codenew file="application/deployment-update.yaml" %}}
|
||||
|
||||
1. Appliquez le nouveau fichier YAML :
|
||||
|
||||
|
@ -121,7 +121,7 @@ pour utiliser nginx 1.16.1.
|
|||
Vous pouvez augmenter le nombre de pods dans votre déploiement en appliquant un nouveau fichier YAML.
|
||||
Ce fichier YAML définit `replicas` à 4, ce qui spécifie que le déploiement devrait avoir quatre pods :
|
||||
|
||||
{{< codenew file="application/deployment-scale.yaml" >}}
|
||||
{{% codenew file="application/deployment-scale.yaml" %}}
|
||||
|
||||
1. Appliquez le nouveau fichier YAML :
|
||||
|
||||
|
|
|
@ -39,9 +39,9 @@ Vous pouvez également suivre ce tutoriel si vous avez installé [Minikube local
|
|||
|
||||
Ce tutoriel fournit une image de conteneur construite à partir des fichiers suivants :
|
||||
|
||||
{{< codenew language="js" file="minikube/server.js" >}}
|
||||
{{% codenew language="js" file="minikube/server.js" %}}
|
||||
|
||||
{{< codenew language="conf" file="minikube/Dockerfile" >}}
|
||||
{{% codenew language="conf" file="minikube/Dockerfile" %}}
|
||||
|
||||
Pour plus d'informations sur la commande `docker build`, lisez la documentation de [Docker](https://docs.docker.com/engine/reference/commandline/build/).
|
||||
|
||||
|
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: जॉब (Job)
|
||||
id: job
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/workloads/controllers/job/
|
||||
short_description: >
|
||||
एक परिमित या बैच कार्य जो पूरा होने तक चलता है।
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
- core-object
|
||||
- workload
|
||||
---
|
||||
एक परिमित या बैच कार्य जो पूरा होने तक चलता है।
|
||||
|
||||
<!--more-->
|
||||
|
||||
जॉब एक या अधिक {{<glossary_tooltip text="पॉड" term_id="pod" >}} ऑब्जेक्ट बनाता है और सुनिश्चित करता है कि उनमें से एक निर्दिष्ट संख्या सफलतापूर्वक समाप्त हो जाए। जैसे ही पॉड्स सफलतापूर्वक पूर्ण होते हैं, जॉब उस सफल समापन को ट्रैक करता है।
|
|
@ -34,7 +34,7 @@ IaaS Provider | Link |
|
|||
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
|
||||
Amazon Web Services | https://aws.amazon.com/security/ |
|
||||
Google Cloud Platform | https://cloud.google.com/security/ |
|
||||
Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html |
|
||||
Huawei Cloud | https://www.huaweicloud.com/intl/id-id/securecenter/overallsafety |
|
||||
IBM Cloud | https://www.ibm.com/cloud/security |
|
||||
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
|
||||
Oracle Cloud Infrastructure | https://www.oracle.com/security/ |
|
||||
|
|
|
@ -492,7 +492,7 @@ dan `scp` menggunakan pengguna lain tersebut.
|
|||
Berkas `admin.conf` memberikan penggunanya privilese (_privilege_) _superuser_ terhadap klaster.
|
||||
Berkas ini harus digunakan seperlunya. Untuk pengguna biasa, direkomendasikan
|
||||
untuk membuat kredensial unik dengan privilese _whitelist_. Kamu dapat melakukan
|
||||
ini dengan perintah `kubeadm alpha kubeconfig user --client-name <CN>`.
|
||||
ini dengan perintah `kubeadm kubeconfig user --client-name <CN>`.
|
||||
Perintah tersebut akan mencetak berkas KubeConfig ke STDOUT yang harus kamu simpan
|
||||
ke dalam sebuah berkas dan mendistribusikannya pada para pengguna. Setelah itu, whitelist
|
||||
privilese menggunakan `kubectl create (cluster)rolebinding`.
|
||||
|
|
|
@ -210,7 +210,7 @@ kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8m
|
|||
このセクションの残りの部分は、寿命の長いIP(my-nginx)を持つServiceと、そのIPに名前を割り当てたDNSサーバーがあることを前提にしています。ここではCoreDNSクラスターアドオン(アプリケーション名: `kube-dns`)を使用しているため、標準的なメソッド(`gethostbyname()`など) を使用してクラスター内の任意のPodからServiceに通信できます。CoreDNSが起動していない場合、[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes)または[Installing CoreDNS](/ja/docs/tasks/administer-cluster/coredns/#installing-coredns)を参照し、有効にする事ができます。curlアプリケーションを実行して、これをテストしてみましょう。
|
||||
|
||||
```shell
|
||||
kubectl run curl --image=radial/busyboxplus:curl -i --tty
|
||||
kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
|
||||
```
|
||||
```
|
||||
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
|
||||
|
|
|
@ -52,7 +52,7 @@ Initコンテナを活用する方法について、いくつかのアイデア
|
|||
|
||||
* シェルコマンドを使って単一の{{< glossary_tooltip text="Service" term_id="service">}}が作成されるのを待機する。
|
||||
```shell
|
||||
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
|
||||
for i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1
|
||||
```
|
||||
|
||||
* 以下のようなコマンドを使って下位のAPIからPodの情報をリモートサーバに登録する。
|
||||
|
|
|
@ -132,7 +132,7 @@ kubeadmは`kubelet`や`kubectl`をインストールまたは管理**しない**
|
|||
2. Google Cloudの公開鍵をダウンロードします:
|
||||
|
||||
```shell
|
||||
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
|
||||
curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
|
||||
```
|
||||
|
||||
3. Kubernetesの`apt`リポジトリを追加します:
|
||||
|
|
|
@ -147,7 +147,7 @@ Windowsワーカーノードの(管理者)権限を持つPowerShell環境で実
|
|||
|
||||
```PowerShell
|
||||
curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/kubeadm/scripts/PrepareNode.ps1
|
||||
.\PrepareNode.ps1 -KubernetesVersion {{% param "fullversion" %}}
|
||||
.\PrepareNode.ps1 -KubernetesVersion v{{% skew currentPatchVersion %}}
|
||||
```
|
||||
|
||||
1. `kubeadm`を実行してノードに参加します
|
||||
|
|
|
@ -32,8 +32,8 @@ Windowsノードをアップグレードする前にコントロールプレー
|
|||
1. Windowsノードから、kubeadmをアップグレードします。:
|
||||
|
||||
```powershell
|
||||
# {{% param "fullversion" %}}を目的のバージョンに置き換えます
|
||||
curl.exe -Lo C:\k\kubeadm.exe https://dl.k8s.io/{{% param "fullversion" %}}/bin/windows/amd64/kubeadm.exe
|
||||
# {{% skew currentPatchVersion %}}を目的のバージョンに置き換えます
|
||||
curl.exe -Lo C:\k\kubeadm.exe https://dl.k8s.io/v{{% skew currentPatchVersion %}}/bin/windows/amd64/kubeadm.exe
|
||||
```
|
||||
|
||||
### ノードをドレインする
|
||||
|
@ -67,7 +67,7 @@ Windowsノードをアップグレードする前にコントロールプレー
|
|||
|
||||
```powershell
|
||||
stop-service kubelet
|
||||
curl.exe -Lo C:\k\kubelet.exe https://dl.k8s.io/{{% param "fullversion" %}}/bin/windows/amd64/kubelet.exe
|
||||
curl.exe -Lo C:\k\kubelet.exe https://dl.k8s.io/v{{% skew currentPatchVersion %}}/bin/windows/amd64/kubelet.exe
|
||||
restart-service kubelet
|
||||
```
|
||||
|
||||
|
|
|
@ -134,7 +134,7 @@ my-nginx-7vzhx IPv4 80 10.244.2.5,10.244.3.4 21s
|
|||
## Serviceへのアクセス
|
||||
|
||||
KubernetesはServiceを探す2つの主要なモードとして、環境変数とDNSをサポートしています。
|
||||
前者はすぐに動かせるのに対し、後者は[CoreDNSクラスターアドオン](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns)が必要です。
|
||||
前者はすぐに動かせるのに対し、後者は[CoreDNSクラスターアドオン](https://releases.k8s.io/v{{< skew currentPatchVersion >}}/cluster/addons/dns/coredns)が必要です。
|
||||
|
||||
{{< note >}}
|
||||
もしServiceの環境変数が望ましくないなら(想定しているプログラムの環境変数と競合する可能性がある、処理する変数が多すぎる、DNSだけ使いたい、など)、[pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)で`enableServiceLinks`のフラグを`false`にすることで、このモードを無効化できます。
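たとえば、次のようなスケッチで無効化できます(名前は説明用の仮のものです):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links      # 仮の名前
spec:
  enableServiceLinks: false   # Serviceの環境変数を注入しない
  containers:
    - name: app
      image: nginx            # 説明用のイメージ
```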
|
||||
|
|
|
@ -53,7 +53,7 @@ Provedor IaaS | Link |
|
|||
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
|
||||
Amazon Web Services | https://aws.amazon.com/security/ |
|
||||
Google Cloud Platform | https://cloud.google.com/security/ |
|
||||
Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety.html |
|
||||
Huawei Cloud | https://www.huaweicloud.com/intl/pt-br/securecenter/overallsafety |
|
||||
IBM Cloud | https://www.ibm.com/cloud/security |
|
||||
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
|
||||
Oracle Cloud Infrastructure | https://www.oracle.com/security/ |
|
||||
|
|
|
@ -267,7 +267,7 @@ description: |-
|
|||
Para ver a saída da aplicação, execute uma requisição com o comando
|
||||
<code>curl</code>:
|
||||
</p>
|
||||
<p><code><b>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/</b></code></p>
|
||||
<p><code><b>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/</b></code></p>
|
||||
<p>A URL é a rota para a API do Pod.</p>
|
||||
</div>
|
||||
</div>
|
||||
|
|
|
@ -140,7 +140,7 @@ More detials can be found in the KEP <https://kep.k8s.io/1040> and the pull requ
|
|||
## Event triggered updates to container status
|
||||
|
||||
`Evented PLEG` (PLEG is short for "Pod Lifecycle Event Generator") is set to be in beta for v1.27,
|
||||
Kubernetes offers two ways for the kubelet to detect Pod lifecycle events, such as a the last
|
||||
Kubernetes offers two ways for the kubelet to detect Pod lifecycle events, such as the last
|
||||
process in a container shutting down.
|
||||
In Kubernetes v1.27, the _event based_ mechanism has graduated to beta but remains
|
||||
disabled by default. If you do explicitly switch to event-based lifecycle change detection,
|
||||
|
@ -190,7 +190,7 @@ CGroup V2 时的内存质量。尽管此特性门控有一定的好处,但如
|
|||
|
||||
<!--
|
||||
Kubelet configuration now includes `memoryThrottlingFactor`. This factor is multiplied by
|
||||
the memory limit or node allocatable memory to set the cgroupv2 memory.high value for enforcing
|
||||
the memory limit or node allocatable memory to set the cgroupv2 `memory.high` value for enforcing
|
||||
MemoryQoS. Decreasing this factor sets a lower high limit for container cgroups, increasing reclaim
|
||||
pressure. Increasing this factor will put less reclaim pressure. The default value is 0.8 initially
|
||||
and will change to 0.9 in Kubernetes v1.27. This parameter adjustment can reduce the potential
|
||||
|
@ -199,7 +199,7 @@ impact of this feature on pod startup speed.
|
|||
Further details can be found in the KEP <https://kep.k8s.io/2570>.
|
||||
-->
|
||||
Kubelet 配置现在包括 `memoryThrottlingFactor`。该因子乘以内存限制或节点可分配内存,
|
||||
可以设置 cgroupv2 memory.high 值来执行 MemoryQoS。
|
||||
可以设置 cgroupv2 `memory.high` 值来执行 MemoryQoS。
|
||||
减小该因子将为容器 cgroup 设置较低的上限,同时增加了回收压力。
|
||||
提高此因子将减少回收压力。默认值最初为 0.8,并将在 Kubernetes v1.27 中更改为 0.9。
|
||||
调整此参数可以减少此特性对 Pod 启动速度的潜在影响。
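作为示意,该字段可以在 kubelet 配置文件中这样设置(取值仅为举例):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryThrottlingFactor: 0.9   # 示例取值
```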
|
||||
|
@ -226,7 +226,7 @@ container startup by mounting volumes with the correct SELinux label instead of
|
|||
on the volumes recursively. Further details can be found in the KEP <https://kep.k8s.io/1710>.
|
||||
|
||||
To identify the cause of slow pod startup, analyzing metrics and logs can be helpful. Other
|
||||
factorsthat may impact pod startup include container runtime, disk speed, CPU and memory
|
||||
factors that may impact pod startup include container runtime, disk speed, CPU and memory
|
||||
resources on the node.
|
||||
-->
|
||||
SELinux 挂载选项重标记功能在 v1.27 中升至 Beta 版本。
|
||||
|
|
|
@ -0,0 +1,192 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Kubernetes 1.28: 节点非体面关闭进入 GA 阶段(正式发布)"
|
||||
date: 2023-08-15T10:00:00-08:00
|
||||
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
|
||||
---
|
||||
|
||||
<!--
|
||||
layout: blog
|
||||
title: "Kubernetes 1.28: Non-Graceful Node Shutdown Moves to GA"
|
||||
date: 2023-08-15T10:00:00-08:00
|
||||
slug: kubernetes-1-28-non-graceful-node-shutdown-GA
|
||||
-->
|
||||
|
||||
<!--
|
||||
**Authors:** Xing Yang (VMware) and Ashutosh Kumar (Elastic)
|
||||
-->
|
||||
**作者:** Xing Yang (VMware) and Ashutosh Kumar (Elastic)
|
||||
|
||||
**译者:** Xin Li (Daocloud)
|
||||
|
||||
<!--
|
||||
The Kubernetes Non-Graceful Node Shutdown feature is now GA in Kubernetes v1.28.
|
||||
It was introduced as
|
||||
[alpha](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown)
|
||||
in Kubernetes v1.24, and promoted to
|
||||
[beta](https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/)
|
||||
in Kubernetes v1.26.
|
||||
This feature allows stateful workloads to restart on a different node if the
|
||||
original node is shutdown unexpectedly or ends up in a non-recoverable state
|
||||
such as the hardware failure or unresponsive OS.
|
||||
-->
|
||||
Kubernetes 节点非体面关闭特性现已在 Kubernetes v1.28 中正式发布。
|
||||
|
||||
此特性在 Kubernetes v1.24 中作为 [Alpha](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown)
|
||||
特性引入,并在 Kubernetes v1.26 中转入 [Beta](https://kubernetes.io/blog/2022/12/16/kubernetes-1-26-non-graceful-node-shutdown-beta/)
|
||||
阶段。如果原始节点意外关闭或最终处于不可恢复状态(例如硬件故障或操作系统无响应),
|
||||
此特性允许有状态工作负载在不同节点上重新启动。
|
||||
|
||||
<!--
|
||||
## What is a Non-Graceful Node Shutdown
|
||||
|
||||
In a Kubernetes cluster, a node can be shutdown in a planned graceful way or
|
||||
unexpectedly because of reasons such as power outage or something else external.
|
||||
A node shutdown could lead to workload failure if the node is not drained
|
||||
before the shutdown. A node shutdown can be either graceful or non-graceful.
|
||||
-->
|
||||
## 什么是节点非体面关闭
|
||||
|
||||
在 Kubernetes 集群中,节点可能会按计划正常关闭,也可能由于断电或其他外部原因而意外关闭。
|
||||
如果节点在关闭之前未腾空,则节点关闭可能会导致工作负载失败。节点关闭可以是正常关闭,也可以是非正常关闭。
|
||||
|
||||
<!--
|
||||
The [Graceful Node Shutdown](https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/)
|
||||
feature allows Kubelet to detect a node shutdown event, properly terminate the pods,
|
||||
and release resources, before the actual shutdown.
|
||||
-->
|
||||
[节点体面关闭](https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/)特性允许
|
||||
kubelet 在实际关闭之前检测节点关闭事件、正确终止该节点上的 Pod 并释放资源。
|
||||
|
||||
<!--
|
||||
When a node is shutdown but not detected by Kubelet's Node Shutdown Manager,
|
||||
this becomes a non-graceful node shutdown.
|
||||
Non-graceful node shutdown is usually not a problem for stateless apps, however,
|
||||
it is a problem for stateful apps.
|
||||
The stateful application cannot function properly if the pods are stuck on the
|
||||
shutdown node and are not restarting on a running node.
|
||||
-->
|
||||
当节点关闭但 kubelet 的节点关闭管理器未检测到时,将造成节点非体面关闭。
|
||||
对于无状态应用程序来说,节点非体面关闭通常不是问题,但是对于有状态应用程序来说,这是一个问题。
|
||||
如果 Pod 停留在关闭节点上并且未在正在运行的节点上重新启动,则有状态应用程序将无法正常运行。
|
||||
|
||||
<!--
|
||||
In the case of a non-graceful node shutdown, you can manually add an `out-of-service` taint on the Node.
|
||||
-->
|
||||
在节点非体面关闭的情况下,你可以在 Node 上手动添加 `out-of-service` 污点。
|
||||
|
||||
```
|
||||
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
|
||||
```
|
||||
|
||||
<!--
|
||||
This taint triggers pods on the node to be forcefully deleted if there are no
|
||||
matching tolerations on the pods. Persistent volumes attached to the shutdown node
|
||||
will be detached, and new pods will be created successfully on a different running
|
||||
node.
|
||||
-->
|
||||
如果 Pod 上没有与之匹配的容忍规则,则此污点会触发节点上的 Pod 被强制删除。
|
||||
挂接到关闭中的节点的持久卷将被解除挂接,新的 Pod 将在不同的运行节点上成功创建。
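作为示意,如果希望某个 Pod 容忍该污点而不被强制删除,可以在其规约中这样声明(仅为示例):

```yaml
tolerations:
  - key: node.kubernetes.io/out-of-service
    operator: Equal
    value: nodeshutdown
    effect: NoExecute
```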
|
||||
|
||||
<!--
|
||||
**Note:** Before applying the out-of-service taint, you must verify that a node is
|
||||
already in shutdown or power-off state (not in the middle of restarting).
|
||||
|
||||
Once all the workload pods that are linked to the out-of-service node are moved to
|
||||
a new running node, and the shutdown node has been recovered, you should remove that
|
||||
taint from the affected node.
|
||||
-->
|
||||
**注意:**在应用 out-of-service 污点之前,你必须验证节点是否已经处于关闭或断电状态(而不是在重新启动中)。
|
||||
|
||||
与 out-of-service 节点有关联的所有工作负载的 Pod 都被移动到新的运行节点,
|
||||
并且所关闭的节点已恢复之后,你应该删除受影响节点上的污点。
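例如,可以使用带 `-` 后缀的命令移除该污点(示意):

```shell
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
```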
|
||||
|
||||
<!--
|
||||
## What’s new in stable
|
||||
|
||||
With the promotion of the Non-Graceful Node Shutdown feature to stable, the
|
||||
feature gate `NodeOutOfServiceVolumeDetach` is locked to true on
|
||||
`kube-controller-manager` and cannot be disabled.
|
||||
-->
|
||||
## 稳定版中有哪些新内容
|
||||
|
||||
随着节点非体面关闭特性提升至稳定阶段,特性门控
|
||||
`NodeOutOfServiceVolumeDetach` 在 `kube-controller-manager` 上被锁定为 true,并且无法禁用。
|
||||
|
||||
<!--
|
||||
Metrics `force_delete_pods_total` and `force_delete_pod_errors_total` in the
|
||||
Pod GC Controller are enhanced to account for all forceful pods deletion.
|
||||
A reason is added to the metric to indicate whether the pod is forcefully deleted
|
||||
because it is terminated, orphaned, terminating with the `out-of-service` taint,
|
||||
or terminating and unscheduled.
|
||||
-->
|
||||
Pod GC 控制器中的指标 `force_delete_pods_total` 和 `force_delete_pod_errors_total`
|
||||
得到增强,以考虑所有 Pod 的强制删除情况。
|
||||
指标中会添加一个 "reason",以指示 Pod 被强制删除的原因:已终止、成为孤儿、
|
||||
带有 `out-of-service` 污点而正在终止,或正在终止且未被调度。
|
||||
|
||||
<!--
|
||||
A "reason" is also added to the metric `attachdetach_controller_forced_detaches`
|
||||
in the Attach Detach Controller to indicate whether the force detach is caused by
|
||||
the `out-of-service` taint or a timeout.
|
||||
-->
|
||||
Attach Detach Controller 中的指标 `attachdetach_controller_forced_detaches`
|
||||
中还会添加一个 "reason",以指示强制解除挂接是由 `out-of-service` 污点还是超时引起的。
|
||||
|
||||
<!--
|
||||
## What’s next?
|
||||
|
||||
This feature requires a user to manually add a taint to the node to trigger
|
||||
workloads failover and remove the taint after the node is recovered.
|
||||
In the future, we plan to find ways to automatically detect and fence nodes
|
||||
that are shutdown/failed and automatically failover workloads to another node.
|
||||
-->
|
||||
## 接下来
|
||||
|
||||
此特性要求用户手动向节点添加污点以触发工作负载故障转移,并在节点恢复后删除污点。
|
||||
未来,我们计划找到方法来自动检测和隔离关闭/失败的节点,并自动将工作负载故障转移到另一个节点。
|
||||
|
||||
<!--
|
||||
## How can I learn more?
|
||||
|
||||
Check out additional documentation on this feature
|
||||
[here](https://kubernetes.io/docs/concepts/architecture/nodes/#non-graceful-node-shutdown).
|
||||
-->
|
||||
## 如何了解更多?
|
||||
|
||||
在[此处](/zh-cn/docs/concepts/architecture/nodes/#non-graceful-node-shutdown)可以查看有关此特性的其他文档。
|
||||
|
||||
<!--
|
||||
## How to get involved?
|
||||
|
||||
We offer a huge thank you to all the contributors who helped with design,
|
||||
implementation, and review of this feature and helped move it from alpha, beta, to stable:
|
||||
-->
|
||||
我们非常感谢所有帮助设计、实现和审查此功能并帮助其从 Alpha、Beta 到稳定版的贡献者:
|
||||
|
||||
* Michelle Au ([msau42](https://github.com/msau42))
|
||||
* Derek Carr ([derekwaynecarr](https://github.com/derekwaynecarr))
|
||||
* Danielle Endocrimes ([endocrimes](https://github.com/endocrimes))
|
||||
* Baofa Fan ([carlory](https://github.com/carlory))
|
||||
* Tim Hockin ([thockin](https://github.com/thockin))
|
||||
* Ashutosh Kumar ([sonasingh46](https://github.com/sonasingh46))
|
||||
* Hemant Kumar ([gnufied](https://github.com/gnufied))
|
||||
* Yuiko Mouri ([YuikoTakada](https://github.com/YuikoTakada))
|
||||
* Mrunal Patel ([mrunalp](https://github.com/mrunalp))
|
||||
* David Porter ([bobbypage](https://github.com/bobbypage))
|
||||
* Yassine Tijani ([yastij](https://github.com/yastij))
|
||||
* Jing Xu ([jingxu97](https://github.com/jingxu97))
|
||||
* Xing Yang ([xing-yang](https://github.com/xing-yang))
|
||||
|
||||
<!--
|
||||
This feature is a collaboration between SIG Storage and SIG Node.
|
||||
For those interested in getting involved with the design and development of any
|
||||
part of the Kubernetes Storage system, join the Kubernetes Storage Special
|
||||
Interest Group (SIG).
|
||||
For those interested in getting involved with the design and development of the
|
||||
components that support the controlled interactions between pods and host
|
||||
resources, join the Kubernetes Node SIG.
|
||||
-->
|
||||
此特性是 SIG Storage 和 SIG Node 之间的协作。对于那些有兴趣参与 Kubernetes
|
||||
存储系统任何部分的设计和开发的人,请加入 Kubernetes 存储特别兴趣小组(SIG)。
|
||||
对于那些有兴趣参与支持 Pod 和主机资源之间受控交互的组件的设计和开发,请加入 Kubernetes Node SIG。
|
|
@ -67,6 +67,7 @@ Add-on 扩展了 Kubernetes 的功能。
|
|||
并且可以使用与网络寻址分离的基于身份的安全模型在 L3 至 L7 上实施网络策略。
|
||||
Cilium 可以作为 kube-proxy 的替代品;它还提供额外的、可选的可观察性和安全功能。
|
||||
Cilium 是一个[孵化级别的 CNCF 项目](https://www.cncf.io/projects/cilium/)。
|
||||
|
||||
<!--
|
||||
* [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly
|
||||
connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave.
|
||||
|
@ -94,6 +95,7 @@ Add-on 扩展了 Kubernetes 的功能。
|
|||
[Tungsten Fabric](https://tungsten.io),是一个开源的多云网络虚拟化和策略管理平台。
|
||||
Contrail 和 Tungsten Fabric 与业务流程系统(例如 Kubernetes、OpenShift、OpenStack 和 Mesos)集成在一起,
|
||||
为虚拟机、容器或 Pod 以及裸机工作负载提供了隔离模式。
|
||||
|
||||
<!--
|
||||
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is
|
||||
an overlay network provider that can be used with Kubernetes.
|
||||
|
@ -110,6 +112,7 @@ Add-on 扩展了 Kubernetes 的功能。
|
|||
* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) 是一个多插件,
|
||||
可在 Kubernetes 中提供多种网络支持,以支持所有 CNI 插件(例如 Calico、Cilium、Contiv、Flannel),
|
||||
而且包含了在 Kubernetes 中基于 SRIOV、DPDK、OVS-DPDK 和 VPP 的工作负载。
|
||||
|
||||
<!--
|
||||
* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking
|
||||
provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/),
|
||||
|
@ -125,6 +128,7 @@ Add-on 扩展了 Kubernetes 的功能。
|
|||
包括一个基于 OVS 实现的负载均衡器和网络策略。
|
||||
* [Nodus](https://github.com/akraino-edge-stack/icn-nodus) 是一个基于 OVN 的 CNI 控制器插件,
|
||||
提供基于云原生的服务功能链 (SFC)。
|
||||
|
||||
<!--
|
||||
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html) Container Plug-in (NCP)
|
||||
provides integration between VMware NSX-T and container orchestrators such as
|
||||
|
@ -166,17 +170,14 @@ Add-on 扩展了 Kubernetes 的功能。
|
|||
|
||||
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard)
|
||||
is a dashboard web interface for Kubernetes.
|
||||
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s)
|
||||
is a tool for graphically visualizing your containers, pods, services etc.
|
||||
Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/)
|
||||
or host the UI yourself.
|
||||
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a
|
||||
tool for visualizing your containers, Pods, Services and more.
|
||||
-->
|
||||
## 可视化管理 {#visualization-and-control}
|
||||
|
||||
* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) 是一个 Kubernetes 的 Web 控制台界面。
|
||||
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) 是一个图形化工具,
|
||||
用于查看你的容器、Pod、服务等。请和一个 [Weave Cloud 账号](https://cloud.weave.works/) 一起使用,
|
||||
或者自己运行 UI。
|
||||
* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s)
|
||||
是一个可视化工具,用于查看你的容器、Pod、服务等。
|
||||
|
||||
<!--
|
||||
## Infrastructure
|
||||
|
|
|
@ -189,7 +189,7 @@ Here's an example Pod that uses values from `game-demo` to configure a Pod:
|
|||
|
||||
下面是一个 Pod 的示例,它通过使用 `game-demo` 中的值来配置一个 Pod:
|
||||
|
||||
{{< codenew file="configmap/configure-pod.yaml" >}}
|
||||
{{% code file="configmap/configure-pod.yaml" %}}
|
||||
|
||||
<!--
|
||||
A ConfigMap doesn't differentiate between single line property values and
|
||||
|
@ -388,13 +388,9 @@ ConfigMap 的数据有以下好处:
|
|||
这是因为系统会关闭对已标记为不可变更的 ConfigMap 的监视操作。
|
||||
|
||||
<!--
|
||||
This feature is controlled by the `ImmutableEphemeralVolumes`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
|
||||
You can create an immutable ConfigMap by setting the `immutable` field to `true`.
|
||||
For example:
|
||||
-->
|
||||
此功能特性由 `ImmutableEphemeralVolumes`
|
||||
[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来控制。
|
||||
你可以通过将 `immutable` 字段设置为 `true` 创建不可变更的 ConfigMap。
|
||||
例如:
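下面是一个示意(名称与数据均为假设):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config        # 假设的名称
data:
  example.property: "value"
immutable: true
```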
|
||||
|
||||
|
|
|
@ -157,7 +157,7 @@ Here's an example `.yaml` file that shows the required fields and object spec fo
|
|||
-->
|
||||
这里有一个 `.yaml` 示例文件,展示了 Kubernetes Deployment 的必需字段和对象 `spec`:
|
||||
|
||||
{{< codenew file="application/deployment.yaml" >}}
|
||||
{{< code file="application/deployment.yaml" >}}
|
||||
|
||||
<!--
|
||||
One way to create a Deployment using a `.yaml` file like the one above is to use the
|
||||
|
|
|
@ -103,14 +103,14 @@ For example, you define a `LimitRange` with this manifest:
|
|||
|
||||
例如,你使用如下清单定义一个 `LimitRange`:
|
||||
|
||||
{{< codenew file="concepts/policy/limit-range/problematic-limit-range.yaml" >}}
|
||||
{{% code file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}
|
||||
|
||||
<!--
|
||||
along with a Pod that declares a CPU resource request of `700m`, but not a limit:
|
||||
-->
|
||||
以及一个声明 CPU 资源请求为 `700m` 但未声明限制值的 Pod:
|
||||
|
||||
{{< codenew file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" >}}
|
||||
{{% code file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}
|
||||
|
||||
<!--
|
||||
then that Pod will not be scheduled, failing with an error similar to:
|
||||
|
@ -126,7 +126,7 @@ If you set both `request` and `limit`, then that new Pod will be scheduled succe
|
|||
-->
|
||||
如果你同时设置了 `request` 和 `limit`,那么即使使用相同的 `LimitRange`,新 Pod 也会被成功调度:
|
||||
|
||||
{{< codenew file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" >}}
|
||||
{{% code file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}
|
||||
|
||||
<!--
|
||||
## Example resource constraints
|
||||
|
|
|
@ -60,15 +60,13 @@ specific Pods:
|
|||
|
||||
Like many other Kubernetes objects, nodes have
|
||||
[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
|
||||
Kubernetes also populates a standard set of labels on all nodes in a cluster. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/)
|
||||
for a list of common node labels.
|
||||
Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) on all nodes in a cluster.
|
||||
-->
|
||||
## 节点标签 {#built-in-node-labels}
|
||||
|
||||
与很多其他 Kubernetes 对象类似,节点也有[标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。
|
||||
你可以[手动地添加标签](/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)。
|
||||
Kubernetes 也会为集群中所有节点添加一些标准的标签。
|
||||
参见[常用的标签、注解和污点](/zh-cn/docs/reference/labels-annotations-taints/)以了解常见的节点标签。
|
||||
Kubernetes 也会为集群中所有节点添加一些[标准的标签](/zh-cn/docs/reference/node/node-labels/)。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
@ -233,7 +231,7 @@ For example, consider the following Pod spec:
|
|||
你可以使用 Pod 规约中的 `.spec.affinity.nodeAffinity` 字段来设置节点亲和性。
|
||||
例如,考虑下面的 Pod 规约:
|
||||
|
||||
{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
|
||||
{{% code file="pods/pod-with-node-affinity.yaml" %}}
|
||||
|
||||
<!--
|
||||
In this example, the following rules apply:
|
||||
|
@ -333,7 +331,7 @@ For example, consider the following Pod spec:
|
|||
|
||||
例如,考虑下面的 Pod 规约:
|
||||
|
||||
{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
|
||||
{{% code file="pods/pod-with-affinity-anti-affinity.yaml" %}}
|
||||
|
||||
<!--
|
||||
If there are two possible nodes that match the
|
||||
|
@ -536,7 +534,7 @@ Consider the following Pod spec:
|
|||
|
||||
考虑下面的 Pod 规约:
|
||||
|
||||
{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
|
||||
{{% code file="pods/pod-with-pod-affinity.yaml" %}}
|
||||
|
||||
<!--
|
||||
This example defines one Pod affinity rule and one Pod anti-affinity rule. The
|
||||
|
@ -913,13 +911,13 @@ The following operators can only be used with `nodeAffinity`.
|
|||
<!--
|
||||
| Operator | Behaviour |
|
||||
| :------------: | :-------------: |
|
||||
| `Gt` | The supplied value will be parsed as an integer, and that integer is less than or equal to the integer that results from parsing the value of a label named by this selector |
|
||||
| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than or equal to the integer that results from parsing the value of a label named by this selector |
|
||||
| `Gt` | The supplied value will be parsed as an integer, and that integer is less than the integer that results from parsing the value of a label named by this selector |
|
||||
| `Lt` | The supplied value will be parsed as an integer, and that integer is greater than the integer that results from parsing the value of a label named by this selector |
|
||||
-->
|
||||
| 操作符 | 行为 |
|
||||
| :------------: | :-------------: |
|
||||
| `Gt` | 提供的值将被解析为整数,并且该整数小于等于通过解析此选择算符命名的标签的值所得到的整数 |
|
||||
| `Lt` | 提供的值将被解析为整数,并且该整数大于等于通过解析此选择算符命名的标签的值所得到的整数 |
|
||||
| `Gt` | 提供的值将被解析为整数,并且该整数小于通过解析此选择算符命名的标签的值所得到的整数 |
|
||||
| `Lt` | 提供的值将被解析为整数,并且该整数大于通过解析此选择算符命名的标签的值所得到的整数 |
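例如,下面的示意使用 `Gt` 选择标签值(解析为整数后)大于 4 的节点(标签键为假设):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: example.com/cpu-cores    # 假设的节点标签
              operator: Gt
              values:
                - "4"
```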
|
||||
|
||||
{{<note>}}
|
||||
<!--
|
||||
|
|
|
@ -101,7 +101,7 @@ IaaS 提供商 | 链接 |
|
|||
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
|
||||
Amazon Web Services | https://aws.amazon.com/security |
|
||||
Google Cloud Platform | https://cloud.google.com/security |
|
||||
Huawei Cloud | https://www.huaweicloud.com/securecenter/overallsafety |
|
||||
Huawei Cloud | https://www.huaweicloud.com/intl/zh-cn/securecenter/overallsafety |
|
||||
IBM Cloud | https://www.ibm.com/cloud/security |
|
||||
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
|
||||
Oracle Cloud Infrastructure | https://www.oracle.com/security |
|
||||
|
|
|
@ -523,7 +523,7 @@ The following is an example Pod with custom DNS settings:
|
|||
-->
|
||||
以下是具有自定义 DNS 设置的 Pod 示例:
|
||||
|
||||
{{< codenew file="service/networking/custom-dns.yaml" >}}
|
||||
{{% code file="service/networking/custom-dns.yaml" %}}
|
||||
|
||||
<!--
|
||||
When the Pod above is created, the container `test` gets the following contents
|
||||
|
|
|
@ -268,7 +268,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat
|
|||
和[无头服务](/zh-cn/docs/concepts/services-networking/service/#headless-services)的行为方式
|
||||
与此相同。)
|
||||
|
||||
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
|
||||
<!--
|
||||
1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When
|
||||
|
@ -298,7 +298,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat
|
|||
* 对于启用了双协议栈的集群,将 `.spec.ipFamilyPolicy` 设置为
|
||||
`RequireDualStack` 时,其行为与 `PreferDualStack` 相同。
|
||||
|
||||
{{% codenew file="service/networking/dual-stack-preferred-svc.yaml" %}}
|
||||
{{% code file="service/networking/dual-stack-preferred-svc.yaml" %}}
|
||||
|
||||
<!--
|
||||
1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well
|
||||
|
@ -312,7 +312,7 @@ These examples demonstrate the behavior of various dual-stack Service configurat
|
|||
`.spec.ClusterIP` 被设置成 IPv6 地址,因为它是 `.spec.ClusterIPs` 数组中的第一个元素,
|
||||
覆盖其默认值。
|
||||
|
||||
{{% codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
|
||||
{{% code file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
|
||||
|
||||
<!--
|
||||
#### Dual-stack defaults on existing Services
|
||||
|
@ -337,7 +337,7 @@ dual-stack.)
|
|||
`.spec.ipFamilyPolicy` 为 `SingleStack` 并设置 `.spec.ipFamilies`
|
||||
为服务的当前地址族。
|
||||
|
||||
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
|
||||
<!--
|
||||
You can validate this behavior by using kubectl to inspect an existing service.
|
||||
|
@ -387,7 +387,7 @@ dual-stack.)
|
|||
并设置 `.spec.ipFamilies` 为第一个服务集群 IP 范围的地址族(通过配置 kube-apiserver 的
|
||||
`--service-cluster-ip-range` 参数),即使 `.spec.ClusterIP` 的设置值为 `None` 也如此。
|
||||
|
||||
{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
{{% code file="service/networking/dual-stack-default-svc.yaml" %}}
|
||||
|
||||
<!--
|
||||
You can validate this behavior by using kubectl to inspect an existing headless service with selectors.
|
||||
|
|
|
@ -142,8 +142,7 @@ A minimal Ingress resource example:
|
|||
一个最小的 Ingress 资源示例:
|
||||
|
||||
|
||||
{{< codenew file="service/networking/minimal-ingress.yaml" >}}
|
||||
|
||||
{{% code file="service/networking/minimal-ingress.yaml" %}}
|
||||
<!--
|
||||
An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
|
||||
The name of an Ingress object must be a valid
|
||||
|
@ -275,7 +274,7 @@ with static assets.
|
|||
`Resource` 后端与 Service 后端是互斥的,在二者均被设置时会无法通过合法性检查。
|
||||
`Resource` 后端的一种常见用法是将所有入站数据导向带有静态资产的对象存储后端。
|
||||
|
||||
{{< codenew file="service/networking/ingress-resource-backend.yaml" >}}
|
||||
{{% code file="service/networking/ingress-resource-backend.yaml" %}}
|
||||
|
||||
<!--
|
||||
After creating the Ingress above, you can view it with the following command:
|
||||
|
@ -432,7 +431,7 @@ equal to the suffix of the wildcard rule.
|
|||
| `*.foo.com` | `baz.bar.foo.com` | 不匹配,通配符仅覆盖了一个 DNS 标签 |
|
||||
| `*.foo.com` | `foo.com` | 不匹配,通配符仅覆盖了一个 DNS 标签 |
|
||||
|
||||
{{< codenew file="service/networking/ingress-wildcard-host.yaml" >}}
|
||||
{{% code file="service/networking/ingress-wildcard-host.yaml" %}}
|
||||
|
||||
<!--
|
||||
## Ingress class
|
||||
|
@ -448,7 +447,7 @@ Ingress 可以由不同的控制器实现,通常使用不同的配置。
|
|||
每个 Ingress 应当指定一个类,也就是一个对 IngressClass 资源的引用。
|
||||
IngressClass 资源包含额外的配置,其中包括应当实现该类的控制器名称。
|
||||
|
||||
{{< codenew file="service/networking/external-lb.yaml" >}}
|
||||
{{% code file="service/networking/external-lb.yaml" %}}
|
||||
|
||||
<!--
|
||||
The `.spec.parameters` field of an IngressClass lets you reference another
|
||||
|
@ -653,7 +652,7 @@ default `IngressClass`:
|
|||
不过仍然[推荐](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do)
|
||||
设置默认的 `IngressClass`。
|
||||
|
||||
{{< codenew file="service/networking/default-ingressclass.yaml" >}}
|
||||
{{% code file="service/networking/default-ingressclass.yaml" %}}
|
||||
|
||||
<!--
|
||||
## Types of Ingress
|
||||
|
@ -671,7 +670,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
|
|||
现有的 Kubernetes 概念允许你暴露单个 Service (参见[替代方案](#alternatives))。
|
||||
你也可以通过指定无规则的**默认后端**来对 Ingress 进行此操作。
|
||||
|
||||
{{< codenew file="service/networking/test-ingress.yaml" >}}
|
||||
{{% code file="service/networking/test-ingress.yaml" %}}
|
||||
|
||||
<!--
|
||||
If you create it using `kubectl apply -f` you should be able to view the state
|
||||
|
@ -722,7 +721,7 @@ It would require an Ingress such as:
|
|||
-->
|
||||
这将需要一个如下所示的 Ingress:
|
||||
|
||||
{{< codenew file="service/networking/simple-fanout-example.yaml" >}}
|
||||
{{% code file="service/networking/simple-fanout-example.yaml" %}}
|
||||
|
||||
<!--
|
||||
When you create the Ingress with `kubectl apply -f`:
|
||||
|
@ -790,7 +789,7 @@ the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
|
|||
以下 Ingress 让后台负载均衡器基于
|
||||
[host 头部字段](https://tools.ietf.org/html/rfc7230#section-5.4)来路由请求。
|
||||
|
||||
{{< codenew file="service/networking/name-virtual-host-ingress.yaml" >}}
|
||||
{{% code file="service/networking/name-virtual-host-ingress.yaml" %}}
|
||||
|
||||
<!--
|
||||
If you create an Ingress resource without any hosts defined in the rules, then any
|
||||
|
@ -809,7 +808,7 @@ and `second.bar.com` to `service3`.
|
|||
例如,以下 Ingress 会将请求 `first.bar.com` 的流量路由到 `service1`,将请求
|
||||
`second.bar.com` 的流量路由到 `service2`,而所有其他流量都会被路由到 `service3`。
|
||||
|
||||
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
|
||||
{{% code file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}
|
||||
|
||||
<!--
|
||||
### TLS
|
||||
|
@ -868,7 +867,7 @@ section.
|
|||
因此,`tls` 字段中的 `hosts` 的取值需要与 `rules` 字段中的 `host` 完全匹配。
|
||||
{{< /note >}}
|
||||
|
||||
{{< codenew file="service/networking/tls-example-ingress.yaml" >}}
|
||||
{{% code file="service/networking/tls-example-ingress.yaml" %}}
|
||||
|
||||
<!--
|
||||
There is a gap between TLS features supported by various Ingress
|
||||
|
|
|
@ -340,7 +340,7 @@ An administrator can manually reclaim the volume with the following steps.
|
|||
|
||||
<!--
|
||||
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
|
||||
(such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
|
||||
(such as an AWS EBS or GCE PD volume) still exists after the PV is deleted.
|
||||
1. Manually clean up the data on the associated storage asset accordingly.
|
||||
1. Manually delete the associated storage asset.
|
||||
|
||||
|
@ -348,7 +348,7 @@ If you want to reuse the same storage asset, create a new PersistentVolume with
|
|||
the same storage asset definition.
|
||||
-->
|
||||
1. 删除 PersistentVolume 对象。与之相关的、位于外部基础设施中的存储资产
|
||||
(例如 AWS EBS、GCE PD、Azure Disk 或 Cinder 卷)在 PV 删除之后仍然存在。
|
||||
(例如 AWS EBS 或 GCE PD 卷)在 PV 删除之后仍然存在。
|
||||
1. 根据情况,手动清除所关联的存储资产上的数据。
|
||||
1. 手动删除所关联的存储资产。
|
||||
|
||||
|
@ -359,8 +359,8 @@ the same storage asset definition.
|
|||
|
||||
For volume plugins that support the `Delete` reclaim policy, deletion removes
|
||||
both the PersistentVolume object from Kubernetes, as well as the associated
|
||||
storage asset in the external infrastructure, such as an AWS EBS, GCE PD,
|
||||
Azure Disk, or Cinder volume. Volumes that were dynamically provisioned
|
||||
storage asset in the external infrastructure, such as an AWS EBS or GCE PD volume.
|
||||
Volumes that were dynamically provisioned
|
||||
inherit the [reclaim policy of their StorageClass](#reclaim-policy), which
|
||||
defaults to `Delete`. The administrator should configure the StorageClass
|
||||
according to users' expectations; otherwise, the PV must be edited or
|
||||
|
@ -370,8 +370,7 @@ patched after it is created. See
|
|||
#### 删除(Delete) {#delete}
|
||||
|
||||
对于支持 `Delete` 回收策略的卷插件,删除动作会将 PersistentVolume 对象从
|
||||
Kubernetes 中移除,同时也会从外部基础设施(如 AWS EBS、GCE PD、Azure Disk 或
|
||||
Cinder 卷)中移除所关联的存储资产。
|
||||
Kubernetes 中移除,同时也会从外部基础设施(如 AWS EBS 或 GCE PD 卷)中移除所关联的存储资产。
|
||||
动态制备的卷会继承[其 StorageClass 中设置的回收策略](#reclaim-policy),
|
||||
该策略默认为 `Delete`。管理员需要根据用户的期望来配置 StorageClass;
|
||||
否则 PV 卷被创建之后必须要被编辑或者修补。
|
||||
|
@ -616,15 +615,20 @@ the following types of volumes:
|
|||
-->
|
||||
现在,对扩充 PVC 申领的支持默认处于启用状态。你可以扩充以下类型的卷:
|
||||
|
||||
* azureDisk
|
||||
* azureFile
|
||||
* awsElasticBlockStore
|
||||
* cinder (已弃用)
|
||||
<!--
|
||||
* azureFile (deprecated)
|
||||
* {{< glossary_tooltip text="csi" term_id="csi" >}}
|
||||
* flexVolume (已弃用)
|
||||
* gcePersistentDisk
|
||||
* flexVolume (deprecated)
|
||||
* gcePersistentDisk (deprecated)
|
||||
* rbd
|
||||
* portworxVolume
|
||||
* portworxVolume (deprecated)
|
||||
-->
|
||||
* azureFile(已弃用)
|
||||
* {{< glossary_tooltip text="csi" term_id="csi" >}}
|
||||
* flexVolume(已弃用)
|
||||
* gcePersistentDisk(已弃用)
|
||||
* rbd
|
||||
* portworxVolume(已弃用)
|
||||
|
||||
<!--
|
||||
You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.
|
||||
|
@ -885,14 +889,8 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插
|
|||
The following types of PersistentVolume are deprecated.
|
||||
This means that support is still available but will be removed in a future Kubernetes release.
|
||||
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
|
||||
(**deprecated** in v1.17)
|
||||
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
(**deprecated** in v1.19)
|
||||
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
|
||||
(**deprecated** in v1.21)
|
||||
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
|
||||
(**deprecated** in v1.18)
|
||||
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
|
||||
(**deprecated** in v1.23)
|
||||
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
|
||||
|
@ -904,13 +902,8 @@ This means that support is still available but will be removed in a future Kuber
|
|||
-->
|
||||
以下的持久卷已被弃用。这意味着当前仍是支持的,但是 Kubernetes 将来的发行版会将其移除。
|
||||
|
||||
* [`awsElasticBlockStore`](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore) - AWS 弹性块存储(EBS)
|
||||
(于 v1.17 **弃用**)
|
||||
* [`azureDisk`](/zh-cn/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
(于 v1.19 **弃用**)
|
||||
* [`azureFile`](/zh-cn/docs/concepts/storage/volumes/#azurefile) - Azure File
|
||||
(于 v1.21 **弃用**)
|
||||
* [`cinder`](/zh-cn/docs/concepts/storage/volumes/#cinder) - Cinder(OpenStack 块存储)(于 v1.18 **弃用**)
|
||||
* [`flexVolume`](/zh-cn/docs/concepts/storage/volumes/#flexVolume) - FlexVolume (于 v1.23 **弃用**)
|
||||
* [`gcePersistentDisk`](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
|
||||
(于 v1.17 **弃用**)
|
||||
|
@ -922,6 +915,12 @@ This means that support is still available but will be removed in a future Kuber
|
|||
<!--
|
||||
Older versions of Kubernetes also supported the following in-tree PersistentVolume types:
|
||||
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
|
||||
(**not available** in v1.27)
|
||||
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
(**not available** in v1.27)
|
||||
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
|
||||
(**not available** in v1.26)
|
||||
* `photonPersistentDisk` - Photon controller persistent disk.
|
||||
(**not available** starting v1.15)
|
||||
* [`scaleIO`](/docs/concepts/storage/volumes/#scaleio) - ScaleIO volume
|
||||
|
@ -936,6 +935,12 @@ Older versions of Kubernetes also supported the following in-tree PersistentVolu
|
|||
|
||||
旧版本的 Kubernetes 仍支持这些“树内(In-Tree)”持久卷类型:
|
||||
|
||||
* [`awsElasticBlockStore`](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
|
||||
(v1.27 开始**不可用**)
|
||||
* [`azureDisk`](/zh-cn/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
(v1.27 开始**不可用**)
|
||||
* [`cinder`](/zh-cn/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
|
||||
(v1.27 开始**不可用**)
|
||||
* `photonPersistentDisk` - Photon 控制器持久化盘。(从 v1.15 版本开始将**不可用**)
|
||||
* [`scaleIO`](/zh-cn/docs/concepts/storage/volumes/#scaleio) - ScaleIO 卷(v1.21 之后**不可用**)
|
||||
* [`flocker`](/zh-cn/docs/concepts/storage/volumes/#flocker) - Flocker 存储
|
||||
|
@ -1158,16 +1163,27 @@ Kubernetes 使用卷访问模式来匹配 PersistentVolumeClaim 和 PersistentVo
|
|||
> 模式挂载,或者被多个节点以 ReadOnlyMany 模式挂载,但不可以同时以两种模式挂载。
|
||||
|
||||
<!--
|
||||
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany|
|
||||
| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
|
||||
| :--- | :---: | :---: | :---: | - |
|
||||
| AzureFile | ✓ | ✓ | ✓ | - |
|
||||
| CephFS | ✓ | ✓ | ✓ | - |
|
||||
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
|
||||
| FC | ✓ | ✓ | - | - |
|
||||
| FlexVolume | ✓ | ✓ | depends on the driver | - |
|
||||
| GCEPersistentDisk | ✓ | ✓ | - | - |
|
||||
| Glusterfs | ✓ | ✓ | ✓ | - |
|
||||
| HostPath | ✓ | - | - | - |
|
||||
| iSCSI | ✓ | ✓ | - | - |
|
||||
| NFS | ✓ | ✓ | ✓ | - |
|
||||
| RBD | ✓ | ✓ | - | - |
|
||||
| VsphereVolume | ✓ | - | - (works when Pods are collocated) | - |
|
||||
| PortworxVolume | ✓ | - | ✓ | - | - |
|
||||
-->
|
||||
|
||||
| 卷插件 | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
|
||||
| :--- | :---: | :---: | :---: | - |
|
||||
| AWSElasticBlockStore | ✓ | - | - | - |
|
||||
| AzureFile | ✓ | ✓ | ✓ | - |
|
||||
| AzureDisk | ✓ | - | - | - |
|
||||
| CephFS | ✓ | ✓ | ✓ | - |
|
||||
| Cinder | ✓ | - | ([如果多次挂接卷可用](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/features.md#multi-attach-volumes)) | - |
|
||||
| CSI | 取决于驱动 | 取决于驱动 | 取决于驱动 | 取决于驱动 |
|
||||
| FC | ✓ | ✓ | - | - |
|
||||
| FlexVolume | ✓ | ✓ | 取决于驱动 | - |
|
||||
|
@ -1213,11 +1229,9 @@ Current reclaim policies are:
|
|||
|
||||
* Retain -- manual reclamation
|
||||
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
|
||||
* Delete -- associated storage asset such as AWS EBS, GCE PD, Azure Disk,
|
||||
or OpenStack Cinder volume is deleted
|
||||
* Delete -- associated storage asset such as AWS EBS or GCE PD volume is deleted
|
||||
|
||||
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk,
|
||||
and Cinder volumes support deletion.
|
||||
Currently, only NFS and HostPath support recycling. AWS EBS and GCE PD volumes support deletion.
|
||||
-->
|
||||
### 回收策略 {#reclaim-policy}
|
||||
|
||||
|
@ -1225,10 +1239,10 @@ and Cinder volumes support deletion.
|
|||
|
||||
* Retain -- 手动回收
|
||||
* Recycle -- 基本擦除 (`rm -rf /thevolume/*`)
|
||||
* Delete -- 诸如 AWS EBS、GCE PD、Azure Disk 或 OpenStack Cinder 卷这类关联存储资产也被删除
|
||||
* Delete -- 诸如 AWS EBS 或 GCE PD 卷这类关联存储资产也被删除
|
||||
|
||||
目前,仅 NFS 和 HostPath 支持回收(Recycle)。
|
||||
AWS EBS、GCE PD、Azure Disk 和 Cinder 卷都支持删除(Delete)。
|
||||
AWS EBS 和 GCE PD 卷支持删除(Delete)。
|
||||
|
||||
<!--
|
||||
### Mount Options
|
||||
|
@ -1252,11 +1266,8 @@ The following volume types support mount options:
|
|||
-->
|
||||
以下卷类型支持挂载选项:
|
||||
|
||||
* `awsElasticBlockStore`
|
||||
* `azureDisk`
|
||||
* `azureFile`
|
||||
* `cephfs`
|
||||
* `cinder`(于 v1.18 **弃用**)
|
||||
* `gcePersistentDisk`
|
||||
* `iscsi`
|
||||
* `nfs`
|
||||
|
@ -1285,15 +1296,11 @@ it will become fully deprecated in a future Kubernetes release.
|
|||
{{< note >}}
|
||||
<!--
|
||||
For most volume types, you do not need to set this field. It is automatically
|
||||
populated for [AWS EBS](/docs/concepts/storage/volumes/#awselasticblockstore),
|
||||
[GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) and
|
||||
[Azure Disk](/docs/concepts/storage/volumes/#azuredisk) volume block types. You
|
||||
need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
|
||||
populated for [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) volume block types.
|
||||
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
|
||||
-->
|
||||
对大多数类型的卷而言,你不需要设置节点亲和性字段。
|
||||
[AWS EBS](/zh-cn/docs/concepts/storage/volumes/#awselasticblockstore)、
|
||||
[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 和
|
||||
[Azure Disk](/zh-cn/docs/concepts/storage/volumes/#azuredisk) 卷类型都能自动设置相关字段。
|
||||
[GCE PD](/zh-cn/docs/concepts/storage/volumes/#gcepersistentdisk) 卷类型能自动设置相关字段。
|
||||
你需要为 [local](/zh-cn/docs/concepts/storage/volumes/#local) 卷显式地设置此属性。
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -1644,14 +1651,20 @@ applicable:
|
|||
-->
|
||||
以下卷插件支持原始块卷,包括其动态制备(如果支持的话)的卷:
|
||||
|
||||
* AWSElasticBlockStore
|
||||
* AzureDisk
|
||||
<!--
|
||||
* CSI
|
||||
* FC (Fibre Channel)
|
||||
* GCEPersistentDisk
|
||||
* iSCSI
|
||||
* Local volume
|
||||
* RBD (Ceph Block Device)
|
||||
* VsphereVolume
|
||||
-->
|
||||
* CSI
|
||||
* FC(光纤通道)
|
||||
* GCEPersistentDisk
|
||||
* iSCSI
|
||||
* Local 卷
|
||||
* OpenStack Cinder
|
||||
* RBD(Ceph 块设备)
|
||||
* VsphereVolume
|
||||
|
||||
|
|
|
@ -38,9 +38,9 @@ systems.
|
|||
-->
|
||||
## 介绍 {#introduction}
|
||||
|
||||
StorageClass 为管理员提供了描述存储 "类" 的方法。
|
||||
StorageClass 为管理员提供了描述存储"类"的方法。
|
||||
不同的类型可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。
|
||||
Kubernetes 本身并不清楚各种类代表的什么。这个类的概念在其他存储系统中有时被称为 "配置文件"。
|
||||
Kubernetes 本身并不清楚各种类代表的什么。这个类的概念在其他存储系统中有时被称为"配置文件"。
|
||||
|
||||
<!--
|
||||
## The StorageClass Resource
|
||||
|
@ -61,7 +61,8 @@ of a class when first creating StorageClass objects, and the objects cannot
|
|||
be updated once they are created.
|
||||
-->
|
||||
StorageClass 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。
|
||||
当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数,一旦创建了对象就不能再对其更新。
|
||||
当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数,
|
||||
一旦创建了对象就不能再对其更新。
|
||||
|
||||
<!--
|
||||
Administrators can specify a default StorageClass only for PVCs that don't
|
||||
|
@ -127,11 +128,8 @@ for provisioning PVs. This field must be specified.
|
|||
|
||||
| 卷插件 | 内置制备器 | 配置示例 |
|
||||
| :------------------- | :--------: | :-----------------------------------: |
|
||||
| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) |
|
||||
| AzureFile | ✓ | [Azure File](#azure-file) |
|
||||
| AzureDisk | ✓ | [Azure Disk](#azure-disk) |
|
||||
| CephFS | - | - |
|
||||
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) |
|
||||
| FC | - | - |
|
||||
| FlexVolume | - | - |
|
||||
| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) |
|
||||
|
@ -220,11 +218,8 @@ Volume type | Required Kubernetes version
|
|||
| 卷类型 | Kubernetes 版本要求 |
|
||||
| :------------------- | :------------------------ |
|
||||
| gcePersistentDisk | 1.11 |
|
||||
| awsElasticBlockStore | 1.11 |
|
||||
| Cinder | 1.11 |
|
||||
| rbd | 1.11 |
|
||||
| Azure File | 1.11 |
|
||||
| Azure Disk | 1.11 |
|
||||
| Portworx | 1.11 |
|
||||
| FlexVolume | 1.13 |
|
||||
| CSI | 1.14 (alpha), 1.16 (beta) |
|
||||
|
@ -302,15 +297,11 @@ PersistentVolume 会根据 Pod 调度约束指定的拓扑来选择或制备。
|
|||
<!--
|
||||
The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
|
||||
|
||||
- [AWSElasticBlockStore](#aws-ebs)
|
||||
- [GCEPersistentDisk](#gce-pd)
|
||||
- [AzureDisk](#azure-disk)
|
||||
-->
|
||||
以下插件支持动态制备的 `WaitForFirstConsumer` 模式:
|
||||
以下插件支持动态制备的 `WaitForFirstConsumer` 模式:
|
||||
|
||||
- [AWSElasticBlockStore](#aws-ebs)
|
||||
- [GCEPersistentDisk](#gce-pd)
|
||||
- [AzureDisk](#azure-disk)
|
||||
|
||||
<!--
|
||||
The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:
|
||||
|
@ -429,7 +420,7 @@ Storage Classes 的参数描述了存储类的卷。取决于制备器,可以
|
|||
例如,参数 type 的值 io1 和参数 iopsPerGB 特定于 EBS PV。
|
||||
当参数被省略时,会使用默认值。
|
||||
|
||||
一个 StorageClass 最多可以定义 512 个参数。这些参数对象的总长度不能超过 256 KiB, 包括参数的键和值。
|
||||
一个 StorageClass 最多可以定义 512 个参数。这些参数对象的总长度不能超过 256 KiB,包括参数的键和值。
|
||||
|
||||
### AWS EBS
|
||||
|
||||
|
@ -469,12 +460,12 @@ parameters:
|
|||
encrypting the volume. If none is supplied but `encrypted` is true, a key is
|
||||
generated by AWS. See AWS docs for valid ARN value.
|
||||
-->
|
||||
- `type`:`io1`,`gp2`,`sc1`,`st1`。详细信息参见
|
||||
- `type`:`io1`、`gp2`、`sc1`、`st1`。详细信息参见
|
||||
[AWS 文档](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)。默认值:`gp2`。
|
||||
- `zone`(弃用):AWS 区域。如果没有指定 `zone` 和 `zones`,
|
||||
- `zone`(已弃用):AWS 区域。如果没有指定 `zone` 和 `zones`,
|
||||
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。
|
||||
`zone` 和 `zones` 参数不能同时使用。
|
||||
- `zones`(弃用):以逗号分隔的 AWS 区域列表。
|
||||
- `zones`(已弃用):以逗号分隔的 AWS 区域列表。
|
||||
如果没有指定 `zone` 和 `zones`,通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。
|
||||
`zone`和`zones`参数不能同时使用。
|
||||
- `iopsPerGB`:只适用于 `io1` 卷。每 GiB 每秒 I/O 操作。
|
||||
|
@ -484,7 +475,7 @@ parameters:
|
|||
这里需要输入一个字符串,即 `"10"`,而不是 `10`。
|
||||
- `fsType`:受 Kubernetes 支持的文件类型。默认值:`"ext4"`。
|
||||
- `encrypted`:指定 EBS 卷是否应该被加密。合法值为 `"true"` 或者 `"false"`。
|
||||
这里需要输入字符串,即 `"true"`, 而非 `true`。
|
||||
这里需要输入字符串,即 `"true"`,而非 `true`。
|
||||
- `kmsKeyId`:可选。加密卷时使用密钥的完整 Amazon 资源名称。
|
||||
如果没有提供,但 `encrypted` 值为 true,AWS 生成一个密钥。关于有效的 ARN 值,请参阅 AWS 文档。
|
||||
|
||||
|
@ -524,13 +515,13 @@ parameters:
|
|||
- `replication-type`: `none` or `regional-pd`. Default: `none`.
|
||||
-->
|
||||
- `type`:`pd-standard` 或者 `pd-ssd`。默认:`pd-standard`
|
||||
- `zone`(弃用):GCE 区域。如果没有指定 `zone` 和 `zones`,
|
||||
- `zone`(已弃用):GCE 区域。如果没有指定 `zone` 和 `zones`,
|
||||
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度分配。
|
||||
`zone` 和 `zones` 参数不能同时使用。
|
||||
- `zones`(弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`,
|
||||
- `zones`(已弃用):逗号分隔的 GCE 区域列表。如果没有指定 `zone` 和 `zones`,
|
||||
通常卷会在 Kubernetes 集群节点所在的活动区域中轮询调度(round-robin)分配。
|
||||
`zone` 和 `zones` 参数不能同时使用。
|
||||
- `fstype`:`ext4` 或 `xfs`。 默认:`ext4`。宿主机操作系统必须支持所定义的文件系统类型。
|
||||
- `fstype`:`ext4` 或 `xfs`。默认:`ext4`。宿主机操作系统必须支持所定义的文件系统类型。
|
||||
- `replication-type`:`none` 或者 `regional-pd`。默认值:`none`。
|
||||
|
||||
<!--
|
||||
|
@ -554,8 +545,8 @@ using `allowedTopologies`.
|
|||
|
||||
强烈建议设置 `volumeBindingMode: WaitForFirstConsumer`,这样设置后,
|
||||
当你创建一个 Pod,它使用的 PersistentVolumeClaim 使用了这个 StorageClass,
|
||||
区域性持久化磁盘会在两个区域里制备。 其中一个区域是 Pod 所在区域。
|
||||
另一个区域是会在集群管理的区域中任意选择。磁盘区域可以通过 `allowedTopologies` 加以限制。
|
||||
区域性持久化磁盘会在两个区域里制备。其中一个区域是 Pod 所在区域,
|
||||
另一个区域是会在集群管理的区域中任意选择,磁盘区域可以通过 `allowedTopologies` 加以限制。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
@ -611,34 +602,6 @@ Kubernetes 不包含内部 NFS 驱动。你需要使用外部驱动为 NFS 创
|
|||
- [NFS Ganesha 服务器和外部驱动](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
|
||||
- [NFS subdir 外部驱动](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
|
||||
|
||||
### OpenStack Cinder
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: gold
|
||||
provisioner: kubernetes.io/cinder
|
||||
parameters:
|
||||
availability: nova
|
||||
```
|
||||
|
||||
<!--
|
||||
- `availability`: Availability Zone. If not specified, volumes are generally
|
||||
round-robin-ed across all active zones where Kubernetes cluster has a node.
|
||||
-->
|
||||
- `availability`:可用区域。如果没有指定,通常卷会在 Kubernetes 集群节点所在的活动区域中轮转调度。
|
||||
|
||||
{{< note >}}
|
||||
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
|
||||
<!--
|
||||
This internal provisioner of OpenStack is deprecated. Please use
|
||||
[the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
|
||||
-->
|
||||
OpenStack 的内部驱动已经被弃用。请使用
|
||||
[OpenStack 的外部云驱动](https://github.com/kubernetes/cloud-provider-openstack)。
|
||||
{{< /note >}}
|
||||
|
||||
### vSphere {#vsphere}
|
||||
|
||||
<!--
|
||||
|
@ -681,7 +644,7 @@ The following examples use the VMware Cloud Provider (vCP) StorageClass provisio
|
|||
-->
|
||||
#### vCP 制备器 {#vcp-provisioner}
|
||||
|
||||
以下示例使用 VMware Cloud Provider (vCP) StorageClass 制备器。
|
||||
以下示例使用 VMware Cloud Provider(vCP)StorageClass 制备器。
|
||||
|
||||
<!--
|
||||
1. Create a StorageClass with a user specified disk format.
|
||||
|
@ -939,7 +902,7 @@ parameters:
|
|||
当 `kind` 的值是 `shared` 时,所有非托管磁盘都在集群的同一个资源组中的几个共享存储帐户中创建。
|
||||
当 `kind` 的值是 `dedicated` 时,将为在集群的同一个资源组中新的非托管磁盘创建新的专用存储帐户。
|
||||
- `resourceGroup`:指定要创建 Azure 磁盘所属的资源组。必须是已存在的资源组名称。
|
||||
若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。
|
||||
若未指定资源组,磁盘会默认放入与当前 Kubernetes 集群相同的资源组中。
|
||||
<!--
|
||||
* Premium VM can attach both Standard_LRS and Premium_LRS disks, while Standard
|
||||
VM can only attach Standard_LRS disks.
|
||||
|
|
|
@ -31,7 +31,7 @@ respect those limits. Otherwise, Pods scheduled on a Node could get stuck
|
|||
waiting for volumes to attach.
|
||||
-->
|
||||
谷歌、亚马逊和微软等云供应商通常对可以关联到节点的卷数量进行限制。
|
||||
Kubernetes 需要尊重这些限制。 否则,在节点上调度的 Pod 可能会卡住去等待卷的关联。
|
||||
Kubernetes 需要尊重这些限制。否则,在节点上调度的 Pod 可能会卡住去等待卷的关联。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -108,7 +108,7 @@ Dynamic volume limits are supported for following volume types.
|
|||
For volumes managed by in-tree volume plugins, Kubernetes automatically determines the Node
|
||||
type and enforces the appropriate maximum number of volumes for the node. For example:
|
||||
-->
|
||||
对于由内建插件管理的卷,Kubernetes 会自动确定节点类型并确保节点上可关联的卷数目合规。 例如:
|
||||
对于由内建插件管理的卷,Kubernetes 会自动确定节点类型并确保节点上可关联的卷数目合规。例如:
|
||||
|
||||
<!--
|
||||
* On
|
||||
|
@ -129,20 +129,18 @@ Refer to the [CSI specifications](https://github.com/container-storage-interface
|
|||
* For volumes managed by in-tree plugins that have been migrated to a CSI driver, the maximum number of volumes will be the one reported by the CSI driver.
|
||||
-->
|
||||
* 在
|
||||
<a href="https://cloud.google.com/compute/">Google Compute Engine</a>环境中,
|
||||
[根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将127个卷关联到节点。
|
||||
<a href="https://cloud.google.com/compute/">Google Compute Engine</a>环境中,
|
||||
[根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将 127 个卷关联到节点。
|
||||
|
||||
* 对于 M5、C5、R5、T3 和 Z1D 类型实例的 Amazon EBS 磁盘,Kubernetes 仅允许 25 个卷关联到节点。
|
||||
对于 ec2 上的其他实例类型
|
||||
<a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a>,
|
||||
Kubernetes 允许 39 个卷关联至节点。
|
||||
对于 ec2 上的其他实例类型
|
||||
<a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a>,
|
||||
Kubernetes 允许 39 个卷关联至节点。
|
||||
|
||||
* 在 Azure 环境中, 根据节点类型,最多 64 个磁盘可以关联至一个节点。
|
||||
更多详细信息,请参阅[Azure 虚拟机的数量大小](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes)。
|
||||
更多详细信息,请参阅 [Azure 虚拟机的数量大小](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes)。
|
||||
|
||||
* 如果 CSI 存储驱动程序(使用 `NodeGetInfo` )为节点通告卷数上限,则 {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} 将遵守该限制值。
|
||||
参考 [CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo) 获取更多详细信息。
|
||||
参考 [CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo) 获取更多详细信息。
|
||||
|
||||
* 对于由已迁移到 CSI 驱动程序的树内插件管理的卷,最大卷数将是 CSI 驱动程序报告的卷数。
|
||||
|
||||
|
||||
|
|
|
@ -25,8 +25,8 @@ and report them as events on {{< glossary_tooltip text="PVCs" term_id="persisten
|
|||
or {{< glossary_tooltip text="Pods" term_id="pod" >}}.
|
||||
-->
|
||||
{{< glossary_tooltip text="CSI" term_id="csi" >}} 卷健康监测支持 CSI 驱动从底层的存储系统着手,
|
||||
探测异常的卷状态,并以事件的形式上报到 {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}
|
||||
或 {{< glossary_tooltip text="Pods" term_id="pod" >}}.
|
||||
探测异常的卷状态,并以事件的形式上报到 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}
|
||||
或 {{< glossary_tooltip text="Pod" term_id="pod" >}}.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -46,7 +46,7 @@ an event will be reported on the related
|
|||
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC)
|
||||
when an abnormal volume condition is detected on a CSI volume.
|
||||
-->
|
||||
Kubernetes _卷健康监测_ 是 Kubernetes 容器存储接口(CSI)实现的一部分。
|
||||
Kubernetes **卷健康监测**是 Kubernetes 容器存储接口(CSI)实现的一部分。
|
||||
卷健康监测特性由两个组件实现:外部健康监测控制器和 {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}。
|
||||
|
||||
如果 CSI 驱动器通过控制器的方式支持卷健康监测特性,那么只要在 CSI 卷上监测到异常卷状态,就会在
|
||||
|
@ -79,7 +79,8 @@ is healthy. For more information, please check
|
|||
此外,卷运行状况信息作为 Kubelet VolumeStats 指标公开。
|
||||
添加了一个新的指标 kubelet_volume_stats_health_status_abnormal。
|
||||
该指标包括两个标签:`namespace` 和 `persistentvolumeclaim`。
|
||||
计数为 1 或 0。1 表示卷不正常,0 表示卷正常。更多信息请访问[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes)。
|
||||
计数为 1 或 0。1 表示卷不正常,0 表示卷正常。更多信息请访问
|
||||
[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes)。
|
||||
|
||||
<!--
|
||||
You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
|
|
|
@ -77,7 +77,7 @@ This example CronJob manifest prints the current time and a hello message every
|
|||
|
||||
下面的 CronJob 示例清单会在每分钟打印出当前时间和问候消息:
|
||||
|
||||
{{< codenew file="application/job/cronjob.yaml" >}}
|
||||
{{% code file="application/job/cronjob.yaml" %}}
|
||||
|
||||
<!--
|
||||
([Running Automated Tasks with a CronJob](/docs/tasks/job/automated-tasks-with-cron-jobs/)
|
||||
|
|
|
@ -73,7 +73,7 @@ describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
|
|||
你可以在 YAML 文件中描述 DaemonSet。
|
||||
例如,下面的 daemonset.yaml 文件描述了一个运行 fluentd-elasticsearch Docker 镜像的 DaemonSet:
|
||||
|
||||
{{< codenew file="controllers/daemonset.yaml" >}}
|
||||
{{% code file="controllers/daemonset.yaml" %}}
|
||||
|
||||
<!--
|
||||
Create a DaemonSet based on the YAML file:
|
||||
|
|
|
@ -73,7 +73,7 @@ It takes around 10s to complete.
|
|||
下面是一个 Job 配置示例。它负责计算 π 到小数点后 2000 位,并将结果打印出来。
|
||||
此计算大约需要 10 秒钟完成。
|
||||
|
||||
{{< codenew file="controllers/job.yaml" >}}
|
||||
{{% code file="controllers/job.yaml" %}}
|
||||
|
||||
<!--
|
||||
You can run the example with this command:
|
||||
|
@ -692,7 +692,7 @@ Here is a manifest for a Job that defines a `podFailurePolicy`:
|
|||
-->
|
||||
下面是一个定义了 `podFailurePolicy` 的 Job 的清单:
|
||||
|
||||
{{< codenew file="/controllers/job-pod-failure-policy-example.yaml" >}}
|
||||
{{% code file="/controllers/job-pod-failure-policy-example.yaml" %}}
|
||||
|
||||
<!--
|
||||
In the example above, the first rule of the Pod failure policy specifies that
|
||||
|
@ -1443,52 +1443,17 @@ mismatch.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The control plane doesn't track Jobs using finalizers, if the Jobs were created
|
||||
when the feature gate `JobTrackingWithFinalizers` was disabled, even after you
|
||||
upgrade the control plane to 1.26.
|
||||
-->
|
||||
如果 Job 是在特性门控 `JobTrackingWithFinalizers` 被禁用时创建的,即使你将控制面升级到 1.26,
|
||||
控制面也不会使用 Finalizer 跟踪 Job。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
The control plane keeps track of the Pods that belong to any Job and notices if
|
||||
any such Pod is removed from the API server. To do that, the Job controller
|
||||
creates Pods with the finalizer `batch.kubernetes.io/job-tracking`. The
|
||||
controller removes the finalizer only after the Pod has been accounted for in
|
||||
the Job status, allowing the Pod to be removed by other controllers or users.
|
||||
|
||||
Jobs created before upgrading to Kubernetes 1.26 or before the feature gate
|
||||
`JobTrackingWithFinalizers` is enabled are tracked without the use of Pod
|
||||
finalizers.
|
||||
The Job {{< glossary_tooltip term_id="controller" text="controller" >}} updates
|
||||
the status counters for `succeeded` and `failed` Pods based only on the Pods
|
||||
that exist in the cluster. The contol plane can lose track of the progress of
|
||||
the Job if Pods are deleted from the cluster.
|
||||
-->
|
||||
控制面会跟踪属于任何 Job 的 Pod,并通知是否有任何这样的 Pod 被从 API 服务器中移除。
|
||||
为了实现这一点,Job 控制器创建的 Pod 带有 Finalizer `batch.kubernetes.io/job-tracking`。
|
||||
控制器只有在 Pod 被记入 Job 状态后才会移除 Finalizer,允许 Pod 可以被其他控制器或用户移除。
|
||||
|
||||
在升级到 Kubernetes 1.26 之前或在启用特性门控 `JobTrackingWithFinalizers`
|
||||
之前创建的 Job 被跟踪时不使用 Pod Finalizer。
|
||||
Job {{< glossary_tooltip term_id="controller" text="控制器" >}}仅根据集群中存在的 Pod
|
||||
更新 `succeeded` 和 `failed` Pod 的状态计数器。如果 Pod 被从集群中删除,控制面可能无法跟踪 Job 的进度。
|
||||
|
||||
<!--
|
||||
You can determine if the control plane is tracking a Job using Pod finalizers by
|
||||
checking if the Job has the annotation
|
||||
`batch.kubernetes.io/job-tracking`. You should **not** manually add or remove
|
||||
this annotation from Jobs. Instead, you can recreate the Jobs to ensure they
|
||||
are tracked using Pod finalizers.
|
||||
-->
|
||||
你可以根据检查 Job 是否含有 `batch.kubernetes.io/job-tracking` 注解,
|
||||
来确定控制面是否正在使用 Pod Finalizer 追踪 Job。
|
||||
你**不**应该给 Job 手动添加或删除该注解。
|
||||
取而代之的是你可以重新创建 Job 以确保使用 Pod Finalizer 跟踪这些 Job。
|
||||
|
||||
<!--
|
||||
### Elastic Indexed Jobs
|
||||
-->
|
||||
|
|
|
@ -113,7 +113,7 @@ Deployment,并在 spec 部分定义你的应用。
|
|||
-->
|
||||
## 示例 {#example}
|
||||
|
||||
{{< codenew file="controllers/frontend.yaml" >}}
|
||||
{{% code file="controllers/frontend.yaml" %}}
|
||||
|
||||
<!--
|
||||
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will
|
||||
|
@ -263,7 +263,7 @@ Pod,它还可以像前面小节中所描述的那样获得其他 Pod。
|
|||
|
||||
以前面的 frontend ReplicaSet 为例,并在以下清单中指定这些 Pod:
|
||||
|
||||
{{< codenew file="pods/pod-rs.yaml" >}}
|
||||
{{% code file="pods/pod-rs.yaml" %}}
|
||||
|
||||
<!--
|
||||
As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
|
||||
|
@ -668,7 +668,7 @@ ReplicaSet 也可以作为[水平的 Pod 扩缩器 (HPA)](/zh-cn/docs/tasks/run-
|
|||
的目标。也就是说,ReplicaSet 可以被 HPA 自动扩缩。
|
||||
以下是 HPA 以我们在前一个示例中创建的副本集为目标的示例。
|
||||
|
||||
{{< codenew file="controllers/hpa-rs.yaml" >}}
|
||||
{{% code file="controllers/hpa-rs.yaml" %}}
|
||||
|
||||
<!--
|
||||
Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
|
||||
|
|
|
@ -2,6 +2,9 @@
|
|||
title: ReplicationController
|
||||
content_type: concept
|
||||
weight: 90
|
||||
description: >-
|
||||
用于管理可水平扩展的工作负载的旧版 API。
|
||||
被 Deployment 和 ReplicaSet API 取代。
|
||||
---
|
||||
|
||||
<!--
|
||||
|
@ -11,6 +14,9 @@ reviewers:
|
|||
title: ReplicationController
|
||||
content_type: concept
|
||||
weight: 90
|
||||
description: >-
|
||||
Legacy API for managing workloads that can scale horizontally.
|
||||
Superseded by the Deployment and ReplicaSet APIs.
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
@ -34,7 +40,7 @@ always up and available.
|
|||
<!-- body -->
|
||||
|
||||
<!--
|
||||
## How a ReplicationController Works
|
||||
## How a ReplicationController works
|
||||
|
||||
If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the
|
||||
ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a
|
||||
|
@ -75,7 +81,7 @@ This example ReplicationController config runs three copies of the nginx web ser
|
|||
|
||||
这个示例 ReplicationController 配置运行 nginx Web 服务器的三个副本。
|
||||
|
||||
{{< codenew file="controllers/replication.yaml" >}}
|
||||
{{% code file="controllers/replication.yaml" %}}
|
||||
|
||||
<!--
|
||||
Run the example job by downloading the example file and then running this command:
|
||||
|
@ -608,4 +614,3 @@ ReplicationController。
|
|||
- 了解 [Depolyment](/zh-cn/docs/concepts/workloads/controllers/deployment/),ReplicationController 的替代品。
|
||||
- `ReplicationController` 是 Kubernetes REST API 的一部分,阅读 {{< api-reference page="workload-resources/replication-controller-v1" >}}
|
||||
对象定义以了解 replication controllers 的 API。
|
||||
|
||||
|
|
|
@ -85,17 +85,16 @@ Pod 类似于共享名字空间并共享文件系统卷的一组容器。
|
|||
## Using Pods
|
||||
|
||||
The following is an example of a Pod which consists of a container running the image `nginx:1.14.2`.
|
||||
|
||||
{{< codenew file="pods/simple-pod.yaml" >}}
|
||||
|
||||
To create the Pod shown above, run the following command:
|
||||
-->
|
||||
## 使用 Pod {#using-pods}
|
||||
|
||||
下面是一个 Pod 示例,它由一个运行镜像 `nginx:1.14.2` 的容器组成。
|
||||
|
||||
{{< codenew file="pods/simple-pod.yaml" >}}
|
||||
{{% code file="pods/simple-pod.yaml" %}}
|
||||
|
||||
<!--
|
||||
To create the Pod shown above, run the following command:
|
||||
-->
|
||||
要创建上面显示的 Pod,请运行以下命令:
|
||||
|
||||
```shell
|
||||
|
@ -115,10 +114,9 @@ Pod 通常不是直接创建的,而是使用工作负载资源创建的。
|
|||
### 用于管理 Pod 的工作负载资源 {#workload-resources-for-managing-pods}
|
||||
|
||||
<!--
|
||||
Usually you don't need to create Pods directly, even singleton Pods.
|
||||
Instead, create them using workload resources such as {{< glossary_tooltip text="Deployment"
|
||||
Usually you don't need to create Pods directly, even singleton Pods. Instead, create them using workload resources such as {{< glossary_tooltip text="Deployment"
|
||||
term_id="deployment" >}} or {{< glossary_tooltip text="Job" term_id="job" >}}.
|
||||
If your Pods need to track state, consider the
|
||||
If your Pods need to track state, consider the
|
||||
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} resource.
|
||||
|
||||
Pods in a Kubernetes cluster are used in two main ways:
|
||||
|
@ -209,9 +207,7 @@ that updates those files from a remote source, as in the following diagram:
|
|||
{{< figure src="/zh-cn/docs/images/pod.svg" alt="Pod 创建示意图" class="diagram-medium" >}}
|
||||
|
||||
<!--
|
||||
Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}}
|
||||
as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
|
||||
Init containers run and complete before the app containers are started.
|
||||
Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}} as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}. Init containers run and complete before the app containers are started.
|
||||
|
||||
Pods natively provide two kinds of shared resources for their constituent containers:
|
||||
[networking](#pod-networking) and [storage](#pod-storage).
|
||||
|
@ -272,7 +268,9 @@ Pod 的名称必须是一个合法的
|
|||
{{< feature-state state="stable" for_k8s_version="v1.25" >}}
|
||||
|
||||
<!--
|
||||
You should set the `.spec.os.name` field to either `windows` or `linux` to indicate the OS on which you want the pod to run. These two are the only operating systems supported for now by Kubernetes. In future, this list may be expanded.
|
||||
You should set the `.spec.os.name` field to either `windows` or `linux` to indicate the OS on
|
||||
which you want the pod to run. These two are the only operating systems supported for now by
|
||||
Kubernetes. In future, this list may be expanded.
|
||||
|
||||
In Kubernetes v{{< skew currentVersion >}}, the value you set for this field has no
|
||||
effect on {{< glossary_tooltip text="scheduling" term_id="kube-scheduler" >}} of the pods.
|
||||
|
@ -611,7 +609,7 @@ using the kubelet to supervise the individual [control plane components](/docs/c
|
|||
The kubelet automatically tries to create a {{< glossary_tooltip text="mirror Pod" term_id="mirror-pod" >}}
|
||||
on the Kubernetes API server for each static Pod.
|
||||
This means that the Pods running on a node are visible on the API server,
|
||||
but cannot be controlled from there.
|
||||
but cannot be controlled from there. See the guide [Create static Pods](/docs/tasks/configure-pod-container/static-pod) for more information.
|
||||
-->
|
||||
静态 Pod 通常绑定到某个节点上的 {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}。
|
||||
其主要用途是运行自托管的控制面。
|
||||
|
@ -621,6 +619,7 @@ but cannot be controlled from there.
|
|||
`kubelet` 自动尝试为每个静态 Pod 在 Kubernetes API
|
||||
服务器上创建一个{{< glossary_tooltip text="镜像 Pod" term_id="mirror-pod" >}}。
|
||||
这意味着在节点上运行的 Pod 在 API 服务器上是可见的,但不可以通过 API 服务器来控制。
|
||||
有关更多信息,请参阅[创建静态 Pod](/zh-cn/docs/tasks/configure-pod-container/static-pod) 的指南。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
@ -668,7 +667,7 @@ in the Pod Lifecycle documentation.
|
|||
The {{< api-reference page="workload-resources/pod-v1" >}}
|
||||
object definition describes the object in detail.
|
||||
* [The Distributed System Toolkit: Patterns for Composite Containers](/blog/2015/06/the-distributed-system-toolkit-patterns/) explains common layouts for Pods with more than one container.
|
||||
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
|
||||
* Read about [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
|
||||
-->
|
||||
* 了解 [Pod 生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。
|
||||
* 了解 [RuntimeClass](/zh-cn/docs/concepts/containers/runtime-class/),
|
||||
|
@ -690,8 +689,15 @@ To understand the context for why Kubernetes wraps a common Pod API in other res
|
|||
或 {{< glossary_tooltip text="Deployment" term_id="deployment" >}})
|
||||
封装通用的 Pod API,相关的背景信息可以在前人的研究中找到。具体包括:
|
||||
|
||||
<!--
|
||||
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
|
||||
* [Borg](https://research.google.com/pubs/pub43438.html)
|
||||
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
|
||||
* [Omega](https://research.google/pubs/pub41684/)
|
||||
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/)
|
||||
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).
|
||||
-->
|
||||
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
|
||||
* [Borg](https://research.google.com/pubs/pub43438.html)
|
||||
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
|
||||
* [Omega](https://research.google/pubs/pub41684/)
|
||||
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/)。
|
||||
|
|
|
@ -449,13 +449,14 @@ Renders to:
|
|||
{{< /tabs >}}
|
||||
|
||||
<!--
|
||||
### Source code files
|
||||
You can use the `{{</* codenew */>}}` shortcode to embed the contents of file
|
||||
## Source code files
|
||||
You can use the `{{</* code */>}}` shortcode to embed the contents of file
|
||||
in a code block to allow users to download or copy its content to their clipboard.
|
||||
This shortcode is used when the contents of the sample file is generic and reusable,
|
||||
and you want the users to try it out themselves.
|
||||
-->
|
||||
你可以使用 `{{</* codenew */>}}` 短代码将文件内容嵌入代码块中,
|
||||
## 源代码文件
|
||||
你可以使用 `{{</* code */>}}` 短代码将文件内容嵌入代码块中,
|
||||
以允许用户下载或复制其内容到他们的剪贴板。
|
||||
当示例文件的内容是通用的、可复用的,并且希望用户自己尝试使用示例文件时,
|
||||
可以使用此短代码。
|
||||
|
@ -475,7 +476,7 @@ For example:
|
|||
如果未提供 `language` 参数,短代码将尝试根据文件扩展名推测编程语言。
|
||||
|
||||
```none
|
||||
{{</* codenew language="yaml" file="application/deployment-scale.yaml" */>}}
|
||||
{{</* code language="yaml" file="application/deployment-scale.yaml" */>}}
|
||||
```
|
||||
|
||||
<!--
|
||||
|
@ -483,28 +484,35 @@ The output is:
|
|||
-->
|
||||
输出是:
|
||||
|
||||
{{< codenew language="yaml" file="application/deployment-scale.yaml" >}}
|
||||
{{< code language="yaml" file="application/deployment-scale.yaml" >}}
|
||||
|
||||
<!--
|
||||
When adding a new sample file, such as a YAML file, create the file in one
|
||||
of the `<LANG>/examples/` subdirectories where `<LANG>` is the language for
|
||||
the page. In the markdown of your page, use the `codenew` shortcode:
|
||||
the page. In the markdown of your page, use the `code` shortcode:
|
||||
-->
|
||||
添加新的示例文件(例如 YAML 文件)时,在 `<LANG>/examples/`
|
||||
子目录之一中创建该文件,其中 `<LANG>` 是页面的语言。
|
||||
在你的页面的 markdown 文本中,使用 `codenew` 短代码:
|
||||
在你的页面的 markdown 文本中,使用 `code` 短代码:
|
||||
|
||||
```none
|
||||
{{</* codenew file="<RELATIVE-PATH>/example-yaml>" */>}}
|
||||
{{</* code file="<RELATIVE-PATH>/example-yaml>" */>}}
|
||||
```
|
||||
|
||||
其中 `<RELATIVE-PATH>` 是要包含的示例文件的路径,相对于 `examples` 目录。
|
||||
以下短代码引用位于 `/content/en/examples/configmap/configmaps.yaml` 的 YAML 文件。
|
||||
|
||||
```none
|
||||
{{</* codenew file="configmap/configmaps.yaml" */>}}
|
||||
{{</* code file="configmap/configmaps.yaml" */>}}
|
||||
```
|
||||
|
||||
<!--
|
||||
The legacy `{{%/* codenew */%}}` shortcode is being replaced by `{{%/* code */%}}`.
|
||||
Use `{{%/* code */%}}` in new documentation.
|
||||
-->
|
||||
传统的 `{{%/* codenew */%}}` 短代码将被替换为 `{{%/* code */%}}`。
|
||||
在新文档中使用 `{{%/* code */%}}`。
|
||||
|
||||
<!--
|
||||
## Third party content marker
|
||||
-->
|
||||
|
|
|
@ -204,6 +204,11 @@ the `admissionregistration.k8s.io/v1alpha1` API.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller allows all pods into the cluster. It is **deprecated** because
|
||||
its behavior is the same as if there were no admission controller at all.
|
||||
|
@ -214,6 +219,11 @@ its behavior is the same as if there were no admission controller at all.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
Rejects all requests. AlwaysDeny is **deprecated** as it has no real meaning.
|
||||
-->
|
||||
|
@ -238,6 +248,11 @@ required.
|
|||
|
||||
### CertificateApproval {#certificateapproval}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller observes requests to approve CertificateSigningRequest resources and performs additional
|
||||
authorization checks to ensure the approving user has permission to **approve** certificate requests with the
|
||||
|
@ -256,6 +271,11 @@ information on the permissions required to perform different actions on Certific
|
|||
|
||||
### CertificateSigning {#certificatesigning}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources
|
||||
and performs an additional authorization checks to ensure the signing user has permission to **sign** certificate
|
||||
|
@ -274,6 +294,11 @@ information on the permissions required to perform different actions on Certific
|
|||
|
||||
### CertificateSubjectRestriction {#certificatesubjectrestriction}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller observes creation of CertificateSigningRequest resources that have a `spec.signerName`
|
||||
of `kubernetes.io/kube-apiserver-client`. It rejects any request that specifies a 'group' (or 'organization attribute')
|
||||
|
@ -285,6 +310,11 @@ CertificateSigningRequest 资源创建请求,并拒绝所有将 “group”(
|
|||
|
||||
### DefaultIngressClass {#defaultingressclass}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller observes creation of `Ingress` objects that do not request any specific
|
||||
ingress class and automatically adds a default ingress class to them. This way, users that do not
|
||||
|
@ -316,6 +346,11 @@ classes and how to mark one as default.
|
|||
|
||||
### DefaultStorageClass {#defaultstorageclass}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller observes creation of `PersistentVolumeClaim` objects that do not request any specific storage class
|
||||
and automatically adds a default storage class to them.
|
||||
|
@ -346,6 +381,11 @@ storage classes and how to mark a storage class as default.
|
|||
|
||||
### DefaultTolerationSeconds {#defaulttolerationseconds}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller sets the default forgiveness toleration for pods to tolerate
|
||||
the taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters
|
||||
|
@ -364,6 +404,11 @@ The default value for `default-not-ready-toleration-seconds` and `default-unreac
|
|||
|
||||
### DenyServiceExternalIPs {#denyserviceexternalips}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller rejects all net-new usage of the `Service` field `externalIPs`. This
|
||||
feature is very powerful (allows network traffic interception) and not well
|
||||
|
@ -393,6 +438,11 @@ This admission controller is disabled by default.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="alpha" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller mitigates the problem where the API server gets flooded by
|
||||
requests to store new Events. The cluster admin can specify event rate limits by:
|
||||
|
@ -465,6 +515,11 @@ This admission controller is disabled by default.
|
|||
|
||||
### ExtendedResourceToleration {#extendedresourcetoleration}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This plug-in facilitates creation of dedicated nodes with extended resources.
|
||||
If operators want to create dedicated nodes with extended resources (like GPUs, FPGAs etc.), they are expected to
|
||||
|
@ -485,6 +540,11 @@ This admission controller is disabled by default.
|
|||
|
||||
### ImagePolicyWebhook {#imagepolicywebhook}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
The ImagePolicyWebhook admission controller allows a backend webhook to make admission decisions.
|
||||
|
||||
|
@ -753,6 +813,11 @@ In any case, the annotations are provided by the user and are not validated by K
|
|||
|
||||
### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller denies any pod that defines `AntiAffinity` topology key other than
|
||||
`kubernetes.io/hostname` in `requiredDuringSchedulingRequiredDuringExecution`.
|
||||
|
@ -766,6 +831,11 @@ This admission controller is disabled by default.
|
|||
|
||||
### LimitRanger {#limitranger}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating and Validating.
|
||||
-->
|
||||
**类别**:变更和验证。
|
||||
|
||||
<!--
|
||||
This admission controller will observe the incoming request and ensure that it does not violate
|
||||
any of the constraints enumerated in the `LimitRange` object in a `Namespace`. If you are using
|
||||
|
@ -790,6 +860,11 @@ for more details.
|
|||
|
||||
### MutatingAdmissionWebhook {#mutatingadmissionwebhook}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller calls any mutating webhooks which match the request. Matching
|
||||
webhooks are called in serial; each one may modify the object if it desires.
|
||||
|
@ -844,6 +919,11 @@ group/version via the `--runtime-config` flag, both are on by default.
|
|||
|
||||
### NamespaceAutoProvision {#namespaceautoprovision}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller examines all incoming requests on namespaced resources and checks
|
||||
if the referenced namespace does exist.
|
||||
|
@ -857,6 +937,11 @@ a namespace prior to its usage.
|
|||
|
||||
### NamespaceExists {#namespaceexists}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller checks all requests on namespaced resources other than `Namespace` itself.
|
||||
If the namespace referenced from a request doesn't exist, the request is rejected.
|
||||
|
@ -866,6 +951,11 @@ If the namespace referenced from a request doesn't exist, the request is rejecte
|
|||
|
||||
### NamespaceLifecycle {#namespacelifecycle}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller enforces that a `Namespace` that is undergoing termination cannot have
|
||||
new objects created in it, and ensures that requests in a non-existent `Namespace` are rejected.
|
||||
|
@ -886,6 +976,11 @@ running this admission controller.
|
|||
|
||||
### NodeRestriction {#noderestriction}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller limits the `Node` and `Pod` objects a kubelet can modify. In order to be limited by this admission controller,
|
||||
kubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:<nodeName>`.
|
||||
|
@ -943,6 +1038,11 @@ permissions required to operate correctly.
|
|||
|
||||
### OwnerReferencesPermissionEnforcement {#ownerreferencespermissionenforcement}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller protects the access to the `metadata.ownerReferences` of an object
|
||||
so that only users with **delete** permission to the object can change it.
|
||||
|
@ -960,6 +1060,11 @@ subresource of the referenced *owner* can change it.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller implements additional validations for checking incoming
|
||||
`PersistentVolumeClaim` resize requests.
|
||||
|
@ -1003,6 +1108,11 @@ For more information about persistent volume claims, see [PersistentVolumeClaims
|
|||
|
||||
{{< feature-state for_k8s_version="v1.13" state="deprecated" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller automatically attaches region or zone labels to PersistentVolumes
|
||||
as defined by the cloud provider (for example, Azure or GCP).
|
||||
|
@ -1027,6 +1137,11 @@ This admission controller is disabled by default.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.5" state="alpha" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller defaults and limits what node selectors may be used within a namespace
|
||||
by reading a namespace annotation and a global configuration.
|
||||
|
@ -1133,6 +1248,11 @@ PodNodeSelector 允许 Pod 强制在特定标签的节点上运行。
|
|||
|
||||
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
The PodSecurity admission controller checks new Pods before they are
|
||||
admitted, determines if it should be admitted based on the requested security context and the restrictions on permitted
|
||||
|
@ -1159,6 +1279,11 @@ PodSecurity 取代了一个名为 PodSecurityPolicy 的旧准入控制器。
|
|||
|
||||
{{< feature-state for_k8s_version="v1.7" state="alpha" >}}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating and Validating.
|
||||
-->
|
||||
**类别**:变更和验证。
|
||||
|
||||
<!--
|
||||
The PodTolerationRestriction admission controller verifies any conflict between tolerations of a
|
||||
pod and the tolerations of its namespace.
|
||||
|
@ -1211,17 +1336,26 @@ This admission controller is disabled by default.
|
|||
<!--
|
||||
### Priority {#priority}
|
||||
|
||||
**Type**: Mutating and Validating.
|
||||
|
||||
The priority admission controller uses the `priorityClassName` field and populates the integer
|
||||
value of the priority.
|
||||
If the priority class is not found, the Pod is rejected.
|
||||
-->
|
||||
### 优先级 {#priority}
|
||||
|
||||
**类别**:变更和验证。
|
||||
|
||||
优先级准入控制器使用 `priorityClassName` 字段并用整型值填充优先级。
|
||||
如果找不到优先级,则拒绝 Pod。
|
||||
|
||||
### ResourceQuota {#resourcequota}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller will observe the incoming request and ensure that it does not violate
|
||||
any of the constraints enumerated in the `ResourceQuota` object in a `Namespace`. If you are
|
||||
|
@ -1242,6 +1376,11 @@ and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for
|
|||
|
||||
### RuntimeClass {#runtimeclass}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating and Validating.
|
||||
-->
|
||||
**类别**:变更和验证。
|
||||
|
||||
<!--
|
||||
If you define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||
configured, this admission controller checks incoming Pods.
|
||||
|
@ -1264,6 +1403,11 @@ for more information.
|
|||
|
||||
### SecurityContextDeny {#securitycontextdeny}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
{{< feature-state for_k8s_version="v1.27" state="deprecated" >}}
|
||||
|
||||
{{< caution >}}
|
||||
|
@ -1333,6 +1477,11 @@ article details the PodSecurityPolicy historical context and the birth of the
|
|||
|
||||
### ServiceAccount {#serviceaccount}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating and Validating.
|
||||
-->
|
||||
**类别**:变更和验证。
|
||||
|
||||
<!--
|
||||
This admission controller implements automation for
|
||||
[serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
|
||||
|
@ -1347,6 +1496,11 @@ You should enable this admission controller if you intend to make any use of Kub
|
|||
|
||||
### StorageObjectInUseProtection {#storageobjectinuseprotection}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection`
|
||||
finalizers to newly created Persistent Volume Claims (PVCs) or Persistent Volumes (PV).
|
||||
|
@ -1364,6 +1518,11 @@ for more detailed information.
|
|||
|
||||
### TaintNodesByCondition {#taintnodesbycondition}
|
||||
|
||||
<!--
|
||||
**Type**: Mutating.
|
||||
-->
|
||||
**类别**:变更。
|
||||
|
||||
<!--
|
||||
This admission controller {{< glossary_tooltip text="taints" term_id="taint" >}} newly created
|
||||
Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that could cause Pods
|
||||
|
@ -1377,6 +1536,11 @@ conditions.
|
|||
|
||||
### ValidatingAdmissionPolicy {#validatingadmissionpolicy}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
[This admission controller](/docs/reference/access-authn-authz/validating-admission-policy/) implements the CEL validation for incoming matched requests.
|
||||
It is enabled when both feature gate `validatingadmissionpolicy` and `admissionregistration.k8s.io/v1alpha1` group/version are enabled.
|
||||
|
@ -1388,6 +1552,11 @@ CEL 校验。当 `validatingadmissionpolicy` 和 `admissionregistration.k8s.io/v
|
|||
|
||||
### ValidatingAdmissionWebhook {#validatingadmissionwebhook}
|
||||
|
||||
<!--
|
||||
**Type**: Validating.
|
||||
-->
|
||||
**类别**:验证。
|
||||
|
||||
<!--
|
||||
This admission controller calls any validating webhooks which match the request. Matching
|
||||
webhooks are called in parallel; if any of them rejects the request, the request
|
||||
|
|
|
@ -377,7 +377,7 @@ https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration。</p>
|
|||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">pathType</span>:<span style="color:#bbb"> </span>File<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"></span><span style="color:#000;font-weight:bold">scheduler</span>:<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">extraArgs</span>:<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">address</span>:<span style="color:#bbb"> </span><span style="color:#d14">"10.100.0.1"</span><span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">bind-address</span>:<span style="color:#bbb"> </span><span style="color:#d14">"10.100.0.1"</span><span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">extraVolumes</span>:<span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span>- <span style="color:#000;font-weight:bold">name</span>:<span style="color:#bbb"> </span><span style="color:#d14">"some-volume"</span><span style="color:#bbb">
|
||||
</span><span style="color:#bbb"> </span><span style="color:#000;font-weight:bold">hostPath</span>:<span style="color:#bbb"> </span><span style="color:#d14">"/etc/some-path"</span><span style="color:#bbb">
|
||||
|
|
|
@ -160,7 +160,7 @@ JSONPath regular expressions are not supported. If you want to match using regul
|
|||
kubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'
|
||||
|
||||
# The following command achieves the desired result
|
||||
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).spec.containers[].image'
|
||||
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).metadata.name'
|
||||
```
|
||||
-->
|
||||
{{< note >}}
|
||||
|
@ -172,6 +172,6 @@ kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-
|
|||
kubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'
|
||||
|
||||
# 下面的命令可以获得所需的结果
|
||||
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).spec.containers[].image'
|
||||
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).metadata.name'
|
||||
```
|
||||
{{< /note >}}
|
||||
|
|
|
@ -1940,7 +1940,7 @@ A single application container that you want to run within a pod.
|
|||
|
||||
如果为 true,则以只读方式挂载,否则(false 或未设置)以读写方式挂载。默认为 false。
|
||||
|
||||
- **volumeMounts.subPath** (boolean)
|
||||
- **volumeMounts.subPath** (string)
|
||||
|
||||
卷中的路径,容器中的卷应该这一路径安装。默认为 ""(卷的根)。
|
||||
|
||||
|
|
|
@ -83,7 +83,7 @@ Starting from v1.9, this label is deprecated.
|
|||
<!--
|
||||
### app.kubernetes.io/instance
|
||||
|
||||
Type: Label
|
||||
Type: Label
|
||||
|
||||
Example: `app.kubernetes.io/instance: "mysql-abcxzy"`
|
||||
|
||||
|
@ -765,7 +765,7 @@ This label can have one of three values: `Reconcile`, `EnsureExists`, or `Ignore
|
|||
- `Ignore`: Addon resources will be ignored. This mode is useful for add-ons that are not
|
||||
compatible with the add-on manager or that are managed by another controller.
|
||||
|
||||
For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)
|
||||
For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md).
|
||||
-->
|
||||
### addonmanager.kubernetes.io/mode
|
||||
|
||||
|
@ -785,7 +785,7 @@ For more details, see [Addon-manager](https://github.com/kubernetes/kubernetes/b
|
|||
- `Ignore`:插件资源将被忽略。此模式对于与外接插件管理器不兼容或由其他控制器管理的插件程序非常有用。
|
||||
|
||||
有关详细信息,请参见
|
||||
[Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)
|
||||
[Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md)。
|
||||
|
||||
<!--
|
||||
### beta.kubernetes.io/arch (deprecated)
|
||||
|
@ -1469,7 +1469,7 @@ Kubernetes makes a few assumptions about the structure of zones and regions:
|
|||
|
||||
1. regions and zones are hierarchical: zones are strict subsets of regions and
|
||||
no zone can be in 2 regions
|
||||
2) zone names are unique across regions; for example region "africa-east-1" might be comprised
|
||||
2. zone names are unique across regions; for example region "africa-east-1" might be comprised
|
||||
of zones "africa-east-1a" and "africa-east-1b"
|
||||
-->
|
||||
Kubernetes 对 Zone 和 Region 的结构做了一些假设:
|
||||
|
@ -1540,7 +1540,7 @@ Example: `volume.beta.kubernetes.io/storage-provisioner: "k8s.io/minikube-hostpa
|
|||
Used on: PersistentVolumeClaim
|
||||
|
||||
This annotation has been deprecated since v1.23.
|
||||
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner)
|
||||
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner).
|
||||
-->
|
||||
### volume.beta.kubernetes.io/storage-provisioner (已弃用) {#volume-beta-kubernetes-io-storage-provisioner}
|
||||
|
||||
|
@ -1581,7 +1581,7 @@ This annotation has been deprecated. Instead, set the

[`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class)
for the PersistentVolumeClaim or PersistentVolume.
-->
This annotation can specify a [StorageClass](/zh-cn/docs/concepts/storage/storage-classes/)
for a PersistentVolume (PV) or PersistentVolumeClaim (PVC).
When both the `storageClassName` attribute and the `volume.beta.kubernetes.io/storage-class` annotation are specified,
the `volume.beta.kubernetes.io/storage-class` annotation takes precedence over the `storageClassName` attribute.
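To illustrate the precedence rule above, a sketch of a PVC that sets both the deprecated annotation and the `storageClassName` field (the class names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo                # illustrative name
  annotations:
    volume.beta.kubernetes.io/storage-class: fast   # deprecated; takes precedence when both are set
spec:
  storageClassName: standard    # the preferred, non-deprecated way to pick a class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```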
@ -1997,7 +1997,7 @@ resource without a class specified will be assigned this default class.

resources will be assigned this default class.

<!--
### alpha.kubernetes.io/provided-node-ip
### alpha.kubernetes.io/provided-node-ip (alpha) {#alpha-kubernetes-io-provided-node-ip}

Type: Annotation
@ -2012,7 +2012,7 @@ and legacy in-tree cloud providers), it sets this annotation on the Node to deno

set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid
by the cloud-controller-manager.
-->
### alpha.kubernetes.io/provided-node-ip {#alpha-kubernetes-io-provided-node-ip}
### alpha.kubernetes.io/provided-node-ip (alpha) {#alpha-kubernetes-io-provided-node-ip}

Type: Annotation
@ -2094,8 +2094,7 @@ container.

<!--
This annotation is deprecated. You should use the
[`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container)
annotation instead.
Kubernetes versions 1.25 and newer ignore this annotation.
annotation instead. Kubernetes versions 1.25 and newer ignore this annotation.
-->
This annotation is deprecated. You should use the
[`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container) annotation instead.
@ -2143,11 +2142,8 @@ Example: `batch.kubernetes.io/job-tracking: ""`

Used on: Jobs

The presence of this annotation on a Job indicates that the control plane is
The presence of this annotation on a Job used to indicate that the control plane is
[tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
The control plane uses this annotation to safely transition to tracking Jobs
using finalizers, while the feature is in development.
You should **not** manually add or remove this annotation.
-->
### batch.kubernetes.io/job-tracking (deprecated) {#batch-kubernetes-io-job-tracking}
@ -2158,18 +2154,13 @@ You should **not** manually add or remove this annotation.

Used on: Jobs

The presence of this annotation on a Job indicates that the control plane is
[tracking the Job status using finalizers](/zh-cn/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
The control plane uses this annotation to safely transition to tracking Jobs using finalizers, while the feature is in development.
You should **not** manually add or remove this annotation.

{{< note >}}
<!--
Starting from Kubernetes 1.26, this annotation is deprecated.
Kubernetes 1.27 and newer will ignore this annotation and always track Jobs
using finalizers.
Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later)
All Jobs are tracked with finalizers.
-->
Starting from Kubernetes 1.26, this annotation is deprecated.
Kubernetes 1.27 and newer will ignore this annotation and always track Jobs using finalizers.
{{< /note >}}
Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later);
all Jobs are tracked with finalizers.

<!--
### job-name (deprecated) {#job-name}
@ -2605,7 +2596,7 @@ Type: Label

Example: `feature.node.kubernetes.io/network-sriov.capable: "true"`

Used on: Node

These labels are used by the Node Feature Discovery (NFD) component to advertise
features on a node. All built-in labels use the `feature.node.kubernetes.io` label
@ -3975,7 +3966,7 @@ ignores that node while calculating Topology Aware Hints.

Type: Label

Used on: Node

A marker label to indicate that the node is used to run control plane components.
The kubeadm tool applies this label to the control plane nodes that it manages.
Other cluster management tools typically also set this taint.
@ -1,12 +1,12 @@

---
content_type: "reference"
title: Kubelet Device Manager API Versions
weight: 10
weight: 50
---
<!--
content_type: "reference"
title: Kubelet Device Manager API Versions
weight: 10
weight: 50
-->

<!--
@ -100,7 +100,7 @@ a `disktype=ssd` label.

This Pod configuration file describes a Pod that has a node selector, `disktype: ssd`.
This means that the Pod will get scheduled on a node that has a `disktype=ssd` label.

{{< codenew file="pods/pod-nginx.yaml" >}}
{{% codenew file="pods/pod-nginx.yaml" %}}
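The `pods/pod-nginx.yaml` file itself isn't shown in this diff; a minimal sketch of a manifest matching the surrounding prose (the metadata labels are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd   # the Pod is only scheduled onto nodes carrying this label
```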
<!--
1. Use the configuration file to create a pod that will get scheduled on your
@ -140,7 +140,7 @@ You can also schedule a pod to one specific node via setting `nodeName`.

You can also schedule a pod to one specific node by setting `nodeName`.

{{< codenew file="pods/pod-nginx-specific-node.yaml" >}}
{{% codenew file="pods/pod-nginx-specific-node.yaml" %}}
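Again, the referenced file isn't shown here; a sketch of what a `nodeName`-pinned Pod looks like, assuming it matches the `foo-node` example in the prose:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: foo-node   # bypasses the scheduler; the kubelet on foo-node runs the Pod directly
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
```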
<!--
Use the configuration file to create a pod that will get scheduled on `foo-node` only.
@ -54,7 +54,7 @@ Here is a configuration file for a Windows Pod that has the `runAsUserName` fiel

Here is a configuration file for a Windows Pod that has the `runAsUserName` field set:

{{< codenew file="windows/run-as-username-pod.yaml" >}}
{{% code file="windows/run-as-username-pod.yaml" %}}
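The referenced manifest isn't reproduced in this diff; a sketch of a Pod-level `runAsUserName` setting (the image tag and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-pod-demo
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"   # Windows identity applied to every container in the Pod
  containers:
  - name: run-as-username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["ping", "-t", "localhost"]
  nodeSelector:
    kubernetes.io/os: windows
```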
<!--
Create the Pod:
@ -134,7 +134,7 @@ Here is the configuration file for a Pod that has one Container, and the `runAsU

Here is the configuration file for a Pod that has one Container, with `runAsUserName` set at both the Pod level and the Container level:

{{< codenew file="windows/run-as-username-container.yaml" >}}
{{% code file="windows/run-as-username-container.yaml" %}}
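A sketch of the two-level pattern, assuming the file resembles the previous example; the container-level value overrides the Pod-level one (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-container-demo
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"            # Pod-level default
  containers:
  - name: run-as-username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["ping", "-t", "localhost"]
    securityContext:
      windowsOptions:
        runAsUserName: "ContainerAdministrator" # overrides the Pod-level setting for this container
  nodeSelector:
    kubernetes.io/os: windows
```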
<!--
Create the Pod:
@ -54,7 +54,7 @@ Here is the configuration file for a Pod that has one Container:

Here is the configuration file for a Pod that has one Container:

{{< codenew file="pods/resource/extended-resource-pod.yaml" >}}
{{% code file="pods/resource/extended-resource-pod.yaml" %}}
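The file isn't shown here; a sketch of a Pod requesting an extended resource, consistent with the "3 dongles" prose (the `example.com/dongle` resource name is the usual placeholder in this tutorial):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 3   # extended resources must have request equal to limit
      limits:
        example.com/dongle: 3
```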
<!--
In the configuration file, you can see that the Container requests 3 dongles.
@ -109,7 +109,7 @@ two dongles.

Here is the configuration file for a Pod that has one Container, and the Container requests 2 dongles.

{{< codenew file="pods/resource/extended-resource-pod-2.yaml" >}}
{{% code file="pods/resource/extended-resource-pod-2.yaml" %}}

<!--
Kubernetes will not be able to satisfy the request for two dongles, because the first Pod
@ -95,7 +95,7 @@ memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU r

Here is the manifest for a Pod that has one Container. The Container has a memory limit and a
memory request, both equal to 200 MiB. The Container has a CPU limit and a CPU request, both equal to 700 milliCPU:

{{< codenew file="pods/qos/qos-pod.yaml" >}}
{{% code file="pods/qos/qos-pod.yaml" %}}
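The referenced `pods/qos/qos-pod.yaml` isn't shown in this diff; a sketch consistent with the prose, where requests equal limits and the Pod therefore gets the `Guaranteed` QoS class (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:          # identical to the limits, so the QoS class is Guaranteed
        memory: "200Mi"
        cpu: "700m"
```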
<!--
Create the Pod:
|
|||
下面是包含一个 Container 的 Pod 清单。该 Container 设置的内存限制为 200 MiB,
|
||||
内存请求为 100 MiB。
|
||||
|
||||
{{< codenew file="pods/qos/qos-pod-2.yaml" >}}
|
||||
{{% code file="pods/qos/qos-pod-2.yaml" %}}
|
||||
|
||||
<!--
|
||||
Create the Pod:
|
||||
|
@ -256,7 +256,7 @@ limits or requests:

Here is the manifest for a Pod that has one Container. The Container has no memory or CPU limits or requests.

{{< codenew file="pods/qos/qos-pod-3.yaml" >}}
{{% code file="pods/qos/qos-pod-3.yaml" %}}
@ -316,7 +316,7 @@ request of 200 MiB. The other Container does not specify any requests or limits.

Here is the manifest for a Pod that has two Containers. One Container specifies a memory request of 200 MiB.
The other Container does not specify any requests or limits.

{{< codenew file="pods/qos/qos-pod-4.yaml" >}}
{{% code file="pods/qos/qos-pod-4.yaml" %}}

<!--
Notice that this Pod meets the criteria for QoS class `Burstable`. That is, it does not meet the
@ -166,7 +166,7 @@ Consider the following manifest for a Pod that has one Container.

Consider the following manifest for a Pod that has one Container.

{{< codenew file="pods/qos/qos-pod-5.yaml" >}}
{{% code file="pods/qos/qos-pod-5.yaml" %}}

<!--
Create the pod in the `qos-example` namespace:
@ -37,7 +37,7 @@ is a Pod that has one container:

Here is the configuration file for a Deployment that has two replicas. Each replica
is a Pod that has one container:

{{% codenew file="application/deployment-patch.yaml" %}}
{{% code file="application/deployment-patch.yaml" %}}
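The `application/deployment-patch.yaml` contents aren't visible in this diff; a plausible sketch of a two-replica, one-container Deployment (all names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patch-demo          # illustrative name
spec:
  replicas: 2               # two replicas, each replica is one Pod
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: patch-demo-ctr
        image: nginx
```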
<!--
Create the Deployment:
@ -418,7 +418,7 @@ Here's the configuration file for a Deployment that uses the `RollingUpdate` str
-->
## Use a strategic merge patch to update a Deployment using the retainKeys strategy {#use-strategic-merge-patch-to-update-a-deployment-using-the-retainkeys-strategy}

{{% codenew file="application/deployment-retainkeys.yaml" %}}
{{% code file="application/deployment-retainkeys.yaml" %}}

<!--
Create the deployment:
@ -651,7 +651,7 @@ A Deployment is one example that supports these subresources.

Here is the manifest for a Deployment that has two replicas.

{{% codenew file="application/deployment.yaml" %}}
{{% code file="application/deployment.yaml" %}}

<!--
Create the Deployment:
@ -311,14 +311,14 @@ Example PDB Using minAvailable:
-->
Example PDB using minAvailable:

{{% codenew file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" %}}
{{% code file="policy/zookeeper-pod-disruption-budget-minavailable.yaml" %}}
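The referenced file isn't shown here; a sketch of a `minAvailable` PodDisruptionBudget consistent with the `zk-pdb` example in the prose:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2        # at least 2 matching pods must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: zookeeper
```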
<!--
Example PDB Using maxUnavailable:
-->
Example PDB using maxUnavailable:

{{% codenew file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" %}}
{{% code file="policy/zookeeper-pod-disruption-budget-maxunavailable.yaml" %}}
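And the `maxUnavailable` counterpart, sketched under the same assumptions:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1      # at most 1 matching pod may be unavailable at a time
  selector:
    matchLabels:
      app: zookeeper
```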
<!--
For example, if the above `zk-pdb` object selects the pods of a StatefulSet of size 3, both
@ -104,7 +104,7 @@ using the following manifest:

To demonstrate a HorizontalPodAutoscaler, you will first start a Deployment that runs a container
using the `hpa-example` image, and then expose it as a {{< glossary_tooltip term_id="service">}} using the following manifest:

{{% codenew file="application/php-apache.yaml" %}}
{{% code file="application/php-apache.yaml" %}}

<!--
To do so, run the following command:
@ -775,7 +775,7 @@ can use the following manifest to create it declaratively:
-->
Instead of using the `kubectl autoscale` command, you can use the following manifest to create the HorizontalPodAutoscaler declaratively:

{{% codenew file="application/hpa/php-apache.yaml" %}}
{{% code file="application/hpa/php-apache.yaml" %}}
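The referenced `application/hpa/php-apache.yaml` isn't shown in this diff; a sketch of a declarative HorizontalPodAutoscaler targeting the `php-apache` Deployment (the utilization target and replica bounds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale to keep average CPU utilization near 50%
```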
<!--
Then, create the autoscaler by executing the following command: