Merge pull request #26083 from reylejano/merged-master-dev-1.21
Merged master into dev 1.21 - 1/13/21
commit 14d97e0177
@@ -0,0 +1,15 @@
---
name: Scheduled Netlify site build
on:
  schedule: # Build twice daily: shortly after midnight and noon (UTC)
    # Offset is to be nice to the build service
    - cron: '4 0,12 * * *'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger build on Netlify
        env:
          TOKEN: ${{ secrets.NETLIFY_BUILD_HOOK_KEY }}
        run: >-
          curl -s -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d "{}" "https://api.netlify.com/build_hooks/${TOKEN}"
@@ -7,12 +7,9 @@ aliases:
- mrbobbytables
sig-docs-blog-reviewers: # Reviewers for blog content
- castrojo
- cody-clark
- kbarnard10
- mrbobbytables
- onlydole
- parispittman
- vonguard
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
@@ -215,3 +212,12 @@ aliases:
- idvoretskyi
- MaxymVlasov
- Potapy4
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- cblecker
- derekwaynecarr
- dims
- liggitt
- mrbobbytables
- nikhita
- parispittman
@@ -4,6 +4,9 @@

This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!

+ [Contributing to the docs](#contributing-to-the-docs)
+ [Localization ReadMes](#localization-readmemds)

# Using this repository

You can run the website locally using Hugo (Extended version), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
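
A minimal sketch of that local preview workflow, assuming the repository's usual `make` targets (`container-serve` and `serve`) described elsewhere in this README:

```shell
# Clone the site repository and preview it locally.
git clone https://github.com/kubernetes/website.git
cd website

# Build and serve in a container (recommended)...
make container-serve
# ...or with a locally installed Hugo (Extended version):
# make serve

# The preview is then typically served at http://localhost:1313/
```
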
@@ -578,3 +578,64 @@ body.td-documentation {
  color: black;
  text-decoration: none !important;
}

@media print {
  /* Do not print announcements */
  #announcement, section#announcement, #fp-announcement, section#fp-announcement {
    display: none;
  }
}

#announcement, #fp-announcement {
  > * {
    color: inherit;
    background: inherit;
  }

  a {
    color: inherit;
    border-bottom: 1px solid #fff;
  }

  a:hover {
    color: inherit;
    border-bottom: none;
  }
}

#announcement {
  padding-top: 105px;
  padding-bottom: 25px;
}

.header-hero {
  padding-top: 40px;
}

/* Extra announcement height only for landscape viewports */
@media (min-aspect-ratio: 8/9) {
  #fp-announcement {
    min-height: 25vh;
  }
}

#fp-announcement aside {
  padding-top: 115px;
  padding-bottom: 25px;
}

.announcement {
  .content {
    margin-bottom: 0px;
  }

  > p {
    .gridPage #announcement .content p,
    .announcement > h4,
    .announcement > h3 {
      color: #ffffff;
    }
  }
}
@@ -13,7 +13,7 @@ disableBrowserError = true

disableKinds = ["taxonomy", "taxonomyTerm"]

-ignoreFiles = [ "^OWNERS$", "README[-]+[a-z]*\\.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]
+ignoreFiles = [ "(?:^|/)OWNERS$", "README[-]+[a-z]*\\.md", "^node_modules$", "content/en/docs/doc-contributor-tools" ]

timeout = 3000
@@ -154,11 +154,6 @@ githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"
# GitHub repository link for editing a page and opening issues.
github_repo = "https://github.com/kubernetes/website"

# param for displaying an announcement block on every page.
# See /i18n/en.toml for message text and title.
announcement = true
announcement_bg = "#000000" #choose a dark color – text is white

#Searching
k8s_search = true
@@ -55,7 +55,7 @@ All your existing images will still work exactly the same.

### What about private images?

-Also yes. All CRI runtimes support the same pull secrets configuration used in
+Yes. All CRI runtimes support the same pull secrets configuration used in
Kubernetes, either via the PodSpec or ServiceAccount.
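
A hedged sketch of that pull-secret workflow, which is runtime independent (the registry details are placeholders):

```shell
# Create a registry credential Secret (replace the placeholder values).
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password='S3cr3t'

# Either reference it from a Pod's spec.imagePullSecrets, or attach it to a
# ServiceAccount so every Pod using that account picks it up automatically.
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
```
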
@@ -82,7 +82,7 @@ usability of other container runtimes. As an example, OpenShift 4.x has been
using the [CRI-O] runtime in production since June 2019.

For other examples and references you can look at the adopters of containerd and
cri-o, two container runtimes under the Cloud Native Computing Foundation ([CNCF]).
CRI-O, two container runtimes under the Cloud Native Computing Foundation ([CNCF]).
- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md)
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md)
@@ -110,7 +110,7 @@ provide an end-to-end standard for managing containers.

That’s a complex question and it depends on a lot of factors. If Docker is
working for you, moving to containerd should be a relatively easy swap and
-has have strictly better performance and less overhead. However we encourage you
+will have strictly better performance and less overhead. However, we encourage you
to explore all the options from the [CNCF landscape] in case another would be an
even better fit for your environment.
@@ -129,7 +129,7 @@ common things to consider when migrating are:
- Kubectl plugins that require docker CLI or the control socket
- Kubernetes tools that require direct access to Docker (e.g. kube-imagepuller)
- Configuration of functionality like `registry-mirrors` and insecure registries
- Other support scripts or daemons that expect docker to be available and are run
- Other support scripts or daemons that expect Docker to be available and are run
  outside of Kubernetes (e.g. monitoring or security agents)
- GPUs or special hardware and how they integrate with your runtime and Kubernetes
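
One way to spot direct Docker dependencies before migrating is to look for workloads that mount the Docker socket from the host. A rough sketch, assuming `jq` is installed:

```shell
# List Pods that hostPath-mount /var/run/docker.sock, one common sign of a
# dependency on the Docker daemon rather than on Kubernetes itself.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
           | select(.spec.volumes[]?.hostPath.path == "/var/run/docker.sock")
           | "\(.metadata.namespace)/\(.metadata.name)"'
```
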
@@ -13,8 +13,8 @@ as a container runtime after v1.20.

**You do not need to panic. It’s not as dramatic as it sounds.**

-tl;dr Docker as an underlying runtime is being deprecated in favor of runtimes
-that use the [Container Runtime Interface(CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
+TL;DR Docker as an underlying runtime is being deprecated in favor of runtimes
+that use the [Container Runtime Interface (CRI)](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
created for Kubernetes. Docker-produced images will continue to work in your
cluster with all runtimes, as they always have.
@@ -48,7 +48,7 @@ is a popular choice for that runtime (other common options include containerd
and CRI-O), but Docker was not designed to be embedded inside Kubernetes, and
that causes a problem.

-You see, the thing we call “Docker” isn’t actually one thing -- it’s an entire
+You see, the thing we call “Docker” isn’t actually one thing—it’s an entire
tech stack, and one part of it is a thing called “containerd,” which is a
high-level container runtime by itself. Docker is cool and useful because it has
a lot of UX enhancements that make it really easy for humans to interact with
@@ -66,11 +66,11 @@ does Kubernetes need the Dockershim?

Docker isn’t compliant with CRI, the [Container Runtime Interface](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/).
If it were, we wouldn’t need the shim, and this wouldn’t be a thing. But it’s
-not the end of the world, and you don’t need to panic -- you just need to change
+not the end of the world, and you don’t need to panic—you just need to change
your container runtime from Docker to another supported container runtime.

One thing to note: If you are relying on the underlying docker socket
-(/var/run/docker.sock) as part of a workflow within your cluster today, moving
+(`/var/run/docker.sock`) as part of a workflow within your cluster today, moving
to a different runtime will break your ability to use it. This pattern is often
called Docker in Docker. There are lots of options out there for this specific
use case including things like
@@ -82,10 +82,10 @@ use case including things like

This change addresses a different environment than most folks use to interact
with Docker. The Docker installation you’re using in development is unrelated to
-the Docker runtime inside your Kubernetes cluster. It’s confusing, I know. As a
-developer, Docker is still useful to you in all the ways it was before this
+the Docker runtime inside your Kubernetes cluster. It’s confusing, we understand.
+As a developer, Docker is still useful to you in all the ways it was before this
change was announced. The image that Docker produces isn’t really a
-Docker-specific image -- it’s an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
+Docker-specific image—it’s an OCI ([Open Container Initiative](https://opencontainers.org/)) image.
Any OCI-compliant image, regardless of the tool you use to build it, will look
the same to Kubernetes. Both [containerd](https://containerd.io/) and
[CRI-O](https://cri-o.io/) know how to pull those images and run them. This is
@@ -95,10 +95,10 @@ So, this change is coming. It’s going to cause issues for some, but it isn’t
catastrophic, and generally it’s a good thing. Depending on how you interact
with Kubernetes, this could mean nothing to you, or it could mean a bit of work.
In the long run, it’s going to make things easier. If this is still confusing
-for you, that’s okay -- there’s a lot going on here, Kubernetes has a lot of
+for you, that’s okay—there’s a lot going on here; Kubernetes has a lot of
moving parts, and nobody is an expert in 100% of it. We encourage any and all
questions regardless of experience level or complexity! Our goal is to make sure
-everyone is educated as much as possible on the upcoming changes. `<3` We hope
-this has answered most of your questions and soothed some anxieties!
+everyone is educated as much as possible on the upcoming changes. We hope
+this has answered most of your questions and soothed some anxieties! ❤️

Looking for more answers? Check out our accompanying [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
@@ -242,7 +242,7 @@ checks the state of each node every `--node-monitor-period` seconds.
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.

There are two forms of heartbeats: updates of `NodeStatus` and the
-[Lease object](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
+[Lease object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io).
Each Node has an associated Lease object in the `kube-node-lease`
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
Lease is a lightweight resource, which improves the performance
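
The Lease form of the heartbeat is easy to observe directly; a small sketch, with the node name as a placeholder:

```shell
# Show the Lease that the kubelet on node-1 keeps renewing;
# spec.renewTime advances on every heartbeat.
kubectl get lease node-1 -n kube-node-lease -o yaml
```
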
@@ -527,5 +527,5 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves

For background information on design details for API priority and fairness, see
the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
-You can make suggestions and feature requests via [SIG API
-Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).
+You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
+or the feature's [slack channel](http://kubernetes.slack.com/messages/api-priority-and-fairness).
@@ -69,6 +69,8 @@ A Service can be made to span multiple Deployments by omitting release-specific

A desired state of an object is described by a Deployment, and if changes to that spec are _applied_, the deployment controller changes the actual state to the desired state at a controlled rate.

- Use the [Kubernetes common labels](/docs/concepts/overview/working-with-objects/common-labels/) for common use cases. These standardized labels enrich the metadata in a way that allows tools, including `kubectl` and [dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard), to work in an interoperable way.

- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
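
A hedged sketch of that quarantine workflow (the Pod and label names are placeholders):

```shell
# Remove the selector label "app"; an owning ReplicaSet notices the missing
# Pod and creates a replacement, while the original keeps running unmanaged.
kubectl label pod mypod app-

# Optionally mark the quarantined Pod so it is easy to find later.
kubectl label pod mypod debug=quarantine

# Inspect it at leisure, then delete it when you are done.
kubectl get pod mypod --show-labels
```
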
## Container Images
@@ -776,7 +776,7 @@ these pods.
The `imagePullSecrets` field is a list of references to secrets in the same namespace.
You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry
password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod.
-See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field.
+See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field.
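
A minimal sketch of a Pod that uses the field (the image and Secret names are assumptions):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/myapp:1.0
  imagePullSecrets:
  - name: regcred
EOF
```
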
#### Manually specifying an imagePullSecret
@@ -37,10 +37,10 @@ but with different settings.

Ensure the RuntimeClass feature gate is enabled (it is by default). See [Feature
Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling
-feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _and_ kubelets.
+feature gates. The `RuntimeClass` feature gate must be enabled on API server _and_ kubelets.

-1. Configure the CRI implementation on nodes (runtime dependent)
-2. Create the corresponding RuntimeClass resources
+1. Configure the CRI implementation on nodes (runtime dependent).
+2. Create the corresponding RuntimeClass resources.
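
For step 2, the RuntimeClass object itself is small. A sketch (the class and handler names are assumptions; the handler must match a configuration in your CRI runtime):

```shell
kubectl apply -f - <<EOF
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: myclass           # the name Pods reference in runtimeClassName
handler: myconfiguration  # the name of the corresponding CRI configuration
EOF
```
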
### 1. Configure the CRI implementation on nodes
|
|||
{{< note >}}
|
||||
RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means
|
||||
that all nodes are configured the same way with respect to container runtimes). To support
|
||||
heterogenous node configurations, see [Scheduling](#scheduling) below.
|
||||
heterogeneous node configurations, see [Scheduling](#scheduling) below.
|
||||
{{< /note >}}
|
||||
|
||||
The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The
|
||||
|
@@ -98,7 +98,7 @@ spec:
  # ...
```

-This will instruct the Kubelet to use the named RuntimeClass to run this pod. If the named
+This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named
RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the
`Failed` terminal [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase). Look for a
corresponding [event](/docs/tasks/debug-application-cluster/debug-application-introspection/) for an
@@ -144,7 +144,7 @@ See CRI-O's [config documentation](https://raw.githubusercontent.com/cri-o/cri-o

{{< feature-state for_k8s_version="v1.16" state="beta" >}}

-As of Kubernetes v1.16, RuntimeClass includes support for heterogenous clusters through its
+As of Kubernetes v1.16, RuntimeClass includes support for heterogeneous clusters through its
`scheduling` fields. Through the use of these fields, you can ensure that pods running with this
RuntimeClass are scheduled to nodes that support it. To use the scheduling support, you must have
the [RuntimeClass admission controller](/docs/reference/access-authn-authz/admission-controllers/#runtimeclass)
@@ -201,7 +201,7 @@ Monitoring agents for device plugin resources can be deployed as a daemon, or as
The canonical directory `/var/lib/kubelet/pod-resources` requires privileged access, so monitoring
agents must run in a privileged security context. If a device monitoring agent is running as a
DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
-{{< glossary_tooltip term_id="volume" >}} in the plugin's
+{{< glossary_tooltip term_id="volume" >}} in the device monitoring agent's
[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).

Support for the "PodResources service" requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
@@ -159,7 +159,7 @@ This option is provided to the network-plugin; currently **only kubenet supports
## Usage Summary

* `--network-plugin=cni` specifies that we use the `cni` network plugin with actual CNI plugin binaries located in `--cni-bin-dir` (default `/opt/cni/bin`) and CNI plugin configuration located in `--cni-conf-dir` (default `/etc/cni/net.d`).
-* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
+* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin with CNI `bridge`, `lo` and `host-local` plugins placed in `/opt/cni/bin` or `cni-bin-dir`.
* `--network-plugin-mtu=9001` specifies the MTU to use, currently only used by the `kubenet` network plugin.

## {{% heading "whatsnext" %}}
@@ -246,7 +246,7 @@ as at least one already-running pod that has a label with key "security" and val
on node N if node N has a label with key `topology.kubernetes.io/zone` and some value V
such that there is at least one node in the cluster with key `topology.kubernetes.io/zone` and
value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
-rule says that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with
+rule says that the pod should not be scheduled onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2". See the
[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
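
In manifest form, the rules described above look roughly like this. A sketch only: the `security` labels, the image, and the choice of preferred (soft) anti-affinity are illustrative assumptions.

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: security, operator: In, values: ["S1"]}
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - {key: security, operator: In, values: ["S2"]}
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
EOF
```
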
@@ -194,4 +194,4 @@ from source in the meantime.

* [RuntimeClass](/docs/concepts/containers/runtime-class/)
-* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
+* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
@@ -82,7 +82,7 @@ requested the score value must be reversed as follows.
```yaml
shape:
  - utilization: 0
-    score: 100
+    score: 10
  - utilization: 100
    score: 0
```
@@ -26,6 +26,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
* [AKS Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io)-based ingress
  controller.
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provides L4-L7 load-balancing using [VMware NSX Advanced Load Balancer](https://avinetworks.com/).
* The [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) works with
  Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller.
@@ -35,6 +35,8 @@ Pods become isolated by having a NetworkPolicy that selects them. Once there is

Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.

For a network flow between two pods to be allowed, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the traffic. If either the egress policy on the source, or the ingress policy on the destination denies the traffic, the traffic will be denied.

## The NetworkPolicy resource {#networkpolicy-resource}

See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#networkpolicy-v1-networking-k8s-io) reference for a full definition of the resource.
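
As a concrete starting point, a hedged sketch of a policy that isolates all Pods in a namespace for ingress; any traffic to them must then be allowed by some other policy:

```shell
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # selects every Pod in the namespace
  policyTypes:
  - Ingress
EOF
```
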
@@ -311,7 +311,7 @@ If expanding underlying storage fails, the cluster administrator can manually re
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:

* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
-* [`azureDisk`](/docs/concepts/sotrage/volumes/#azuredisk) - Azure Disk
+* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
* [`cephfs`](/docs/concepts/storage/volumes/#cephfs) - CephFS volume
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
@@ -72,7 +72,7 @@ used for provisioning VolumeSnapshots. This field must be specified.

### DeletionPolicy

-Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot can either be `Retain` or `Delete`. This field must be specified.
+Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot class can either be `Retain` or `Delete`. This field must be specified.

If the deletionPolicy is `Delete`, then the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`, then both the underlying snapshot and VolumeSnapshotContent remain.
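
A hedged sketch of such a class; the API version depends on the snapshot CRDs installed in your cluster, and the CSI driver name is an assumption:

```shell
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   # must match the CSI driver serving your volumes
deletionPolicy: Delete        # or Retain, to keep the snapshot afterwards
EOF
```
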
@@ -150,14 +150,17 @@ curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-rep
```

kubectl also supports cascading deletion.
-To delete dependents automatically using kubectl, set `--cascade` to true. To
-orphan dependents, set `--cascade` to false. The default value for `--cascade`
-is true.

+To delete dependents in the foreground using kubectl, set `--cascade=foreground`. To
+orphan dependents, set `--cascade=orphan`.

+The default behavior is to delete the dependents in the background which is the
+behavior when `--cascade` is omitted or explicitly set to `background`.

Here's an example that orphans the dependents of a ReplicaSet:

```shell
-kubectl delete replicaset my-repset --cascade=false
+kubectl delete replicaset my-repset --cascade=orphan
```
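
The three modes side by side, as a quick sketch (the Deployment name is a placeholder; the string values of `--cascade` are the ones introduced with kubectl v1.20):

```shell
kubectl delete deployment nginx-deployment                       # background (default)
kubectl delete deployment nginx-deployment --cascade=foreground  # wait for dependents first
kubectl delete deployment nginx-deployment --cascade=orphan      # keep the ReplicaSets and Pods
```
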
### Additional note on Deployments
@@ -283,7 +283,7 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron

### Deleting just a ReplicaSet

-You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
+You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=orphan` option.
When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
For example:
```shell
@@ -15,8 +15,7 @@ card:
_Pods_ are the smallest deployable units of computing that you can create and manage in Kubernetes.

A _Pod_ (as in a pod of whales or pea pod) is a group of one or more
-{{< glossary_tooltip text="containers" term_id="container" >}}, with shared storage/network resources, and a specification
-for how to run the containers. A Pod's contents are always co-located and
+{{< glossary_tooltip text="containers" term_id="container" >}}, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and
co-scheduled, and run in a shared context. A Pod models an
application-specific "logical host": it contains one or more application
containers which are relatively tightly coupled.
@@ -295,9 +294,10 @@ but cannot be controlled from there.
object definition describes the object in detail.
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explains common layouts for Pods with more than one container.

To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}, you can read about the prior art, including:
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}} or {{< glossary_tooltip text="Deployments" term_id="deployment" >}}), you can read about the prior art, including:

* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).
@@ -49,7 +49,7 @@ Cluster administrator actions include:

- [Draining a node](/docs/tasks/administer-cluster/safely-drain-node/) for repair or upgrade.
- Draining a node from a cluster to scale the cluster down (learn about
-  [Cluster Autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaler)
+  [Cluster Autoscaling](https://github.com/kubernetes/autoscaler/#readme)
  ).
- Removing a pod from a node to permit something else to fit on that node.
@@ -133,6 +133,7 @@ You can start this Pod by running:
```shell
kubectl apply -f myapp.yaml
```
The output is similar to this:
```
pod/myapp-pod created
```
@@ -141,6 +142,7 @@ And check on its status with:
```shell
kubectl get -f myapp.yaml
```
The output is similar to this:
```
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          6m
@@ -150,6 +152,7 @@ or for more details:
```shell
kubectl describe -f myapp.yaml
```
The output is similar to this:
```
Name:        myapp-pod
Namespace:   default
@@ -224,6 +227,7 @@ To create the `mydb` and `myservice` services:
```shell
kubectl apply -f services.yaml
```
The output is similar to this:
```
service/myservice created
service/mydb created
@@ -235,6 +239,7 @@ Pod moves into the Running state:
```shell
kubectl get -f myapp.yaml
```
The output is similar to this:
```
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          9m
@@ -319,11 +324,9 @@ reasons:

## {{% heading "whatsnext" %}}

* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
* Learn how to [debug init containers](/docs/tasks/debug-application-cluster/debug-init-containers/)
@@ -85,6 +85,13 @@ Value | Description
`Failed` | All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
`Unknown` | For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.

{{< note >}}
When a Pod is being deleted, it is shown as `Terminating` by some kubectl commands.
This `Terminating` status is not one of the Pod phases.
A Pod is granted a term to terminate gracefully, which defaults to 30 seconds.
You can use the flag `--force` to [terminate a Pod by force](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced).
{{< /note >}}

If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the `phase` of all Pods on the lost node to Failed.
@@ -325,7 +332,7 @@ a time longer than the liveness interval would allow.
If your container usually starts in more than
`initialDelaySeconds + failureThreshold × periodSeconds`, you should specify a
startup probe that checks the same endpoint as the liveness probe. The default for
-`periodSeconds` is 30s. You should then set its `failureThreshold` high enough to
+`periodSeconds` is 10s. You should then set its `failureThreshold` high enough to
allow the container to start, without changing the default values of the liveness
probe. This helps to protect against deadlocks.
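
A sketch of that arrangement; the `/healthz` endpoint, port, and thresholds are illustrative assumptions:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-demo
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/slow-app:1.0
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet: {path: /healthz, port: 8080}
      periodSeconds: 10
      failureThreshold: 3
    startupProbe:
      httpGet: {path: /healthz, port: 8080}
      periodSeconds: 10
      failureThreshold: 30   # allows up to 30 x 10s = 300s for startup
EOF
```
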
@@ -6,4 +6,4 @@ This is an **example** content file inside the **includes** leaf bundle.

{{< note >}}
Included content files can also contain shortcodes.
-{{< /note >}}
+{{< /note >}}
@@ -1,33 +1,30 @@
---
-approvers:
-- chenopis
title: Custom Hugo Shortcodes
content_type: concept
---

<!-- overview -->
-This page explains the custom Hugo shortcodes that can be used in Kubernetes markdown documentation.
+This page explains the custom Hugo shortcodes that can be used in Kubernetes Markdown documentation.

Read more about shortcodes in the [Hugo documentation](https://gohugo.io/content-management/shortcodes).

<!-- body -->

## Feature state

-In a markdown page (`.md` file) on this site, you can add a shortcode to display version and state of the documented feature.
+In a Markdown page (`.md` file) on this site, you can add a shortcode to display version and state of the documented feature.

### Feature state demo

-Below is a demo of the feature state snippet, which displays the feature as stable in Kubernetes version 1.10.
+Below is a demo of the feature state snippet, which displays the feature as stable in the latest Kubernetes version.

```
-{{</* feature-state for_k8s_version="v1.10" state="stable" */>}}
+{{</* feature-state state="stable" */>}}
```

Renders to:

-{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+{{< feature-state state="stable" >}}

The valid values for `state` are:
@@ -38,62 +35,22 @@ The valid values for `state` are:

### Feature state code

The displayed Kubernetes version defaults to that of the page or the site. This can be changed by passing the <code>for_k8s_version</code> shortcode parameter.
The displayed Kubernetes version defaults to that of the page or the site. You can change the
feature state version by passing the `for_k8s_version` shortcode parameter. For example:

```
{{</* feature-state for_k8s_version="v1.10" state="stable" */>}}
{{</* feature-state for_k8s_version="v1.10" state="beta" */>}}
```

Renders to:

{{< feature-state for_k8s_version="v1.10" state="stable" >}}

#### Alpha feature

```
{{</* feature-state state="alpha" */>}}
```

Renders to:

{{< feature-state state="alpha" >}}

#### Beta feature

```
{{</* feature-state state="beta" */>}}
```

Renders to:

{{< feature-state state="beta" >}}

#### Stable feature

```
{{</* feature-state state="stable" */>}}
```

Renders to:

{{< feature-state state="stable" >}}

#### Deprecated feature

```
{{</* feature-state state="deprecated" */>}}
```

Renders to:

{{< feature-state state="deprecated" >}}
{{< feature-state for_k8s_version="v1.10" state="beta" >}}

## Glossary

There are two glossary tooltips.
There are two glossary shortcodes: `glossary_tooltip` and `glossary_definition`.

You can reference glossary terms with an inclusion that automatically updates and replaces content with the relevant links from [our glossary](/docs/reference/glossary/). When the term is moused-over by someone
using the online documentation, the glossary entry displays a tooltip.
You can reference glossary terms with an inclusion that automatically updates and replaces content with the relevant links from [our glossary](/docs/reference/glossary/). When the glossary term is moused-over, the glossary entry displays a tooltip. The glossary term also displays as a link.

As well as inclusions with tooltips, you can reuse the definitions from the glossary in
page content.
@@ -102,7 +59,7 @@ The raw data for glossary terms is stored at [https://github.com/kubernetes/webs

### Glossary demo

-For example, the following include within the markdown renders to {{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:
+For example, the following include within the Markdown renders to {{< glossary_tooltip text="cluster" term_id="cluster" >}} with a tooltip:

```
{{</* glossary_tooltip text="cluster" term_id="cluster" */>}}
@@ -113,13 +70,16 @@ Here's a short glossary definition:
```
{{</* glossary_definition prepend="A cluster is" term_id="cluster" length="short" */>}}
```

which renders as:
{{< glossary_definition prepend="A cluster is" term_id="cluster" length="short" >}}

You can also include a full definition:

```
{{</* glossary_definition term_id="cluster" length="all" */>}}
```

which renders as:
{{< glossary_definition term_id="cluster" length="all" >}}
@@ -255,7 +215,63 @@ Renders to:
{{< tab name="JSON File" include="podtemplate.json" />}}
{{< /tabs >}}

## Version strings

To generate a version string for inclusion in the documentation, you can choose from
several version shortcodes. Each version shortcode displays a version string derived from
the value of a version parameter found in the site configuration file, `config.toml`.
The two most commonly used version parameters are `latest` and `version`.

### `{{</* param "version" */>}}`

The `{{</* param "version" */>}}` shortcode generates the value of the current version of
the Kubernetes documentation from the `version` site parameter. The `param` shortcode accepts the name of one site parameter, in this case: `version`.

{{< note >}}
In previously released documentation, `latest` and `version` parameter values are not equivalent.
After a new version is released, `latest` is incremented and the value of `version` for the documentation set remains unchanged. For example, a previously released version of the documentation displays `version` as
`v1.19` and `latest` as `v1.20`.
{{< /note >}}

Renders to:

{{< param "version" >}}

### `{{</* latest-version */>}}`

The `{{</* latest-version */>}}` shortcode returns the value of the `latest` site parameter.
The `latest` site parameter is updated when a new version of the documentation is released.
This parameter does not always match the value of `version` in a documentation set.

Renders to:

{{< latest-version >}}

### `{{</* latest-semver */>}}`

The `{{</* latest-semver */>}}` shortcode generates the value of `latest` without the "v" prefix.

Renders to:

{{< latest-semver >}}

### `{{</* version-check */>}}`

The `{{</* version-check */>}}` shortcode checks if the `min-kubernetes-server-version`
page parameter is present and then uses this value to compare to `version`.

Renders to:

{{< version-check >}}

### `{{</* latest-release-notes */>}}`

The `{{</* latest-release-notes */>}}` shortcode generates a version string from `latest` and removes
the "v" prefix. The shortcode prints a new URL for the release note CHANGELOG page with the modified version string.

Renders to:

{{< latest-release-notes >}}

## {{% heading "whatsnext" %}}
@@ -264,4 +280,3 @@ Renders to:
* Learn about [page content types](/docs/contribute/style/page-content-types/).
* Learn about [opening a pull request](/docs/contribute/new-content/open-a-pr/).
* Learn about [advanced contributing](/docs/contribute/advanced/).
@@ -792,25 +792,8 @@ versions 1.9 and later).

## Is there a recommended set of admission controllers to use?

-Yes. For Kubernetes version 1.10 and later, the recommended admission controllers are enabled by default (shown [here](/docs/reference/command-line-tools-reference/kube-apiserver/#options)), so you do not need to explicitly specify them. You can enable additional admission controllers beyond the default set using the `--enable-admission-plugins` flag (**order doesn't matter**).
+Yes. The recommended admission controllers are enabled by default (shown [here](/docs/reference/command-line-tools-reference/kube-apiserver/#options)), so you do not need to explicitly specify them. You can enable additional admission controllers beyond the default set using the `--enable-admission-plugins` flag (**order doesn't matter**).

{{< note >}}
`--admission-control` was deprecated in 1.10 and replaced with `--enable-admission-plugins`.
{{< /note >}}

For Kubernetes 1.9 and earlier, we recommend running the following set of admission controllers using the `--admission-control` flag (**order matters**).

* v1.9

  ```shell
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
  ```

  * It's worth reiterating that in 1.9, these happen in a mutating phase
    and a validating phase, and that for example `ResourceQuota` runs in the validating
    phase, and therefore is the last admission controller to run.
    `MutatingAdmissionWebhook` appears before it in this list, because it runs
    in the mutating phase.

For earlier versions, there was no concept of validating versus mutating and the
admission controllers ran in the exact order specified.
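
For current versions, enabling extra plugins on top of the defaults looks roughly like this (how the flag is actually set depends on how your kube-apiserver is deployed, for example via its static Pod manifest):

```shell
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger
```
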
@@ -87,7 +87,7 @@ Because ClusterRoles are cluster-scoped, you can also use them to grant access t
* non-resource endpoints (like `/healthz`)
* namespaced resources (like Pods), across all namespaces
For example: you can use a ClusterRole to allow a particular user to run
-`kubectl get pods --all-namespaces`.
+`kubectl get pods --all-namespaces`

Here is an example of a ClusterRole that can be used to grant read access to
{{< glossary_tooltip text="secrets" term_id="secret" >}} in any particular namespace,
@@ -166,7 +166,8 @@ different Kubernetes components.
| `StorageVersionHash` | `true` | Beta | 1.15 | |
| `Sysctls` | `true` | Beta | 1.11 | |
| `TTLAfterFinished` | `false` | Alpha | 1.12 | |
-| `TopologyManager` | `false` | Alpha | 1.16 | |
+| `TopologyManager` | `false` | Alpha | 1.16 | 1.17 |
+| `TopologyManager` | `true` | Beta | 1.18 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
| `WindowsEndpointSliceProxying` | `false` | Alpha | 1.19 | |
@@ -351,7 +351,7 @@ kubelet [flags]
<td colspan="2">--eviction-hard mapStringString Default: `imagefs.available<15%,memory.available<100Mi,nodefs.available<10%`</td>
</tr>
<tr>
-<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
+<td></td><td style="line-height: 130%; word-wrap: break-word;">A set of eviction thresholds (e.g. `memory.available<1Gi`) that if met would trigger a pod eviction. On a Linux node, the default value also includes `nodefs.inodesFree<5%`. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's `--config` flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)</td>
</tr>

<tr>
@@ -194,6 +194,9 @@ kubectl get pods --show-labels
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# Output decoded secrets without external tools
kubectl get secret ${secret_name} -o go-template='{{range $k,$v := .data}}{{$k}}={{$v|base64decode}}{{"\n"}}{{end}}'

# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
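
# Decode a single key from a Secret (illustrative names; requires base64)
kubectl get secret mysecret -o jsonpath='{.data.password}' | base64 --decode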
@@ -314,6 +317,7 @@ kubectl exec my-pod -- ls / # Run command in existing po
kubectl exec --stdin --tty my-pod -- /bin/sh  # Interactive shell access to a running pod (1 container case)
kubectl exec my-pod -c my-container -- ls /   # Run command in existing pod (multi-container case)
kubectl top pod POD_NAME --containers         # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu        # Show metrics for a given pod and sort it by 'cpu' or 'memory'
```

## Interacting with Nodes and cluster
@@ -37,23 +37,22 @@ All `kubectl run` generators are deprecated. See the Kubernetes v1.17 documentat

#### Generators
You can generate the following resources with a kubectl command, `kubectl create --dry-run=client -o yaml`:
```
clusterrole         Create a ClusterRole.
clusterrolebinding  Create a ClusterRoleBinding for a particular ClusterRole.
configmap           Create a configmap from a local file, directory or literal value.
cronjob             Create a cronjob with the specified name.
deployment          Create a deployment with the specified name.
job                 Create a job with the specified name.
namespace           Create a namespace with the specified name.
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass       Create a priorityclass with the specified name.
quota               Create a quota with the specified name.
role                Create a role with single rule.
rolebinding         Create a RoleBinding for a particular Role or ClusterRole.
secret              Create a secret using specified subcommand.
service             Create a service using specified subcommand.
serviceaccount      Create a service account with the specified name.
```

* `clusterrole`: Create a ClusterRole.
* `clusterrolebinding`: Create a ClusterRoleBinding for a particular ClusterRole.
* `configmap`: Create a ConfigMap from a local file, directory or literal value.
* `cronjob`: Create a CronJob with the specified name.
* `deployment`: Create a Deployment with the specified name.
* `job`: Create a Job with the specified name.
* `namespace`: Create a Namespace with the specified name.
* `poddisruptionbudget`: Create a PodDisruptionBudget with the specified name.
* `priorityclass`: Create a PriorityClass with the specified name.
* `quota`: Create a Quota with the specified name.
* `role`: Create a Role with single rule.
* `rolebinding`: Create a RoleBinding for a particular Role or ClusterRole.
* `secret`: Create a Secret using specified subcommand.
* `service`: Create a Service using specified subcommand.
* `serviceaccount`: Create a ServiceAccount with the specified name.

### `kubectl apply`
@@ -37,16 +37,11 @@ kubectl:
# start the pod running nginx
kubectl create deployment --image=nginx nginx-app
```

```shell
# add env to nginx-app
kubectl set env deployment/nginx-app DOMAIN=cluster
```
```
deployment.apps/nginx-app created
```

```
```shell
# add env to nginx-app
kubectl set env deployment/nginx-app DOMAIN=cluster
```
@@ -108,7 +108,7 @@ extension points:
- `SelectorSpread`: Favors spreading across nodes for Pods that belong to
  {{< glossary_tooltip text="Services" term_id="service" >}},
  {{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and
  {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}
  {{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}.
  Extension points: `PreScore`, `Score`.
- `ImageLocality`: Favors nodes that already have the container images that the
  Pod runs.
@@ -71,7 +71,7 @@ the appliers, results in a conflict. Shared field owners may give up ownership
of a field by removing it from their configuration.

Field management is stored in a`managedFields` field that is part of an object's
-[`metadata`](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#objectmeta-v1-meta).
+[`metadata`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectmeta-v1-meta).

A simple example of an object created by Server Side Apply could look like this:
@@ -143,6 +143,44 @@ sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
```

```shell
# Restart containerd
sudo systemctl restart containerd
```
{{% /tab %}}
{{% tab name="Debian 9+" %}}

```shell
# (Install containerd)
## Set up the repository
### Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```

```shell
## Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
```

```shell
## Add Docker apt repository.
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
```

```shell
## Install containerd
sudo apt-get update && sudo apt-get install -y containerd.io
```

```shell
# Set default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
```

```shell
# Restart containerd
sudo systemctl restart containerd
@@ -432,6 +470,11 @@ sudo apt-get update && sudo apt-get install -y \
  docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
```

```shell
## Create /etc/docker
sudo mkdir /etc/docker
```

```shell
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
@@ -523,4 +566,3 @@ sudo systemctl enable docker

Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
for more information.
@@ -102,8 +102,7 @@ This may be caused by a number of problems. The most common are:
1. Install Docker again following instructions
   [here](/docs/setup/production-environment/container-runtimes/#docker).

-1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
-   [Configure cgroup driver used by kubelet on Master Node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node)
+1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to [Configure cgroup driver used by kubelet on control-plane node](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node)

- control plane Docker containers are crashlooping or hanging. You can check this by running `docker ps` and investigating each container by running `docker logs`.
@@ -160,7 +160,7 @@ for the pathnames of the certificate files. You need to change these to the actu
of certificate files in your environment.

Sometimes you may want to use Base64-encoded data embedded here instead of separate
-certificate files; in that case you need add the suffix `-data` to the keys, for example,
+certificate files; in that case you need to add the suffix `-data` to the keys, for example,
`certificate-authority-data`, `client-certificate-data`, `client-key-data`.

Each context is a triple (cluster, user, namespace). For example, the
@@ -49,12 +49,14 @@ This is the path followed by DNS Queries after NodeLocal DNSCache is enabled:
{{< figure src="/images/docs/nodelocaldns.svg" alt="NodeLocal DNSCache flow" title="Nodelocal DNSCache flow" caption="This image shows how NodeLocal DNSCache handles DNS queries." >}}

## Configuration
-{{< note >}} The local listen IP address for NodeLocal DNSCache can be any IP in the 169.254.20.0/16 space or any other IP address that can be guaranteed to not collide with any existing IP. This document uses 169.254.20.10 as an example.
+{{< note >}} The local listen IP address for NodeLocal DNSCache can be any address that can be guaranteed to not collide with any existing IP in your cluster. It's recommended to use an address with a local scope, per example, from the link-local range 169.254.0.0/16 for IPv4 or from the Unique Local Address range in IPv6 fd00::/8.
{{< /note >}}

This feature can be enabled using the following steps:

* Prepare a manifest similar to the sample [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) and save it as `nodelocaldns.yaml.`
* If using IPv6, the CoreDNS configuration file need to enclose all the IPv6 addresses into square brackets if used in IP:Port format.
  If you are using the sample manifest from the previous point, this will require to modify [the configuration line L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70) like this `health [__PILLAR__LOCAL__DNS__]:8080`
* Substitute the variables in the manifest with the right values:

  * kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
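
  A hedged sketch of that substitution step, following the placeholder names used in the sample manifest (the listen address and domain are example values):

  ```shell
  kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})
  domain=cluster.local
  localdns=169.254.20.10

  # Rewrite the placeholders in the saved manifest in place.
  sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
  ```
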
@@ -82,6 +82,7 @@ You can list this and any other serviceAccount resources in the namespace with t
```shell
kubectl get serviceaccounts
```

The output is similar to this:

```
@@ -108,9 +109,10 @@ If you get a complete dump of the service account object, like this:
```shell
kubectl get serviceaccounts/build-robot -o yaml
```

The output is similar to this:

-```
+```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -164,6 +166,7 @@ Any tokens for non-existent service accounts will be cleaned up by the token con
```shell
kubectl describe secrets/build-robot-secret
```

The output is similar to this:

```
@@ -227,7 +230,7 @@ kubectl get serviceaccounts default -o yaml > ./sa.yaml

The output of the `sa.yaml` file is similar to this:

-```shell
+```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -244,7 +247,7 @@ Using your editor of choice (for example `vi`), open the `sa.yaml` file, delete

The output of the `sa.yaml` file is similar to this:

-```shell
+```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
@ -319,7 +322,8 @@ kubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml
|
|||
```
|
||||
|
||||
The kubelet will request and store the token on behalf of the pod, make the
|
||||
token available to the pod at a configurable file path, and refresh the token as it approaches expiration. Kubelet proactively rotates the token if it is older than 80% of its total TTL, or if the token is older than 24 hours.
|
||||
token available to the pod at a configurable file path, and refresh the token as it approaches expiration.
|
||||
The kubelet proactively rotates the token if it is older than 80% of its total TTL, or if the token is older than 24 hours.
|
||||
|
||||
The application is responsible for reloading the token when it rotates. Periodic reloading (e.g. once every 5 minutes) is sufficient for most use cases.
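As an illustration of the mechanism described above, a pod can request such a kubelet-managed token through a projected volume. The sketch below is an assumption-laden example (pod name, audience `vault`, mount path, and the `build-robot` ServiceAccount are illustrative), not the referenced example file:

```shell
# Minimal sketch: a Pod that mounts a kubelet-requested, auto-rotated service account token.
# The audience, expiry, and paths are illustrative assumptions.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: build-robot
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: api-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: api-token
    projected:
      sources:
      - serviceAccountToken:
          path: api-token
          expirationSeconds: 7200
          audience: vault
EOF
```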
|
||||
|
||||
|
@ -380,7 +384,6 @@ JWKS URI is required to use the `https` scheme.
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
See also:
|
||||
|
||||
- [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
|
||||
|
|
|
@ -192,7 +192,7 @@ this scenario using `kubectl run`:
|
|||
kubectl run myapp --image=busybox --restart=Never -- sleep 1d
|
||||
```
|
||||
|
||||
Run this command to create a copy of `myapp` named `myapp-copy` that adds a
|
||||
Run this command to create a copy of `myapp` named `myapp-debug` that adds a
|
||||
new Ubuntu container for debugging:
|
||||
|
||||
```shell
|
||||
|
|
|
@ -445,6 +445,9 @@ and
|
|||
[kubectl apply](/docs/reference/generated/kubectl/kubectl-commands/#apply).
|
||||
|
||||
|
||||
{{< note >}}
|
||||
Strategic merge patch is not supported for custom resources.
|
||||
{{< /note >}}
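Where strategic merge patch is unavailable, a JSON merge patch (`--type merge`) is one alternative. A minimal sketch against a hypothetical custom resource (the `crontab` kind, object name, and field are assumptions for illustration only):

```shell
# Sketch: update a hypothetical custom resource with a JSON merge patch,
# since strategic merge patch is not supported for custom resources.
kubectl patch crontab my-new-cron-object --type merge -p '{"spec":{"cronSpec":"*/5 * * * *"}}'
```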
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
|
|
@ -337,7 +337,7 @@ sequenceDiagram
|
|||
Alice->John: Yes... John, how are you?
|
||||
{{</ mermaid >}}
|
||||
|
||||
<br>More [examples](https://mermaid-js.github.io/mermaid/#/examples) from the offical docs.
|
||||
<br>More [examples](https://mermaid-js.github.io/mermaid/#/examples) from the official docs.
|
||||
|
||||
## Sidebars and Admonitions
|
||||
|
||||
|
|
|
@ -92,9 +92,7 @@ weight: 10
|
|||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
For your first Deployment, you'll use a Node.js application packaged in a Docker container. (If you didn't already try creating a
|
||||
Node.js application and deploying it using a container, you can do that first by following the
|
||||
instructions from the <a href="/docs/tutorials/hello-minikube/">Hello Minikube tutorial</a>).
|
||||
For your first Deployment, you'll use a hello-node application packaged in a Docker container that uses NGINX to echo back all the requests. (If you didn't already try creating a hello-node application and deploying it using a container, you can do that first by following the instructions from the <a href="/docs/tutorials/hello-minikube/">Hello Minikube tutorial</a>).
|
||||
</p>
|
||||
|
||||
<p>Now that you know what Deployments are, let's go to the online tutorial and deploy our first app!</p>
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: patch-demo
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: retainkeys-demo
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: frontend
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: redis-master
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: redis-slave
|
||||
|
|
|
@ -9,7 +9,7 @@ spec:
|
|||
app: mysql
|
||||
clusterIP: None
|
||||
---
|
||||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: mysql
|
||||
|
|
|
@ -25,7 +25,7 @@ spec:
|
|||
requests:
|
||||
storage: 20Gi
|
||||
---
|
||||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: wordpress-mysql
|
||||
|
|
|
@ -25,7 +25,7 @@ spec:
|
|||
requests:
|
||||
storage: 20Gi
|
||||
---
|
||||
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: wordpress
|
||||
|
|
|
@ -22,7 +22,7 @@ Este Código de Conducta se aplica tanto dentro de los espacios relacionados con
|
|||
|
||||
Los casos de comportamiento abusivo, acosador o de cualquier otro modo inaceptable podrán ser denunciados poniéndose en contacto con el [Comité del Código de Conducta de Kubernetes](https://git.k8s.io/community/committee-code-of-conduct) en <conduct@kubernetes.io>. Para otros proyectos, comuníquese con un mantenedor de proyectos de CNCF o con nuestra mediadora, Mishi Choudhary <mishi@linux.com>.
|
||||
|
||||
Este Código de Conducta está adaptado del Compromiso de Colaboradores (http://contributor-covenant.org), versión 1.2.0, disponible en http://contributor-covenant.org/version/1/2/0/
|
||||
Este Código de Conducta está adaptado del Compromiso de Colaboradores (https://contributor-covenant.org), versión 1.2.0, disponible en https://contributor-covenant.org/version/1/2/0/
|
||||
|
||||
### Código de Conducta para la Comunidad de la CNCF
|
||||
|
||||
|
|
|
@ -250,13 +250,14 @@ curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL
|
|||
|
||||
Installez `kubeadm`, `kubelet`, `kubectl` et ajoutez un service systemd `kubelet` :
|
||||
|
||||
RELEASE_VERSION="v0.6.0"
|
||||
|
||||
```bash
|
||||
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
|
||||
cd $DOWNLOAD_DIR
|
||||
sudo curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
|
||||
sudo chmod +x {kubeadm,kubelet,kubectl}
|
||||
|
||||
RELEASE_VERSION="v0.4.0"
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
|
||||
sudo mkdir -p /etc/systemd/system/kubelet.service.d
|
||||
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
|
||||
|
|
|
@ -31,7 +31,7 @@ content_type: concept
|
|||
## CLIリファレンス
|
||||
|
||||
* [kubectl](/docs/reference/kubectl/overview/) - コマンドの実行やKubernetesクラスターの管理に使う主要なCLIツールです。
|
||||
* [JSONPath](/docs/reference/kubectl/jsonpath/) - kubectlで[JSONPath記法](https://goessner.net/articles/JsonPath/)を使うための構文ガイドです。
|
||||
* [JSONPath](/ja/docs/reference/kubectl/jsonpath/) - kubectlで[JSONPath記法](https://goessner.net/articles/JsonPath/)を使うための構文ガイドです。
|
||||
* [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - セキュアなKubernetesクラスターを簡単にプロビジョニングするためのCLIツールです。
|
||||
|
||||
## コンポーネントリファレンス
|
||||
|
|
|
@ -0,0 +1,112 @@
|
|||
---
|
||||
title: JSONPathのサポート
|
||||
content_type: concept
|
||||
weight: 25
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
kubectlはJSONPathのテンプレートをサポートしています。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
JSONPathのテンプレートは、波括弧`{}`によって囲まれたJSONPathの式によって構成されています。
|
||||
kubectlでは、JSONPathの式を使うことで、JSONオブジェクトの特定のフィールドをフィルターしたり、出力のフォーマットを変更することができます。
|
||||
本来のJSONPathのテンプレートの構文に加え、以下の機能と構文が使えます:
|
||||
|
||||
1. JSONPathの式の内部でテキストをクォートするために、ダブルクォーテーションを使用します。
|
||||
2. リストを反復するために、`range`、`end`オペレーターを使用します。
|
||||
3. リストを末尾側から参照するために、負の数のインデックスを使用します。負の数のインデックスはリストを「周回」せず、`-index + listLength >= 0`が満たされる限りにおいて有効になります。
|
||||
|
||||
{{< note >}}
|
||||
|
||||
- 式は常にルートのオブジェクトから始まるので、`$`オペレーターの入力は任意になります。
|
||||
|
||||
- 結果のオブジェクトはString()関数を適用した形で表示されます。
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
以下のようなJSONの入力が与えられたとします。
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "List",
|
||||
"items":[
|
||||
{
|
||||
"kind":"None",
|
||||
"metadata":{"name":"127.0.0.1"},
|
||||
"status":{
|
||||
"capacity":{"cpu":"4"},
|
||||
"addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
|
||||
}
|
||||
},
|
||||
{
|
||||
"kind":"None",
|
||||
"metadata":{"name":"127.0.0.2"},
|
||||
"status":{
|
||||
"capacity":{"cpu":"8"},
|
||||
"addresses":[
|
||||
{"type": "LegacyHostIP", "address":"127.0.0.2"},
|
||||
{"type": "another", "address":"127.0.0.3"}
|
||||
]
|
||||
}
|
||||
}
|
||||
],
|
||||
"users":[
|
||||
{
|
||||
"name": "myself",
|
||||
"user": {}
|
||||
},
|
||||
{
|
||||
"name": "e2e",
|
||||
"user": {"username": "admin", "password": "secret"}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
機能 | 説明 | 例 | 結果
|
||||
--------------------|---------------------------|-----------------------------------------------------------------|------------------
|
||||
`text` | プレーンテキスト | `kind is {.kind}` | `kind is List`
|
||||
`@` | 現在のオブジェクト | `{@}` | 入力した値と同じ値
|
||||
`.` or `[]` | 子要素 | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List`
|
||||
`..` | 子孫要素を再帰的に探す | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e`
|
||||
`*` | ワイルドカード。すべてのオブジェクトを取得する | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]`
|
||||
`[start:end:step]` | 添字 | `{.users[0].name}` | `myself`
|
||||
`[,]` | 和集合 | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]`
|
||||
`?()` | フィルター | `{.users[?(@.name=="e2e")].user.password}` | `secret`
|
||||
`range`, `end` | リストの反復 | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]`
|
||||
`''` | 解釈済みの文字列をクォートする | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2`
|
||||
|
||||
`kubectl`とJSONPathの式を使った例:
|
||||
|
||||
```shell
|
||||
kubectl get pods -o json
|
||||
kubectl get pods -o=jsonpath='{@}'
|
||||
kubectl get pods -o=jsonpath='{.items[0]}'
|
||||
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
|
||||
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}"
|
||||
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Windowsでは、空白が含まれるJSONPathのテンプレートをクォートする場合は(上記のようにシングルクォーテーションを使うのではなく)、ダブルクォーテーションを使わなければなりません。
|
||||
また、テンプレート内のリテラルをクォートする際には、シングルクォーテーションか、エスケープされたダブルクォーテーションを使わなければなりません。例えば:
|
||||
|
||||
```cmd
|
||||
kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}"
|
||||
kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{\"\t\"}{.status.startTime}{\"\n\"}{end}"
|
||||
```
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
|
||||
JSONPathの正規表現はサポートされていません。正規表現を利用した検索を行いたい場合は、`jq`のようなツールを使ってください。
|
||||
|
||||
```shell
|
||||
# kubectlはJSONpathの出力として正規表現をサポートしていないので、以下のコマンドは動作しない
|
||||
kubectl get pods -o jsonpath='{.items[?(@.metadata.name=~/^test$/)].metadata.name}'
|
||||
|
||||
# 上のコマンドに期待される結果が欲しい場合、以下のコマンドを使うとよい
|
||||
kubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test("test-")).spec.containers[].image'
|
||||
```
|
||||
{{< /note >}}
|
|
@ -191,8 +191,8 @@ kubectl [command] [TYPE] [NAME] -o <output_format>
|
|||
`-o custom-columns=<spec>` | [カスタムカラム](#custom-columns)のコンマ区切りのリストを使用して、テーブルを表示します。
|
||||
`-o custom-columns-file=<filename>` | `<filename>`ファイル内の[カスタムカラム](#custom-columns)のテンプレートを使用して、テーブルを表示します。
|
||||
`-o json` | JSON形式のAPIオブジェクトを出力します。
|
||||
`-o jsonpath=<template>` | [jsonpath](/docs/reference/kubectl/jsonpath/)式で定義されたフィールドを表示します。
|
||||
`-o jsonpath-file=<filename>` | `<filename>`ファイル内の[jsonpath](/docs/reference/kubectl/jsonpath/)式で定義されたフィールドを表示します。
|
||||
`-o jsonpath=<template>` | [jsonpath](/ja/docs/reference/kubectl/jsonpath/)式で定義されたフィールドを表示します。
|
||||
`-o jsonpath-file=<filename>` | `<filename>`ファイル内の[jsonpath](/ja/docs/reference/kubectl/jsonpath/)式で定義されたフィールドを表示します。
|
||||
`-o name` | リソース名のみを表示します。
|
||||
`-o wide` | 追加情報を含めて、プレーンテキスト形式で出力します。Podの場合は、Node名が含まれます。
|
||||
`-o yaml` | YAML形式のAPIオブジェクトを出力します。
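For instance, a couple of the output formats above can be combined with `kubectl get` like this (a minimal illustration; the custom column names are arbitrary):

```shell
# Print pod names and phases as a custom table, then the same fields via JSONPath.
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```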
|
||||
|
@ -263,7 +263,7 @@ pod-name 1m
|
|||
|
||||
### オブジェクトリストのソート
|
||||
|
||||
ターミナルウィンドウで、オブジェクトをソートされたリストに出力するには、サポートされている`kubectl`コマンドに`--sort-by`フラグを追加します。`--sort-by`フラグで任意の数値フィールドや文字列フィールドを指定することで、オブジェクトをソートします。フィールドの指定には、[jsonpath](/docs/reference/kubectl/jsonpath/)式を使用します。
|
||||
ターミナルウィンドウで、オブジェクトをソートされたリストに出力するには、サポートされている`kubectl`コマンドに`--sort-by`フラグを追加します。`--sort-by`フラグで任意の数値フィールドや文字列フィールドを指定することで、オブジェクトをソートします。フィールドの指定には、[jsonpath](/ja/docs/reference/kubectl/jsonpath/)式を使用します。
|
||||
|
||||
#### 構文
|
||||
|
||||
|
|
|
@ -77,7 +77,7 @@ shape:
|
|||
```yaml
|
||||
shape:
|
||||
- utilization: 0
|
||||
score: 100
|
||||
score: 10
|
||||
- utilization: 100
|
||||
score: 0
|
||||
```
|
||||
|
|
|
@ -23,7 +23,7 @@ card:
|
|||
|
||||
<div class="row">
|
||||
<div class="col-md-9">
|
||||
<h2>Basico do Kubernetes</h2>
|
||||
<h2>Básico do Kubernetes</h2>
|
||||
<p>Este tutorial fornece instruções básicas sobre o sistema de orquestração de cluster do Kubernetes. Cada módulo contém algumas informações básicas sobre os principais recursos e conceitos do Kubernetes e inclui um tutorial online interativo. Esses tutoriais interativos permitem que você mesmo gerencie um cluster simples e seus aplicativos em contêineres.</p>
|
||||
<p>Usando os tutoriais interativos, você pode aprender a:</p>
|
||||
<ul>
|
||||
|
@ -41,7 +41,7 @@ card:
|
|||
<div class="row">
|
||||
<div class="col-md-9">
|
||||
<h2>O que o Kubernetes pode fazer por você?</h2>
|
||||
<p>Com os serviços da Web modernos, os usuários esperam que os aplicativos estejam disponíveis 24 horas por dia, 7 dias por semana, e os desenvolvedores esperam implantar novas versões desses aplicativos várias vezes ao dia. A conteinerização ajuda a empacotar o software para atender a esses objetivos, permitindo que os aplicativos sejam lançados e atualizados de maneira fácil e rápida, sem tempo de inatividade. O Kubernetes ajuda a garantir que esses aplicativos em contêiner sejam executados onde e quando você quiser e os ajuda a encontrar os recursos e ferramentas de que precisam para funcionar. Kubernetes é uma plataforma de código aberto pronta para produção, projetada com a experiência acumulada do Google em orquestração de contêineres, combinada com as melhores ideias da comunidade.</p>
|
||||
<p>Com os serviços da Web modernos, os usuários esperam que os aplicativos estejam disponíveis 24 horas por dia, 7 dias por semana, e os desenvolvedores esperam implantar novas versões desses aplicativos várias vezes ao dia. A conteinerização ajuda a empacotar o software para atender a esses objetivos, permitindo que os aplicativos sejam lançados e atualizados de maneira fácil e rápida, sem tempo de inatividade. O Kubernetes ajuda a garantir que esses aplicativos em contêiner sejam executados onde e quando você quiser e os ajuda a encontrar os recursos e ferramentas de que precisam para funcionar. Kubernetes é uma plataforma de código aberto pronta para produção, projetada com a experiência acumulada do Google em orquestração de contêineres, combinada com as melhores idéias da comunidade.</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
|
@ -56,7 +56,7 @@ card:
|
|||
<div class="thumbnail">
|
||||
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Crie um cluster Kubernetes</h5></a>
|
||||
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Criar um cluster Kubernetes</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
@ -92,7 +92,7 @@ card:
|
|||
<div class="thumbnail">
|
||||
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
|
||||
<div class="caption">
|
||||
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Amplie seu aplicativo</h5></a>
|
||||
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Escale seu aplicativo</h5></a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
|
|
@ -0,0 +1,4 @@
|
|||
---
|
||||
title: Implantar um aplicativo
|
||||
weight: 20
|
||||
---
|
|
@ -0,0 +1,49 @@
|
|||
---
|
||||
title: Tutorial interativo - implantando um aplicativo
|
||||
weight: 20
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content katacoda-content">
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<p>
|
||||
Um pod é a unidade de execução básica de um aplicativo Kubernetes. Cada pod representa uma parte de uma carga de trabalho em execução no cluster. <a href="/docs/concepts/workloads/pods/"> Saiba mais sobre pods</a>.
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<br>
|
||||
<div class="katacoda">
|
||||
<div class="katacoda__alert">
|
||||
Para interagir com o Terminal, use a versão desktop/tablet
|
||||
</div>
|
||||
|
||||
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Terminal para treinamento do Kubernetes" style="height: 600px;">
|
||||
</div>
|
||||
|
||||
</div>
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">Continue para o Módulo 3<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,115 @@
|
|||
---
|
||||
title: Usando kubectl para criar uma implantação
|
||||
weight: 10
|
||||
---
|
||||
|
||||
<!DOCTYPE html>
|
||||
|
||||
<html lang="en">
|
||||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
||||
<div class="row">
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Objetivos</h3>
|
||||
<ul>
|
||||
<li> Saiba mais sobre implantações de aplicativos. </li>
|
||||
<li> Implante seu primeiro aplicativo no Kubernetes com o kubectl. </li>
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<div class="col-md-8">
|
||||
<h3>Implantações do Kubernetes</h3>
|
||||
<p>
|
||||
Assim que o seu cluster Kubernetes estiver em execução, você pode implantar seu aplicativo em contêineres nele.
|
||||
Para fazer isso, você precisa criar uma configuração do tipo <b> Deployment </b> do Kubernetes. O Deployment define como criar e
|
||||
atualizar instâncias do seu aplicativo. Depois de criar um Deployment, o Master do Kubernetes
|
||||
agenda as instâncias do aplicativo incluídas nesse Deployment para serem executadas em nós individuais do cluster.
|
||||
</p>
|
||||
|
||||
<p> Depois que as instâncias do aplicativo são criadas, um Controlador do Kubernetes Deployment monitora continuamente essas instâncias.
|
||||
Se o nó que hospeda uma instância ficar inativo ou for excluído, o controlador de Deployment substituirá a instância por uma instância em outro nó no cluster.
|
||||
<b> Isso fornece um mecanismo de autocorreção para lidar com falhas ou manutenção da máquina. </b> </p>
|
||||
|
||||
<p>Em um mundo de pré-orquestração, os scripts de instalação costumavam ser usados para iniciar aplicativos, mas não permitiam a recuperação de falha da máquina.
|
||||
Ao criar suas instâncias de aplicativo e mantê-las em execução entre nós, as implantações do Kubernetes fornecem uma abordagem fundamentalmente diferente para o gerenciamento de aplicativos. </p>
|
||||
|
||||
</div>
|
||||
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_lined">
|
||||
<h3>Resumo:</h3>
|
||||
<ul>
|
||||
<li>Deployments</li>
|
||||
<li>Kubectl</li>
|
||||
</ul>
|
||||
</div>
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i>
|
||||
O tipo Deployment é responsável por criar e atualizar instâncias de seu aplicativo
|
||||
</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<h2 style="color: #3771e3;">Implantar seu primeiro aplicativo no Kubernetes</h2>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p><img src="/docs/tutorials/kubernetes-basics/public/images/module_02_first_app.svg"></p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
|
||||
<p>Você pode criar e gerenciar uma implantação usando a interface de linha de comando do Kubernetes, <b> Kubectl </b>.
|
||||
O Kubectl usa a API Kubernetes para interagir com o cluster. Neste módulo, você aprenderá os comandos Kubectl mais comuns necessários para criar implantações que executam seus aplicativos em um cluster Kubernetes.</p>
|
||||
|
||||
<p>Quando você cria um Deployment, você precisa especificar a imagem do contêiner para seu aplicativo e o número de réplicas que deseja executar.
|
||||
Você pode alterar essas informações posteriormente, atualizando sua implantação; os módulos <a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/">5</a> e <a href="/docs/tutorials/kubernetes-basics/update/update-intro/">6</a> do bootcamp explicam como você pode dimensionar e atualizar suas implantações.</p>
|
||||
|
||||
</div>
|
||||
<div class="col-md-4">
|
||||
<div class="content__box content__box_fill">
|
||||
<p><i> Os aplicativos precisam ser empacotados em um dos formatos de contêiner suportados para serem implantados no Kubernetes</i></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-8">
|
||||
<p>
|
||||
Para sua primeira implantação, você usará um aplicativo Node.js empacotado em um contêiner Docker. (Se você ainda não tentou criar um aplicativo Node.js e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as instruções do <a href="/docs/tutorials/hello-minikube/">tutorial Hello Minikube</a>).
|
||||
</p>
|
||||
|
||||
<p>Agora que você sabe o que são implantações (Deployment), vamos para o tutorial online e implantar nosso primeiro aplicativo!</p>
|
||||
</div>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-md-12">
|
||||
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" role="button">Iniciar tutorial interativo<span class="btn__next">›</span></a>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
|
@ -6,7 +6,7 @@ min-kubernetes-server-version: v1.18
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
|
||||
{{< feature-state state="beta" for_k8s_version="v1.20" >}}
|
||||
|
||||
<!--
|
||||
Controlling the behavior of the Kubernetes API server in an overload situation
|
||||
|
@ -57,33 +57,50 @@ Fairness feature enabled.
|
|||
|
||||
<!-- body -->
|
||||
|
||||
<!-- ## Enabling API Priority and Fairness -->
|
||||
## 启用 API 优先级和公平性 {#Enabling-API-Priority-and-Fairness}
|
||||
<!--
|
||||
## Enabling/Disabling API Priority and Fairness
|
||||
-->
|
||||
## 启用/禁用 API 优先级和公平性 {#enabling-api-priority-and-fairness}
|
||||
|
||||
<!--
|
||||
The API Priority and Fairness feature is controlled by a feature gate
|
||||
and is not enabled by default. See
|
||||
and is enabled by default. See
|
||||
[Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
for a general explanation of feature gates and how to enable and disable them. The
|
||||
name of the feature gate for APF is "APIPriorityAndFairness". This
|
||||
feature also involves an {{< glossary_tooltip term_id="api-group"
|
||||
text="API Group" >}} that must be enabled. You can do these
|
||||
things by adding the following command-line flags to your
|
||||
`kube-apiserver` invocation:
|
||||
for a general explanation of feature gates and how to enable and
|
||||
disable them. The name of the feature gate for APF is
|
||||
"APIPriorityAndFairness". This feature also involves an {{<
|
||||
glossary_tooltip term_id="api-group" text="API Group" >}} with: (a) a
|
||||
`v1alpha1` version, disabled by default, and (b) a `v1beta1`
|
||||
version, enabled by default. You can disable the feature
|
||||
gate and API group v1beta1 version by adding the following
|
||||
command-line flags to your `kube-apiserver` invocation:
|
||||
-->
|
||||
APF 特性由特性门控控制,默认情况下不启用。有关如何启用和禁用特性门控的描述,
|
||||
API 优先级与公平性(APF)特性由特性门控控制,默认情况下启用。
|
||||
有关特性门控的一般性描述以及如何启用和禁用特性门控,
|
||||
请参见[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)。
|
||||
APF 的特性门控叫做 `APIPriorityAndFairness` 。
|
||||
此特性要求必须启用某个 {{< glossary_tooltip term_id="api-group" text="API Group" >}}。
|
||||
你可以在启动 `kube-apiserver` 时,添加以下命令行标志来完成这些操作:
|
||||
APF 的特性门控称为 `APIPriorityAndFairness`。
|
||||
此特性也与某个 {{< glossary_tooltip term_id="api-group" text="API 组" >}}
|
||||
相关:
|
||||
(a) 一个 `v1alpha1` 版本,默认被禁用;
|
||||
(b) 一个 `v1beta1` 版本,默认被启用。
|
||||
你可以在启动 `kube-apiserver` 时,添加以下命令行标志来禁用此功能门控
|
||||
及 v1beta1 API 组:
|
||||
|
||||
```shell
|
||||
kube-apiserver \
|
||||
--feature-gates=APIPriorityAndFairness=true \
|
||||
--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true \
|
||||
# …其他配置与之前相同
|
||||
--feature-gates=APIPriorityAndFairness=false \
|
||||
--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=false \
|
||||
# ...其他配置不变
|
||||
```
|
||||
|
||||
<!--
|
||||
Alternatively, you can enable the v1alpha1 version of the API group
|
||||
with `--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true`.
|
||||
-->
|
||||
或者,你也可以通过
|
||||
`--runtime-config=flowcontrol.apiserver.k8s.io/v1beta1=true`
|
||||
启用 API 组的 v1alpha1 版本。
|
||||
|
||||
<!--
|
||||
The command-line flag `--enable-priority-and-fairness=false` will disable the
|
||||
API Priority and Fairness feature, even if other flags have enabled it.
|
||||
|
@ -158,7 +175,8 @@ flows of the same priority level.
|
|||
(尤其是在一个较为常见的场景中,一个有故障的客户端会疯狂地向 kube-apiserver 发送请求,
|
||||
理想情况下,这个有故障的客户端不应对其他客户端产生太大的影响)。
|
||||
公平排队算法在处理具有相同优先级的请求时,实现了上述场景。
|
||||
每个请求都被分配到某个 _流_ 中,该 _流_ 由对应的 FlowSchema 的名字加上一个 _流区分项(Flow Distinguisher)_ 来标识。
|
||||
每个请求都被分配到某个 _流_ 中,该 _流_ 由对应的 FlowSchema 的名字加上一个
|
||||
_流区分项(Flow Distinguisher)_ 来标识。
|
||||
这里的流区分项可以是发出请求的用户、目标资源的名称空间或什么都不是。
|
||||
系统尝试为不同流中具有相同优先级的请求赋予近似相等的权重。
|
||||
|
||||
|
@ -170,7 +188,8 @@ text="shuffle sharding" >}}, which makes relatively efficient use of
|
|||
queues to insulate low-intensity flows from high-intensity flows.
|
||||
-->
|
||||
将请求划分到流中之后,APF 功能将请求分配到队列中。
|
||||
分配时使用一种称为 {{< glossary_tooltip term_id="shuffle-sharding" text="混洗分片(Shuffle-Sharding)" >}} 的技术。
|
||||
分配时使用一种称为 {{< glossary_tooltip term_id="shuffle-sharding" text="混洗分片(Shuffle-Sharding)" >}}
|
||||
的技术。
|
||||
该技术可以相对有效地利用队列隔离低强度流与高强度流。
|
||||
|
||||
<!--
|
||||
|
@ -185,6 +204,7 @@ tolerance for bursty traffic, and the added latency induced by queuing.
|
|||
|
||||
<!--
|
||||
### Exempt requests
|
||||
|
||||
Some requests are considered sufficiently important that they are not subject to
|
||||
any of the limitations imposed by this feature. These exemptions prevent an
|
||||
improperly-configured flow control configuration from totally disabling an API
|
||||
|
@ -196,12 +216,13 @@ server.
|
|||
|
||||
<!--
|
||||
## Defaults
|
||||
|
||||
The Priority and Fairness feature ships with a suggested configuration that
|
||||
should suffice for experimentation; if your cluster is likely to
|
||||
experience heavy load then you should consider what configuration will work best.
|
||||
The suggested configuration groups requests into five priority classes:
|
||||
-->
|
||||
## 默认值 {#Defaults}
|
||||
## 默认值 {#defaults}
|
||||
|
||||
APF 特性附带推荐配置,该配置对实验场景应该足够;
|
||||
如果你的集群有可能承受较大的负载,那么你应该考虑哪种配置最有效。
|
||||
|
@ -213,7 +234,7 @@ APF 特性附带推荐配置,该配置对实验场景应该足够;
|
|||
workloads to be able to schedule on them.
|
||||
-->
|
||||
* `system` 优先级用于 `system:nodes` 组(即 Kubelets )的请求;
|
||||
kubelets 必须能连上 API 服务器,以便工作负载能够调度到其上。
|
||||
kubelets 必须能连上 API 服务器,以便工作负载能够调度到其上。
|
||||
|
||||
<!--
|
||||
* The `leader-election` priority level is for leader election requests from
|
||||
|
@ -225,35 +246,30 @@ kubelets 必须能连上 API 服务器,以便工作负载能够调度到其上
|
|||
causes more expensive traffic as the new controllers sync their informers.
|
||||
-->
|
||||
* `leader-election` 优先级用于内置控制器的领导选举的请求
|
||||
(特别是来自 `kube-system` 名称空间中 `system:kube-controller-manager` 和
|
||||
`system:kube-scheduler` 用户和服务账号,针对 `endpoints`、`configmaps` 或 `leases` 的请求)。
|
||||
将这些请求与其他流量相隔离非常重要,因为领导者选举失败会导致控制器发生故障并重新启动,
|
||||
这反过来会导致新启动的控制器在同步信息时,流量开销更大。
|
||||
(特别是来自 `kube-system` 名称空间中 `system:kube-controller-manager` 和
|
||||
`system:kube-scheduler` 用户和服务账号,针对 `endpoints`、`configmaps` 或 `leases` 的请求)。
|
||||
将这些请求与其他流量相隔离非常重要,因为领导者选举失败会导致控制器发生故障并重新启动,
|
||||
这反过来会导致新启动的控制器在同步信息时,流量开销更大。
|
||||
|
||||
<!--
|
||||
* The `workload-high` priority level is for other requests from built-in
|
||||
controllers.
|
||||
-->
|
||||
* `workload-high` 优先级用于内置控制器的请求。
|
||||
|
||||
<!--
|
||||
* The `workload-low` priority level is for requests from any other service
|
||||
account, which will typically include all requests from controllers running in
|
||||
Pods.
|
||||
-->
|
||||
* `workload-low` 优先级适用于来自任何服务帐户的请求,通常包括来自 Pods 中运行的控制器的所有请求。
|
||||
|
||||
<!--
|
||||
* The `global-default` priority level handles all other traffic, e.g.
|
||||
interactive `kubectl` commands run by nonprivileged users.
|
||||
-->
|
||||
* `workload-high` 优先级用于内置控制器的请求。
|
||||
* `workload-low` 优先级适用于来自任何服务帐户的请求,通常包括来自 Pods
|
||||
中运行的控制器的所有请求。
|
||||
* `global-default` 优先级可处理所有其他流量,例如:非特权用户运行的交互式 `kubectl` 命令。
|
||||
|
||||
<!--
|
||||
Additionally, there are two PriorityLevelConfigurations and two FlowSchemas that
|
||||
are built in and may not be overwritten:
|
||||
-->
|
||||
内置了两个 PriorityLevelConfiguration 和两个 FlowSchema,它们是内置的、不可重载的:
|
||||
系统内置了两个 PriorityLevelConfiguration 和两个 FlowSchema,它们是不可重载的:
|
||||
|
||||
<!--
|
||||
* The special `exempt` priority level is used for requests that are not subject
|
||||
|
@ -264,7 +280,7 @@ are built in and may not be overwritten:
|
|||
-->
|
||||
* 特殊的 `exempt` 优先级的请求完全不受流控限制:它们总是立刻被分发。
|
||||
特殊的 `exempt` FlowSchema 把 `system:masters` 组的所有请求都归入该优先级组。
|
||||
如果合适,你可以定义新的 FlowSchema,将其他请求定向到该优先级。
|
||||
如果合适,你可以定义新的 FlowSchema,将其他请求定向到该优先级。
|
||||
|
||||
<!--
|
||||
* The special `catch-all` priority level is used in combination with the special
|
||||
|
@ -279,10 +295,11 @@ are built in and may not be overwritten:
|
|||
error.
|
||||
-->
|
||||
* 特殊的 `catch-all` 优先级与特殊的 `catch-all` FlowSchema 结合使用,以确保每个请求都分类。
|
||||
一般地,你不应该依赖于 `catch-all` 的配置,而应适当地创建自己的 `catch-all` FlowSchema 和 PriorityLevelConfigurations
|
||||
(或使用默认安装的 `global-default` 配置)。
|
||||
为了帮助捕获部分请求未分类的配置错误,强制要求 `catch-all` 优先级仅允许一个并发份额,
|
||||
并且不对请求进行排队,使得仅与 `catch-all` FlowSchema 匹配的流量被拒绝的可能性更高,并显示 HTTP 429 错误。
|
||||
一般地,你不应该依赖于 `catch-all` 的配置,而应适当地创建自己的 `catch-all`
|
||||
FlowSchema 和 PriorityLevelConfigurations(或使用默认安装的 `global-default` 配置)。
|
||||
为了帮助捕获部分请求未分类的配置错误,强制要求 `catch-all` 优先级仅允许一个并发份额,
|
||||
并且不对请求进行排队,使得仅与 `catch-all` FlowSchema 匹配的流量被拒绝的可能性更高,
|
||||
并显示 HTTP 429 错误。
|
||||
|
||||
<!-- ## Health check concurrency exemption -->
|
||||
## 健康检查并发豁免 {#Health-check-concurrency-exemption}
|
||||
|
@ -306,8 +323,6 @@ requests from rate limiting.
|
|||
-->
|
||||
如果添加以下 FlowSchema,健康检查请求不受速率限制。
|
||||
|
||||
{{< caution >}}
|
||||
|
||||
<!--
|
||||
Making this change also allows any hostile party to then send
|
||||
health-check requests that match this FlowSchema, at any volume they
|
||||
|
@ -316,10 +331,10 @@ mechanism to protect your cluster's API server from general internet
|
|||
traffic, you can configure rules to block any health check requests
|
||||
that originate from outside your cluster.
|
||||
-->
|
||||
{{< caution >}}
|
||||
进行此更改后,任何敌对方都可以发送与此 FlowSchema 匹配的任意数量的健康检查请求。
|
||||
如果你有 Web 流量过滤器或类似的外部安全机制保护集群的 API 服务器免受常规网络流量的侵扰,
|
||||
则可以配置规则,阻止所有来自集群外部的健康检查请求。
|
||||
|
||||
{{< /caution >}}
|
||||
|
||||
{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
|
||||
|
@ -327,12 +342,14 @@ that originate from outside your cluster.
|
|||
<!--
|
||||
## Resources
|
||||
The flow control API involves two kinds of resources.
|
||||
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol-apiserver-k8s-io)
|
||||
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta1-flowcontrol-apiserver-k8s-io)
|
||||
define the available isolation classes, the share of the available concurrency
|
||||
budget that each can handle, and allow for fine-tuning queuing behavior.
|
||||
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)
|
||||
are used to classify individual inbound requests, matching each to a single
|
||||
PriorityLevelConfiguration.
|
||||
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1beta1-flowcontrol-apiserver-k8s-io)
|
||||
are used to classify individual inbound requests, matching each to a
|
||||
single PriorityLevelConfiguration. There is also a `v1alpha1` version
|
||||
of the same API group, and it has the same Kinds with the same syntax and
|
||||
semantics.
|
||||
-->
|
||||
## 资源 {#Resources}
|
||||
|
||||
|
@ -341,6 +358,7 @@ PriorityLevelConfiguration.
|
|||
定义隔离类型和可处理的并发预算量,还可以微调排队行为。
|
||||
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol-apiserver-k8s-io)
|
||||
用于对每个入站请求进行分类,并与一个 PriorityLevelConfigurations 相匹配。
|
||||
此外同一 API 组还有一个 `v1alpha1` 版本,其中包含语法和语义都相同的资源类别。
|
||||
|
||||
<!--
|
||||
### PriorityLevelConfiguration
|
||||
|
@ -350,7 +368,8 @@ requests, and limitations on the number of queued requests.
|
|||
-->
|
||||
### PriorityLevelConfiguration {#PriorityLevelConfiguration}
|
||||
|
||||
一个 PriorityLevelConfiguration 表示单个隔离类型。每个 PriorityLevelConfigurations 对未完成的请求数有各自的限制,对排队中的请求数也有限制。
|
||||
一个 PriorityLevelConfiguration 表示单个隔离类型。每个 PriorityLevelConfigurations
|
||||
对未完成的请求数有各自的限制,对排队中的请求数也有限制。
|
||||
|
||||
<!--
|
||||
Concurrency limits for PriorityLevelConfigurations are not specified in absolute
|
||||
|
@ -410,7 +429,7 @@ proposal](#whats-next), but in short:
|
|||
fair-queuing logic, but still allows requests to be queued.
|
||||
-->
|
||||
* `queues` 递增能减少不同流之间的冲突概率,但代价是增加了内存使用量。
|
||||
值为1时,会禁用公平排队逻辑,但仍允许请求排队。
|
||||
值为 1 时,会禁用公平排队逻辑,但仍允许请求排队。
|
||||
|
||||
<!--
|
||||
* Increasing `queueLengthLimit` allows larger bursts of traffic to be
|
||||
|
@ -418,28 +437,30 @@ proposal](#whats-next), but in short:
|
|||
latency and memory usage.
|
||||
-->
|
||||
* `queueLengthLimit` 递增可以在不丢弃任何请求的情况下支撑更大的突发流量,
|
||||
但代价是增加了等待时间和内存使用量。
|
||||
但代价是增加了等待时间和内存使用量。
|
||||
|
||||
<!--
|
||||
* Changing `handSize` allows you to adjust the probability of collisions between
|
||||
different flows and the overall concurrency available to a single flow in an
|
||||
overload situation.
|
||||
|
||||
{{< note >}}
|
||||
A larger `handSize` makes it less likely for two individual flows to collide
|
||||
(and therefore for one to be able to starve the other), but more likely that
|
||||
a small number of flows can dominate the apiserver. A larger `handSize` also
|
||||
potentially increases the amount of latency that a single high-traffic flow
|
||||
can cause. The maximum number of queued requests possible from a
|
||||
single flow is `handSize *queueLengthLimit`.
|
||||
A larger `handSize` makes it less likely for two individual flows to collide
|
||||
(and therefore for one to be able to starve the other), but more likely that
|
||||
a small number of flows can dominate the apiserver. A larger `handSize` also
|
||||
potentially increases the amount of latency that a single high-traffic flow
|
||||
can cause. The maximum number of queued requests possible from a
|
||||
single flow is `handSize * queueLengthLimit`.
|
||||
{{< /note >}}
|
||||
-->
|
||||
* 修改 `handSize` 允许你调整过载情况下不同流之间的冲突概率以及单个流可用的整体并发性。
|
||||
{{< note >}}
|
||||
较大的 `handSize` 使两个单独的流程发生碰撞的可能性较小(因此,一个流可以饿死另一个流),
|
||||
但是更有可能的是少数流可以控制 apiserver。
|
||||
较大的 `handSize` 还可能增加单个高并发流的延迟量。
|
||||
单个流中可能排队的请求的最大数量为 `handSize *queueLengthLimit` 。
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
较大的 `handSize` 使两个单独的流程发生碰撞的可能性较小(因此,一个流可以饿死另一个流),
|
||||
但是更有可能的是少数流可以控制 apiserver。
|
||||
较大的 `handSize` 还可能增加单个高并发流的延迟量。
|
||||
单个流中可能排队的请求的最大数量为 `handSize * queueLengthLimit`。
|
||||
{{< /note >}}
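As an illustration with made-up numbers: if `handSize` is 8 and `queueLengthLimit` is 50, a single flow can have at most 8 * 50 = 400 requests queued at any one time.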
|
||||
|
||||
<!--
|
||||
Following is a table showing an interesting collection of shuffle
|
||||
|
@ -454,7 +475,7 @@ https://play.golang.org/p/Gi0PLgVHiUg , which computes this table.
|
|||
|
||||
{{< table caption = "Example Shuffle Sharding Configurations" >}}
|
||||
<!-- HandSize | Queues | 1 elephant | 4 elephants | 16 elephants -->
|
||||
随机分片 | 队列数 | 1个大象 | 4个大象 | 16个大象
|
||||
随机分片 | 队列数 | 1 个大象 | 4 个大象 | 16 个大象
|
||||
|----------|-----------|------------|----------------|--------------------|
|
||||
| 12 | 32 | 4.428838398950118e-09 | 0.11431348830099144 | 0.9935089607656024 |
|
||||
| 10 | 32 | 1.550093439632541e-08 | 0.0626479840223545 | 0.9753101519027554 |
|
||||
|
@ -484,7 +505,6 @@ FlowSchema 匹配一些入站请求,并将它们分配给优先级。
|
|||
首先从 `matchingPrecedence` 数值最低的匹配开始(我们认为这是逻辑上匹配度最高),
|
||||
然后依次进行,直到首个匹配出现。
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Only the first matching FlowSchema for a given request matters. If multiple
|
||||
FlowSchemas match a single inbound request, it will be assigned based on the one
|
||||
|
@ -493,6 +513,7 @@ with the highest `matchingPrecedence`. If multiple FlowSchemas with equal
|
|||
smaller `name` will win, but it's better not to rely on this, and instead to
|
||||
ensure that no two FlowSchemas have the same `matchingPrecedence`.
|
||||
-->
|
||||
{{< caution >}}
|
||||
对一个请求来说,只有首个匹配的 FlowSchema 才有意义。
|
||||
如果一个入站请求与多个 FlowSchema 匹配,则将基于 `matchingPrecedence` 值最高的请求进行筛选。
|
||||
如果一个请求匹配多个 FlowSchema 且 `matchingPrecedence` 的值相同,则按 `name` 的字典序选择最小,
|
||||
|
@ -540,6 +561,7 @@ FlowSchema 的 `distinguisherMethod.type` 字段决定了如何把与该模式
|
|||
|
||||
<!--
|
||||
## Diagnostics
|
||||
|
||||
Every HTTP response from an API server with the priority and fairness feature
|
||||
enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
|
||||
`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
|
||||
|
@ -547,7 +569,8 @@ and the priority level to which it was assigned, respectively. The API objects'
|
|||
names are not included in these headers in case the requesting user does not
|
||||
have permission to view them, so when debugging you can use a command like
|
||||
-->
|
||||
## 诊断程序 {#Diagnostics}
|
||||
|
||||
## 问题诊断 {#diagnostics}
|
||||
|
||||
启用了 APF 的 API 服务器,它每个 HTTP 响应都有两个额外的 HTTP 头:
|
||||
`X-Kubernetes-PF-FlowSchema-UID` 和 `X-Kubernetes-PF-PriorityLevel-UID`,
|
||||
|
@ -559,18 +582,35 @@ have permission to view them, so when debugging you can use a command like
|
|||
kubectl get flowschemas -o custom-columns="uid:{metadata.uid},name:{metadata.name}"
|
||||
kubectl get prioritylevelconfigurations -o custom-columns="uid:{metadata.uid},name:{metadata.name}"
|
||||
```
|
||||
|
||||
<!--
|
||||
to get a mapping of UIDs to names for both FlowSchemas and
|
||||
PriorityLevelConfigurations.
|
||||
-->
|
||||
来获取 UID 到 FlowSchema 的名称和 UID 到 PriorityLevelConfigurations 的名称的映射。
|
||||
|
||||
<!-- ## Observability -->
|
||||
<!--
|
||||
## Observability
|
||||
|
||||
### Metrics
|
||||
-->
|
||||
## 可观察性 {#Observability}
|
||||
|
||||
<!-- ### Metrics -->
|
||||
### 指标 {#Metrics}
|
||||
|
||||
<!--
|
||||
In versions of Kubernetes before v1.20, the labels `flow_schema` and
|
||||
`priority_level` were inconsistently named `flowSchema` and `priorityLevel`,
|
||||
respectively. If you're running Kubernetes versions v1.19 and earlier, you
|
||||
should refer to the documentation for your version.
|
||||
-->
|
||||
{{< note >}}
|
||||
在 Kubernetes v1.20 之前的版本中,标签 `flow_schema` 和 `priority_level`
|
||||
的命名有时被写作 `flowSchema` 和 `priorityLevel`,即存在不一致的情况。
|
||||
如果你在运行 Kubernetes v1.19 或者更早版本,你需要参考你所使用的集群
|
||||
版本对应的文档。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
When you enable the API Priority and Fairness feature, the kube-apiserver
|
||||
exports additional metrics. Monitoring these can help you determine whether your
|
||||
|
@ -584,8 +624,8 @@ poorly-behaved workloads that may be harming system health.
|
|||
<!--
|
||||
* `apiserver_flowcontrol_rejected_requests_total` is a counter vector
|
||||
(cumulative since server start) of requests that were rejected,
|
||||
broken down by the labels `flowSchema` (indicating the one that
|
||||
matched the request), `priorityLevel` (indicating the one to which
|
||||
broken down by the labels `flow_schema` (indicating the one that
|
||||
matched the request), `priority_level` (indicating the one to which
|
||||
the request was assigned), and `reason`. The `reason` label will
|
||||
have one of the following values:
|
||||
* `queue-full`, indicating that too many requests were already
|
||||
|
@ -597,23 +637,26 @@ poorly-behaved workloads that may be harming system health.
|
|||
when its queuing time limit expired.
|
||||
-->
|
||||
* `apiserver_flowcontrol_rejected_requests_total` 是一个计数器向量,
|
||||
记录被拒绝的请求数量(自服务器启动以来累计值),
|
||||
由标签 `flowSchema` (表示与请求匹配的 FlowSchema ), `priorityLevel` (表示分配给请该求的优先级)和 `reason` 拆分。
|
||||
`reason` 标签将具有以下值之一:
|
||||
* `queue-full` ,表明已经有太多请求排队,
|
||||
* `concurrency-limit` ,表示将 PriorityLevelConfiguration 配置为 `Reject` 而不是 `Queue` ,或者
|
||||
* `time-out`, 表示在其排队时间超期的请求仍在队列中。
|
||||
记录被拒绝的请求数量(自服务器启动以来累计值),
|
||||
由标签 `flow_schema` (表示与请求匹配的 FlowSchema ),`priority_level`
|
||||
(表示分配给请该求的优先级)和 `reason` 来区分。
|
||||
`reason` 标签将具有以下值之一:
|
||||
|
||||
* `queue-full` ,表明已经有太多请求排队,
|
||||
* `concurrency-limit` ,表示将 PriorityLevelConfiguration 配置为 `Reject` 而不是 `Queue` ,或者
|
||||
* `time-out`, 表示在其排队时间超期的请求仍在队列中。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_dispatched_requests_total` is a counter
|
||||
vector (cumulative since server start) of requests that began
|
||||
executing, broken down by the labels `flowSchema` (indicating the
|
||||
one that matched the request) and `priorityLevel` (indicating the
|
||||
executing, broken down by the labels `flow_schema` (indicating the
|
||||
one that matched the request) and `priority_level` (indicating the
|
||||
one to which the request was assigned).
|
||||
-->
|
||||
* `apiserver_flowcontrol_dispatched_requests_total` 是一个计数器向量,
|
||||
记录开始执行的请求数量(自服务器启动以来的累积值),
|
||||
由标签 `flowSchema` (表示与请求匹配的 FlowSchema )和 `priorityLevel`(表示分配给该请求的优先级)拆分。
|
||||
记录开始执行的请求数量(自服务器启动以来的累积值),
|
||||
由标签 `flow_schema` (表示与请求匹配的 FlowSchema )和
|
||||
`priority_level`(表示分配给该请求的优先级)来区分。
|
||||
|
||||
<!--
|
||||
* `apiserver_current_inqueue_requests` is a gauge vector of recent
|
||||
|
@ -626,11 +669,11 @@ poorly-behaved workloads that may be harming system health.
|
|||
served.
|
||||
-->
|
||||
* `apiserver_current_inqueue_requests` 是一个表向量,
|
||||
记录最近排队请求数量的高水位线,
|
||||
由标签 `request_kind` 分组,标签的值为 `mutating` 或 `readOnly`。
|
||||
这些高水位线表示在最近一秒钟内看到的最大数字。
|
||||
它们补充说明了老的表向量 `apiserver_current_inflight_requests`
|
||||
(该量保存了最后一个窗口中,正在处理的请求数量的高水位线)。
|
||||
记录最近排队请求数量的高水位线,
|
||||
由标签 `request_kind` 分组,标签的值为 `mutating` 或 `readOnly`。
|
||||
这些高水位线表示在最近一秒钟内看到的最大数字。
|
||||
它们补充说明了老的表向量 `apiserver_current_inflight_requests`
|
||||
(该量保存了最后一个窗口中,正在处理的请求数量的高水位线)。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_read_vs_write_request_count_samples` is a
|
||||
|
@ -641,9 +684,9 @@ poorly-behaved workloads that may be harming system health.
|
|||
periodically at a high rate.
|
||||
-->
|
||||
* `apiserver_flowcontrol_read_vs_write_request_count_samples` 是一个直方图向量,
|
||||
记录当前请求数量的观察值,
|
||||
由标签 `phase` (取值为 `waiting` 和 `executing` )和 `request_kind` (取值 `mutating` 和 `readOnly` )拆分。
|
||||
定期以高速率观察该值。
|
||||
记录当前请求数量的观察值,
|
||||
由标签 `phase` (取值为 `waiting` 和 `executing` )和 `request_kind`
|
||||
(取值 `mutating` 和 `readOnly` )拆分。定期以高速率观察该值。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` is a
|
||||
|
@ -657,11 +700,11 @@ poorly-behaved workloads that may be harming system health.
|
|||
water marks show the range of values that occurred between samples.
|
||||
-->
|
||||
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` 是一个直方图向量,
|
||||
记录请求数量的高/低水位线,
|
||||
由标签 `phase` (取值为 `waiting` 和 `executing` )和 `request_kind` (取值为 `mutating` 和 `readOnly` )拆分;
|
||||
标签 `mark` 取值为 `high` 和 `low` 。
|
||||
`apiserver_flowcontrol_read_vs_write_request_count_samples` 向量观察到有值新增,则该向量累积。
|
||||
这些水位线显示了样本值的范围。
|
||||
记录请求数量的高/低水位线,
|
||||
由标签 `phase` (取值为 `waiting` 和 `executing` )和 `request_kind`
|
||||
(取值为 `mutating` 和 `readOnly` )拆分;标签 `mark` 取值为 `high` 和 `low` 。
|
||||
`apiserver_flowcontrol_read_vs_write_request_count_samples` 向量观察到有值新增,
|
||||
则该向量累积。这些水位线显示了样本值的范围。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector
|
||||
|
@ -669,37 +712,38 @@ poorly-behaved workloads that may be harming system health.
|
|||
broken down by the labels `priorityLevel` and `flowSchema`.
|
||||
-->
|
||||
* `apiserver_flowcontrol_current_inqueue_requests` 是一个表向量,
|
||||
记录包含排队中的(未执行)请求的瞬时数量,
|
||||
由标签 `priorityLevel` 和 `flowSchema` 拆分。
|
||||
记录包含排队中的(未执行)请求的瞬时数量,
|
||||
由标签 `priorityLevel` 和 `flowSchema` 拆分。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_current_executing_requests` is a gauge vector
|
||||
holding the instantaneous number of executing (not waiting in a
|
||||
queue) requests, broken down by the labels `priorityLevel` and
|
||||
`flowSchema`.
|
||||
queue) requests, broken down by the labels `priority_level` and
|
||||
`flow_schema`.
|
||||
-->
|
||||
* `apiserver_flowcontrol_current_executing_requests` 是一个表向量,
|
||||
记录包含执行中(不在队列中等待)请求的瞬时数量,
|
||||
由标签 `priorityLevel` 和 `flowSchema` 拆分。
|
||||
记录包含执行中(不在队列中等待)请求的瞬时数量,
|
||||
由标签 `priority_level` 和 `flow_schema` 进一步区分。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_priority_level_request_count_samples` is a
|
||||
histogram vector of observations of the then-current number of
|
||||
requests broken down by the labels `phase` (which takes on the
|
||||
values `waiting` and `executing`) and `priorityLevel`. Each
|
||||
values `waiting` and `executing`) and `priority_level`. Each
|
||||
histogram gets observations taken periodically, up through the last
|
||||
activity of the relevant sort. The observations are made at a high
|
||||
rate.
|
||||
-->
|
||||
* `apiserver_flowcontrol_priority_level_request_count_samples` 是一个直方图向量,
|
||||
记录当前请求的观测值,由标签 `phase` (取值为`waiting` 和 `executing`)和 `priorityLevel` 拆分。
|
||||
每个直方图都会定期进行观察,直到相关类别的最后活动为止。观察频率高。
|
||||
记录当前请求的观测值,由标签 `phase` (取值为`waiting` 和 `executing`)和
|
||||
`priority_level` 进一步区分。
|
||||
每个直方图都会定期进行观察,直到相关类别的最后活动为止。观察频率高。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_priority_level_request_count_watermarks` is a
|
||||
histogram vector of high or low water marks of the number of
|
||||
requests broken down by the labels `phase` (which takes on the
|
||||
values `waiting` and `executing`) and `priorityLevel`; the label
|
||||
values `waiting` and `executing`) and `priority_level`; the label
|
||||
`mark` takes on values `high` and `low`. The water marks are
|
||||
accumulated over windows bounded by the times when an observation
|
||||
was added to
|
||||
|
@ -707,46 +751,49 @@ poorly-behaved workloads that may be harming system health.
|
|||
water marks show the range of values that occurred between samples.
|
||||
-->
|
||||
* `apiserver_flowcontrol_priority_level_request_count_watermarks` 是一个直方图向量,
|
||||
记录请求数的高/低水位线,由标签 `phase` (取值为 `waiting` 和 `executing` )和 `priorityLevel` 拆分;
|
||||
标签 `mark` 取值为 `high` 和 `low` 。
|
||||
`apiserver_flowcontrol_priority_level_request_count_samples` 向量观察到有值新增,则该向量累积。
|
||||
这些水位线显示了样本值的范围。
|
||||
记录请求数的高/低水位线,由标签 `phase` (取值为 `waiting` 和 `executing` )和
|
||||
`priority_level` 拆分;
|
||||
标签 `mark` 取值为 `high` 和 `low` 。
|
||||
`apiserver_flowcontrol_priority_level_request_count_samples` 向量观察到有值新增,
|
||||
则该向量累积。这些水位线显示了样本值的范围。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_request_queue_length_after_enqueue` is a
|
||||
histogram vector of queue lengths for the queues, broken down by
|
||||
the labels `priorityLevel` and `flowSchema`, as sampled by the
|
||||
the labels `priority_level` and `flow_schema`, as sampled by the
|
||||
enqueued requests. Each request that gets queued contributes one
|
||||
sample to its histogram, reporting the length of the queue just
|
||||
after the request was added. Note that this produces different
|
||||
statistics than an unbiased survey would.
|
||||
-->
|
||||
* `apiserver_flowcontrol_request_queue_length_after_enqueue` 是一个直方图向量,
|
||||
记录请求队列的长度,由标签 `priorityLevel` 和 `flowSchema` 拆分。
|
||||
每个排队中的请求都会为其直方图贡献一个样本,并在添加请求后立即上报队列的长度。
|
||||
请注意,这样产生的统计数据与无偏调查不同。
|
||||
{{< note >}}
|
||||
<!--
|
||||
An outlier value in a histogram here means it is likely that a single flow
|
||||
(i.e., requests by one user or for one namespace, depending on
|
||||
configuration) is flooding the API server, and being throttled. By contrast,
|
||||
if one priority level's histogram shows that all queues for that priority
|
||||
level are longer than those for other priority levels, it may be appropriate
|
||||
to increase that PriorityLevelConfiguration's concurrency shares.
|
||||
-->
|
||||
直方图中的离群值在这里表示单个流(即,一个用户或一个名称空间的请求,具体取决于配置)正在疯狂请求 API 服务器,并受到限制。
|
||||
相反,如果一个优先级的直方图显示该优先级的所有队列都比其他优先级的队列长,则增加 PriorityLevelConfigurations 的并发份额是比较合适的。
|
||||
{{< /note >}}
|
||||
记录请求队列的长度,由标签 `priority_level` 和 `flow_schema` 进一步区分。
|
||||
每个排队中的请求都会为其直方图贡献一个样本,并在添加请求后立即上报队列的长度。
|
||||
请注意,这样产生的统计数据与无偏调查不同。
|
||||
<!--
|
||||
An outlier value in a histogram here means it is likely that a single flow
|
||||
(i.e., requests by one user or for one namespace, depending on
|
||||
configuration) is flooding the API server, and being throttled. By contrast,
|
||||
if one priority level's histogram shows that all queues for that priority
|
||||
level are longer than those for other priority levels, it may be appropriate
|
||||
to increase that PriorityLevelConfiguration's concurrency shares.
|
||||
-->
|
||||
{{< note >}}
|
||||
直方图中的离群值在这里表示单个流(即,一个用户或一个名称空间的请求,
|
||||
具体取决于配置)正在疯狂地向 API 服务器发请求,并受到限制。
|
||||
相反,如果一个优先级的直方图显示该优先级的所有队列都比其他优先级的队列长,
|
||||
则增加 PriorityLevelConfigurations 的并发份额是比较合适的。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
|
||||
holding the computed concurrency limit (based on the API server's
|
||||
total concurrency limit and PriorityLevelConfigurations' concurrency
|
||||
shares), broken down by the label `priorityLevel`.
|
||||
shares), broken down by the label `priority_level`.
|
||||
-->
|
||||
* `apiserver_flowcontrol_request_concurrency_limit` 是一个表向量,
|
||||
记录并发限制的计算值(基于 API 服务器的总并发限制和 PriorityLevelConfigurations 的并发份额),
|
||||
并按标签 `priorityLevel` 拆分。
|
||||
记录并发限制的计算值(基于 API 服务器的总并发限制和 PriorityLevelConfigurations
|
||||
的并发份额),并按标签 `priority_level` 进一步区分。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram
|
||||
|
@ -757,19 +804,21 @@ poorly-behaved workloads that may be harming system health.
|
|||
executing).
|
||||
-->
|
||||
* `apiserver_flowcontrol_request_wait_duration_seconds` 是一个直方图向量,
|
||||
记录请求排队的时间,
|
||||
由标签 `flowSchema` (表示与请求匹配的 FlowSchema ),
|
||||
`priorityLevel` (表示分配该请求的优先级)
|
||||
和 `execute` (表示请求是否开始执行)拆分。
|
||||
{{< note >}}
|
||||
<!--
|
||||
Since each FlowSchema always assigns requests to a single
|
||||
PriorityLevelConfiguration, you can add the histograms for all the
|
||||
FlowSchemas for one priority level to get the effective histogram for
|
||||
requests assigned to that priority level.
|
||||
-->
|
||||
由于每个 FlowSchema 总会给请求分配 PriorityLevelConfigurations,因此你可以为一个优先级添加所有 FlowSchema 的直方图,以获取分配给该优先级的请求的有效直方图。
|
||||
{{< /note >}}
|
||||
记录请求排队的时间,
|
||||
由标签 `flow_schema` (表示与请求匹配的 FlowSchema ),
|
||||
`priority_level` (表示分配该请求的优先级)
|
||||
和 `execute` (表示请求是否开始执行)进一步区分。
|
||||
<!--
|
||||
Since each FlowSchema always assigns requests to a single
|
||||
PriorityLevelConfiguration, you can add the histograms for all the
|
||||
FlowSchemas for one priority level to get the effective histogram for
|
||||
requests assigned to that priority level.
|
||||
-->
|
||||
{{< note >}}
|
||||
由于每个 FlowSchema 总会给请求分配 PriorityLevelConfigurations,
|
||||
因此你可以为一个优先级添加所有 FlowSchema 的直方图,以获取分配给
|
||||
该优先级的请求的有效直方图。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_request_execution_seconds` is a histogram
|
||||
|
@ -779,29 +828,34 @@ poorly-behaved workloads that may be harming system health.
|
|||
assigned).
|
||||
-->
|
||||
* `apiserver_flowcontrol_request_execution_seconds` 是一个直方图向量,
|
||||
记录请求实际执行需要花费的时间,
|
||||
由标签 `flowSchema` (表示与请求匹配的 FlowSchema )和 `priorityLevel` (表示分配给该请求的优先级)拆分。
|
||||
|
||||
<!-- ### Debug endpoints -->
|
||||
### 调试端点 {#Debug-endpoints}
|
||||
记录请求实际执行需要花费的时间,
|
||||
由标签 `flow_schema` (表示与请求匹配的 FlowSchema )和
|
||||
`priority_level` (表示分配给该请求的优先级)进一步区分。
|
||||
|
||||
<!--
|
||||
### Debug endpoints
|
||||
|
||||
When you enable the API Priority and Fairness feature,
|
||||
the kube-apiserver serves the following additional paths at its HTTP[S] ports.
|
||||
-->
|
||||
### 调试端点 {#Debug-endpoints}
|
||||
|
||||
启用 APF 特性后, kube-apiserver 会在其 HTTP/HTTPS 端口提供以下路径:
|
||||
|
||||
<!--
|
||||
- `/debug/api_priority_and_fairness/dump_priority_levels` - a listing of all the priority levels and the current state of each.
|
||||
You can fetch like this:
|
||||
-->
|
||||
- `/debug/api_priority_and_fairness/dump_priority_levels` —— 所有优先级及其当前状态的列表。你可以这样获取:
|
||||
- `/debug/api_priority_and_fairness/dump_priority_levels` ——
|
||||
所有优先级及其当前状态的列表。你可以这样获取:
|
||||
|
||||
```shell
|
||||
kubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
PriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests,
|
||||
workload-low, 0, true, false, 0, 0,
|
||||
|
@ -817,13 +871,16 @@ You can fetch like this:
|
|||
- `/debug/api_priority_and_fairness/dump_queues` - a listing of all the queues and their current state.
|
||||
You can fetch like this:
|
||||
-->
|
||||
- `/debug/api_priority_and_fairness/dump_queues` ——所有队列及其当前状态的列表。你可以这样获取:
|
||||
- `/debug/api_priority_and_fairness/dump_queues` —— 所有队列及其当前状态的列表。
|
||||
你可以这样获取:
|
||||
|
||||
```shell
|
||||
kubectl get --raw /debug/api_priority_and_fairness/dump_queues
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
PriorityLevelName, Index, PendingRequests, ExecutingRequests, VirtualStart,
|
||||
workload-high, 0, 0, 0, 0.0000,
|
||||
|
@ -838,13 +895,16 @@ You can fetch like this:
|
|||
- `/debug/api_priority_and_fairness/dump_requests` - a listing of all the requests that are currently waiting in a queue.
|
||||
You can fetch like this:
|
||||
-->
|
||||
- `/debug/api_priority_and_fairness/dump_requests` ——当前正在队列中等待的所有请求的列表。你可以这样获取:
|
||||
- `/debug/api_priority_and_fairness/dump_requests` —— 当前正在队列中等待的所有请求的列表。
|
||||
你可以这样获取:
|
||||
|
||||
```shell
|
||||
kubectl get --raw /debug/api_priority_and_fairness/dump_requests
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
PriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime,
|
||||
exempt, <none>, <none>, <none>, <none>, <none>,
|
||||
|
@ -859,6 +919,7 @@ You can fetch like this:
|
|||
|
||||
<!-- You can get a more detailed listing with a command like this: -->
|
||||
你可以使用以下命令获得更详细的清单:
|
||||
|
||||
```shell
|
||||
kubectl get --raw '/debug/api_priority_and_fairness/dump_requests?includeRequestDetails=1'
|
||||
```
|
||||
|
|
|
@ -459,6 +459,18 @@ configuration.
|
|||
不过,使用内置的 Secret 类型的有助于对凭据格式进行归一化处理,并且
|
||||
API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。
|
||||
|
||||
<!--
|
||||
SSH private keys do not establish trusted communication between an SSH client and
|
||||
host server on their own. A secondary means of establishing trust is needed to
|
||||
mitigate "man in the middle" attacks, such as a `known_hosts` file added to a
|
||||
ConfigMap.
|
||||
-->
|
||||
{{< caution >}}
|
||||
SSH 私钥自身无法建立 SSH 客户端与服务器端之间的可信连接。
|
||||
需要其它方式来建立这种信任关系,以缓解“中间人(Man In The Middle)”
|
||||
攻击,例如向 ConfigMap 中添加一个 `known_hosts` 文件。
|
||||
{{< /caution >}}
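作为示意,可以像下面这样把 `known_hosts` 内容放入一个 ConfigMap(名称与公钥条目均为占位示例):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-known-hosts        # 示例名称
data:
  known_hosts: |
    # 替换为你实际信任的主机公钥条目
    example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAExampleKeyData
```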
|
||||
|
||||
<!--
|
||||
### TLS secrets
|
||||
|
||||
|
@ -581,7 +593,7 @@ data:
|
|||
<!--
|
||||
A bootstrap type has the following keys specified under `data`:
|
||||
|
||||
- `token_id`: A random 6 character string as the token identifier. Required.
|
||||
- `token-id`: A random 6 character string as the token identifier. Required.
|
||||
- `token-secret`: A random 16 character string as the actual token secret. Required.
|
||||
- `description`: A human-readable string that describes what the token is
|
||||
used for. Optional.
|
||||
|
@ -594,7 +606,7 @@ A bootstrap type has the following keys specified under `data`:
|
|||
-->
|
||||
启动引导令牌类型的 Secret 会在 `data` 字段中包含如下主键:
|
||||
|
||||
- `token_id`:由 6 个随机字符组成的字符串,作为令牌的标识符。必需。
|
||||
- `token-id`:由 6 个随机字符组成的字符串,作为令牌的标识符。必需。
|
||||
- `token-secret`:由 16 个随机字符组成的字符串,包含实际的令牌机密。必需。
|
||||
- `description`:供用户阅读的字符串,描述令牌的用途。可选。
|
||||
- `expiration`:一个使用 RFC3339 来编码的 UTC 绝对时间,给出令牌要过期的时间。可选。
|
||||
|
@ -1154,6 +1166,18 @@ The output is similar to:
|
|||
1f2d1e2e67df
|
||||
```
|
||||
|
||||
<!--
|
||||
#### Environment variables are not updated after a secret update
|
||||
|
||||
If a container already consumes a Secret in an environment variable, a Secret update will not be seen by the container unless it is restarted.
|
||||
There are third party solutions for triggering restarts when secrets change.
|
||||
-->
|
||||
#### Secret 更新之后对应的环境变量不会被更新
|
||||
|
||||
如果某个容器已经在通过环境变量使用某 Secret,对该 Secret 的更新不会被
|
||||
容器马上看见,除非容器被重启。有一些第三方的解决方案能够在 Secret 发生
|
||||
变化时触发容器重启。
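作为参考,下面是一个示意性的容器规约片段,展示“通过环境变量使用 Secret”的典型形式(Secret 名称与键均为示例值);以这种方式注入的值在 Secret 更新后不会自动刷新:

```yaml
env:
- name: SECRET_USERNAME          # 容器内可见的环境变量名(示例)
  valueFrom:
    secretKeyRef:
      name: mysecret             # 被引用的 Secret 名称(示例)
      key: username              # Secret 中的键(示例)
```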
|
||||
|
||||
<!--
|
||||
## Immutable Secrets {#secret-immutable}
|
||||
-->
|
||||
|
|
|
@ -56,7 +56,7 @@ but with different settings.
|
|||
|
||||
Ensure the RuntimeClass feature gate is enabled (it is by default). See [Feature
|
||||
Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling
|
||||
feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _and_ kubelets.
|
||||
feature gates. The `RuntimeClass` feature gate must be enabled on API server _and_ kubelets.
|
||||
-->
|
||||
## 设置 {#setup}
|
||||
|
||||
|
@ -66,11 +66,11 @@ feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _an
|
|||
`RuntimeClass` 特性开关必须在 API 服务器和 kubelet 端同时开启。
|
||||
|
||||
<!--
|
||||
1. Configure the CRI implementation on nodes (runtime dependent)
|
||||
2. Create the corresponding RuntimeClass resources
|
||||
1. Configure the CRI implementation on nodes (runtime dependent).
|
||||
2. Create the corresponding RuntimeClass resources.
|
||||
-->
|
||||
1. 在节点上配置 CRI 的实现(取决于所选用的运行时)
|
||||
2. 创建相应的 RuntimeClass 资源
|
||||
1. 在节点上配置 CRI 的实现(取决于所选用的运行时)。
|
||||
2. 创建相应的 RuntimeClass 资源(参见下面的示意清单)。
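作为示意,一个最简单的 RuntimeClass 清单大致如下(`handler` 的取值取决于你在 CRI 中配置的处理程序名称,这里仅为示例):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass              # Pod 中通过 runtimeClassName 引用的名称(示例)
handler: myconfiguration     # 与 CRI 配置对应的处理程序名称(示例)
```

下文会进一步说明这两个步骤。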
|
||||
|
||||
<!--
|
||||
### 1. Configure the CRI implementation on nodes
|
||||
|
@ -161,7 +161,7 @@ spec:
|
|||
```
|
||||
|
||||
<!--
|
||||
This will instruct the Kubelet to use the named RuntimeClass to run this pod. If the named
|
||||
This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named
|
||||
RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the
|
||||
`Failed` terminal [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase). Look for a
|
||||
corresponding [event](/docs/tasks/debug-application-cluster/debug-application-introspection/) for an
|
||||
|
|
|
@ -100,8 +100,8 @@ metadata:
|
|||
## Applications And Instances Of Applications
|
||||
|
||||
An application can be installed one or more times into a Kubernetes cluster and,
|
||||
in some cases, the same namespace. For example, wordpress can be installed more
|
||||
than once where different websites are different installations of wordpress.
|
||||
in some cases, the same namespace. For example, WordPress can be installed more
|
||||
than once where different websites are different installations of WordPress.
|
||||
|
||||
The name of an application and the instance name are recorded separately. For
|
||||
example, WordPress has a `app.kubernetes.io/name` of `wordpress` while it has
|
||||
|
@ -111,7 +111,7 @@ to be identifiable. Every instance of an application must have a unique name.
|
|||
-->
|
||||
## 应用和应用实例
|
||||
|
||||
应用可以在 Kubernetes 集群中安装一次或多次。在某些情况下,可以安装在同一命名空间中。例如,可以不止一次地为不同的站点安装不同的 wordpress。
|
||||
应用可以在 Kubernetes 集群中安装一次或多次。在某些情况下,可以安装在同一命名空间中。例如,可以不止一次地为不同的站点安装不同的 WordPress。
|
||||
|
||||
应用的名称和实例的名称是分别记录的。例如,某 WordPress 实例的 `app.kubernetes.io/name` 为 `wordpress`,而其实例名称表现为 `app.kubernetes.io/instance` 的属性值 `wordpress-abcxzy`。这使得应用及其每个实例都可以被识别。应用的每个实例都必须具有唯一的名称。
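下面是一个示意性的标签片段(实例后缀为示例值),展示同一应用的两个实例如何通过 `app.kubernetes.io/instance` 加以区分:

```yaml
# 第一个 WordPress 实例
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
---
# 第二个 WordPress 实例,应用名称相同,实例名称不同
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-defuvw
```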
|
||||
|
||||
|
|
|
@ -5,6 +5,10 @@ weight: 50
|
|||
---
|
||||
|
||||
<!--
|
||||
reviewers:
|
||||
- davidopp
|
||||
- kevin-wangzefeng
|
||||
- bsalamat
|
||||
title: Assigning Pods to Nodes
|
||||
content_type: concept
|
||||
weight: 50
|
||||
|
@ -23,8 +27,14 @@ but there are some circumstances where you may want more control on a node where
|
|||
that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
|
||||
services that communicate a lot into the same availability zone.
|
||||
-->
|
||||
|
||||
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的 {{< glossary_tooltip text="Node(s)" term_id="node" >}} 上运行,或者优先运行在特定的节点上。有几种方法可以实现这点,推荐的方法都是用[标签选择器](/zh/docs/concepts/overview/working-with-objects/labels/)来进行选择。通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 pod 分散到节点上,而不是将 pod 放置在可用资源不足的节点上等等),但在某些情况下,你可能需要更多控制 pod 停靠的节点,例如,确保 pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务且有大量通信的 pod 放置在同一个可用区。
|
||||
你可以约束一个 {{< glossary_tooltip text="Pod" term_id="pod" >}} 只能在特定的
|
||||
{{< glossary_tooltip text="节点" term_id="node" >}} 上运行,或者优先运行在特定的节点上。
|
||||
有几种方法可以实现这点,推荐的方法都是用
|
||||
[标签选择算符](/zh/docs/concepts/overview/working-with-objects/labels/)来进行选择。
|
||||
通常这样的约束不是必须的,因为调度器将自动进行合理的放置(比如,将 Pod 分散到节点上,
|
||||
而不是将 Pod 放置在可用资源不足的节点上等等)。但在某些情况下,你可能需要进一步控制
|
||||
Pod 停靠的节点,例如,确保 Pod 最终落在连接了 SSD 的机器上,或者将来自两个不同的服务
|
||||
且有大量通信的 Pods 被放置在同一个可用区。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -38,36 +48,34 @@ additional labels as well). The most common usage is one key-value pair.
|
|||
-->
|
||||
|
||||
`nodeSelector` 是节点选择约束的最简单推荐形式。`nodeSelector` 是 PodSpec 的一个字段。
|
||||
它包含键值对的映射。为了使 pod 可以在某个节点上运行,该节点的标签中必须包含这里的每个键值对(它也可以具有其他标签)。最常见的用法的是一对键值对。
|
||||
它包含键值对的映射。为了使 pod 可以在某个节点上运行,该节点的标签中
|
||||
必须包含这里的每个键值对(它也可以具有其他标签)。
|
||||
最常见的用法是一对键值对。
|
||||
|
||||
<!--
|
||||
Let's walk through an example of how to use `nodeSelector`.
|
||||
-->
|
||||
|
||||
让我们来看一个使用 `nodeSelector` 的例子。
|
||||
|
||||
<!--
|
||||
### Step Zero: Prerequisites
|
||||
-->
|
||||
|
||||
### 步骤零:先决条件
|
||||
|
||||
<!--
|
||||
This example assumes that you have a basic understanding of Kubernetes pods and that you have [set up a Kubernetes cluster](/docs/setup/).
|
||||
-->
|
||||
### 步骤零:先决条件
|
||||
|
||||
本示例假设你已基本了解 Kubernetes 的 Pod 并且已经[建立一个 Kubernetes 集群](/zh/docs/setup/)。
|
||||
|
||||
<!--
|
||||
### Step One: Attach label to the node
|
||||
|
||||
Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to, and then run `kubectl label nodes <node-name> <label-key>=<label-value>` to add a label to the node you've chosen. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
|
||||
-->
|
||||
### 步骤一:添加标签到节点 {#attach-labels-to-node}
|
||||
|
||||
<!--
|
||||
Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to, and then run `kubectl label nodes <node-name> <label-key>=<label-value>` to add a label to the node you've chosen. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
|
||||
-->
|
||||
|
||||
执行 `kubectl get nodes` 命令获取集群的节点名称。
|
||||
选择一个你要增加标签的节点,然后执行 `kubectl label nodes <node-name> <label-key>=<label-value>`
|
||||
选择一个你要增加标签的节点,然后执行
|
||||
`kubectl label nodes <node-name> <label-key>=<label-value>`
|
||||
命令将标签添加到你所选择的节点上。
|
||||
例如,如果你的节点名称为 'kubernetes-foo-node-1.c.a-robinson.internal'
|
||||
并且想要的标签是 'disktype=ssd',则可以执行
|
||||
|
@ -76,17 +84,17 @@ Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the o
|
|||
<!--
|
||||
You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label. You can also use `kubectl describe node "nodename"` to see the full list of labels of the given node.
|
||||
-->
|
||||
你可以通过重新运行 `kubectl get nodes --show-labels`,查看节点当前具有了所指定的标签来验证它是否有效。
|
||||
你可以通过重新运行 `kubectl get nodes --show-labels`,
|
||||
查看节点当前具有了所指定的标签来验证它是否有效。
|
||||
你也可以使用 `kubectl describe node "nodename"` 命令查看指定节点的标签完整列表。
|
||||
|
||||
<!--
|
||||
### Step Two: Add a nodeSelector field to your pod configuration
|
||||
|
||||
Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
|
||||
-->
|
||||
### 步骤二:添加 nodeSelector 字段到 Pod 配置中
|
||||
|
||||
<!--
|
||||
Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
|
||||
-->
|
||||
选择任何一个你想运行的 Pod 的配置文件,并且在其中添加一个 nodeSelector 部分。
|
||||
例如,如果下面是我的 pod 配置:
|
||||
|
||||
|
@ -106,7 +114,6 @@ spec:
|
|||
<!--
|
||||
Then add a nodeSelector like so:
|
||||
-->
|
||||
|
||||
然后像下面这样添加 nodeSelector:
|
||||
|
||||
{{< codenew file="pods/pod-nginx.yaml" >}}
|
||||
|
@ -118,19 +125,19 @@ verify that it worked by running `kubectl get pods -o wide` and looking at the
|
|||
"NODE" that the Pod was assigned to.
|
||||
-->
|
||||
当你之后运行 `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml` 命令,
|
||||
Pod 将会调度到将标签添加到的节点上。你可以通过运行 `kubectl get pods -o wide` 并查看分配给 pod 的 “NODE” 来验证其是否有效。
|
||||
Pod 将会被调度到添加了该标签的节点上。
|
||||
你可以通过运行 `kubectl get pods -o wide` 并查看分配给 pod 的 “NODE” 来验证其是否有效。
|
||||
|
||||
<!--
|
||||
## Interlude: built-in node labels {#built-in-node-labels}
|
||||
-->
|
||||
|
||||
## 插曲:内置的节点标签 {#built-in-node-labels}
|
||||
|
||||
<!--
|
||||
In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
|
||||
with a standard set of labels. These labels are
|
||||
-->
|
||||
除了你[附加](#attach-labels-to-node)的标签外,节点还预先填充了一组标准标签。这些标签是
|
||||
## 插曲:内置的节点标签 {#built-in-node-labels}
|
||||
|
||||
除了你[添加](#attach-labels-to-node)的标签外,节点还预先填充了一组标准标签。
|
||||
这些标签有:
|
||||
|
||||
* [`kubernetes.io/hostname`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
|
||||
* [`failure-domain.beta.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone)
|
||||
|
@ -148,31 +155,34 @@ The value of these labels is cloud provider specific and is not guaranteed to be
|
|||
For example, the value of `kubernetes.io/hostname` may be the same as the Node name in some environments
|
||||
and a different value in other environments.
|
||||
-->
|
||||
这些标签的值是特定于云供应商的,因此不能保证可靠。例如,`kubernetes.io/hostname` 的值在某些环境中可能与节点名称相同,但在其他环境中可能是一个不同的值。
|
||||
这些标签的值是特定于云供应商的,因此不能保证可靠。
|
||||
例如,`kubernetes.io/hostname` 的值在某些环境中可能与节点名称相同,
|
||||
但在其他环境中可能是一个不同的值。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
## Node isolation/restriction
|
||||
-->
|
||||
|
||||
## 节点隔离/限制
|
||||
|
||||
<!--
|
||||
Adding labels to Node objects allows targeting pods to specific nodes or groups of nodes.
|
||||
This can be used to ensure specific pods only run on nodes with certain isolation, security, or regulatory properties.
|
||||
When using labels for this purpose, choosing label keys that cannot be modified by the kubelet process on the node is strongly recommended.
|
||||
This prevents a compromised node from using its kubelet credential to set those labels on its own Node object,
|
||||
and influencing the scheduler to schedule workloads to the compromised node.
|
||||
-->
|
||||
## 节点隔离/限制 {#node-isolation-restriction}
|
||||
|
||||
向 Node 对象添加标签可以将 pod 定位到特定的节点或节点组。这可以用来确保指定的 pod 只能运行在具有一定隔离性,安全性或监管属性的节点上。当为此目的使用标签时,强烈建议选择节点上的 kubelet 进程无法修改的标签键。这可以防止受感染的节点使用其 kubelet 凭据在自己的 Node 对象上设置这些标签,并影响调度器将工作负载调度到受感染的节点。
|
||||
向 Node 对象添加标签可以将 pod 定位到特定的节点或节点组。
|
||||
这可以用来确保指定的 Pod 只能运行在具有一定隔离性,安全性或监管属性的节点上。
|
||||
当为此目的使用标签时,强烈建议选择节点上的 kubelet 进程无法修改的标签键。
|
||||
这可以防止受感染的节点使用其 kubelet 凭据在自己的 Node 对象上设置这些标签,
|
||||
并影响调度器将工作负载调度到受感染的节点。
|
||||
|
||||
<!--
|
||||
The `NodeRestriction` admission plugin prevents kubelets from setting or modifying labels with a `node-restriction.kubernetes.io/` prefix.
|
||||
To make use of that label prefix for node isolation:
|
||||
-->
|
||||
|
||||
`NodeRestriction` 准入插件防止 kubelet 使用 `node-restriction.kubernetes.io/` 前缀设置或修改标签。要使用该标签前缀进行节点隔离:
|
||||
`NodeRestriction` 准入插件防止 kubelet 使用 `node-restriction.kubernetes.io/`
|
||||
前缀设置或修改标签。要使用该标签前缀进行节点隔离:
|
||||
|
||||
<!--
|
||||
1. Check that you're using Kubernetes v1.11+ so that NodeRestriction is available.
|
||||
|
@ -180,24 +190,24 @@ To make use of that label prefix for node isolation:
|
|||
3. Add labels under the `node-restriction.kubernetes.io/` prefix to your Node objects, and use those labels in your node selectors.
|
||||
For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`.
|
||||
-->
|
||||
|
||||
1. 检查是否在使用 Kubernetes v1.11+,以便 NodeRestriction 功能可用。
|
||||
2. 确保你在使用[节点授权](/zh/docs/reference/access-authn-authz/node/)并且已经_启用_
|
||||
[NodeRestriction 准入插件](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)。
|
||||
3. 将 `node-restriction.kubernetes.io/` 前缀下的标签添加到 Node 对象,然后在节点选择器中使用这些标签。例如,`example.com.node-restriction.kubernetes.io/fips=true` 或 `example.com.node-restriction.kubernetes.io/pci-dss=true`。
|
||||
3. 将 `node-restriction.kubernetes.io/` 前缀下的标签添加到 Node 对象,
|
||||
然后在节点选择器中使用这些标签。
|
||||
例如,`example.com.node-restriction.kubernetes.io/fips=true` 或
|
||||
`example.com.node-restriction.kubernetes.io/pci-dss=true`(参见下面的示例命令)。
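下面是一个示意性的命令(节点名称为示例值,标签键取自上面的例子):

```shell
# 为节点添加带受限前缀的标签,之后可在 Pod 的节点选择器中引用该标签
kubectl label nodes my-secure-node example.com.node-restriction.kubernetes.io/pci-dss=true
```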
|
||||
|
||||
<!--
|
||||
## Affinity and anti-affinity
|
||||
-->
|
||||
|
||||
## 亲和与反亲和
|
||||
|
||||
<!--
|
||||
`nodeSelector` provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity
|
||||
feature, greatly expands the types of constraints you can express. The key enhancements are
|
||||
-->
|
||||
## 亲和性与反亲和性
|
||||
|
||||
`nodeSelector` 提供了一种非常简单的方法来将 pod 约束到具有特定标签的节点上。亲和/反亲和功能极大地扩展了你可以表达约束的类型。关键的增强点是
|
||||
`nodeSelector` 提供了一种非常简单的方法来将 Pod 约束到具有特定标签的节点上。
|
||||
亲和性/反亲和性功能极大地扩展了你可以表达约束的类型。关键的增强点包括:
|
||||
|
||||
<!--
|
||||
1. the language is more expressive (not just "AND of exact match")
|
||||
|
@ -206,10 +216,11 @@ feature, greatly expands the types of constraints you can express. The key enhan
|
|||
3. you can constrain against labels on other pods running on the node (or other topological domain),
|
||||
rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located
|
||||
-->
|
||||
|
||||
1. 语言更具表现力(不仅仅是“完全匹配的 AND”)
|
||||
2. 你可以发现规则是“软”/“偏好”,而不是硬性要求,因此,如果调度器无法满足该要求,仍然调度该 pod
|
||||
3. 你可以使用节点上(或其他拓扑域中)的 pod 的标签来约束,而不是使用节点本身的标签,来允许哪些 pod 可以或者不可以被放置在一起。
|
||||
1. 语言更具表现力(不仅仅是“对完全匹配规则的 AND”)
|
||||
2. 你可以发现规则是“软需求”/“偏好”,而不是硬性要求,因此,
|
||||
如果调度器无法满足该要求,仍然调度该 Pod
|
||||
3. 你可以使用节点上(或其他拓扑域中)的 Pod 的标签来约束,而不是使用
|
||||
节点本身的标签,来允许哪些 pod 可以或者不可以被放置在一起。
|
||||
|
||||
<!--
|
||||
The affinity feature consists of two types of affinity, "node affinity" and "inter-pod affinity/anti-affinity".
|
||||
|
@ -217,21 +228,21 @@ Node affinity is like the existing `nodeSelector` (but with the first two benefi
|
|||
while inter-pod affinity/anti-affinity constrains against pod labels rather than node labels, as
|
||||
described in the third item listed above, in addition to having the first and second properties listed above.
|
||||
-->
|
||||
|
||||
亲和功能包含两种类型的亲和,即“节点亲和”和“pod 间亲和/反亲和”。节点亲和就像现有的 `nodeSelector`(但具有上面列出的前两个好处),然而 pod 间亲和/反亲和约束 pod 标签而不是节点标签(在上面列出的第三项中描述,除了具有上面列出的第一和第二属性)。
|
||||
亲和性功能包含两种类型的亲和性,即“节点亲和性”和“Pod 间亲和性/反亲和性”。
|
||||
节点亲和性就像现有的 `nodeSelector`(但具有上面列出的前两个好处),然而
|
||||
Pod 间亲和性/反亲和性约束 Pod 标签而不是节点标签(在上面列出的第三项中描述,
|
||||
除了具有上面列出的第一和第二属性)。
|
||||
|
||||
<!--
|
||||
### Node affinity
|
||||
-->
|
||||
|
||||
### 节点亲和
|
||||
|
||||
<!--
|
||||
Node affinity is conceptually similar to `nodeSelector` -- it allows you to constrain which nodes your
|
||||
pod is eligible to be scheduled on, based on labels on the node.
|
||||
-->
|
||||
### 节点亲和性 {#node-affinity}
|
||||
|
||||
节点亲和概念上类似于 `nodeSelector`,它使你可以根据节点上的标签来约束 pod 可以调度到哪些节点。
|
||||
节点亲和性概念上类似于 `nodeSelector`,它使你可以根据节点上的标签来约束
|
||||
Pod 可以调度到哪些节点。
|
||||
|
||||
<!--
|
||||
There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
|
||||
|
@ -244,29 +255,38 @@ met, the pod will still continue to run on the node. In the future we plan to of
|
|||
`requiredDuringSchedulingRequiredDuringExecution` which will be just like `requiredDuringSchedulingIgnoredDuringExecution`
|
||||
except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
|
||||
-->
|
||||
|
||||
目前有两种类型的节点亲和,分别为 `requiredDuringSchedulingIgnoredDuringExecution` 和
|
||||
`preferredDuringSchedulingIgnoredDuringExecution`。你可以视它们为“硬”和“软”,意思是,前者指定了将 pod 调度到一个节点上*必须*满足的规则(就像 `nodeSelector` 但使用更具表现力的语法),后者指定调度器将尝试执行但不能保证的*偏好*。名称的“IgnoredDuringExecution”部分意味着,类似于 `nodeSelector` 的工作原理,如果节点的标签在运行时发生变更,从而不再满足 pod 上的亲和规则,那么 pod 将仍然继续在该节点上运行。将来我们计划提供 `requiredDuringSchedulingRequiredDuringExecution`,它将类似于 `requiredDuringSchedulingIgnoredDuringExecution`,除了它会将 pod 从不再满足 pod 的节点亲和要求的节点上驱逐。
|
||||
目前有两种类型的节点亲和性,分别为 `requiredDuringSchedulingIgnoredDuringExecution` 和
|
||||
`preferredDuringSchedulingIgnoredDuringExecution`。
|
||||
你可以视它们为“硬需求”和“软需求”,意思是,前者指定了将 Pod 调度到一个节点上
|
||||
*必须*满足的规则(就像 `nodeSelector` 但使用更具表现力的语法),
|
||||
后者指定调度器将尝试执行但不能保证的*偏好*。
|
||||
名称的“IgnoredDuringExecution”部分意味着,类似于 `nodeSelector` 的工作原理,
|
||||
如果节点的标签在运行时发生变更,从而不再满足 Pod 上的亲和性规则,那么 Pod
|
||||
将仍然继续在该节点上运行。
|
||||
将来我们计划提供 `requiredDuringSchedulingRequiredDuringExecution`,
|
||||
它将类似于 `requiredDuringSchedulingIgnoredDuringExecution`,
|
||||
除了它会将 pod 从不再满足 pod 的节点亲和性要求的节点上驱逐。
|
||||
|
||||
<!--
|
||||
Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
|
||||
and an example `preferredDuringSchedulingIgnoredDuringExecution` would be "try to run this set of pods in failure
|
||||
zone XYZ, but if it's not possible, then allow some to run elsewhere".
|
||||
-->
|
||||
|
||||
因此,`requiredDuringSchedulingIgnoredDuringExecution` 的示例将是“仅将 pod 运行在具有 Intel CPU 的节点上”,而 `preferredDuringSchedulingIgnoredDuringExecution` 的示例为“尝试将这组 pod 运行在 XYZ 故障区域,如果这不可能的话,则允许一些 pod 在其他地方运行”。
|
||||
因此,`requiredDuringSchedulingIgnoredDuringExecution` 的示例将是
|
||||
“仅将 Pod 运行在具有 Intel CPU 的节点上”,而
|
||||
`preferredDuringSchedulingIgnoredDuringExecution` 的示例为
|
||||
“尝试将这组 Pod 运行在 XYZ 故障区域,如果这不可能的话,则允许一些
|
||||
Pod 在其他地方运行”。
|
||||
|
||||
<!--
|
||||
Node affinity is specified as field `nodeAffinity` of field `affinity` in the PodSpec.
|
||||
-->
|
||||
|
||||
节点亲和通过 PodSpec 的 `affinity` 字段下的 `nodeAffinity` 字段进行指定。
|
||||
节点亲和性通过 PodSpec 的 `affinity` 字段下的 `nodeAffinity` 字段进行指定。
|
||||
|
||||
<!--
|
||||
Here's an example of a pod that uses node affinity:
|
||||
-->
|
||||
|
||||
下面是一个使用节点亲和的 pod 的实例:
|
||||
下面是一个使用节点亲和性的 Pod 的实例:
|
||||
|
||||
{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
|
||||
|
||||
|
@ -276,58 +296,126 @@ This node affinity rule says the pod can only be placed on a node with a label w
|
|||
among nodes that meet that criteria, nodes with a label whose key is `another-node-label-key` and whose
|
||||
value is `another-node-label-value` should be preferred.
|
||||
-->
|
||||
|
||||
此节点亲和规则表示,pod 只能放置在具有标签键为 `kubernetes.io/e2e-az-name` 且 标签值为 `e2e-az1` 或 `e2e-az2` 的节点上。另外,在满足这些标准的节点中,具有标签键为 `another-node-label-key` 且标签值为 `another-node-label-value` 的节点应该优先使用。
|
||||
此节点亲和性规则表示,Pod 只能放置在具有标签键 `kubernetes.io/e2e-az-name`
|
||||
且标签值为 `e2e-az1` 或 `e2e-az2` 的节点上。
|
||||
另外,在满足这些标准的节点中,具有标签键为 `another-node-label-key`
|
||||
且标签值为 `another-node-label-value` 的节点应该优先使用。
|
||||
|
||||
<!--
|
||||
You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
|
||||
You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use
|
||||
[node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel pods from specific nodes.
|
||||
-->
|
||||
|
||||
你可以在上面的例子中看到 `In` 操作符的使用。新的节点亲和语法支持下面的操作符:
|
||||
你可以在上面的例子中看到 `In` 操作符的使用。新的节点亲和性语法支持下面的操作符:
|
||||
`In`,`NotIn`,`Exists`,`DoesNotExist`,`Gt`,`Lt`。
|
||||
你可以使用 `NotIn` 和 `DoesNotExist` 来实现节点反亲和行为,或者使用
|
||||
[节点污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)将 pod 从特定节点中驱逐。
|
||||
你可以使用 `NotIn` 和 `DoesNotExist` 来实现节点反亲和性行为,或者使用
|
||||
[节点污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)
|
||||
将 Pod 从特定节点中驱逐。
|
||||
|
||||
<!--
|
||||
If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
|
||||
to be scheduled onto a candidate node.
|
||||
-->
|
||||
|
||||
如果你同时指定了 `nodeSelector` 和 `nodeAffinity`,*两者*必须都要满足,才能将 pod 调度到候选节点上。
|
||||
|
||||
<!--
|
||||
If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity` types, then the pod can be scheduled onto a node **if one of** the `nodeSelectorTerms` is satisfied.
|
||||
-->
|
||||
如果你同时指定了 `nodeSelector` 和 `nodeAffinity`,*两者*必须都要满足,
|
||||
才能将 Pod 调度到候选节点上。
|
||||
|
||||
如果你指定了多个与 `nodeAffinity` 类型关联的 `nodeSelectorTerms`,则**如果其中一个** `nodeSelectorTerms` 满足的话,pod将可以调度到节点上。
|
||||
如果你指定了多个与 `nodeAffinity` 类型关联的 `nodeSelectorTerms`,则
|
||||
**如果其中一个** `nodeSelectorTerms` 满足的话,Pod 将可以调度到节点上。
|
||||
|
||||
<!--
|
||||
If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node **only if all** `matchExpressions` can be satisfied.
|
||||
-->
|
||||
|
||||
如果你指定了多个与 `nodeSelectorTerms` 关联的 `matchExpressions`,则**只有当所有** `matchExpressions` 满足的话,pod 才会可以调度到节点上。
|
||||
|
||||
<!--
|
||||
If you remove or change the label of the node where the pod is scheduled, the pod won't be removed. In other words, the affinity selection works only at the time of scheduling the pod.
|
||||
-->
|
||||
|
||||
如果你修改或删除了 pod 所调度到的节点的标签,pod 不会被删除。换句话说,亲和选择只在 pod 调度期间有效。
|
||||
如果你指定了多个与 `nodeSelectorTerms` 关联的 `matchExpressions`,则
|
||||
**只有当所有** `matchExpressions` 满足的话,Pod 才可以调度到节点上。
|
||||
|
||||
如果你修改或删除了 pod 所调度到的节点的标签,Pod 不会被删除。
|
||||
换句话说,亲和性选择只在 Pod 调度期间有效。
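下面是一个示意性的片段(标签键与取值均为示例),两个 `nodeSelectorTerms` 之间是“或”的关系,而每组内的 `matchExpressions` 必须全部满足:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:              # 第一组:两个表达式都必须满足
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - zone-a
        - key: disktype
          operator: In
          values:
          - ssd
      - matchExpressions:              # 第二组:与第一组满足其一即可
        - key: node-role.example.com/dedicated
          operator: Exists
```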
|
||||
|
||||
<!--
|
||||
The `weight` field in `preferredDuringSchedulingIgnoredDuringExecution` is in the range 1-100. For each node that meets all of the scheduling requirements (resource request, RequiredDuringScheduling affinity expressions, etc.), the scheduler will compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding MatchExpressions. This score is then combined with the scores of other priority functions for the node. The node(s) with the highest total score are the most preferred.
|
||||
-->
|
||||
`preferredDuringSchedulingIgnoredDuringExecution` 中的 `weight` 字段值的
|
||||
范围是 1-100。
|
||||
对于每个符合所有调度要求(资源请求、RequiredDuringScheduling 亲和性表达式等)
|
||||
的节点,调度器将遍历该字段的元素来计算总和,并且如果节点匹配对应的
|
||||
MatchExpressions,则添加“权重”到总和。
|
||||
然后将这个评分与该节点的其他优先级函数的评分进行组合。
|
||||
总分最高的节点是最优选的。
|
||||
|
||||
`preferredDuringSchedulingIgnoredDuringExecution` 中的 `weight` 字段值的范围是 1-100。对于每个符合所有调度要求(资源请求,RequiredDuringScheduling 亲和表达式等)的节点,调度器将遍历该字段的元素来计算总和,并且如果节点匹配对应的MatchExpressions,则添加“权重”到总和。然后将这个评分与该节点的其他优先级函数的评分进行组合。总分最高的节点是最优选的。
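下面是一个示意性的片段(标签键与权重均为示例值)。若某节点同时匹配两个偏好条件,则在该打分环节获得 1 + 50 的得分,之后再与其他优先级函数的得分合并:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: example.com/label-1      # 示例标签键
          operator: In
          values:
          - value-1
    - weight: 50
      preference:
        matchExpressions:
        - key: example.com/label-2      # 示例标签键
          operator: In
          values:
          - value-2
```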
|
||||
<!--
|
||||
#### Node affinity per scheduling profile
|
||||
-->
|
||||
#### 逐个调度方案中设置节点亲和性 {#node-affinity-per-scheduling-profile}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.20" state="beta" >}}
|
||||
|
||||
<!--
|
||||
When configuring multiple [scheduling profiles](/docs/reference/scheduling/config/#multiple-profiles), you can associate
|
||||
a profile with a Node affinity, which is useful if a profile only applies to a specific set of Nodes.
|
||||
To do so, add an `addedAffinity` to the args of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
|
||||
in the [scheduler configuration](/docs/reference/scheduling/config/). For example:
|
||||
-->
|
||||
在配置多个[调度方案](/zh/docs/reference/scheduling/config/#multiple-profiles)时,
|
||||
你可以将某个方案与节点亲和性关联起来,如果某个调度方案仅适用于某组
|
||||
特殊的节点时,这样做是很有用的。
|
||||
要实现这点,可以在[调度器配置](/zh/docs/reference/scheduling/config/)中为
|
||||
[`NodeAffinity` 插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)
|
||||
添加 `addedAffinity` 参数。
|
||||
例如:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
||||
kind: KubeSchedulerConfiguration
|
||||
|
||||
profiles:
|
||||
- schedulerName: default-scheduler
|
||||
- schedulerName: foo-scheduler
|
||||
pluginConfig:
|
||||
- name: NodeAffinity
|
||||
args:
|
||||
addedAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
nodeSelectorTerms:
|
||||
- matchExpressions:
|
||||
- key: scheduler-profile
|
||||
operator: In
|
||||
values:
|
||||
- foo
|
||||
```
|
||||
|
||||
<!--
|
||||
The `addedAffinity` is applied to all Pods that set `.spec.schedulerName` to `foo-scheduler`, in addition to the
|
||||
NodeAffinity specified in the PodSpec.
|
||||
That is, in order to match the Pod, Nodes need to satisfy `addedAffinity` and the Pod's `.spec.NodeAffinity`.
|
||||
|
||||
Since the `addedAffinity` is not visible to end users, its behavior might be unexpected to them. We
|
||||
recommend to use node labels that have clear correlation with the profile's scheduler name.
|
||||
-->
|
||||
这里的 `addedAffinity` 除遵从 Pod 规约中设置的节点亲和性之外,还
|
||||
应用到所有将 `.spec.schedulerName` 设置为 `foo-scheduler` 的 Pod 上。也就是说,节点需要同时满足 `addedAffinity` 和 Pod 的 `.spec.NodeAffinity`,才能与该 Pod 匹配。由于 `addedAffinity` 对最终用户不可见,其行为可能与预期不同。我们建议使用与调度方案名称有明确关联的节点标签。
|
||||
|
||||
<!--
|
||||
The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler)
|
||||
is not aware of scheduling profiles. For this reason, it is recommended that you keep a scheduler profile, such as the
|
||||
`default-scheduler`, without any `addedAffinity`. Then, the Daemonset's Pod template should use this scheduler name.
|
||||
Otherwise, some Pods created by the Daemonset controller might remain unschedulable.
|
||||
-->
|
||||
{{< note >}}
|
||||
DaemonSet 控制器[为 DaemonSet 创建 Pods](/zh/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler),
|
||||
但该控制器不理会调度方案。因此,建议你保留一个调度方案,例如
|
||||
`default-scheduler`,不要在其中设置 `addedAffinity`。
|
||||
这样,DaemonSet 的 Pod 模板将会使用此调度器名称。
|
||||
否则,DaemonSet 控制器所创建的某些 Pods 可能持续处于不可调度状态。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### Inter-pod affinity and anti-affinity
|
||||
-->
|
||||
|
||||
### pod 间亲和与反亲和
|
||||
|
||||
<!--
|
||||
Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled *based on
|
||||
labels on pods that are already running on the node* rather than based on labels on nodes. The rules are of the form
|
||||
"this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y".
|
||||
|
@ -338,8 +426,19 @@ like node, rack, cloud provider zone, cloud provider region, etc. You express it
|
|||
key for the node label that the system uses to denote such a topology domain, e.g. see the label keys listed above
|
||||
in the section [Interlude: built-in node labels](#built-in-node-labels).
|
||||
-->
|
||||
### pod 间亲和性与反亲和性 {#inter-pod-affinity-and-anti-affinity}
|
||||
|
||||
pod 间亲和与反亲和使你可以*基于已经在节点上运行的 pod 的标签*来约束 pod 可以调度到的节点,而不是基于节点上的标签。规则的格式为“如果 X 节点上已经运行了一个或多个 满足规则 Y 的pod,则这个 pod 应该(或者在非亲和的情况下不应该)运行在 X 节点”。Y 表示一个具有可选的关联命令空间列表的 LabelSelector;与节点不同,因为 pod 是命名空间限定的(因此 pod 上的标签也是命名空间限定的),因此作用于 pod 标签的标签选择器必须指定选择器应用在哪个命名空间。从概念上讲,X 是一个拓扑域,如节点,机架,云供应商地区,云供应商区域等。你可以使用 `topologyKey` 来表示它,`topologyKey` 是节点标签的键以便系统用来表示这样的拓扑域。请参阅上面[插曲:内置的节点标签](#built-in-node-labels)部分中列出的标签键。
|
||||
Pod 间亲和性与反亲和性使你可以 *基于已经在节点上运行的 Pod 的标签* 来约束
|
||||
Pod 可以调度到的节点,而不是基于节点上的标签。
|
||||
规则的格式为“如果 X 节点上已经运行了一个或多个 满足规则 Y 的 Pod,
|
||||
则这个 Pod 应该(或者在反亲和性的情况下不应该)运行在 X 节点”。
|
||||
Y 表示一个具有可选的关联命令空间列表的 LabelSelector;
|
||||
与节点不同,因为 Pod 是命名空间限定的(因此 Pod 上的标签也是命名空间限定的),
|
||||
因此作用于 Pod 标签的标签选择算符必须指定选择算符应用在哪个命名空间。
|
||||
从概念上讲,X 是一个拓扑域,如节点、机架、云供应商可用区、云供应商地理区域等。
|
||||
你可以使用 `topologyKey` 来表示它,`topologyKey` 是节点标签的键以便系统
|
||||
用来表示这样的拓扑域。
|
||||
请参阅上面[插曲:内置的节点标签](#built-in-node-labels)部分中列出的标签键。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
@ -347,14 +446,16 @@ Inter-pod affinity and anti-affinity require substantial amount of
|
|||
processing which can slow down scheduling in large clusters significantly. We do
|
||||
not recommend using them in clusters larger than several hundred nodes.
|
||||
-->
|
||||
Pod 间亲和与反亲和需要大量的处理,这可能会显著减慢大规模集群中的调度。我们不建议在超过数百个节点的集群中使用它们。
|
||||
Pod 间亲和性与反亲和性需要大量的处理,这可能会显著减慢大规模集群中的调度。
|
||||
我们不建议在超过数百个节点的集群中使用它们。
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Pod anti-affinity requires nodes to be consistently labelled, i.e. every node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes are missing the specified `topologyKey` label, it can lead to unintended behavior.
|
||||
-->
|
||||
Pod 反亲和需要对节点进行一致的标记,即集群中的每个节点必须具有适当的标签能够匹配 `topologyKey`。如果某些或所有节点缺少指定的 `topologyKey` 标签,可能会导致意外行为。
|
||||
Pod 反亲和性需要对节点进行一致的标记,即集群中的每个节点必须具有适当的标签能够匹配
|
||||
`topologyKey`。如果某些或所有节点缺少指定的 `topologyKey` 标签,可能会导致意外行为。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -366,22 +467,27 @@ in the same zone, since they communicate a lot with each other"
|
|||
and an example `preferredDuringSchedulingIgnoredDuringExecution` anti-affinity would be "spread the pods from this service across zones"
|
||||
(a hard requirement wouldn't make sense, since you probably have more pods than zones).
|
||||
-->
|
||||
|
||||
与节点亲和一样,当前有两种类型的 pod 亲和与反亲和,即 `requiredDuringSchedulingIgnoredDuringExecution` 和
|
||||
`preferredDuringSchedulingIgnoredDuringExecution`,分别表示“硬性”与“软性”要求。请参阅前面节点亲和部分中的描述。`requiredDuringSchedulingIgnoredDuringExecution` 亲和的一个示例是“将服务 A 和服务 B 的 pod 放置在同一区域,因为它们之间进行大量交流”,而 `preferredDuringSchedulingIgnoredDuringExecution` 反亲和的示例将是“将此服务的 pod 跨区域分布”(硬性要求是说不通的,因为你可能拥有的 pod 数多于区域数)。
|
||||
与节点亲和性一样,当前有两种类型的 Pod 亲和性与反亲和性,即
|
||||
`requiredDuringSchedulingIgnoredDuringExecution` 和
|
||||
`preferredDuringSchedulingIgnoredDuringExecution`,分别表示“硬性”与“软性”要求。
|
||||
请参阅前面节点亲和性部分中的描述。
|
||||
`requiredDuringSchedulingIgnoredDuringExecution` 亲和性的一个示例是
|
||||
“将服务 A 和服务 B 的 Pod 放置在同一区域,因为它们之间进行大量交流”,而
|
||||
`preferredDuringSchedulingIgnoredDuringExecution` 反亲和性的示例将是
|
||||
“将此服务的 pod 跨区域分布”(硬性要求是说不通的,因为你可能拥有的
|
||||
Pod 数多于区域数)。
|
||||
|
||||
<!--
|
||||
Inter-pod affinity is specified as field `podAffinity` of field `affinity` in the PodSpec.
|
||||
And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity` in the PodSpec.
|
||||
-->
|
||||
|
||||
Pod 间亲和通过 PodSpec 中 `affinity` 字段下的 `podAffinity` 字段进行指定。而 pod 间反亲和通过 PodSpec 中 `affinity` 字段下的 `podAntiAffinity` 字段进行指定。
|
||||
Pod 间亲和性通过 PodSpec 中 `affinity` 字段下的 `podAffinity` 字段进行指定。
|
||||
而 Pod 间反亲和性通过 PodSpec 中 `affinity` 字段下的 `podAntiAffinity` 字段进行指定。
|
||||
|
||||
<!--
|
||||
#### An example of a pod that uses pod affinity:
|
||||
-->
|
||||
|
||||
### Pod 使用 pod 亲和 的示例:
|
||||
### Pod 使用 pod 亲和性 的示例:
|
||||
|
||||
{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
|
||||
|
||||
|
@ -402,22 +508,34 @@ label having key "security" and value "S2".) See the
|
|||
for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
|
||||
flavor and the `preferredDuringSchedulingIgnoredDuringExecution` flavor.
|
||||
-->
|
||||
|
||||
在这个 pod 的 affinity 配置定义了一条 pod 亲和规则和一条 pod 反亲和规则。在此示例中,`podAffinity` 配置为 `requiredDuringSchedulingIgnoredDuringExecution`,然而 `podAntiAffinity` 配置为 `preferredDuringSchedulingIgnoredDuringExecution`。pod 亲和规则表示,仅当节点和至少一个已运行且有键为“security”且值为“S1”的标签的 pod 处于同一区域时,才可以将该 pod 调度到节点上。(更确切的说,如果节点 N 具有带有键 `topology.kubernetes.io/zone` 和某个值 V 的标签,则 pod 有资格在节点 N 上运行,以便集群中至少有一个节点具有键 `topology.kubernetes.io/zone` 和值为 V 的节点正在运行具有键“security”和值“S1”的标签的 pod。)pod 反亲和规则表示,如果节点已经运行了一个具有键“security”和值“S2”的标签的 pod,则该 pod 不希望将其调度到该节点上。(如果 `topologyKey` 为 `topology.kubernetes.io/zone`,则意味着当节点和具有键“security”和值“S2”的标签的 pod 处于相同的区域,pod 不能被调度到该节点上。)查阅[设计文档](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)来获取更多 pod 亲和与反亲和的样例,包括 `requiredDuringSchedulingIgnoredDuringExecution`
|
||||
在这个 Pod 的亲和性配置定义了一条 Pod 亲和性规则和一条 Pod 反亲和性规则。
|
||||
在此示例中,`podAffinity` 配置为 `requiredDuringSchedulingIgnoredDuringExecution`,
|
||||
然而 `podAntiAffinity` 配置为 `preferredDuringSchedulingIgnoredDuringExecution`。
|
||||
Pod 亲和性规则表示,仅当节点和至少一个已运行且有键为“security”且值为“S1”的标签
|
||||
的 Pod 处于同一区域时,才可以将该 Pod 调度到节点上。
|
||||
(更确切的说,如果节点 N 具有带有键 `topology.kubernetes.io/zone` 和某个值 V 的标签,
|
||||
则 Pod 有资格在节点 N 上运行,以便集群中至少有一个节点具有键
|
||||
`topology.kubernetes.io/zone` 和值为 V 的节点正在运行具有键“security”和值
|
||||
“S1”的标签的 pod。)
|
||||
Pod 反亲和性规则表示,如果节点已经运行了一个具有键“security”和值“S2”的标签的 Pod,
|
||||
则该 pod 不希望将其调度到该节点上。
|
||||
(如果 `topologyKey` 为 `topology.kubernetes.io/zone`,则意味着当节点和具有键
|
||||
“security”和值“S2”的标签的 Pod 处于相同的区域,Pod 不能被调度到该节点上。)
|
||||
查阅[设计文档](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
|
||||
以获取 Pod 亲和性与反亲和性的更多样例,包括
|
||||
`requiredDuringSchedulingIgnoredDuringExecution`
|
||||
和 `preferredDuringSchedulingIgnoredDuringExecution` 两种配置。
|
||||
|
||||
<!--
|
||||
The legal operators for pod affinity and anti-affinity are `In`, `NotIn`, `Exists`, `DoesNotExist`.
|
||||
-->
|
||||
|
||||
Pod 亲和与反亲和的合法操作符有 `In`,`NotIn`,`Exists`,`DoesNotExist`。
|
||||
|
||||
<!--
|
||||
In principle, the `topologyKey` can be any legal label-key. However,
|
||||
for performance and security reasons, there are some constraints on topologyKey:
|
||||
-->
|
||||
Pod 亲和性与反亲和性的合法操作符有 `In`,`NotIn`,`Exists`,`DoesNotExist`。
|
||||
|
||||
原则上,`topologyKey` 可以是任何合法的标签键。然而,出于性能和安全原因,topologyKey 受到一些限制:
|
||||
原则上,`topologyKey` 可以是任何合法的标签键。
|
||||
然而,出于性能和安全原因,topologyKey 受到一些限制:
|
||||
|
||||
<!--
|
||||
1. For affinity and for `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity,
|
||||
|
@ -426,10 +544,16 @@ empty `topologyKey` is not allowed.
|
|||
3. For `preferredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, empty `topologyKey` is interpreted as "all topologies" ("all topologies" here is now limited to the combination of `kubernetes.io/hostname`, `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`).
|
||||
4. Except for the above cases, the `topologyKey` can be any legal label-key.
|
||||
-->
|
||||
|
||||
1. 对于亲和与 `requiredDuringSchedulingIgnoredDuringExecution` 要求的 pod 反亲和,`topologyKey` 不允许为空。
|
||||
2. 对于 `requiredDuringSchedulingIgnoredDuringExecution` 要求的 pod 反亲和,准入控制器 `LimitPodHardAntiAffinityTopology` 被引入来限制 `topologyKey` 不为 `kubernetes.io/hostname`。如果你想使它可用于自定义拓扑结构,你必须修改准入控制器或者禁用它。
|
||||
3. 对于 `preferredDuringSchedulingIgnoredDuringExecution` 要求的 pod 反亲和,空的 `topologyKey` 被解释为“所有拓扑结构”(这里的“所有拓扑结构”限制为 `kubernetes.io/hostname`,`topology.kubernetes.io/zone` 和 `topology.kubernetes.io/region` 的组合)。
|
||||
1. 对于亲和性与 `requiredDuringSchedulingIgnoredDuringExecution` 要求的
|
||||
Pod 反亲和性,`topologyKey` 不允许为空。
|
||||
2. 对于 `requiredDuringSchedulingIgnoredDuringExecution` 要求的 Pod 反亲和性,
|
||||
准入控制器 `LimitPodHardAntiAffinityTopology` 被引入来限制 `topologyKey`
|
||||
不为 `kubernetes.io/hostname`。
|
||||
如果你想使它可用于自定义拓扑结构,你必须修改准入控制器或者禁用它。
|
||||
3. 对于 `preferredDuringSchedulingIgnoredDuringExecution` 要求的 Pod 反亲和性,
|
||||
空的 `topologyKey` 被解释为“所有拓扑结构”(这里的“所有拓扑结构”限制为
|
||||
`kubernetes.io/hostname`,`topology.kubernetes.io/zone` 和
|
||||
`topology.kubernetes.io/region` 的组合)。
|
||||
4. 除上述情况外,`topologyKey` 可以是任何合法的标签键。
|
||||
|
||||
<!--
|
||||
|
@ -437,47 +561,46 @@ In addition to `labelSelector` and `topologyKey`, you can optionally specify a l
|
|||
of namespaces which the `labelSelector` should match against (this goes at the same level of the definition as `labelSelector` and `topologyKey`).
|
||||
If omitted or empty, it defaults to the namespace of the pod where the affinity/anti-affinity definition appears.
|
||||
-->
|
||||
|
||||
除了 `labelSelector` 和 `topologyKey`,你也可以指定表示命名空间的 `namespaces` 队列,`labelSelector` 也应该匹配它(这个与 `labelSelector` 和 `topologyKey` 的定义位于相同的级别)。如果忽略或者为空,则默认为 pod 亲和/反亲和的定义所在的命名空间。
|
||||
除了 `labelSelector` 和 `topologyKey`,你也可以指定表示命名空间的
|
||||
`namespaces` 列表,`labelSelector` 也应该匹配它
|
||||
(这个与 `labelSelector` 和 `topologyKey` 的定义位于相同的级别)。
|
||||
如果忽略或者为空,则默认为 Pod 亲和性/反亲和性的定义所在的命名空间。
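下面是一个示意性的 `podAffinity` 片段(标签与命名空间均为示例值),展示 `namespaces` 与 `labelSelector`、`topologyKey` 位于同一层级:

```yaml
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - web-store
    namespaces:                      # 可选;省略或为空时默认为本 Pod 所在的命名空间
    - my-namespace
    topologyKey: topology.kubernetes.io/zone
```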
|
||||
|
||||
<!--
|
||||
All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity and anti-affinity
|
||||
must be satisfied for the pod to be scheduled onto a node.
|
||||
-->
|
||||
|
||||
所有与 `requiredDuringSchedulingIgnoredDuringExecution` 亲和与反亲和关联的 `matchExpressions` 必须满足,才能将 pod 调度到节点上。
|
||||
所有与 `requiredDuringSchedulingIgnoredDuringExecution` 亲和性与反亲和性
|
||||
关联的 `matchExpressions` 必须满足,才能将 pod 调度到节点上。
|
||||
|
||||
<!--
|
||||
#### More Practical Use-cases
|
||||
-->
|
||||
|
||||
#### 更实际的用例
|
||||
|
||||
<!--
|
||||
Interpod Affinity and AntiAffinity can be even more useful when they are used with higher
|
||||
level collections such as ReplicaSets, StatefulSets, Deployments, etc. One can easily configure that a set of workloads should
|
||||
be co-located in the same defined topology, eg., the same node.
|
||||
-->
|
||||
#### 更实际的用例
|
||||
|
||||
Pod 间亲和与反亲和在与更高级别的集合(例如 ReplicaSets,StatefulSets,Deployments 等)一起使用时,它们可能更加有用。可以轻松配置一组应位于相同定义拓扑(例如,节点)中的工作负载。
|
||||
Pod 间亲和性与反亲和性在与更高级别的集合(例如 ReplicaSets、StatefulSets、
|
||||
Deployments 等)一起使用时,它们可能更加有用。
|
||||
可以轻松配置一组应位于相同定义拓扑(例如,节点)中的工作负载。
|
||||
|
||||
<!--
|
||||
##### Always co-located in the same node
|
||||
-->
|
||||
|
||||
##### 始终放置在相同节点上
|
||||
|
||||
<!--
|
||||
In a three node cluster, a web application has in-memory cache such as redis. We want the web-servers to be co-located with the cache as much as possible.
|
||||
-->
|
||||
##### 始终放置在相同节点上
|
||||
|
||||
在三节点集群中,一个 web 应用程序具有内存缓存,例如 redis。我们希望 web 服务器尽可能与缓存放置在同一位置。
|
||||
在三节点集群中,一个 web 应用程序具有内存缓存,例如 redis。
|
||||
我们希望 web 服务器尽可能与缓存放置在同一位置。
|
||||
|
||||
<!--
|
||||
Here is the yaml snippet of a simple redis deployment with three replicas and selector label `app=store`. The deployment has `PodAntiAffinity` configured to ensure the scheduler does not co-locate replicas on a single node.
|
||||
-->
|
||||
|
||||
下面是一个简单 redis deployment 的 yaml 代码段,它有三个副本和选择器标签 `app=store`。Deployment 配置了 `PodAntiAffinity`,用来确保调度器不会将副本调度到单个节点上。
|
||||
下面是一个简单 redis Deployment 的 YAML 代码段,它有三个副本和选择器标签 `app=store`。
|
||||
Deployment 配置了 `PodAntiAffinity`,用来确保调度器不会将副本调度到单个节点上。
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
|
@ -512,8 +635,9 @@ spec:
|
|||
<!--
|
||||
The below yaml snippet of the webserver deployment has `podAntiAffinity` and `podAffinity` configured. This informs the scheduler that all its replicas are to be co-located with pods that have selector label `app=store`. This will also ensure that each web-server replica does not co-locate on a single node.
|
||||
-->
|
||||
|
||||
下面 webserver deployment 的 yaml 代码段中配置了 `podAntiAffinity` 和 `podAffinity`。这将通知调度器将它的所有副本与具有 `app=store` 选择器标签的 pod 放置在一起。这还确保每个 web 服务器副本不会调度到单个节点上。
|
||||
下面 webserver Deployment 的 YAML 代码段中配置了 `podAntiAffinity` 和 `podAffinity`。
|
||||
这将通知调度器将它的所有副本与具有 `app=store` 选择器标签的 Pod 放置在一起。
|
||||
这还确保每个 web 服务器副本不会调度到单个节点上。
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
|
@ -557,8 +681,7 @@ spec:
|
|||
<!--
|
||||
If we create the above two deployments, our three node cluster should look like below.
|
||||
-->
|
||||
|
||||
如果我们创建了上面的两个 deployment,我们的三节点集群将如下表所示。
|
||||
如果我们创建了上面的两个 Deployment,我们的三节点集群将如下表所示。
|
||||
|
||||
| node-1 | node-2 | node-3 |
|
||||
|:--------------------:|:-------------------:|:------------------:|
|
||||
|
@ -568,7 +691,6 @@ If we create the above two deployments, our three node cluster should look like
|
|||
<!--
|
||||
As you can see, all the 3 replicas of the `web-server` are automatically co-located with the cache as expected.
|
||||
-->
|
||||
|
||||
如你所见,`web-server` 的三个副本都按照预期那样自动放置在同一位置。
|
||||
|
||||
```
|
||||
|
@ -592,20 +714,18 @@ web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3
|
|||
|
||||
<!--
|
||||
##### Never co-located in the same node
|
||||
-->
|
||||
|
||||
##### 永远不放置在相同节点
|
||||
|
||||
<!--
|
||||
The above example uses `PodAntiAffinity` rule with `topologyKey: "kubernetes.io/hostname"` to deploy the redis cluster so that
|
||||
no two instances are located on the same host.
|
||||
See [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
|
||||
for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique.
|
||||
-->
|
||||
##### 永远不放置在相同节点
|
||||
|
||||
上面的例子使用 `PodAntiAffinity` 规则和 `topologyKey: "kubernetes.io/hostname"`
|
||||
来部署 redis 集群以便在同一主机上没有两个实例。
|
||||
参阅 [ZooKeeper 教程](/zh/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure),
|
||||
以获取配置反亲和来达到高可用性的 StatefulSet 的样例(使用了相同的技巧)。
|
||||
以获取配置反亲和性来达到高可用性的 StatefulSet 的样例(使用了相同的技巧)。
|
||||
|
||||
## nodeName
|
||||
|
||||
|
@ -617,30 +737,33 @@ kubelet running on the named node tries to run the pod. Thus, if
|
|||
`nodeName` is provided in the PodSpec, it takes precedence over the
|
||||
above methods for node selection.
|
||||
-->
|
||||
`nodeName` 是节点选择约束的最简单方法,但是由于其自身限制,通常不使用它。`nodeName` 是 PodSpec 的一个字段。如果它不为空,调度器将忽略 pod,并且运行在它指定节点上的 kubelet 进程尝试运行该 pod。因此,如果 `nodeName` 在 PodSpec 中指定了,则它优先于上面的节点选择方法。
|
||||
`nodeName` 是节点选择约束的最简单方法,但是由于其自身限制,通常不使用它。
|
||||
`nodeName` 是 PodSpec 的一个字段。
|
||||
如果它不为空,调度器将忽略 Pod,并且给定节点上运行的 kubelet 进程尝试执行该 Pod。
|
||||
因此,如果 `nodeName` 在 PodSpec 中指定了,则它优先于上面的节点选择方法。
|
||||
|
||||
<!--
|
||||
Some of the limitations of using `nodeName` to select nodes are:
|
||||
|
||||
- If the named node does not exist, the pod will not be run, and in
|
||||
some cases may be automatically deleted.
|
||||
- If the named node does not have the resources to accommodate the
|
||||
pod, the pod will fail and its reason will indicate why,
|
||||
e.g. OutOfmemory or OutOfcpu.
|
||||
- Node names in cloud environments are not always predictable or
|
||||
stable.
|
||||
-->
|
||||
使用 `nodeName` 来选择节点的一些限制:
|
||||
|
||||
<!--
|
||||
- If the named node does not exist, the pod will not be run, and in
|
||||
some cases may be automatically deleted.
|
||||
- If the named node does not have the resources to accommodate the
|
||||
pod, the pod will fail and its reason will indicate why,
|
||||
e.g. OutOfmemory or OutOfcpu.
|
||||
- Node names in cloud environments are not always predictable or
|
||||
stable.
|
||||
-->
|
||||
- 如果指定的节点不存在,
|
||||
- 如果指定的节点没有资源来容纳 pod,pod 将会调度失败并且其原因将显示为,比如 OutOfmemory 或 OutOfcpu。
|
||||
- 云环境中的节点名称并非总是可预测或稳定的。
|
||||
- 如果指定的节点不存在,Pod 将不会运行,并且在某些情况下可能会被自动删除。
|
||||
- 如果指定的节点没有资源来容纳 Pod,Pod 将会调度失败并且其原因将显示为,
|
||||
比如 OutOfmemory 或 OutOfcpu。
|
||||
- 云环境中的节点名称并非总是可预测或稳定的。
|
||||
|
||||
<!--
|
||||
Here is an example of a pod config file using the `nodeName` field:
|
||||
-->
|
||||
下面的是使用 `nodeName` 字段的 pod 配置文件的例子:
|
||||
下面的是使用 `nodeName` 字段的 Pod 配置文件的例子:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -657,7 +780,6 @@ spec:
|
|||
<!--
|
||||
The above pod will run on the node kube-01.
|
||||
-->
|
||||
|
||||
上面的 pod 将运行在 kube-01 节点上。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
@ -665,24 +787,23 @@ The above pod will run on the node kube-01.
|
|||
<!--
|
||||
[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods.
|
||||
-->
|
||||
|
||||
[污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)允许节点*排斥*一组 pod。
|
||||
[污点](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)
|
||||
允许节点*排斥*一组 Pod。
|
||||
|
||||
<!--
|
||||
The design documents for
|
||||
[node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
|
||||
and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) contain extra background information about these features.
|
||||
-->
|
||||
|
||||
[节点亲和](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)与
|
||||
[pod 间亲和/反亲和](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)的设计文档包含这些功能的其他背景信息。
|
||||
[节点亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)与
|
||||
[pod 间亲和性/反亲和性](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
|
||||
的设计文档包含这些功能的其他背景信息。
|
||||
|
||||
<!--
|
||||
Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-local resources.
|
||||
The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can take part in node-level
|
||||
resource allocation decisions.
|
||||
-->
|
||||
|
||||
一旦 Pod 分配给 节点,kubelet 应用将运行该 pod 并且分配节点本地资源。
|
||||
[拓扑管理器](/zh/docs/tasks/administer-cluster/topology-manager/)
|
||||
可以参与到节点级别的资源分配决定中。
|
||||
|
|
|
@ -109,7 +109,7 @@ The above arguments give the node a score of 0 if utilization is 0% and 10 for u
|
|||
要启用最少请求(least requested)模式,必须按如下方式反转得分值。
|
||||
|
||||
```yaml
|
||||
{"utilization": 0, "score": 100},
|
||||
{"utilization": 0, "score": 10},
|
||||
{"utilization": 100, "score": 0}
|
||||
```
|
||||
|
||||
|
|
|
@ -614,7 +614,7 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插
|
|||
|
||||
<!--
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
|
||||
* [`azureDisk`](/docs/concepts/sotrage/volumes/#azuredisk) - Azure Disk
|
||||
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
|
||||
* [`cephfs`](/docs/concepts/storage/volumes/#cephfs) - CephFS volume
|
||||
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
|
||||
|
@ -643,33 +643,33 @@ PV 持久卷是用插件的形式来实现的。Kubernetes 目前支持以下插
|
|||
* [`storageos`](/docs/concepts/storage/volumes/#storageos) - StorageOS volume
|
||||
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume
|
||||
-->
|
||||
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS 弹性块存储(EBS)
|
||||
* [`azureDisk`](/docs/concepts/sotrage/volumes/#azuredisk) - Azure Disk
|
||||
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
|
||||
* [`cephfs`](/docs/concepts/storage/volumes/#cephfs) - CephFS volume
|
||||
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack 块存储)
|
||||
* [`awsElasticBlockStore`](/zh/docs/concepts/storage/volumes/#awselasticblockstore) - AWS 弹性块存储(EBS)
|
||||
* [`azureDisk`](/zh/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
|
||||
* [`azureFile`](/zh/docs/concepts/storage/volumes/#azurefile) - Azure File
|
||||
* [`cephfs`](/zh/docs/concepts/storage/volumes/#cephfs) - CephFS volume
|
||||
* [`cinder`](/zh/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack 块存储)
|
||||
(**弃用**)
|
||||
* [`csi`](/docs/concepts/storage/volumes/#csi) - 容器存储接口 (CSI)
|
||||
* [`fc`](/docs/concepts/storage/volumes/#fc) - Fibre Channel (FC) 存储
|
||||
* [`flexVolume`](/docs/concepts/storage/volumes/#flexVolume) - FlexVolume
|
||||
* [`flocker`](/docs/concepts/storage/volumes/#flocker) - Flocker 存储
|
||||
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE 持久化盘
|
||||
* [`glusterfs`](/docs/concepts/storage/volumes/#glusterfs) - Glusterfs 卷
|
||||
* [`hostPath`](/docs/concepts/storage/volumes/#hostpath) - HostPath 卷
|
||||
* [`csi`](/zh/docs/concepts/storage/volumes/#csi) - 容器存储接口 (CSI)
|
||||
* [`fc`](/zh/docs/concepts/storage/volumes/#fc) - Fibre Channel (FC) 存储
|
||||
* [`flexVolume`](/zh/docs/concepts/storage/volumes/#flexVolume) - FlexVolume
|
||||
* [`flocker`](/zh/docs/concepts/storage/volumes/#flocker) - Flocker 存储
|
||||
* [`gcePersistentDisk`](/zh/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE 持久化盘
|
||||
* [`glusterfs`](/zh/docs/concepts/storage/volumes/#glusterfs) - Glusterfs 卷
|
||||
* [`hostPath`](/zh/docs/concepts/storage/volumes/#hostpath) - HostPath 卷
|
||||
(仅供单节点测试使用;不适用于多节点集群;
|
||||
尝试使用 `本地` 卷替换)
|
||||
* [`iscsi`](/docs/concepts/storage/volumes/#iscsi) - iSCSI (SCSI over IP) 存储
|
||||
* [`local`](/docs/concepts/storage/volumes/#local) - 节点上挂载的本地存储设备
|
||||
* [`nfs`](/docs/concepts/storage/volumes/#nfs) - 网络文件系统 (NFS) 存储
|
||||
请尝试使用 `local` 卷作为替代)
|
||||
* [`iscsi`](/zh/docs/concepts/storage/volumes/#iscsi) - iSCSI (SCSI over IP) 存储
|
||||
* [`local`](/zh/docs/concepts/storage/volumes/#local) - 节点上挂载的本地存储设备
|
||||
* [`nfs`](/zh/docs/concepts/storage/volumes/#nfs) - 网络文件系统 (NFS) 存储
|
||||
* `photonPersistentDisk` - Photon 控制器持久化盘。
|
||||
(这个卷类型已经因对应的云提供商被移除而被弃用)。
|
||||
* [`portworxVolume`](/docs/concepts/storage/volumes/#portworxvolume) - Portworx 卷
|
||||
* [`quobyte`](/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷
|
||||
* [`rbd`](/docs/concepts/storage/volumes/#rbd) - Rados 块设备 (RBD) 卷
|
||||
* [`scaleIO`](/docs/concepts/storage/volumes/#scaleio) - ScaleIO 卷
|
||||
* [`portworxVolume`](/zh/docs/concepts/storage/volumes/#portworxvolume) - Portworx 卷
|
||||
* [`quobyte`](/zh/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷
|
||||
* [`rbd`](/zh/docs/concepts/storage/volumes/#rbd) - Rados 块设备 (RBD) 卷
|
||||
* [`scaleIO`](/zh/docs/concepts/storage/volumes/#scaleio) - ScaleIO 卷
|
||||
(**弃用**)
|
||||
* [`storageos`](/docs/concepts/storage/volumes/#storageos) - StorageOS 卷
|
||||
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK 卷
|
||||
* [`storageos`](/zh/docs/concepts/storage/volumes/#storageos) - StorageOS 卷
|
||||
* [`vsphereVolume`](/zh/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK 卷
|
||||
|
||||
<!--
|
||||
## Persistent Volumes
|
||||
|
|
|
@ -52,7 +52,7 @@ VolumeSnapshotClass 对象的名称很重要,是用户可以请求特定类的
|
|||
对象一旦创建就无法更新。
|
||||
|
||||
```yaml
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
apiVersion: snapshot.storage.k8s.io/v1
|
||||
kind: VolumeSnapshotClass
|
||||
metadata:
|
||||
name: csi-hostpath-snapclass
|
||||
|
@ -70,7 +70,7 @@ that don't request any particular class to bind to by adding the
|
|||
方法是设置注解 `snapshot.storage.kubernetes.io/is-default-class: "true"`:
|
||||
|
||||
```yaml
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
apiVersion: snapshot.storage.k8s.io/v1
|
||||
kind: VolumeSnapshotClass
|
||||
metadata:
|
||||
name: csi-hostpath-snapclass
|
||||
|
|
|
@ -5,7 +5,7 @@ feature:
|
|||
description: >
|
||||
Kubernetes 会分步骤地将针对应用或其配置的更改上线,同时监视应用程序运行状况以确保你不会同时终止所有实例。如果出现问题,Kubernetes 会为你回滚所作更改。你应该充分利用不断成长的部署方案生态系统。
|
||||
content_type: concept
|
||||
weight: 30
|
||||
weight: 10
|
||||
---
|
||||
<!--
|
||||
title: Deployments
|
||||
|
@ -15,24 +15,24 @@ feature:
|
|||
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions.
|
||||
|
||||
content_type: concept
|
||||
weight: 30
|
||||
weight: 10
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
<!--
|
||||
A _Deployment_ controller provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
|
||||
A _Deployment_ provides declarative updates for [Pods](/docs/concepts/workloads/pods/pod/) and
|
||||
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).
|
||||
-->
|
||||
一个 _Deployment_ 控制器为 {{< glossary_tooltip text="Pods" term_id="pod" >}}
|
||||
一个 _Deployment_ 为 {{< glossary_tooltip text="Pods" term_id="pod" >}}
|
||||
和 {{< glossary_tooltip term_id="replica-set" text="ReplicaSets" >}}
|
||||
提供声明式的更新能力。
|
||||
|
||||
<!--
|
||||
You describe a _desired state_ in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
|
||||
|
||||
You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_tooltip term_id="controller" >}} changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
|
||||
-->
|
||||
你负责描述 Deployment 中的 _目标状态_,而 Deployment 控制器以受控速率更改实际状态,
|
||||
你负责描述 Deployment 中的 _目标状态_,而 Deployment {{< glossary_tooltip term_id="controller" >}}
|
||||
以受控速率更改实际状态,
|
||||
使其变为期望状态。你可以定义 Deployment 以创建新的 ReplicaSet,或删除现有 Deployment,
|
||||
并通过新的 Deployment 收养其资源。
|
||||
|
||||
|
@ -196,9 +196,12 @@ Follow the steps given below to create the above Deployment:
|
|||
请注意期望副本数是根据 `.spec.replicas` 字段设置 3。
|
||||
|
||||
<!--
|
||||
3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this:
|
||||
3. To see the Deployment rollout status, run `kubectl rollout status deployment/nginx-deployment`.
|
||||
|
||||
The output is similar to:
|
||||
-->
|
||||
3. 要查看 Deployment 上线状态,运行 `kubectl rollout status deployment.v1.apps/nginx-deployment`。
|
||||
3. 要查看 Deployment 上线状态,运行 `kubectl rollout status deployment/nginx-deployment`。
|
||||
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
|
@ -341,9 +344,11 @@ is changed, for example if the labels or container images of the template are up
|
|||
|
||||
```shell
|
||||
kubectl --record deployment.apps/nginx-deployment set image \
|
||||
deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
|
||||
deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
|
||||
```
|
||||
<!-- or simply use the following command: -->
|
||||
<!--
|
||||
or simply use the following command:
|
||||
-->
|
||||
或者使用下面的命令:
|
||||
|
||||
```shell
|
||||
|
@ -380,7 +385,7 @@ is changed, for example if the labels or container images of the template are up
|
|||
2. 要查看上线状态,运行:
|
||||
|
||||
```shell
|
||||
kubectl rollout status deployment.v1.apps/nginx-deployment
|
||||
kubectl rollout status deployment/nginx-deployment
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
|
@ -655,9 +660,10 @@ Deployment 被触发上线时,系统就会创建 Deployment 的新的修订版
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
|
||||
* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.161` instead of `nginx:1.16.1`:
|
||||
-->
|
||||
* 假设你在更新 Deployment 时犯了一个拼写错误,将镜像名称命名设置为 `nginx:1.161` 而不是 `nginx:1.16.1`:
|
||||
* 假设你在更新 Deployment 时犯了一个拼写错误,将镜像名称命名设置为
|
||||
`nginx:1.161` 而不是 `nginx:1.16.1`:
|
||||
|
||||
```shell
|
||||
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.161 --record=true
|
||||
|
@ -676,7 +682,7 @@ Deployment 被触发上线时,系统就会创建 Deployment 的新的修订版
|
|||
* 此上线进程会出现停滞。你可以通过检查上线状态来验证:
|
||||
|
||||
```shell
|
||||
kubectl rollout status deployment.v1.apps/nginx-deployment
|
||||
kubectl rollout status deployment/nginx-deployment
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
|
@ -1431,7 +1437,7 @@ successfully, `kubectl rollout status` returns a zero exit code.
|
|||
如果上线成功完成,`kubectl rollout status` 返回退出代码 0。
|
||||
|
||||
```shell
|
||||
kubectl rollout status deployment.v1.apps/nginx-deployment
|
||||
kubectl rollout status deployment/nginx-deployment
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
|
@ -1652,16 +1658,27 @@ returns a non-zero exit code if the Deployment has exceeded the progression dead
|
|||
如果 Deployment 已超过进度限期,`kubectl rollout status` 返回非零退出代码。
|
||||
|
||||
```shell
|
||||
kubectl rollout status deployment.v1.apps/nginx-deployment
|
||||
kubectl rollout status deployment/nginx-deployment
|
||||
```
|
||||
|
||||
<!-- The output is similar to this: -->
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
error: deployment "nginx" exceeded its progress deadline
|
||||
```
|
||||
<!--
|
||||
and the exit status from `kubectl rollout` is 1 (indicating an error):
|
||||
-->
|
||||
`kubectl rollout` 命令的退出状态为 1(表明发生了错误):
|
||||
|
||||
```shell
|
||||
$ echo $?
|
||||
```
|
||||
```
|
||||
1
|
||||
```
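Because `kubectl rollout status` reports success or failure through its exit code, the non-zero status shown above can drive automation. A minimal sketch (the deployment name and timeout are illustrative, not taken from this page):

```shell
# Fail the step and roll back if the rollout does not finish in time
if ! kubectl rollout status deployment/nginx-deployment --timeout=120s; then
  kubectl rollout undo deployment/nginx-deployment
  exit 1
fi
```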
|
||||
|
||||
|
|
|
@ -731,7 +731,7 @@ kubectl get job old -o yaml
|
|||
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
```yaml
|
||||
kind: Job
|
||||
metadata:
|
||||
name: old
|
||||
|
@ -758,7 +758,7 @@ the selector that the system normally generates for you automatically.
|
|||
你需要在新 Job 中设置 `manualSelector: true`,因为你并未使用系统通常自动为你
|
||||
生成的选择算符。
|
||||
|
||||
```
|
||||
```yaml
|
||||
kind: Job
|
||||
metadata:
|
||||
name: new
|
||||
|
|
|
@ -395,7 +395,7 @@ if the Pod `restartPolicy` is set to Always, the init containers use
|
|||
|
||||
A Pod cannot be `Ready` until all init containers have succeeded. The ports on an
|
||||
init container are not aggregated under a Service. A Pod that is initializing
|
||||
is in the `Pending` state but should have a condition `Initialized` set to true.
|
||||
is in the `Pending` state but should have a condition `Initialized` set to false.
|
||||
|
||||
If the Pod [restarts](#pod-restart-reasons), or is restarted, all init containers
|
||||
must execute again.
|
||||
|
@ -413,7 +413,7 @@ Pod 的 `restartPolicy` 策略进行重试。
|
|||
|
||||
在所有的 Init 容器没有成功之前,Pod 将不会变成 `Ready` 状态。
|
||||
Init 容器的端口将不会在 Service 中进行聚集。正在初始化中的 Pod 处于 `Pending` 状态,
|
||||
但会将状况 `Initializing` 设置为 true。
|
||||
但会将状况 `Initialized` 设置为 false。
|
||||
|
||||
如果 Pod [重启](#pod-restart-reasons),所有 Init 容器必须重新执行。
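To observe this from the API, you can read the `Initialized` condition directly; a hedged example (the Pod name is a placeholder):

```shell
# Show the status of the Initialized condition for a Pod that is still running init containers
kubectl get pod myapp-pod -o jsonpath='{.status.conditions[?(@.type=="Initialized")].status}'

# Or inspect all conditions and init container states at once
kubectl describe pod myapp-pod
```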
|
||||
|
||||
|
|
|
@ -611,13 +611,13 @@ a time longer than the liveness interval would allow.
|
|||
If your container usually starts in more than
|
||||
`initialDelaySeconds + failureThreshold × periodSeconds`, you should specify a
|
||||
startup probe that checks the same endpoint as the liveness probe. The default for
|
||||
`periodSeconds` is 30s. You should then set its `failureThreshold` high enough to
|
||||
`periodSeconds` is 10s. You should then set its `failureThreshold` high enough to
|
||||
allow the container to start, without changing the default values of the liveness
|
||||
probe. This helps to protect against deadlocks.
|
||||
-->
|
||||
如果你的容器启动时间通常超出 `initialDelaySeconds + failureThreshold × periodSeconds`
|
||||
总值,你应该设置一个启动探测,对存活态探针所使用的同一端点执行检查。
|
||||
`periodSeconds` 的默认值是 30 秒。你应该将其 `failureThreshold` 设置得足够高,
|
||||
`periodSeconds` 的默认值是 10 秒。你应该将其 `failureThreshold` 设置得足够高,
|
||||
以便容器有充足的时间完成启动,并且避免更改存活态探针所使用的默认值。
|
||||
这一设置有助于减少死锁状况的发生。
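A sketch of that pattern, assuming an HTTP health endpoint on port 8080 (image, path, and values are placeholders, not taken from this page): the startup probe reuses the liveness endpoint and allows `failureThreshold × periodSeconds` for startup, while the liveness probe keeps its defaults.

```shell
kubectl apply --dry-run=client -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-demo
spec:
  containers:
  - name: app
    image: registry.example/slow-app:1.0   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    startupProbe:
      httpGet:
        path: /healthz        # same endpoint as the liveness probe
        port: 8080
      failureThreshold: 30    # 30 * 10s gives the container up to 300s to start
      periodSeconds: 10
EOF
```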
|
||||
|
||||
|
|
|
@ -499,6 +499,7 @@ profiles:
|
|||
- maxSkew: 1
|
||||
topologyKey: topology.kubernetes.io/zone
|
||||
whenUnsatisfiable: ScheduleAnyway
|
||||
defaultingType: List
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
|
@ -520,15 +521,15 @@ using default constraints for `PodTopologySpread`.
|
|||
-->
|
||||
#### 内部默认约束 {#internal-default-constraints}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.19" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.20" state="beta" >}}
|
||||
|
||||
<!--
|
||||
When you enable the `DefaultPodTopologySpread` feature gate, the
|
||||
With the `DefaultPodTopologySpread` feature gate, enabled by default, the
|
||||
legacy `SelectorSpread` plugin is disabled.
|
||||
kube-scheduler uses the following default topology constraints for the
|
||||
`PodTopologySpread` plugin configuration:
|
||||
-->
|
||||
当你启用了 `DefaultPodTopologySpread` 特性门控时,原来的
|
||||
当你使用了默认启用的 `DefaultPodTopologySpread` 特性门控时,原来的
|
||||
`SelectorSpread` 插件会被禁用。
|
||||
kube-scheduler 会使用下面的默认拓扑约束作为 `PodTopologySpread` 插件的
|
||||
配置:
|
||||
|
@ -565,6 +566,26 @@ Kubernetes 的默认约束。
|
|||
插件 `PodTopologySpread` 不会为未设置分布约束中所给拓扑键的节点评分。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
If you don't want to use the default Pod spreading constraints for your cluster,
|
||||
you can disable those defaults by setting `defaultingType` to `List` and leaving
|
||||
empty `defaultConstraints` in the `PodTopologySpread` plugin configuration:
|
||||
-->
|
||||
如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List` 和
|
||||
将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束。
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
||||
kind: KubeSchedulerConfiguration
|
||||
|
||||
profiles:
|
||||
- pluginConfig:
|
||||
- name: PodTopologySpread
|
||||
args:
|
||||
defaultConstraints: []
|
||||
defaultingType: List
|
||||
```
|
||||
|
||||
<!--
|
||||
## Comparison with PodAffinity/PodAntiAffinity
|
||||
|
||||
|
|
|
@ -35,11 +35,11 @@
|
|||
|
||||
- You need to know how to create a pull request to a GitHub repository.
|
||||
This involves creating your own fork of the repository. For more
|
||||
information, see [Work from a local clone](/docs/contribute/intermediate/#work_from_a_local_clone).
|
||||
information, see [Work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo).
|
||||
-->
|
||||
- 你的 `PATH` 环境变量必须包含所需要的构建工具,例如 `Go` 程序和 `python`。
|
||||
|
||||
- 你需要知道如何为一个 GitHub 仓库创建拉取请求(PR)。
|
||||
这牵涉到创建仓库的派生(fork)副本。
|
||||
有关信息可进一步查看[基于本地副本开展工作](/docs/contribute/intermediate/#work_from_a_local_clone)。
|
||||
有关信息可进一步查看[基于本地副本开展工作](/zh/docs/contribute/new-content/open-a-pr/#fork-the-repo)。
|
||||
|
||||
|
|
|
@ -274,8 +274,8 @@ This admission controller ignores any `PersistentVolumeClaim` updates; it acts o
|
|||
-->
|
||||
当未配置默认存储类时,此准入控制器不执行任何操作。如果将多个存储类标记为默认存储类,
|
||||
它将拒绝任何创建 `PersistentVolumeClaim` 的操作,并显示错误。
|
||||
此时准入控制器会忽略任何 `PersistentVolumeClaim` 更新操作,仅响应创建操作。
|
||||
要修复此错误,管理员必须重新访问其 `StorageClass` 对象,并仅将其中一个标记为默认。
|
||||
此准入控制器会忽略所有 `PersistentVolumeClaim` 更新操作,仅响应创建操作。
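To see or fix the default marking that this admission controller relies on, the commands below can help; the class name `standard` is illustrative:

```shell
# List StorageClasses; at most one should be shown with "(default)" after its name
kubectl get storageclass

# Mark one class as the default by setting the well-known annotation
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```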
|
||||
|
||||
<!--
|
||||
See [persistent volume](/docs/concepts/storage/persistent-volumes/) documentation about persistent volume claims and
|
||||
|
@ -1224,20 +1224,35 @@ See the [resourceQuota design doc](https://git.k8s.io/community/contributors/des
|
|||
<!--
|
||||
### RuntimeClass {#runtimeclass}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
|
||||
|
||||
For [RuntimeClass](/docs/concepts/containers/runtime-class/) definitions which describe an overhead associated with running a pod,
|
||||
this admission controller will set the pod.Spec.Overhead field accordingly.
|
||||
If you enable the `PodOverhead` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/), and define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/) configured, this admission controller checks incoming
|
||||
Pods. When enabled, this admission controller rejects any Pod create requests that have the overhead already set.
|
||||
For Pods that have a RuntimeClass configured and selected in their `.spec`, this admission controller sets `.spec.overhead` in the Pod based on the value defined in the corresponding RuntimeClass.
|
||||
|
||||
{{< note >}}
|
||||
The `.spec.overhead` field for Pod and the `.overhead` field for RuntimeClass are both in beta. If you do not enable the `PodOverhead` feature gate, all Pods are treated as if `.spec.overhead` is unset.
|
||||
{{< /note >}}
|
||||
|
||||
See also [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||
for more information.
|
||||
-->
|
||||
### 容器运行时类 {#runtimeclass}
|
||||
### RuntimeClass {#runtimeclass}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
|
||||
|
||||
[RuntimeClass](/zh/docs/concepts/containers/runtime-class/) 定义描述了与运行 Pod
|
||||
相关的开销。此准入控制器将相应地设置 `pod.spec.overhead` 字段。
|
||||
如果你开启 `PodOverhead`
|
||||
[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/),
|
||||
并且通过 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/)
|
||||
配置来定义一个 RuntimeClass,这个准入控制器会检查新的 Pod。
|
||||
当启用的时候,这个准入控制器会拒绝任何 overhead 字段已经设置的 Pod。
|
||||
对于配置了 RuntimeClass 并在其 `.spec` 中选定 RuntimeClass 的 Pod,
|
||||
此准入控制器会根据相应 RuntimeClass 中定义的值为 Pod 设置 `.spec.overhead`。
|
||||
|
||||
{{< note >}}
|
||||
Pod 的 `.spec.overhead` 字段和 RuntimeClass 的 `.overhead` 字段均为处于 beta 版本。
|
||||
如果你未启用 `PodOverhead` 特性门控,则所有 Pod 均被视为未设置 `.spec.overhead`。
|
||||
{{< /note >}}
|
||||
|
||||
详情请参见 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/)。
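To confirm what the admission controller set, you can read the overhead back from an admitted Pod; a small sketch (the Pod name is a placeholder):

```shell
# Shows the overhead copied from the RuntimeClass, if one was selected via .spec.runtimeClassName
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
```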
|
||||
|
||||
|
@ -1379,4 +1394,3 @@ For Kubernetes 1.9 and earlier, we recommend running the following set of admiss
|
|||
admission controllers ran in the exact order specified.
|
||||
-->
|
||||
对于更早期版本,没有验证和变更的概念,并且准入控制器按照指定的确切顺序运行。
|
||||
|
||||
|
|
|
@ -657,7 +657,7 @@ so that authentication produces usernames in the format you want.
|
|||
|
||||
RoleBinding 或者 ClusterRoleBinding 可绑定角色到某 *主体(Subject)*上。
|
||||
主体可以是组,用户或者
|
||||
{{< glossary_tooltip text="服务账号" term_id="service-account" >}}。
|
||||
{{< glossary_tooltip text="服务账户" term_id="service-account" >}}。
|
||||
|
||||
Kubernetes 用字符串来表示用户名。
|
||||
用户名可以是普通的用户名,像 "alice";或者是邮件风格的名称,如 "bob@example.com",
|
||||
|
@ -692,7 +692,7 @@ to groups with the `system:serviceaccounts:` prefix.
|
|||
与用户名一样,用户组名也用字符串来表示,而且对该字符串没有格式要求,
|
||||
只是不能使用保留的前缀 `system:`。
|
||||
|
||||
[服务账号](/zh/docs/tasks/configure-pod-container/configure-service-account/)
|
||||
[服务账户](/zh/docs/tasks/configure-pod-container/configure-service-account/)
|
||||
的用户名前缀为 `system:serviceaccount:`,属于前缀为 `system:serviceaccounts:`
|
||||
的用户组。
|
||||
|
||||
|
@ -701,8 +701,8 @@ to groups with the `system:serviceaccounts:` prefix.
|
|||
- `system:serviceaccount:` (singular) is the prefix for service account usernames.
|
||||
- `system:serviceaccounts:` (plural) is the prefix for service account groups.
|
||||
-->
|
||||
- `system:serviceaccount:` (单数)是用于服务账号用户名的前缀;
|
||||
- `system:serviceaccounts:` (复数)是用于服务账号组名的前缀。
|
||||
- `system:serviceaccount:` (单数)是用于服务账户用户名的前缀;
|
||||
- `system:serviceaccounts:` (复数)是用于服务账户组名的前缀。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
|
@ -741,7 +741,7 @@ subjects:
|
|||
<!--
|
||||
For the default service account in the "kube-system" namespace:
|
||||
-->
|
||||
对于 `kube-system` 名字空间中的默认服务账号:
|
||||
对于 `kube-system` 名字空间中的默认服务账户:
|
||||
|
||||
```yaml
|
||||
subjects:
|
||||
|
@ -751,9 +751,9 @@ subjects:
|
|||
```
|
||||
|
||||
<!--
|
||||
For all service accounts in the "qa" namespace:
|
||||
For all service accounts in the "qa" group in any namespace:
|
||||
-->
|
||||
对于 "qa" 名字空间中所有的服务账号:
|
||||
对于任何名字空间中的 "qa" 组中所有的服务账户:
|
||||
|
||||
```yaml
|
||||
subjects:
|
||||
|
@ -762,10 +762,23 @@ subjects:
|
|||
apiGroup: rbac.authorization.k8s.io
|
||||
```
|
||||
|
||||
<!--
|
||||
For all service accounts in the "dev" group in the "development" namespace:
|
||||
-->
|
||||
对于 "dev" 名称空间中 "development" 组中的所有服务帐户:
|
||||
|
||||
```yaml
|
||||
subjects:
|
||||
- kind: Group
|
||||
name: system:serviceaccounts:dev
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
namespace: development
|
||||
```
|
||||
|
||||
<!--
|
||||
For all service accounts in any namespace:
|
||||
-->
|
||||
对于在任何名字空间中的服务账号:
|
||||
对于在任何名字空间中的服务账户:
|
||||
|
||||
```yaml
|
||||
subjects:
|
||||
|
@ -899,51 +912,69 @@ either do not manually edit the role, or disable auto-reconciliation.
|
|||
{{< /note >}}
|
||||
|
||||
<table>
|
||||
<!-- caption>Kubernetes RBAC API discovery roles</caption -->
|
||||
<caption>Kubernetes RBAC API 发现角色</caption>
|
||||
<colgroup><col width="25%"><col width="25%"><col></colgroup>
|
||||
<caption>
|
||||
<!--
|
||||
Kubernetes RBAC API discovery roles
|
||||
-->
|
||||
Kubernetes RBAC API 发现角色
|
||||
</caption>
|
||||
<colgroup><col width="25%"><col width="25%"><col></colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<!--
|
||||
<th>Default ClusterRole</th>
|
||||
<th>Default ClusterRoleBinding</th>
|
||||
<th>Description</th>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:basic-user</b></td>
|
||||
<td><b>system:authenticated</b> group</td>
|
||||
<td>Allows a user read-only access to basic information about themselves. Prior to 1.14, this role was also bound to `system:unauthenticated` by default.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:discovery</b></td>
|
||||
<td><b>system:authenticated</b> group</td>
|
||||
<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. Prior to 1.14, this role was also bound to `system:unauthenticated` by default.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:public-info-viewer</b></td>
|
||||
<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
|
||||
<td>Allows read-only access to non-sensitive information about the cluster. Introduced in Kubernetes v1.14.</td>
|
||||
-->
|
||||
<th>默认 ClusterRole</th>
|
||||
<th>默认 ClusterRoleBinding</th>
|
||||
<th>描述</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td><b>system:basic-user</b></td>
|
||||
<!--
|
||||
<td><b>system:authenticated</b> group</td>
|
||||
-->
|
||||
<td><b>system:authenticated</b> 组</td>
|
||||
<td>允许用户以只读的方式去访问他们自己的基本信息。在 1.14 版本之前,这个角色在
|
||||
默认情况下也绑定在 <tt>system:unauthenticated</tt> 上。</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows a user read-only access to basic information about themselves.
|
||||
Prior to 1.14, this role was also bound to <tt>system:unauthenticated</tt> by default.
|
||||
-->
|
||||
允许用户以只读的方式去访问他们自己的基本信息。在 1.14 版本之前,这个角色在默认情况下也绑定在 <tt>system:unauthenticated</tt> 上。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:discovery</b></td>
|
||||
<!--
|
||||
<td><b>system:authenticated</b> group</td>
|
||||
-->
|
||||
<td><b>system:authenticated</b> 组</td>
|
||||
<td>允许以只读方式访问 API 发现端点,这些端点用来发现和协商 API 级别。
|
||||
在 1.14 版本之前,这个角色在默认情况下绑定在 <tt>system:unauthenticated</tt> 上。</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows read-only access to API discovery endpoints needed to discover and negotiate an API level.
|
||||
Prior to 1.14, this role was also bound to <tt>system:unauthenticated</tt> by default.
|
||||
-->
|
||||
允许以只读方式访问 API 发现端点,这些端点用来发现和协商 API 级别。
|
||||
在 1.14 版本之前,这个角色在默认情况下绑定在 <tt>system:unauthenticated</tt> 上。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:public-info-viewer</b></td>
|
||||
<!--
|
||||
<td><b>system:authenticated</b> and <b>system:unauthenticated</b> groups</td>
|
||||
-->
|
||||
<td><b>system:authenticated</b> 和 <b>system:unauthenticated</b> 组</td>
|
||||
<td>允许对集群的非敏感信息进行只读访问,它是在 1.14 版本中引入的。</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows read-only access to non-sensitive information about the cluster. Introduced in Kubernetes v1.14.
|
||||
-->
|
||||
允许对集群的非敏感信息进行只读访问,它是在 1.14 版本中引入的。
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
|
@ -979,74 +1010,101 @@ metadata:
|
|||
|
||||
<table>
|
||||
<colgroup><col width="25%"><col width="25%"><col></colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<!--
|
||||
<th>Default ClusterRole</th>
|
||||
<th>Default ClusterRoleBinding</th>
|
||||
<th>Description</th>
|
||||
</tr>
|
||||
-->
|
||||
<th>默认 ClusterRole</th>
|
||||
<th>默认 ClusterRoleBinding</th>
|
||||
<th>描述</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td><b>cluster-admin</b></td>
|
||||
<!--td><b>system:masters</b> group</td -->
|
||||
<!--
|
||||
<td><b>system:masters</b> group</td>
|
||||
-->
|
||||
<td><b>system:masters</b> 组</td>
|
||||
<!-- td>Allows super-user access to perform any action on any resource.
|
||||
<td>
|
||||
<!--
|
||||
Allows super-user access to perform any action on any resource.
|
||||
When used in a <b>ClusterRoleBinding</b>, it gives full control over every resource in the cluster and in all namespaces.
|
||||
When used in a <b>RoleBinding</b>, it gives full control over every resource in the rolebinding's namespace, including the namespace itself.</td-->
|
||||
<td>允许超级用户在平台上的任何资源上执行所有操作。
|
||||
When used in a <b>RoleBinding</b>, it gives full control over every resource in the rolebinding's namespace, including the namespace itself.
|
||||
-->
|
||||
允许超级用户在平台上的任何资源上执行所有操作。
|
||||
当在 <b>ClusterRoleBinding</b> 中使用时,可以授权对集群中以及所有名字空间中的全部资源进行完全控制。
|
||||
当在 <b>RoleBinding</b> 中使用时,可以授权控制 RoleBinding 所在名字空间中的所有资源,包括名字空间本身。</td>
|
||||
当在 <b>RoleBinding</b> 中使用时,可以授权控制 RoleBinding 所在名字空间中的所有资源,包括名字空间本身。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>admin</b></td>
|
||||
<!-- td>None</td --->
|
||||
<!--
|
||||
<td>None</td>
|
||||
--->
|
||||
<td>无</td>
|
||||
<!-- td>Allows admin access, intended to be granted within a namespace using a <b>RoleBinding</b>.
|
||||
<td>
|
||||
<!--
|
||||
Allows admin access, intended to be granted within a namespace using a <b>RoleBinding</b>.
|
||||
If used in a <b>RoleBinding</b>, allows read/write access to most resources in a namespace,
|
||||
including the ability to create roles and rolebindings within the namespace.
|
||||
It does not allow write access to resource quota or to the namespace itself.</td -->
|
||||
<td>允许管理员访问权限,旨在使用 <b>RoleBinding</b> 在名字空间内执行授权。
|
||||
It does not allow write access to resource quota or to the namespace itself.
|
||||
-->
|
||||
允许管理员访问权限,旨在使用 <b>RoleBinding</b> 在名字空间内执行授权。
|
||||
如果在 <b>RoleBinding</b> 中使用,则可授予对名字空间中的大多数资源的读/写权限,
|
||||
包括创建角色和角色绑定的能力。
|
||||
但是它不允许对资源配额或者名字空间本身进行写操作。</td>
|
||||
但是它不允许对资源配额或者名字空间本身进行写操作。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>edit</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Allows read/write access to most objects in a namespace.
|
||||
<td>
|
||||
<!--
|
||||
Allows read/write access to most objects in a namespace.
|
||||
This role does not allow viewing or modifying roles or role bindings.
|
||||
However, this role allows accessing Secrets and running Pods as any ServiceAccount in
|
||||
the namespace, so it can be used to gain the API access levels of any ServiceAccount in
|
||||
the namespace.</td -->
|
||||
<td>允许对名字空间的大多数对象进行读/写操作。
|
||||
the namespace.
|
||||
-->
|
||||
允许对名字空间的大多数对象进行读/写操作。
|
||||
它不允许查看或者修改角色或者角色绑定。
|
||||
不过,此角色可以访问 Secret,以名字空间中任何 ServiceAccount 的身份运行 Pods,
|
||||
所以可以用来了解名字空间内所有服务账号的 API 访问级别。
|
||||
所以可以用来了解名字空间内所有服务账户的 API 访问级别。
|
||||
</td>
|
||||
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>view</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Allows read-only access to see most objects in a namespace.
|
||||
<td>
|
||||
<!--
|
||||
Allows read-only access to see most objects in a namespace.
|
||||
It does not allow viewing roles or rolebindings.
|
||||
-->
|
||||
允许对名字空间的大多数对象有只读权限。
|
||||
它不允许查看角色或角色绑定。
|
||||
|
||||
<!--
|
||||
This role does not allow viewing Secrets, since reading
|
||||
the contents of Secrets enables access to ServiceAccount credentials
|
||||
in the namespace, which would allow API access as any ServiceAccount
|
||||
in the namespace (a form of privilege escalation).</td -->
|
||||
<td>允许对名字空间的大多数对象有只读权限。
|
||||
它不允许查看角色或角色绑定。
|
||||
|
||||
in the namespace (a form of privilege escalation).
|
||||
-->
|
||||
此角色不允许查看 Secrets,因为读取 Secret 的内容意味着可以访问名字空间中
|
||||
ServiceAccount 的凭据信息,进而允许利用名字空间中任何 ServiceAccount 的
|
||||
身份访问 API(这是一种特权提升)。</td>
|
||||
身份访问 API(这是一种特权提升)。
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
|
@ -1056,62 +1114,99 @@ ServiceAccount 的凭据信息,进而允许利用名字空间中任何 Service
|
|||
|
||||
<table>
|
||||
<colgroup><col width="25%"><col width="25%"><col></colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<!--
|
||||
<th>Default ClusterRole</th>
|
||||
<th>Default ClusterRoleBinding</th>
|
||||
<th>Description</th>
|
||||
-->
|
||||
<th>默认 ClusterRole</th>
|
||||
<th>默认 ClusterRoleBinding</th>
|
||||
<th>描述</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td><b>system:kube-scheduler</b></td>
|
||||
<td><b>system:kube-scheduler</b> user</td>
|
||||
<!-- td>Allows access to the resources required by the {{< glossary_tooltip term_id="kube-scheduler" text="scheduler" >}} component.</td -->
|
||||
<td>允许访问 {{< glossary_tooltip term_id="kube-scheduler" text="scheduler" >}}
|
||||
组件所需要的资源。</td>
|
||||
<!--
|
||||
<td><b>system:kube-scheduler</b> user</td>
|
||||
-->
|
||||
<td><b>system:kube-scheduler</b> 用户</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows access to the resources required by the {{< glossary_tooltip term_id="kube-scheduler" text="scheduler" >}} component.
|
||||
-->
|
||||
允许访问 {{< glossary_tooltip term_id="kube-scheduler" text="scheduler" >}}
|
||||
组件所需要的资源。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:volume-scheduler</b></td>
|
||||
<td><b>system:kube-scheduler</b> user</td>
|
||||
<!-- td>Allows access to the volume resources required by the kube-scheduler component.</td -->
|
||||
<td>允许访问 kube-scheduler 组件所需要的卷资源。</td>
|
||||
<!--
|
||||
<td><b>system:kube-scheduler</b> user</td>
|
||||
-->
|
||||
<td><b>system:kube-scheduler</b> 用户</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows access to the volume resources required by the kube-scheduler component.
|
||||
-->
|
||||
允许访问 kube-scheduler 组件所需要的卷资源。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:kube-controller-manager</b></td>
|
||||
<td><b>system:kube-controller-manager</b> user</td>
|
||||
<!-- td>Allows access to the resources required by the {{< glossary_tooltip term_id="kube-controller-manager" text="controller manager" >}} component.
|
||||
The permissions required by individual controllers are detailed in the <a href="#controller-roles">controller roles</a>.</td-->
|
||||
<td>允许访问{{< glossary_tooltip term_id="kube-controller-manager" text="控制器管理器" >}}
|
||||
<!--
|
||||
<td><b>system:kube-controller-manager</b> user</td>
|
||||
-->
|
||||
<td><b>system:kube-controller-manager</b> 用户</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows access to the resources required by the {{< glossary_tooltip term_id="kube-controller-manager" text="controller manager" >}} component.
|
||||
The permissions required by individual controllers are detailed in the <a href="#controller-roles">controller roles</a>.
|
||||
-->
|
||||
允许访问{{< glossary_tooltip term_id="kube-controller-manager" text="控制器管理器" >}}
|
||||
组件所需要的资源。
|
||||
各个控制回路所需要的权限在<a href="#controller-roles">控制器角色</a> 详述。</td>
|
||||
各个控制回路所需要的权限在<a href="#controller-roles">控制器角色</a>中详述。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:node</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows access to resources required by the kubelet, <b>including read access to all secrets, and write access to all pod status objects</b>.
|
||||
-->
|
||||
允许访问 kubelet 所需要的资源,<b>包括对所有 Secret 的读操作和对所有 Pod 状态对象的写操作。</b>
|
||||
|
||||
<!-- td>Allows access to resources required by the kubelet, <b>including read access to all secrets, and write access to all pod status objects</b>.
|
||||
|
||||
You should use the <a href="/docs/reference/access-authn-authz/node/">Node authorizer</a> and <a href="/docs/reference/access-authn-authz/admission-controllers/#noderestriction">NodeRestriction admission plugin</a> instead of the <tt>system:node</tt> role, and allow granting API access to kubelets based on the Pods scheduled to run on them.
|
||||
|
||||
The <tt>system:node</tt> role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.
|
||||
</td -->
|
||||
<td>允许访问 kubelet 所需要的资源,<b>包括对所有 Secret 的读操作和对所有 Pod 状态对象的写操作。</b>
|
||||
|
||||
<!--
|
||||
You should use the <a href="/docs/reference/access-authn-authz/node/">Node authorizer</a> and
|
||||
<a href="/docs/reference/access-authn-authz/admission-controllers/#noderestriction">NodeRestriction admission plugin</a>
|
||||
instead of the <tt>system:node</tt> role, and allow granting API access to kubelets based on the Pods scheduled to run on them.
|
||||
-->
|
||||
你应该使用 <a href="/zh/docs/reference/access-authn-authz/node/">Node 鉴权组件</a> 和
|
||||
<a href="/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction">NodeRestriction 准入插件</a>
|
||||
而不是 <tt>system:node</tt> 角色。同时基于 kubelet 上调度执行的 Pod 来授权
|
||||
kubelet 对 API 的访问。
|
||||
|
||||
<!--
|
||||
The <tt>system:node</tt> role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.
|
||||
-->
|
||||
<tt>system:node</tt> 角色的意义仅是为了与从 v1.8 之前版本升级而来的集群兼容。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:node-proxier</b></td>
|
||||
<td><b>system:kube-proxy</b> user</td>
|
||||
<!-- <td><b>system:kube-proxy</b> user</td> -->
|
||||
<td><b>system:kube-proxy</b> 用户</td>
|
||||
<!-- td>Allows access to the resources required by the {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} component.</td-->
|
||||
<td>允许访问 {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
|
||||
组件所需要的资源。</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
|
@ -1121,77 +1216,145 @@ kubelet 对 API 的访问。
|
|||
|
||||
<table>
|
||||
<colgroup><col width="25%"><col width="25%"><col></colgroup>
|
||||
<thead>
|
||||
<tr>
|
||||
<!-- th>Default ClusterRole</th>
|
||||
<!--
|
||||
<th>Default ClusterRole</th>
|
||||
<th>Default ClusterRoleBinding</th>
|
||||
<th>Description</th -->
|
||||
<th>Description</th>
|
||||
-->
|
||||
<th>默认 ClusterRole</th>
|
||||
<th>默认 ClusterRoleBinding</th>
|
||||
<th>描述</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td><b>system:auth-delegator</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Allows delegated authentication and authorization checks.
|
||||
This is commonly used by add-on API servers for unified authentication and authorization.</td -->
|
||||
<td>允许将身份认证和鉴权检查操作外包出去。
|
||||
这种角色通常用在插件式 API 服务器上,以实现统一的身份认证和鉴权。</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows delegated authentication and authorization checks.
|
||||
This is commonly used by add-on API servers for unified authentication and authorization.
|
||||
-->
|
||||
允许将身份认证和鉴权检查操作外包出去。
|
||||
这种角色通常用在插件式 API 服务器上,以实现统一的身份认证和鉴权。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:heapster</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Role for the <a href="https://github.com/kubernetes/heapster">Heapster</a> component (deprecated).</td -->
|
||||
<td>为 <a href="https://github.com/kubernetes/heapster">Heapster</a> 组件(已弃用)定义的角色。</td>
|
||||
<td>
|
||||
<!--
|
||||
Role for the <a href="https://github.com/kubernetes/heapster">Heapster</a> component (deprecated).
|
||||
-->
|
||||
为 <a href="https://github.com/kubernetes/heapster">Heapster</a> 组件(已弃用)定义的角色。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:kube-aggregator</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Role for the <a href="https://github.com/kubernetes/kube-aggregator">kube-aggregator</a> component.</td -->
|
||||
<td>为 <a href="https://github.com/kubernetes/kube-aggregator">kube-aggregator</a> 组件定义的角色。</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:kube-dns</b></td>
|
||||
<!-- td><b>kube-dns</b> service account in the <b>kube-system</b> namespace</td -->
|
||||
<td>在 <b>kube-system</b> 名字空间中的 <b>kube-dns</b> 服务账号</td>
|
||||
<td>
|
||||
<!--
|
||||
<b>kube-dns</b> service account in the <b>kube-system</b> namespace</td
|
||||
-->
|
||||
在 <b>kube-system</b> 名字空间中的 <b>kube-dns</b> 服务账户</td>
|
||||
<!-- td>Role for the <a href="/docs/concepts/services-networking/dns-pod-service/">kube-dns</a> component.</td -->
|
||||
<td>为 <a href="/docs/concepts/services-networking/dns-pod-service/">kube-dns</a> 组件定义的角色。</td>
|
||||
<td>为 <a href="/docs/concepts/services-networking/dns-pod-service/">kube-dns</a> 组件定义的角色。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:kubelet-api-admin</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Allows full access to the kubelet API.</td -->
|
||||
<td>允许 kubelet API 的完全访问权限。</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows full access to the kubelet API.
|
||||
-->
|
||||
允许 kubelet API 的完全访问权限。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:node-bootstrapper</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Allows access to the resources required to perform
|
||||
<a href="/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/">Kubelet TLS bootstrapping</a>.</td -->
|
||||
<td>允许访问执行
|
||||
<td>
|
||||
<!--
|
||||
Allows access to the resources required to perform
|
||||
<a href="/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/">Kubelet TLS bootstrapping</a>.
|
||||
-->
|
||||
允许访问执行
|
||||
<a href="/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/">kubelet TLS 启动引导</a>
|
||||
所需要的资源。</td>
|
||||
所需要的资源。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:node-problem-detector</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Role for the <a href="https://github.com/kubernetes/node-problem-detector">node-problem-detector</a> component.</td -->
|
||||
<td>为 <a href="https://github.com/kubernetes/node-problem-detector">node-problem-detector</a> 组件定义的角色。</td>
|
||||
<td>
|
||||
<!--
|
||||
Role for the <a href="https://github.com/kubernetes/node-problem-detector">node-problem-detector</a> component.
|
||||
-->
|
||||
为 <a href="https://github.com/kubernetes/node-problem-detector">node-problem-detector</a> 组件定义的角色。
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:persistent-volume-provisioner</b></td>
|
||||
<!-- td>None</td -->
|
||||
<!--
|
||||
<td>None</td>
|
||||
-->
|
||||
<td>无</td>
|
||||
<!-- td>Allows access to the resources required by most <a href="/docs/concepts/storage/persistent-volumes/#provisioner">dynamic volume provisioners</a>.</td -->
|
||||
<td>允许访问大部分
|
||||
<a href="/docs/concepts/storage/persistent-volumes/#provisioner">动态卷驱动</a>
|
||||
<td>
|
||||
<!--
|
||||
Allows access to the resources required by most <a href="/docs/concepts/storage/persistent-volumes/#provisioner">dynamic volume provisioners</a>.
|
||||
-->
|
||||
允许访问大部分
|
||||
<a href="/docs/concepts/storage/persistent-volumes/#provisioner">动态卷驱动
|
||||
</a>
|
||||
所需要的资源。</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td><b>system:monitoring</b></td>
|
||||
<!--
|
||||
<td><b>system:monitoring</b> group</td>
|
||||
-->
|
||||
<td><b>system:monitoring</b> 组</td>
|
||||
<td>
|
||||
<!--
|
||||
Allows read access to control-plane monitoring endpoints
|
||||
(i.e. {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} liveness and readiness endpoints
|
||||
(<tt>/healthz</tt>, <tt>/livez</tt>, <tt>/readyz</tt>), the individual health-check endpoints
|
||||
(<tt>/healthz/*</tt>, <tt>/livez/*</tt>, <tt>/readyz/*</tt>), and <tt>/metrics</tt>).
|
||||
Note that individual health check endpoints and the metric endpoint may expose sensitive information.
|
||||
-->
|
||||
允许对控制平面监控端点的读取访问(例如:{{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}}
|
||||
存活和就绪端点(<tt>/healthz</tt>、<tt>/livez</tt>、<tt>/readyz</tt>),
|
||||
各个健康检查端点(<tt>/healthz/*</tt>、<tt>/livez/*</tt>、<tt>/readyz/*</tt>)和 <tt>/metrics</tt>)。
|
||||
请注意,各个运行状况检查端点和度量标准端点可能会公开敏感信息。
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
|
@ -1212,7 +1375,7 @@ These roles include:
|
|||
Kubernetes {{< glossary_tooltip term_id="kube-controller-manager" text="控制器管理器" >}}
|
||||
运行内建于 Kubernetes 控制面的{{< glossary_tooltip term_id="controller" text="控制器" >}}。
|
||||
当使用 `--use-service-account-credentials` 参数启动时, kube-controller-manager
|
||||
使用单独的服务账号来启动每个控制器。
|
||||
使用单独的服务账户来启动每个控制器。
|
||||
每个内置控制器都有相应的、前缀为 `system:controller:` 的角色。
|
||||
如果控制管理器启动时未设置 `--use-service-account-credentials`,
|
||||
它使用自己的身份凭据来运行所有的控制器,该身份必须被授予所有相关的角色。
|
||||
|
@ -1510,7 +1673,7 @@ Grants a Role or ClusterRole within a specific namespace. Examples:
|
|||
* Within the namespace "acme", grant the permissions in the "view" ClusterRole to the service account in the namespace "acme" named "myapp":
|
||||
-->
|
||||
* 在名字空间 "acme" 中,将名为 `view` 的 ClusterRole 中的权限授予名字空间 "acme"
|
||||
中名为 `myapp` 的服务账号:
|
||||
中名为 `myapp` 的服务账户:
|
||||
|
||||
```shell
|
||||
kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme
|
||||
|
@ -1520,7 +1683,7 @@ Grants a Role or ClusterRole within a specific namespace. Examples:
|
|||
* Within the namespace "acme", grant the permissions in the "view" ClusterRole to a service account in the namespace "myappnamespace" named "myapp":
|
||||
-->
|
||||
* 在名字空间 "acme" 中,将名为 `view` 的 ClusterRole 对象中的权限授予名字空间
|
||||
"myappnamespace" 中名称为 `myapp` 的服务账号:
|
||||
"myappnamespace" 中名称为 `myapp` 的服务账户:
|
||||
|
||||
```shell
|
||||
kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme
|
||||
|
@ -1556,7 +1719,7 @@ Grants a ClusterRole across the entire cluster (all namespaces). Examples:
|
|||
* Across the entire cluster, grant the permissions in the "view" ClusterRole to a service account named "myapp" in the namespace "acme":
|
||||
-->
|
||||
* 在整个集群范围内,将名为 `view` 的 ClusterRole 中定义的权限授予 "acme" 名字空间中
|
||||
名为 "myapp" 的服务账号:
|
||||
名为 "myapp" 的服务账户:
|
||||
|
||||
```shell
|
||||
kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp
|
||||
|
@ -1631,15 +1794,15 @@ This allows you to grant particular roles to particular service accounts as need
|
|||
Fine-grained role bindings provide greater security, but require more effort to administrate.
|
||||
Broader grants can give unnecessary (and potentially escalating) API access to service accounts, but are easier to administrate.
|
||||
-->
|
||||
## 服务账号权限 {#service-account-permissions}
|
||||
## 服务账户权限 {#service-account-permissions}
|
||||
|
||||
默认的 RBAC 策略为控制面组件、节点和控制器授予权限。
|
||||
但是不会对 `kube-system` 名字空间之外的服务账号授予权限。
|
||||
但是不会对 `kube-system` 名字空间之外的服务账户授予权限。
|
||||
(除了授予所有已认证用户的发现权限)
|
||||
|
||||
这使得你可以根据需要向特定服务账号授予特定权限。
|
||||
这使得你可以根据需要向特定服务账户授予特定权限。
|
||||
细粒度的角色绑定可带来更好的安全性,但需要更多精力管理。
|
||||
粗粒度的授权可能导致服务账号被授予不必要的 API 访问权限(甚至导致潜在的权限提升),
|
||||
粗粒度的授权可能导致服务账户被授予不必要的 API 访问权限(甚至导致潜在的权限提升),
|
||||
但更易于管理。
|
||||
|
||||
<!--
|
||||
|
@ -1658,9 +1821,9 @@ In order from most secure to least secure, the approaches are:
|
|||
For example, grant read-only permission within "my-namespace" to the "my-sa" service account:
|
||||
-->
|
||||
这要求应用在其 Pod 规约中指定 `serviceAccountName`,
|
||||
并额外创建服务账号(包括通过 API、应用程序清单、`kubectl create serviceaccount` 等)。
|
||||
并额外创建服务账户(包括通过 API、应用程序清单、`kubectl create serviceaccount` 等)。
|
||||
|
||||
例如,在名字空间 "my-namespace" 中授予服务账号 "my-sa" 只读权限:
|
||||
例如,在名字空间 "my-namespace" 中授予服务账户 "my-sa" 只读权限:
|
||||
|
||||
```shell
|
||||
kubectl create rolebinding my-sa-view \
|
||||
|
@ -1672,7 +1835,7 @@ In order from most secure to least secure, the approaches are:
|
|||
<!--
|
||||
2. Grant a role to the "default" service account in a namespace
|
||||
-->
|
||||
2. 将角色授予某名字空间中的 "default" 服务账号
|
||||
2. 将角色授予某名字空间中的 "default" 服务账户
|
||||
|
||||
<!--
|
||||
If an application does not specify a `serviceAccountName`, it uses the "default" service account.
|
||||
|
@ -1684,14 +1847,14 @@ In order from most secure to least secure, the approaches are:
|
|||
|
||||
For example, grant read-only permission within "my-namespace" to the "default" service account:
|
||||
-->
|
||||
如果某应用没有指定 `serviceAccountName`,那么它将使用 "default" 服务账号。
|
||||
如果某应用没有指定 `serviceAccountName`,那么它将使用 "default" 服务账户。
|
||||
|
||||
{{< note >}}
|
||||
"default" 服务账号所具有的权限会被授予给名字空间中所有未指定
|
||||
"default" 服务账户所具有的权限会被授予给名字空间中所有未指定
|
||||
`serviceAccountName` 的 Pod。
|
||||
{{< /note >}}
|
||||
|
||||
例如,在名字空间 "my-namespace" 中授予服务账号 "default" 只读权限:
|
||||
例如,在名字空间 "my-namespace" 中授予服务账户 "default" 只读权限:
|
||||
|
||||
```shell
|
||||
kubectl create rolebinding default-view \
|
||||
|
@ -1712,9 +1875,9 @@ In order from most secure to least secure, the approaches are:
|
|||
{{< /note >}}
|
||||
-->
|
||||
许多[插件组件](/zh/docs/concepts/cluster-administration/addons/) 在 `kube-system`
|
||||
名字空间以 "default" 服务账号运行。
|
||||
名字空间以 "default" 服务账户运行。
|
||||
要允许这些插件组件以超级用户权限运行,需要将集群的 `cluster-admin` 权限授予
|
||||
`kube-system` 名字空间中的 "default" 服务账号。
|
||||
`kube-system` 名字空间中的 "default" 服务账户。
|
||||
|
||||
{{< note >}}
|
||||
启用这一配置意味着在 `kube-system` 名字空间中包含以超级用户账号来访问 API
|
||||
|
@ -1734,12 +1897,12 @@ In order from most secure to least secure, the approaches are:
|
|||
|
||||
For example, grant read-only permission within "my-namespace" to all service accounts in that namespace:
|
||||
-->
|
||||
3. 将角色授予名字空间中所有服务账号
|
||||
3. 将角色授予名字空间中所有服务账户
|
||||
|
||||
如果你想要名字空间中所有应用都具有某角色,无论它们使用的什么服务账号,
|
||||
可以将角色授予该名字空间的服务账号组。
|
||||
如果你想要名字空间中所有应用都具有某角色,无论它们使用的什么服务账户,
|
||||
可以将角色授予该名字空间的服务账户组。
|
||||
|
||||
例如,在名字空间 "my-namespace" 中的只读权限授予该名字空间中的所有服务账号:
|
||||
例如,在名字空间 "my-namespace" 中的只读权限授予该名字空间中的所有服务账户:
|
||||
|
||||
```shell
|
||||
kubectl create rolebinding serviceaccounts-view \
|
||||
|
@ -1757,9 +1920,9 @@ In order from most secure to least secure, the approaches are:
|
|||
-->
|
||||
4. 在集群范围内为所有服务账户授予一个受限角色(不鼓励)
|
||||
|
||||
如果你不想管理每一个名字空间的权限,你可以向所有的服务账号授予集群范围的角色。
|
||||
如果你不想管理每一个名字空间的权限,你可以向所有的服务账户授予集群范围的角色。
|
||||
|
||||
例如,为集群范围的所有服务账号授予跨所有名字空间的只读权限:
|
||||
例如,为集群范围的所有服务账户授予跨所有名字空间的只读权限:
|
||||
|
||||
|
||||
```shell
|
||||
|
@ -1781,7 +1944,7 @@ In order from most secure to least secure, the approaches are:
|
|||
-->
|
||||
5. 授予超级用户访问权限给集群范围内的所有服务帐户(强烈不鼓励)
|
||||
|
||||
如果你不关心如何区分权限,你可以将超级用户访问权限授予所有服务账号。
|
||||
如果你不关心如何区分权限,你可以将超级用户访问权限授予所有服务账户。
|
||||
|
||||
{{< warning >}}
|
||||
这样做会允许所有应用都对你的集群拥有完全的访问权限,并将允许所有能够读取
|
||||
|
@ -1814,7 +1977,7 @@ Here are two approaches for managing this transition:
|
|||
包括授予所有服务帐户全权访问 API 的能力。
|
||||
|
||||
默认的 RBAC 策略为控制面组件、节点和控制器等授予有限的权限,但不会为
|
||||
`kube-system` 名字空间外的服务账号授权
|
||||
`kube-system` 名字空间外的服务账户授权
|
||||
(除了授予所有认证用户的发现权限之外)。
|
||||
|
||||
这样做虽然安全得多,但可能会干扰期望自动获得 API 权限的现有工作负载。
|
||||
|
@ -1860,7 +2023,7 @@ You can use that information to determine which roles need to be granted to whic
|
|||
Once you have [granted roles to service accounts](#service-account-permissions) and workloads
|
||||
are running with no RBAC denial messages in the server logs, you can remove the ABAC authorizer.
|
||||
-->
|
||||
一旦你[将角色授予服务账号](#service-account-permissions) ,工作负载运行时
|
||||
一旦你[将角色授予服务账户](#service-account-permissions) ,工作负载运行时
|
||||
在服务器日志中没有出现 RBAC 拒绝消息,就可以删除 ABAC 鉴权器。
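A quick way to double-check the resulting permissions before (or after) removing the ABAC authorizer is `kubectl auth can-i`; the namespace and service account names below are illustrative:

```shell
# Test one specific verb/resource for a workload's service account
kubectl auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:my-sa

# Or list everything that service account is currently allowed to do
kubectl auth can-i --list -n my-namespace --as=system:serviceaccount:my-namespace:my-sa
```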
|
||||
|
||||
<!--
|
||||
|
|
File diff suppressed because it is too large
|
@ -4,7 +4,7 @@ id: kube-scheduler
|
|||
date: 2018-04-12
|
||||
full_link: /docs/reference/generated/kube-scheduler/
|
||||
short_description: >
|
||||
主节点上的组件,该组件监视那些新创建的未指定运行节点的 Pod,并选择节点让 Pod 在上面运行。
|
||||
控制平面组件,负责监视新创建的、未指定运行节点的 Pod,选择节点让 Pod 在上面运行。
|
||||
|
||||
aka:
|
||||
tags:
|
||||
|
@ -19,7 +19,7 @@ id: kube-scheduler
|
|||
date: 2018-04-12
|
||||
full_link: /docs/reference/generated/kube-scheduler/
|
||||
short_description: >
|
||||
Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.
|
||||
Control plane component that watches for newly created pods with no assigned node, and selects a node for them to run on.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
|
@ -28,10 +28,12 @@ tags:
|
|||
-->
|
||||
|
||||
<!--
|
||||
Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.
|
||||
-->
|
||||
Control plane component that watches for newly created
|
||||
{{< glossary_tooltip term_id="pod" text="Pods" >}} with no assigned
|
||||
{{< glossary_tooltip term_id="node" text="node">}}, and selects a node for them
|
||||
to run on.
-->
|
||||
|
||||
主节点上的组件,该组件监视那些新创建的未指定运行节点的 Pod,并选择节点让 Pod 在上面运行。
|
||||
控制平面组件,负责监视新创建的、未指定运行{{< glossary_tooltip term_id="node" text="节点(node)">}}的 {{< glossary_tooltip term_id="pod" text="Pods" >}},选择节点让 Pod 在上面运行。
|
||||
|
||||
<!--more-->
|
||||
|
||||
|
|
|
@ -28,13 +28,14 @@ tags:
|
|||
<!--
|
||||
An agent that runs on each {{< glossary_tooltip text="node" term_id="node" >}} in the cluster. It makes sure that {{< glossary_tooltip text="containers" term_id="container" >}} are running in a {{< glossary_tooltip text="Pod" term_id="pod" >}}.
|
||||
-->
|
||||
一个在集群中每个节点上运行的代理。
|
||||
它保证容器都运行在 Pod 中。
|
||||
一个在集群中每个{{< glossary_tooltip text="节点(node)" term_id="node" >}}上运行的代理。
|
||||
它保证{{< glossary_tooltip text="容器(containers)" term_id="container" >}}都
|
||||
运行在 {{< glossary_tooltip text="Pod" term_id="pod" >}} 中。
|
||||
|
||||
<!--more-->
|
||||
|
||||
<!--
|
||||
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
|
||||
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
|
||||
-->
|
||||
kubelet 接收一组通过各类机制提供给它的 PodSpecs,确保这些 PodSpecs
|
||||
中描述的容器处于运行状态且健康。
|
||||
|
|
|
@ -17,7 +17,7 @@ tags:
|
|||
title: Kubernetes API
|
||||
id: kubernetes-api
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/kubernetes-api/
|
||||
full_link: /docs/concepts/overview/kubernetes-api/
|
||||
short_description: >
|
||||
The application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster.
|
||||
|
||||
|
@ -40,7 +40,7 @@ Kubernetes API 是通过 RESTful 接口提供 Kubernetes 功能服务并负责
|
|||
Kubernetes resources and "records of intent" are all stored as API objects, and modified via RESTful calls to the API. The API allows configuration to be managed in a declarative way. Users can interact with the Kubernetes API directly, or via tools like `kubectl`. The core Kubernetes API is flexible and can also be extended to support custom resources.
|
||||
-->
|
||||
|
||||
Kubernetes 资源和"意向记录"都是作为 API 对象储存的,并可以通过对 API 的 RESTful 调用进行修改。
|
||||
Kubernetes 资源和"意向记录"都是作为 API 对象储存的,并可以通过调用 RESTful 风格的 API 进行修改。
|
||||
API 允许以声明方式管理配置。
|
||||
用户可以直接和 Kubernetes API 交互,也可以通过 `kubectl` 这样的工具进行交互。
|
||||
核心的 Kubernetes API 是很灵活的,可以扩展以支持定制资源。
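For example, the same object can be read either through `kubectl` or with a raw RESTful call routed through the API server (a sketch, using the built-in `default` namespace):

```shell
# Via kubectl
kubectl get namespace default -o yaml

# Via a direct RESTful call to the Kubernetes API
kubectl get --raw /api/v1/namespaces/default
```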
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: 标签
|
||||
title: 标签(Label)
|
||||
id: label
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/working-with-objects/labels/
|
||||
|
@ -15,7 +15,7 @@ tags:
|
|||
title: Label
|
||||
id: label
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/working-with-objects/labels/
|
||||
full_link: /docs/concepts/overview/working-with-objects/labels/
|
||||
short_description: >
|
||||
Tags objects with identifying attributes that are meaningful and relevant to users.
|
||||
|
||||
|
|
|
@ -41,11 +41,11 @@ related:
|
|||
<!--
|
||||
Provides constraints to limit resource consumption per {{< glossary_tooltip text="Containers" term_id="container" >}} or {{< glossary_tooltip text="Pods" term_id="pod" >}} in a namespace.
|
||||
-->
|
||||
提供约束来限制命名空间中每个 {{< glossary_tooltip text="容器" term_id="container" >}} 或 {{< glossary_tooltip text="Pod" term_id="pod" >}} 的资源消耗。
|
||||
提供约束来限制命名空间中每个 {{< glossary_tooltip text="容器(Containers)" term_id="container" >}} 或 {{< glossary_tooltip text="Pod" term_id="pod" >}} 的资源消耗。
|
||||
|
||||
<!--more-->
|
||||
<!--
|
||||
LimitRange limits the quantity of objects that can be created by type,
|
||||
as well as the amount of compute resources that may be requested/consumed by individual {{< glossary_tooltip text="Containers" term_id="container" >}} or {{< glossary_tooltip text="Pods" term_id="pod" >}} in a namespace.
|
||||
-->
|
||||
LimitRange 按照类型来限制命名空间中对象能够创建的数量,以及单个 {{< glossary_tooltip text="容器" term_id="container" >}} 或 {{< glossary_tooltip text="Pod" term_id="pod" >}} 可以请求/使用的计算资源量。
|
||||
LimitRange 按照类型来限制命名空间中对象能够创建的数量,以及单个 {{< glossary_tooltip text="容器(Containers)" term_id="container" >}} 或 {{< glossary_tooltip text="Pod" term_id="pod" >}} 可以请求/使用的计算资源量。
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: 日志
|
||||
title: 日志(Logging)
|
||||
id: logging
|
||||
date: 2019-04-04
|
||||
full_link: /zh/docs/concepts/cluster-administration/logging/
|
||||
|
@ -11,7 +11,7 @@ tags:
|
|||
- architecture
|
||||
- fundamental
|
||||
---
|
||||
日志是 {{< glossary_tooltip text="集群" term_id="cluster" >}} 或应用程序记录的事件列表。
|
||||
日志是 {{< glossary_tooltip text="集群(cluster)" term_id="cluster" >}} 或应用程序记录的事件列表。
|
||||
|
||||
<!--
|
||||
---
|
||||
|
|
|
@ -32,11 +32,15 @@ tags:
|
|||
<!--more-->
|
||||
|
||||
<!--
|
||||
Some examples of Managed Services are AWS EC2, Azure SQL Database, and GCP Pub/Sub, but they can be any software offering that can be used by an application. [Service Catalog](/docs/concepts/service-catalog/) provides a way to list, provision, and bind with Managed Services offered by {{< glossary_tooltip text="Service Brokers" term_id="service-broker" >}}.
|
||||
Some examples of Managed Services are AWS EC2, Azure SQL Database, and
|
||||
GCP Pub/Sub, but they can be any software offering that can be used by an application.
|
||||
[Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/) provides a way to
|
||||
list, provision, and bind with Managed Services offered by
|
||||
{{< glossary_tooltip text="Service Brokers" term_id="service-broker" >}}.
|
||||
-->
|
||||
托管服务的一些例子有 AWS EC2、Azure SQL 数据库和 GCP Pub/Sub 等,
|
||||
不过它们也可以是可以被某应用使用的任何软件交付件。
|
||||
[服务目录](/zh/docs/concepts/extend-kubernetes/service-catalog/)
|
||||
提供了一种方法用来列举、制备和绑定到
|
||||
{{< glossary_tooltip text="服务代理商" term_id="service-broker" >}}
|
||||
{{< glossary_tooltip text="服务代理商(Service Brokers)" term_id="service-broker" >}}
|
||||
所提供的托管服务。
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: 清单
|
||||
title: 清单(Manifest)
|
||||
id: manifest
|
||||
date: 2019-06-28
|
||||
short_description: >
|
||||
|
|
|
@ -31,4 +31,4 @@ tags:
|
|||
<!--
|
||||
The term is still being used by some provisioning tools, such as {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}}, and managed services, to {{< glossary_tooltip text="label" term_id="label" >}} {{< glossary_tooltip text="nodes" term_id="node" >}} with `kubernetes.io/role` and control placement of {{< glossary_tooltip text="control plane" term_id="control-plane" >}} {{< glossary_tooltip text="pods" term_id="pod" >}}.
|
||||
-->
|
||||
该术语仍被一些配置工具使用,如 {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}} 以及托管的服务,为 {{< glossary_tooltip text="节点" term_id="node" >}} 添加 `kubernetes.io/role` 的 {{< glossary_tooltip text="标签" term_id="label" >}},以及管理控制平面 Pod 的调度。
|
||||
该术语仍被一些配置工具使用,如 {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}} 以及托管的服务,为 {{< glossary_tooltip text="节点(nodes)" term_id="node" >}} 添加 `kubernetes.io/role` 的 {{< glossary_tooltip text="标签(label)" term_id="label" >}},以及管理控制平面 Pod 的调度。
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: 成员
|
||||
title: 成员(Member)
|
||||
id: member
|
||||
date: 2018-04-12
|
||||
full_link:
|
||||
|
@ -30,7 +30,7 @@ tags:
|
|||
A continuously active {{< glossary_tooltip text="contributor" term_id="contributor" >}} in the K8s community.
|
||||
-->
|
||||
|
||||
K8s 社区中持续活跃的贡献者。
|
||||
K8s 社区中持续活跃的{{< glossary_tooltip text="贡献者(contributor)" term_id="contributor" >}}。
|
||||
|
||||
<!--more-->
|
||||
|
||||
|
@ -38,5 +38,5 @@ tags:
|
|||
Members can have issues and PRs assigned to them and participate in {{< glossary_tooltip text="special interest groups (SIGs)" term_id="sig" >}} through GitHub teams. Pre-submit tests are automatically run for members' PRs. A member is expected to remain an active contributor to the community.
|
||||
-->
|
||||
|
||||
可以将问题单(issue)和 PR 指派给成员,成员也可以通过 GitHub 小组加入 {{< glossary_tooltip text="特别兴趣小组 (SIGs)" term_id="sig" >}}。针对成员所提交的 PR,系统自动运行提交前测试。成员应该是持续活跃的社区贡献者。
|
||||
可以将问题单(issue)和 PR 指派给成员(Member),成员(Member)也可以通过 GitHub 小组加入 {{< glossary_tooltip text="特别兴趣小组 (SIGs)" term_id="sig" >}}。针对成员(Member)所提交的 PR,系统自动运行提交前测试。成员(Member)应该是持续活跃的社区贡献者。
|
||||
|
||||
|
|
|
@ -38,6 +38,10 @@ Minikube 是用来在本地运行 Kubernetes 的一种工具。
|
|||
|
||||
<!--
|
||||
Minikube runs a single-node cluster inside a VM on your computer.
|
||||
You can use Minikube to
|
||||
[try Kubernetes in a learning environment](/docs/setup/learning-environment/).
|
||||
-->
|
||||
|
||||
Minikube 在用户计算机上的一个虚拟机内运行单节点 Kubernetes 集群。
|
||||
你可以使用 Minikube
|
||||
[在学习环境中尝试 Kubernetes](/zh/docs/setup/learning-environment/).
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
---
|
||||
title: 静态 Pod
|
||||
title: 镜像 Pod(Mirror Pod)
|
||||
id: 静态-pod
|
||||
date: 2019-08-06
|
||||
short_description: >
|
||||
API 服务器中的一个对象,用于跟踪 kubelet 上的静态容器。
|
||||
API 服务器中的一个对象,用于跟踪 kubelet 上的静态 pod。
|
||||
|
||||
aka:
|
||||
tags:
|
||||
|
@ -27,7 +27,8 @@ tags:
|
|||
A {{< glossary_tooltip text="pod" term_id="pod" >}} object that a kubelet uses
|
||||
to represent a {{< glossary_tooltip text="static pod" term_id="static-pod" >}}
|
||||
-->
|
||||
kubelet 使用一个对象 {{< glossary_tooltip text="pod" term_id="pod" >}} 来代表 {{< glossary_tooltip text="static pod" term_id="static-pod" >}}
|
||||
镜像 Pod(Mirror Pod)是被 kubelet 用来代表{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}} 的
|
||||
{{< glossary_tooltip text="pod" term_id="pod" >}} 对象。
|
||||
|
||||
<!--more-->
|
||||
<!--更多-->
|
||||
|
@ -43,4 +44,4 @@ will be visible on the API server, but cannot be controlled from there.
|
|||
它会自动地尝试在 Kubernetes API 服务器上为它创建 Pod 对象。
|
||||
这意味着 pod 在 API 服务器上将是可见的,但不能在其上进行控制。
|
||||
|
||||
(例如,删除静态 pod 将不会停止 kubelet 守护程序的运行)。
|
||||
(例如,删除镜像 Pod 也不会阻止 kubelet 守护进程继续运行它)。
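A small illustration of that behaviour (the Pod name is a placeholder; mirror Pods are usually named `<static-pod-name>-<node-name>`):

```shell
# Deleting the mirror Pod does not stop the static Pod;
# the kubelet recreates the mirror object shortly afterwards
kubectl delete pod static-web-mynode
kubectl get pod static-web-mynode
```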
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: 名称
|
||||
title: 名称(Name)
|
||||
id: name
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/working-with-objects/names/
|
||||
|
@ -16,7 +16,7 @@ tags:
|
|||
title: Name
|
||||
id: name
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/working-with-objects/names/
|
||||
full_link: /docs/concepts/overview/working-with-objects/names/
|
||||
short_description: >
|
||||
A client-provided string that refers to an object in a resource URL, such as `/api/v1/pods/some-name`.
|
||||
|
||||
|
@ -38,4 +38,4 @@ tags:
|
|||
Only one object of a given kind can have a given name at a time. However, if you delete the object, you can make a new object with the same name.
|
||||
-->
|
||||
|
||||
一次只能有一个给定类型的对象具有给定的名称。但是,如果删除对象,则可以创建同名的新对象。
|
||||
某一时刻,只能有一个给定类型的对象具有给定的名称。但是,如果删除该对象,则可以创建同名的新对象。
|
||||
|
|
|
@ -1,10 +1,10 @@
|
|||
---
|
||||
title: 命名空间
|
||||
title: 名字空间(Namespace)
|
||||
id: namespace
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/working-with-objects/namespaces/
|
||||
short_description: >
|
||||
命名空间是 Kubernetes 为了在同一物理集群上支持多个虚拟集群而使用的一种抽象。
|
||||
名字空间是 Kubernetes 为了在同一物理集群上支持多个虚拟集群而使用的一种抽象。
|
||||
|
||||
aka:
|
||||
tags:
|
||||
|
@ -16,7 +16,7 @@ tags:
|
|||
title: Namespace
|
||||
id: namespace
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/overview/working-with-objects/namespaces/
|
||||
full_link: /docs/concepts/overview/working-with-objects/namespaces/
|
||||
short_description: >
|
||||
An abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster.
|
||||
|
||||
|
@ -30,7 +30,7 @@ tags:
|
|||
An abstraction used by Kubernetes to support multiple virtual clusters on the same physical {{< glossary_tooltip text="cluster" term_id="cluster" >}}.
|
||||
-->
|
||||
|
||||
命名空间是 Kubernetes 为了在同一物理集群上支持多个虚拟集群而使用的一种抽象。
|
||||
名字空间是 Kubernetes 为了在同一物理集群上支持多个虚拟集群而使用的一种抽象。
|
||||
|
||||
<!--more-->
|
||||
|
||||
|
@ -38,4 +38,4 @@ tags:
|
|||
Namespaces are used to organize objects in a cluster and provide a way to divide cluster resources. Names of resources need to be unique within a namespace, but not across namespaces.
|
||||
-->
|
||||
|
||||
命名空间用来组织集群中对象,并为集群资源划分提供了一种方法。同一命名空间内的资源名称必须唯一,但跨命名空间时不作要求。
|
||||
名字空间用来组织集群中对象,并为集群资源划分提供了一种方法。同一名字空间内的资源名称必须唯一,但跨名字空间时不作要求。
|
||||
|
|
|
@ -18,7 +18,7 @@ tags:
|
|||
title: Network Policy
|
||||
id: network-policy
|
||||
date: 2018-04-12
|
||||
full_link: /zh/docs/concepts/services-networking/network-policies/
|
||||
full_link: /docs/concepts/services-networking/network-policies/
|
||||
short_description: >
|
||||
A specification of how groups of Pods are allowed to communicate with each other and with other network endpoints.
|
||||
|
||||
|
|