Merge master into dev-1.22 to keep in sync.

pull/28569/head
Victor Palade 2021-06-22 19:52:26 +02:00
commit 1064c8dcff
301 changed files with 6243 additions and 1967 deletions

View File

@ -6,8 +6,9 @@ NETLIFY_FUNC = $(NODE_BIN)/netlify-lambda
# but this can be overridden when calling make, e.g.
# CONTAINER_ENGINE=podman make container-image
CONTAINER_ENGINE ?= docker
IMAGE_REGISTRY ?= gcr.io/k8s-staging-sig-docs
IMAGE_VERSION=$(shell scripts/hash-files.sh Dockerfile Makefile | cut -c 1-12)
CONTAINER_IMAGE = kubernetes-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION)
CONTAINER_IMAGE = $(IMAGE_REGISTRY)/k8s-website-hugo:v$(HUGO_VERSION)-$(IMAGE_VERSION)
CONTAINER_RUN = $(CONTAINER_ENGINE) run --rm --interactive --tty --volume $(CURDIR):/src
CCRED=\033[0;31m

24
cloudbuild.yaml Normal file
View File

@ -0,0 +1,24 @@
# See https://cloud.google.com/cloud-build/docs/build-config
# this must be specified in seconds. If omitted, defaults to 600s (10 mins)
timeout: 1200s
# this prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
# or any new substitutions added in the future.
options:
  substitution_option: ALLOW_LOOSE
steps:
  - name: "gcr.io/k8s-testimages/gcb-docker-gcloud:v20190906-745fed4"
    entrypoint: make
    env:
      - DOCKER_CLI_EXPERIMENTAL=enabled
      - TAG=$_GIT_TAG
      - BASE_REF=$_PULL_BASE_REF
    args:
      - container-image
substitutions:
  # _GIT_TAG will be filled with a git-based tag for the image, of the form vYYYYMMDD-hash, and
  # can be used as a substitution
  _GIT_TAG: "12345"
  # _PULL_BASE_REF will contain the ref that was pushed to to trigger this build -
  # a branch like 'master' or 'release-0.2', or a tag like 'v0.2'.
  _PULL_BASE_REF: "master"

View File

@ -0,0 +1,467 @@
---
layout: blog
title: "Writing a Controller for Pod Labels"
date: 2021-06-21
slug: writing-a-controller-for-pod-labels
---
**Authors**: Arthur Busser (Padok)
[Operators][what-is-an-operator] are proving to be an excellent solution to
running stateful distributed applications in Kubernetes. Open source tools like
the [Operator SDK][operator-sdk] provide ways to build reliable and maintainable
operators, making it easier to extend Kubernetes and implement custom
scheduling.
Kubernetes operators run complex software inside your cluster. The open source
community has already built [many operators][operatorhub] for distributed
applications like Prometheus, Elasticsearch, or Argo CD. Even outside of
open source, operators can help to bring new functionality to your Kubernetes
cluster.
An operator is a set of [custom resources][custom-resource-definitions] and a
set of [controllers][controllers]. A controller watches for changes to specific
resources in the Kubernetes API and reacts by creating, updating, or deleting
resources.
The Operator SDK is best suited for building fully-featured operators.
Nonetheless, you can use it to write a single controller. This post will walk
you through writing a Kubernetes controller in Go that will add a `pod-name`
label to pods that have a specific annotation.
## Why do we need a controller for this?
I recently worked on a project where we needed to create a Service that routed
traffic to a specific Pod in a ReplicaSet. The problem is that a Service can
only select pods by label, and all pods in a ReplicaSet have the same labels.
There are two ways to solve this problem:
1. Create a Service without a selector and manage the Endpoints or
EndpointSlices for that Service directly. We would need to write a custom
controller to insert our Pod's IP address into those resources.
2. Add a label to the Pod with a unique value. We could then use this label in
our Service's selector. Again, we would need to write a custom controller to
add this label.
A controller is a control loop that tracks one or more Kubernetes resource
types. The controller from option 2 above only needs to track pods, which
makes it simpler to implement. This is the option we are going to walk through
by writing a Kubernetes controller that adds a `pod-name` label to our pods.
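To make the goal concrete, here is a rough sketch, written with client-go types, of a Service whose selector targets a single Pod through such a label. The label key, names, and port are illustrative only and are not part of the walkthrough below:
```go
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serviceForPod builds a Service that selects exactly one Pod through a
// hypothetical pod-name label. The label key and port are illustrative.
func serviceForPod(namespace, podName string) *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{
            Name:      podName,
            Namespace: namespace,
        },
        Spec: corev1.ServiceSpec{
            // Only the Pod carrying this exact label value receives traffic.
            Selector: map[string]string{"pod-name": podName},
            Ports:    []corev1.ServicePort{{Port: 80}},
        },
    }
}

func main() {
    svc := serviceForPod("default", "my-app-0")
    fmt.Println(svc.Spec.Selector)
}
```
A plain YAML manifest with the same selector would work just as well; the point is that the selector needs a label whose value is unique to one Pod.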
StatefulSets [do this natively][statefulset-pod-name-label] by adding a
`pod-name` label to each Pod in the set. But what if we don't want to or can't
use StatefulSets?
We rarely create pods directly; most often, we use a Deployment, ReplicaSet, or
another high-level resource. We can specify labels to add to each Pod in the
PodSpec, but not with dynamic values, so there is no way to replicate a StatefulSet's
`pod-name` label.
We tried using a [mutating admission webhook][mutating-admission-webhook]. When
anyone creates a Pod, the webhook patches the Pod with a label containing the
Pod's name. Disappointingly, this does not work: not all pods have a name before
being created. For instance, when the ReplicaSet controller creates a Pod, it
sends a `generateName` (a name prefix) to the Kubernetes API server and not a `name`. The API
server generates a unique name before persisting the new Pod to etcd, but only
after calling our admission webhook. So in most cases, we can't know a Pod's
name with a mutating webhook.
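As a rough illustration of the problem, with made-up names, this is what a mutating webhook would see when the ReplicaSet controller submits a Pod:
```go
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Roughly what a ReplicaSet controller submits: no Name, only a prefix.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            GenerateName: "my-app-7d4b9c8f6d-", // the API server appends a random suffix later
            Namespace:    "default",
        },
    }

    // This is what an admission webhook would see at admission time:
    fmt.Printf("name=%q generateName=%q\n", pod.Name, pod.GenerateName)
    // Output: name="" generateName="my-app-7d4b9c8f6d-"
}
```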
Once a Pod exists in the Kubernetes API, it is mostly immutable, but we can
still add a label. We can even do so from the command line:
```bash
kubectl label pod my-pod my-label-key=my-label-value
```
We need to watch for changes to any pods in the Kubernetes API and add the label
we want. Rather than do this manually, we are going to write a controller that
does it for us.
## Bootstrapping a controller with the Operator SDK
A controller is a reconciliation loop that reads the desired state of a resource
from the Kubernetes API and takes action to bring the cluster's actual state
closer to the desired state.
In order to write this controller as quickly as possible, we are going to use
the Operator SDK. If you don't have it installed, follow the
[official documentation][operator-sdk-installation].
```terminal
$ operator-sdk version
operator-sdk version: "v1.4.2", commit: "4b083393be65589358b3e0416573df04f4ae8d9b", kubernetes version: "v1.19.4", go version: "go1.15.8", GOOS: "darwin", GOARCH: "amd64"
```
Let's create a new directory to write our controller in:
```bash
mkdir label-operator && cd label-operator
```
Next, let's initialize a new operator, to which we will add a single controller.
To do this, you will need to specify a domain and a repository. The domain
serves as a prefix for the group your custom Kubernetes resources will belong
to. Because we are not going to be defining custom resources, the domain does
not matter. The repository is going to be the name of the Go module we are going
to write. By convention, this is the repository where you will be storing your
code.
As an example, here is the command I ran:
```bash
# Feel free to change the domain and repo values.
operator-sdk init --domain=padok.fr --repo=github.com/busser/label-operator
```
Next, we need to create a new controller. This controller will handle pods and
not a custom resource, so no need to generate the resource code. Let's run this
command to scaffold the code we need:
```bash
operator-sdk create api --group=core --version=v1 --kind=Pod --controller=true --resource=false
```
We now have a new file: `controllers/pod_controller.go`. This file contains a
`PodReconciler` type with two methods that we need to implement. The first is
`Reconcile`, and it looks like this for now:
```go
func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = r.Log.WithValues("pod", req.NamespacedName)

    // your logic here

    return ctrl.Result{}, nil
}
```
The `Reconcile` method is called whenever a Pod is created, updated, or deleted.
The name and namespace of the Pod are in the `ctrl.Request` the method receives
as a parameter.
The second method is `SetupWithManager` and for now it looks like this:
```go
func (r *PodReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        // Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
        // For().
        Complete(r)
}
```
The `SetupWithManager` method is called when the operator starts. It serves to
tell the operator framework what types our `PodReconciler` needs to watch. To
use the same `Pod` type used by Kubernetes internally, we need to import some of
its code. All of the Kubernetes source code is open source, so you can import
any part you like in your own Go code. You can find a complete list of available
packages in the Kubernetes source code or [here on pkg.go.dev][pkg-go-dev]. To
use pods, we need the `k8s.io/api/core/v1` package.
```go
package controllers

import (
    // other imports...
    corev1 "k8s.io/api/core/v1"
    // other imports...
)
```
Let's use the `Pod` type in `SetupWithManager` to tell the operator framework we
want to watch pods:
```go
func (r *PodReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&corev1.Pod{}).
        Complete(r)
}
```
Before moving on, we should set the RBAC permissions our controller needs. Above
the `Reconcile` method, we have some default permissions:
```go
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=pods/finalizers,verbs=update
```
We don't need all of those. Our controller will never interact with a Pod's
status or its finalizers. It only needs to read and update pods. Let's remove the
unnecessary permissions and keep only what we need:
```go
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch;update;patch
```
We are now ready to write our controller's reconciliation logic.
## Implementing reconciliation
Here is what we want our `Reconcile` method to do:
1. Use the Pod's name and namespace from the `ctrl.Request` to fetch the Pod
from the Kubernetes API.
2. If the Pod has an `add-pod-name-label` annotation, add a `pod-name` label to
the Pod; if the annotation is missing, don't add the label.
3. Update the Pod in the Kubernetes API to persist the changes made.
Let's define some constants for the annotation and label:
```go
const (
    addPodNameLabelAnnotation = "padok.fr/add-pod-name-label"
    podNameLabel              = "padok.fr/pod-name"
)
```
The first step in our reconciliation function is to fetch the Pod we are working
on from the Kubernetes API:
```go
// Reconcile handles a reconciliation request for a Pod.
// If the Pod has the addPodNameLabelAnnotation annotation, then Reconcile
// will make sure the podNameLabel label is present with the correct value.
// If the annotation is absent, then Reconcile will make sure the label is too.
func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("pod", req.NamespacedName)

    /*
        Step 0: Fetch the Pod from the Kubernetes API.
    */

    var pod corev1.Pod
    if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
        log.Error(err, "unable to fetch Pod")
        return ctrl.Result{}, err
    }

    return ctrl.Result{}, nil
}
```
Our `Reconcile` method will be called when a Pod is created, updated, or
deleted. In the deletion case, our call to `r.Get` will return a specific error.
Let's import the package that defines this error:
```go
package controllers

import (
    // other imports...
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    // other imports...
)
```
We can now handle this specific error and — since our controller does not care
about deleted pods — explicitly ignore it:
```go
    /*
        Step 0: Fetch the Pod from the Kubernetes API.
    */

    var pod corev1.Pod
    if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
        if apierrors.IsNotFound(err) {
            // we'll ignore not-found errors, since we can get them on deleted requests.
            return ctrl.Result{}, nil
        }
        log.Error(err, "unable to fetch Pod")
        return ctrl.Result{}, err
    }
```
Next, let's edit our Pod so that our dynamic label is present if and only if our
annotation is present:
```go
    /*
        Step 1: Add or remove the label.
    */

    labelShouldBePresent := pod.Annotations[addPodNameLabelAnnotation] == "true"
    labelIsPresent := pod.Labels[podNameLabel] == pod.Name

    if labelShouldBePresent == labelIsPresent {
        // The desired state and actual state of the Pod are the same.
        // No further action is required by the operator at this moment.
        log.Info("no update required")
        return ctrl.Result{}, nil
    }

    if labelShouldBePresent {
        // If the label should be set but is not, set it.
        if pod.Labels == nil {
            pod.Labels = make(map[string]string)
        }
        pod.Labels[podNameLabel] = pod.Name
        log.Info("adding label")
    } else {
        // If the label should not be set but is, remove it.
        delete(pod.Labels, podNameLabel)
        log.Info("removing label")
    }
```
Finally, let's push our updated Pod to the Kubernetes API:
```go
    /*
        Step 2: Update the Pod in the Kubernetes API.
    */

    if err := r.Update(ctx, &pod); err != nil {
        log.Error(err, "unable to update Pod")
        return ctrl.Result{}, err
    }
```
When writing our updated Pod to the Kubernetes API, there is a risk that the Pod
has been updated or deleted since we first read it. When writing a Kubernetes
controller, we should keep in mind that we are not the only actors in the
cluster. When such a conflict happens, the best thing to do is to start the
reconciliation from scratch by requeuing the event. Let's do exactly that:
```go
    /*
        Step 2: Update the Pod in the Kubernetes API.
    */

    if err := r.Update(ctx, &pod); err != nil {
        if apierrors.IsConflict(err) {
            // The Pod has been updated since we read it.
            // Requeue the Pod to try to reconcile again.
            return ctrl.Result{Requeue: true}, nil
        }
        if apierrors.IsNotFound(err) {
            // The Pod has been deleted since we read it.
            // Requeue the Pod to try to reconcile again.
            return ctrl.Result{Requeue: true}, nil
        }
        log.Error(err, "unable to update Pod")
        return ctrl.Result{}, err
    }
```
Let's remember to return successfully at the end of the method:
```go
    return ctrl.Result{}, nil
}
```
And that's it! We are now ready to run the controller on our cluster.
## Run the controller on your cluster
To run our controller on your cluster, we need to run the operator. For that,
all you will need is `kubectl`. If you don't have a Kubernetes cluster at hand,
I recommend you start one locally with [KinD (Kubernetes in Docker)][kind].
All it takes to run the operator from your machine is this command:
```bash
make run
```
After a few seconds, you should see the operator's logs. Notice that our
controller's `Reconcile` method was called for all pods already running in the
cluster.
Let's keep the operator running and, in another terminal, create a new Pod:
```bash
kubectl run --image=nginx my-nginx
```
The operator should quickly print some logs, indicating that it reacted to the
Pod's creation and subsequent changes in status:
```text
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
INFO controllers.Pod no update required {"pod": "default/my-nginx"}
```
Let's check the Pod's labels:
```terminal
$ kubectl get pod my-nginx --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
my-nginx   1/1     Running   0          11m   run=my-nginx
```
Let's add an annotation to the Pod so that our controller knows to add our
dynamic label to it:
```bash
kubectl annotate pod my-nginx padok.fr/add-pod-name-label=true
```
Notice that the controller immediately reacted and produced a new line in its
logs:
```text
INFO controllers.Pod adding label {"pod": "default/my-nginx"}
```
```terminal
$ kubectl get pod my-nginx --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
my-nginx   1/1     Running   0          13m   padok.fr/pod-name=my-nginx,run=my-nginx
```
Bravo! You just successfully wrote a Kubernetes controller capable of adding
labels with dynamic values to resources in your cluster.
Controllers and operators, both big and small, can be an important part of your
Kubernetes journey. Writing operators is easier now than it has ever been. The
possibilities are endless.
## What next?
If you want to go further, I recommend starting by deploying your controller or
operator inside a cluster. The `Makefile` generated by the Operator SDK will do
most of the work.
When deploying an operator to production, it is always a good idea to implement
robust testing. The first step in that direction is to write unit tests.
[This documentation][operator-sdk-testing] will guide you in writing tests for
your operator. I wrote tests for the operator we just wrote; you can find all of
my code in [this GitHub repository][github-repo].
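As a loose illustration, and not the tests from that repository, here is a minimal sketch of a unit test built on controller-runtime's fake client. It assumes the scaffolded `PodReconciler` embeds `client.Client` and exposes the `Log` field we used earlier:
```go
package controllers

import (
    "context"
    "testing"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client/fake"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func TestReconcileAddsPodNameLabel(t *testing.T) {
    // A Pod that carries the annotation our controller reacts to.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        "my-nginx",
            Namespace:   "default",
            Annotations: map[string]string{addPodNameLabelAnnotation: "true"},
        },
    }

    // The fake client serves reads and writes from memory instead of a real API server.
    k8sClient := fake.NewClientBuilder().WithObjects(pod).Build()
    reconciler := &PodReconciler{Client: k8sClient, Log: zap.New()}

    req := ctrl.Request{NamespacedName: types.NamespacedName{Namespace: "default", Name: "my-nginx"}}
    if _, err := reconciler.Reconcile(context.Background(), req); err != nil {
        t.Fatalf("Reconcile failed: %v", err)
    }

    // The reconciled Pod should now carry the pod-name label.
    var updated corev1.Pod
    if err := k8sClient.Get(context.Background(), req.NamespacedName, &updated); err != nil {
        t.Fatalf("fetching Pod: %v", err)
    }
    if got := updated.Labels[podNameLabel]; got != "my-nginx" {
        t.Errorf("expected label %q to be %q, got %q", podNameLabel, "my-nginx", got)
    }
}
```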
## How to learn more?
The [Operator SDK documentation][operator-sdk-docs] goes into detail on how you
can go further and implement more complex operators.
When modeling a more complex use-case, a single controller acting on built-in
Kubernetes types may not be enough. You may need to build a more complex
operator with [Custom Resource Definitions (CRDs)][custom-resource-definitions]
and multiple controllers. The Operator SDK is a great tool to help you do this.
If you want to discuss building an operator, join the [#kubernetes-operator][slack-channel]
channel in the [Kubernetes Slack workspace][slack-workspace]!
<!-- Links -->
[controllers]: https://kubernetes.io/docs/concepts/architecture/controller/
[custom-resource-definitions]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[kind]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation
[github-repo]: https://github.com/busser/label-operator
[mutating-admission-webhook]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook
[operator-sdk]: https://sdk.operatorframework.io/
[operator-sdk-docs]: https://sdk.operatorframework.io/docs/
[operator-sdk-installation]: https://sdk.operatorframework.io/docs/installation/
[operator-sdk-testing]: https://sdk.operatorframework.io/docs/building-operators/golang/testing/
[operatorhub]: https://operatorhub.io/
[pkg-go-dev]: https://pkg.go.dev/k8s.io/api
[slack-channel]: https://kubernetes.slack.com/messages/kubernetes-operators
[slack-workspace]: https://slack.k8s.io/
[statefulset-pod-name-label]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label
[what-is-an-operator]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/

0
content/en/docs/concepts/architecture/_index.md Executable file → Normal file
View File

View File

@ -159,11 +159,12 @@ You can run your own controller as a set of Pods,
or externally to Kubernetes. What fits best will depend on what that particular
controller does.
## {{% heading "whatsnext" %}}
* Read about the [Kubernetes control plane](/docs/concepts/overview/components/#control-plane-components)
* Discover some of the basic [Kubernetes objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/)
* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* If you want to write your own controller, see [Extension Patterns](/docs/concepts/extend-kubernetes/extend-cluster/#extension-patterns) in Extending Kubernetes.
* If you want to write your own controller, see
[Extension Patterns](/docs/concepts/extend-kubernetes/#extension-patterns)
in Extending Kubernetes.

View File

@ -283,7 +283,7 @@ The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
the same time:
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
(default 0.55), then the eviction rate is reduced.
- If the cluster is small (i.e. has less than or equal to
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
@ -377,6 +377,21 @@ For example, if `ShutdownGracePeriod=30s`, and
for gracefully terminating normal pods, and the last 10 seconds would be
reserved for terminating [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical).
{{< note >}}
When pods are evicted during a graceful node shutdown, they are marked as failed.
Running `kubectl get pods` shows the status of the evicted pods as `Shutdown`.
And `kubectl describe pod` indicates that the pod was evicted because of node shutdown:
```
Status: Failed
Reason: Shutdown
Message: Node is shutting, evicting pods
```
Failed pod objects will be preserved until explicitly deleted or [cleaned up by the GC](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
This is a change of behavior compared to abrupt node termination.
{{< /note >}}
## {{% heading "whatsnext" %}}
* Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node.

View File

@ -33,8 +33,6 @@ the `--max-requests-inflight` flag without the API Priority and
Fairness feature enabled.
{{< /caution >}}
<!-- body -->
## Enabling/Disabling API Priority and Fairness
@ -65,6 +63,7 @@ The command-line flag `--enable-priority-and-fairness=false` will disable the
API Priority and Fairness feature, even if other flags have enabled it.
## Concepts
There are several distinct features involved in the API Priority and Fairness
feature. Incoming requests are classified by attributes of the request using
_FlowSchemas_, and assigned to priority levels. Priority levels add a degree of
@ -75,12 +74,13 @@ each other, and allows for requests to be queued to prevent bursty traffic from
causing failed requests when the average load is acceptably low.
### Priority Levels
Without APF enabled, overall concurrency in
the API server is limited by the `kube-apiserver` flags
`--max-requests-inflight` and `--max-mutating-requests-inflight`. With APF
enabled, the concurrency limits defined by these flags are summed and then the sum is divided up
among a configurable set of _priority levels_. Each incoming request is assigned
to a single priority level, and each priority level will only dispatch as many
Without APF enabled, overall concurrency in the API server is limited by the
`kube-apiserver` flags `--max-requests-inflight` and
`--max-mutating-requests-inflight`. With APF enabled, the concurrency limits
defined by these flags are summed and then the sum is divided up among a
configurable set of _priority levels_. Each incoming request is assigned to a
single priority level, and each priority level will only dispatch as many
concurrent requests as its configuration allows.
The default configuration, for example, includes separate priority levels for
@ -90,6 +90,7 @@ requests cannot prevent leader election or actions by the built-in controllers
from succeeding.
### Queuing
Even within a priority level there may be a large number of distinct sources of
traffic. In an overload situation, it is valuable to prevent one stream of
requests from starving others (in particular, in the relatively common case of a
@ -114,15 +115,18 @@ independent flows will all make progress when total traffic exceeds capacity),
tolerance for bursty traffic, and the added latency induced by queuing.
### Exempt requests
Some requests are considered sufficiently important that they are not subject to
any of the limitations imposed by this feature. These exemptions prevent an
improperly-configured flow control configuration from totally disabling an API
server.
## Defaults
The Priority and Fairness feature ships with a suggested configuration that
should suffice for experimentation; if your cluster is likely to
experience heavy load then you should consider what configuration will work best. The suggested configuration groups requests into five priority
experience heavy load then you should consider what configuration will work
best. The suggested configuration groups requests into five priority
classes:
* The `system` priority level is for requests from the `system:nodes` group,
@ -180,19 +184,18 @@ If you add the following additional FlowSchema, this exempts those
requests from rate limiting.
{{< caution >}}
Making this change also allows any hostile party to then send
health-check requests that match this FlowSchema, at any volume they
like. If you have a web traffic filter or similar external security
mechanism to protect your cluster's API server from general internet
traffic, you can configure rules to block any health check requests
that originate from outside your cluster.
{{< /caution >}}
{{< codenew file="priority-and-fairness/health-for-strangers.yaml" >}}
## Resources
The flow control API involves two kinds of resources.
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1beta1-flowcontrol-apiserver-k8s-io)
define the available isolation classes, the share of the available concurrency
@ -204,6 +207,7 @@ of the same API group, and it has the same Kinds with the same syntax and
semantics.
### PriorityLevelConfiguration
A PriorityLevelConfiguration represents a single isolation class. Each
PriorityLevelConfiguration has an independent limit on the number of outstanding
requests, and limitations on the number of queued requests.
@ -217,6 +221,7 @@ server by restarting `kube-apiserver` with a different value for
`--max-requests-inflight` (or `--max-mutating-requests-inflight`), and all
PriorityLevelConfigurations will see their maximum allowed concurrency go up (or
down) by the same fraction.
{{< caution >}}
With the Priority and Fairness feature enabled, the total concurrency limit for
the server is set to the sum of `--max-requests-inflight` and
@ -235,8 +240,8 @@ above the threshold will be queued, with the shuffle sharding and fair queuing t
to balance progress between request flows.
The queuing configuration allows tuning the fair queuing algorithm for a
priority level. Details of the algorithm can be read in the [enhancement
proposal](#whats-next), but in short:
priority level. Details of the algorithm can be read in the
[enhancement proposal](#whats-next), but in short:
* Increasing `queues` reduces the rate of collisions between different flows, at
the cost of increased memory usage. A value of 1 here effectively disables the
@ -249,15 +254,15 @@ proposal](#whats-next), but in short:
* Changing `handSize` allows you to adjust the probability of collisions between
different flows and the overall concurrency available to a single flow in an
overload situation.
{{< note >}}
A larger `handSize` makes it less likely for two individual flows to collide
(and therefore for one to be able to starve the other), but more likely that
a small number of flows can dominate the apiserver. A larger `handSize` also
potentially increases the amount of latency that a single high-traffic flow
can cause. The maximum number of queued requests possible from a
single flow is `handSize * queueLengthLimit`.
{{< /note >}}
{{< note >}}
A larger `handSize` makes it less likely for two individual flows to collide
(and therefore for one to be able to starve the other), but more likely that
a small number of flows can dominate the apiserver. A larger `handSize` also
potentially increases the amount of latency that a single high-traffic flow
can cause. The maximum number of queued requests possible from a
single flow is `handSize * queueLengthLimit`.
{{< /note >}}
Following is a table showing an interesting collection of shuffle
sharding configurations, showing for each the probability that a
@ -319,6 +324,7 @@ considered part of a single flow. The correct choice for a given FlowSchema
depends on the resource and your particular environment.
## Diagnostics
Every HTTP response from an API server with the priority and fairness feature
enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
@ -356,13 +362,14 @@ poorly-behaved workloads that may be harming system health.
matched the request), `priority_level` (indicating the one to which
the request was assigned), and `reason`. The `reason` label will
have one of the following values:
* `queue-full`, indicating that too many requests were already
queued,
* `concurrency-limit`, indicating that the
PriorityLevelConfiguration is configured to reject rather than
queue excess requests, or
* `time-out`, indicating that the request was still in the queue
when its queuing time limit expired.
* `queue-full`, indicating that too many requests were already
queued,
* `concurrency-limit`, indicating that the
PriorityLevelConfiguration is configured to reject rather than
queue excess requests, or
* `time-out`, indicating that the request was still in the queue
when its queuing time limit expired.
* `apiserver_flowcontrol_dispatched_requests_total` is a counter
vector (cumulative since server start) of requests that began
@ -430,14 +437,15 @@ poorly-behaved workloads that may be harming system health.
sample to its histogram, reporting the length of the queue immediately
after the request was added. Note that this produces different
statistics than an unbiased survey would.
{{< note >}}
An outlier value in a histogram here means it is likely that a single flow
(i.e., requests by one user or for one namespace, depending on
configuration) is flooding the API server, and being throttled. By contrast,
if one priority level's histogram shows that all queues for that priority
level are longer than those for other priority levels, it may be appropriate
to increase that PriorityLevelConfiguration's concurrency shares.
{{< /note >}}
{{< note >}}
An outlier value in a histogram here means it is likely that a single flow
(i.e., requests by one user or for one namespace, depending on
configuration) is flooding the API server, and being throttled. By contrast,
if one priority level's histogram shows that all queues for that priority
level are longer than those for other priority levels, it may be appropriate
to increase that PriorityLevelConfiguration's concurrency shares.
{{< /note >}}
* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
holding the computed concurrency limit (based on the API server's
@ -450,12 +458,13 @@ poorly-behaved workloads that may be harming system health.
`priority_level` (indicating the one to which the request was
assigned), and `execute` (indicating whether the request started
executing).
{{< note >}}
Since each FlowSchema always assigns requests to a single
PriorityLevelConfiguration, you can add the histograms for all the
FlowSchemas for one priority level to get the effective histogram for
requests assigned to that priority level.
{{< /note >}}
{{< note >}}
Since each FlowSchema always assigns requests to a single
PriorityLevelConfiguration, you can add the histograms for all the
FlowSchemas for one priority level to get the effective histogram for
requests assigned to that priority level.
{{< /note >}}
* `apiserver_flowcontrol_request_execution_seconds` is a histogram
vector of how long requests took to actually execute, broken down by
@ -465,14 +474,19 @@ poorly-behaved workloads that may be harming system health.
### Debug endpoints
When you enable the API Priority and Fairness feature, the kube-apiserver serves the following additional paths at its HTTP[S] ports.
When you enable the API Priority and Fairness feature, the `kube-apiserver`
serves the following additional paths at its HTTP[S] ports.
- `/debug/api_priority_and_fairness/dump_priority_levels` - a listing of
all the priority levels and the current state of each. You can fetch like this:
- `/debug/api_priority_and_fairness/dump_priority_levels` - a listing of all the priority levels and the current state of each. You can fetch like this:
```shell
kubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels
```
The output is similar to this:
```
```none
PriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests,
workload-low, 0, true, false, 0, 0,
global-default, 0, true, false, 0, 0,
@ -483,12 +497,16 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
workload-high, 0, true, false, 0, 0,
```
- `/debug/api_priority_and_fairness/dump_queues` - a listing of all the queues and their current state. You can fetch like this:
- `/debug/api_priority_and_fairness/dump_queues` - a listing of all the
queues and their current state. You can fetch like this:
```shell
kubectl get --raw /debug/api_priority_and_fairness/dump_queues
```
The output is similar to this:
```
```none
PriorityLevelName, Index, PendingRequests, ExecutingRequests, VirtualStart,
workload-high, 0, 0, 0, 0.0000,
workload-high, 1, 0, 0, 0.0000,
@ -498,25 +516,33 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
leader-election, 15, 0, 0, 0.0000,
```
- `/debug/api_priority_and_fairness/dump_requests` - a listing of all the requests that are currently waiting in a queue. You can fetch like this:
- `/debug/api_priority_and_fairness/dump_requests` - a listing of all the requests
that are currently waiting in a queue. You can fetch like this:
```shell
kubectl get --raw /debug/api_priority_and_fairness/dump_requests
```
The output is similar to this:
```
```none
PriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime,
exempt, <none>, <none>, <none>, <none>, <none>,
system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:26:57.179170694Z,
```
In addition to the queued requests, the output includes one phantom line for each priority level that is exempt from limitation.
In addition to the queued requests, the output includes one phantom line
for each priority level that is exempt from limitation.
You can get a more detailed listing with a command like this:
```shell
kubectl get --raw '/debug/api_priority_and_fairness/dump_requests?includeRequestDetails=1'
```
The output is similar to this:
```
```none
PriorityLevelName, FlowSchemaName, QueueIndex, RequestIndexInQueue, FlowDistingsher, ArriveTime, UserName, Verb, APIPath, Namespace, Name, APIVersion, Resource, SubResource,
system, system-nodes, 12, 0, system:node:127.0.0.1, 2020-07-23T15:31:03.583823404Z, system:node:127.0.0.1, create, /api/v1/namespaces/scaletest/configmaps,
system, system-nodes, 12, 1, system:node:127.0.0.1, 2020-07-23T15:31:03.594555947Z, system:node:127.0.0.1, create, /api/v1/namespaces/scaletest/configmaps,
@ -528,4 +554,4 @@ When you enable the API Priority and Fairness feature, the kube-apiserver serves
For background information on design details for API priority and fairness, see
the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1040-priority-and-fairness).
You can make suggestions and feature requests via [SIG API Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery)
or the feature's [slack channel](http://kubernetes.slack.com/messages/api-priority-and-fairness).
or the feature's [slack channel](https://kubernetes.slack.com/messages/api-priority-and-fairness).

View File

@ -1,5 +1,4 @@
---
reviewers:
title: Garbage collection for container images
content_type: concept
weight: 70
@ -7,12 +6,13 @@ weight: 70
<!-- overview -->
Garbage collection is a helpful function of kubelet that will clean up unused [images](/docs/concepts/containers/#container-images) and unused [containers](/docs/concepts/containers/). Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
Garbage collection is a helpful function of kubelet that will clean up unused
[images](/docs/concepts/containers/#container-images) and unused
[containers](/docs/concepts/containers/). Kubelet will perform garbage collection
for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially
break the behavior of kubelet by removing containers expected to exist.
<!-- body -->
@ -28,10 +28,24 @@ threshold has been met.
## Container Collection
The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
The policy for garbage collecting containers considers three user-defined variables.
`MinAge` is the minimum age at which a container can be garbage collected.
`MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have.
`MaxContainers` is the maximum number of total dead containers.
These variables can be individually disabled by setting `MinAge` to zero and
setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainer` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). `MaxPerPodContainer` would be adjusted in this situation: A worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
Kubelet will act on containers that are unidentified, deleted, or outside of
the boundaries set by the previously mentioned flags. The oldest containers
will generally be removed first. `MaxPerPodContainer` and `MaxContainers` may
potentially conflict with each other in situations where retaining the maximum
number of containers per pod (`MaxPerPodContainer`) would go outside the
allowable range of global dead containers (`MaxContainers`).
`MaxPerPodContainer` would be adjusted in this situation: A worst case
scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest
containers. Additionally, containers owned by pods that have been deleted are
removed once they are older than `MinAge`.
Containers that are not managed by kubelet are not subject to container garbage collection.
@ -40,18 +54,18 @@ Containers that are not managed by kubelet are not subject to container garbage
You can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
Default is 85%.
Default is 85%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
to free. Default is 80%.
to free. Default is 80%.
You can customize the garbage collection policy through the following kubelet flags:
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
per container. Default is 1.
per container. Default is 1.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
Default is -1, which means there is no global limit.
Default is -1, which means there is no global limit.
Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
@ -77,10 +91,8 @@ Including:
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
## {{% heading "whatsnext" %}}
See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
See [Configuring Out Of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
for more details.

View File

@ -50,7 +50,7 @@ It is a recommended practice to put resources related to the same microservice o
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx/nginx-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/nginx/nginx-deployment.yaml
```
```shell

0
content/en/docs/concepts/configuration/_index.md Executable file → Normal file
View File

View File

@ -17,6 +17,11 @@ a *kubeconfig file*. This is a generic way of referring to configuration files.
It does not mean that there is a file named `kubeconfig`.
{{< /note >}}
{{< warning >}}
Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure.
If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
{{< /warning >}}
By default, `kubectl` looks for a file named `config` in the `$HOME/.kube` directory.
You can specify other kubeconfig files by setting the `KUBECONFIG` environment
variable or by setting the
@ -154,4 +159,3 @@ are stored absolutely.

View File

@ -51,7 +51,7 @@ heterogeneous node configurations, see [Scheduling](#scheduling) below.
{{< /note >}}
The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The
handler must be a valid DNS 1123 label (alpha-numeric + `-` characters).
handler must be a valid [DNS label name](/docs/concepts/overview/working-with-objects/names/#dns-label-names).
### 2. Create the corresponding RuntimeClass resources
@ -135,7 +135,7 @@ table](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md#crioruntim
runtime_path = "${PATH_TO_BINARY}"
```
See CRI-O's [config documentation](https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md) for more details.
See CRI-O's [config documentation](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md) for more details.
## Scheduling
@ -179,4 +179,4 @@ are accounted for in Kubernetes.
- [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)

View File

@ -116,7 +116,7 @@ Operator.
* [Charmed Operator Framework](https://juju.is/)
* [kubebuilder](https://book.kubebuilder.io/)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* [Metacontroller](https://metacontroller.app/) along with WebHooks that
* [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) along with WebHooks that
you implement yourself
* [Operator Framework](https://operatorframework.io)
* [shell-operator](https://github.com/flant/shell-operator)

0
content/en/docs/concepts/overview/_index.md Executable file → Normal file
View File

View File

View File

@ -48,7 +48,7 @@ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Alway
## Multiple resource types
You use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace:
You can use field selectors across multiple resource types. This `kubectl` command selects all Statefulsets and Services that are not in the `default` namespace:
```shell
kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default

0
content/en/docs/concepts/policy/_index.md Executable file → Normal file
View File

View File

@ -10,7 +10,8 @@ weight: 40
{{< feature-state for_k8s_version="v1.20" state="stable" >}}
Kubernetes allow you to limit the number of process IDs (PIDs) that a {{< glossary_tooltip term_id="Pod" text="Pod" >}} can use.
Kubernetes allows you to limit the number of process IDs (PIDs) that a
{{< glossary_tooltip term_id="Pod" text="Pod" >}} can use.
You can also reserve a number of allocatable PIDs for each {{< glossary_tooltip term_id="node" text="node" >}}
for use by the operating system and daemons (rather than by Pods).
@ -84,7 +85,9 @@ gate](/docs/reference/command-line-tools-reference/feature-gates/)
Kubernetes allows you to limit the number of processes running in a Pod. You
specify this limit at the node level, rather than configuring it as a resource
limit for a particular Pod. Each Node can have a different PID limit.
To configure the limit, you can specify the command line parameter `--pod-max-pids` to the kubelet, or set `PodPidsLimit` in the kubelet [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
To configure the limit, you can specify the command line parameter `--pod-max-pids`
to the kubelet, or set `PodPidsLimit` in the kubelet
[configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
{{< note >}}
Before Kubernetes version 1.20, PID resource limiting for Pods required enabling
@ -95,9 +98,12 @@ the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
## PID based eviction
You can configure kubelet to start terminating a Pod when it is misbehaving and consuming abnormal amount of resources.
This feature is called eviction. You can [Configure Out of Resource Handling](/docs/tasks/administer-cluster/out-of-resource) for various eviction signals.
This feature is called eviction. You can
[Configure Out of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/)
for various eviction signals.
Use `pid.available` eviction signal to configure the threshold for number of PIDs used by Pod.
You can set soft and hard eviction policies. However, even with the hard eviction policy, if the number of PIDs growing very fast,
You can set soft and hard eviction policies.
However, even with the hard eviction policy, if the number of PIDs is growing very fast,
the node can still get into an unstable state by hitting the node PID limit.
Eviction signal value is calculated periodically and does NOT enforce the limit.
@ -112,6 +118,7 @@ when one Pod is misbehaving.
## {{% heading "whatsnext" %}}
- Refer to the [PID Limiting enhancement document](https://github.com/kubernetes/enhancements/blob/097b4d8276bc9564e56adf72505d43ce9bc5e9e8/keps/sig-node/20190129-pid-limiting.md) for more information.
- For historical context, read [Process ID Limiting for Stability Improvements in Kubernetes 1.14](/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/).
- For historical context, read
[Process ID Limiting for Stability Improvements in Kubernetes 1.14](/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/).
- Read [Managing Resources for Containers](/docs/concepts/configuration/manage-resources-containers/).
- Learn how to [Configure Out of Resource Handling](/docs/tasks/administer-cluster/out-of-resource).
- Learn how to [Configure Out of Resource Handling](/docs/concepts/scheduling-eviction/node-pressure-eviction/).

View File

@ -57,8 +57,9 @@ Neither contention nor changes to quota will affect already created resources.
## Enabling Resource Quota
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} `--enable-admission-plugins=` flag has `ResourceQuota` as
Resource Quota support is enabled by default for many Kubernetes distributions. It is
enabled when the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}}
`--enable-admission-plugins=` flag has `ResourceQuota` as
one of its arguments.
A resource quota is enforced in a particular namespace when there is a
@ -66,7 +67,9 @@ ResourceQuota in that namespace.
## Compute Resource Quota
You can limit the total sum of [compute resources](/docs/concepts/configuration/manage-resources-containers/) that can be requested in a given namespace.
You can limit the total sum of
[compute resources](/docs/concepts/configuration/manage-resources-containers/)
that can be requested in a given namespace.
The following resource types are supported:
@ -125,7 +128,9 @@ In release 1.8, quota support for local ephemeral storage is added as an alpha f
| `ephemeral-storage` | Same as `requests.ephemeral-storage`. |
{{< note >}}
When using a CRI container runtime, container logs will count against the ephemeral storage quota. This can result in the unexpected eviction of pods that have exhausted their storage quotas. Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
When using a CRI container runtime, container logs will count against the ephemeral storage quota.
This can result in the unexpected eviction of pods that have exhausted their storage quotas.
Refer to [Logging Architecture](/docs/concepts/cluster-administration/logging/) for details.
{{< /note >}}
## Object Count Quota
@ -192,7 +197,7 @@ Resources specified on the quota outside of the allowed set results in a validat
| `NotTerminating` | Match pods where `.spec.activeDeadlineSeconds is nil` |
| `BestEffort` | Match pods that have best effort quality of service. |
| `NotBestEffort` | Match pods that do not have best effort quality of service. |
| `PriorityClass` | Match pods that references the specified [priority class](/docs/concepts/configuration/pod-priority-preemption). |
| `PriorityClass` | Match pods that reference the specified [priority class](/docs/concepts/scheduling-eviction/pod-priority-preemption). |
| `CrossNamespacePodAffinity` | Match pods that have cross-namespace pod [(anti)affinity terms](/docs/concepts/scheduling-eviction/assign-pod-node). |
The `BestEffort` scope restricts a quota to tracking the following resource:
@ -248,13 +253,14 @@ specified.
{{< feature-state for_k8s_version="v1.17" state="stable" >}}
Pods can be created at a specific [priority](/docs/concepts/configuration/pod-priority-preemption/#pod-priority).
Pods can be created at a specific [priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority).
You can control a pod's consumption of system resources based on a pod's priority, by using the `scopeSelector`
field in the quota spec.
A quota is matched and consumed only if `scopeSelector` in the quota spec selects the pod.
When quota is scoped for priority class using `scopeSelector` field, quota object is restricted to track only following resources:
When quota is scoped for priority class using `scopeSelector` field, quota object
is restricted to track only following resources:
* `pods`
* `cpu`
@ -554,7 +560,7 @@ kubectl create -f ./object-counts.yaml --namespace=myspace
kubectl get quota --namespace=myspace
```
```
```none
NAME AGE
compute-resources 30s
object-counts 32s
@ -564,7 +570,7 @@ object-counts 32s
kubectl describe quota compute-resources --namespace=myspace
```
```
```none
Name: compute-resources
Namespace: myspace
Resource Used Hard
@ -580,7 +586,7 @@ requests.nvidia.com/gpu 0 4
kubectl describe quota object-counts --namespace=myspace
```
```
```none
Name: object-counts
Namespace: myspace
Resource Used Hard
@ -677,10 +683,10 @@ Then, create a resource quota object in the `kube-system` namespace:
{{< codenew file="policy/priority-class-resourcequota.yaml" >}}
```shell
$ kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
```
```none
resourcequota/pods-cluster-services created
```

View File

@ -214,7 +214,7 @@ signal below the threshold, the kubelet begins to evict end-user pods.
The kubelet uses the following parameters to determine pod eviction order:
1. Whether the pod's resource usage exceeds requests
1. [Pod Priority](/docs/concepts/configuration/pod-priority-preemption/)
1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
1. The pod's resource usage relative to requests
As a result, kubelet ranks and evicts pods in the following order:

View File

@ -252,12 +252,12 @@ Even so, the answer to the preceding question must be yes. If the answer is no,
the Node is not considered for preemption.
{{< /note >}}
If a pending Pod has inter-pod affinity to one or more of the lower-priority
Pods on the Node, the inter-Pod affinity rule cannot be satisfied in the absence
of those lower-priority Pods. In this case, the scheduler does not preempt any
Pods on the Node. Instead, it looks for another Node. The scheduler might find a
suitable Node or it might not. There is no guarantee that the pending Pod can be
scheduled.
If a pending Pod has inter-pod {{< glossary_tooltip text="affinity" term_id="affinity" >}}
to one or more of the lower-priority Pods on the Node, the inter-Pod affinity
rule cannot be satisfied in the absence of those lower-priority Pods. In this case,
the scheduler does not preempt any Pods on the Node. Instead, it looks for another
Node. The scheduler might find a suitable Node or it might not. There is no
guarantee that the pending Pod can be scheduled.
Our recommended solution for this problem is to create inter-Pod affinity only
towards equal or higher priority Pods.

View File

@ -285,7 +285,7 @@ arbitrary tolerations to DaemonSets.
## {{% heading "whatsnext" %}}
* Read about [out of resource handling](/docs/tasks/administer-cluster/out-of-resource/) and how you can configure it
* Read about [pod priority](/docs/concepts/configuration/pod-priority-preemption/)
* Read about [out of resource handling](/docs/concepts/scheduling-eviction/out-of-resource/) and how you can configure it
* Read about [pod priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)

View File

@ -283,9 +283,9 @@ of individual policies are not defined here.
[**PodSecurityPolicy**](/docs/concepts/policy/pod-security-policy/)
- [Privileged](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/privileged-psp.yaml)
- [Baseline](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/baseline-psp.yaml)
- [Restricted](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml)
- {{< example file="policy/privileged-psp.yaml" >}}Privileged{{< /example >}}
- {{< example file="policy/baseline-psp.yaml" >}}Baseline{{< /example >}}
- {{< example file="policy/restricted-psp.yaml" >}}Restricted{{< /example >}}
## FAQ

0
content/en/docs/concepts/services-networking/_index.md Executable file → Normal file
View File

View File

@ -7,6 +7,7 @@ content_type: concept
weight: 20
---
<!-- overview -->
Kubernetes creates DNS records for services and pods. You can contact
services with consistent DNS names instead of IP addresses.
@ -261,6 +262,8 @@ spec:
### Pod's DNS Config {#pod-dns-config}
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
Pod's DNS Config allows users more control on the DNS settings for a Pod.
The `dnsConfig` field is optional and it can work with any `dnsPolicy` settings.
@ -332,7 +335,6 @@ The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.
| 1.9 | Alpha |
## {{% heading "whatsnext" %}}

View File

@ -249,5 +249,4 @@ implementation in `kube-proxy`.
## {{% heading "whatsnext" %}}
* Learn about [Enabling EndpointSlices](/docs/tasks/administer-cluster/enabling-endpointslices)
* Read [Connecting Applications with Services](/docs/concepts/services-networking/connect-applications-service/)

View File

@ -215,7 +215,7 @@ each Service port. The value of this field is mirrored by the corresponding
Endpoints and EndpointSlice objects.
This field follows standard Kubernetes label syntax. Values should either be
[IANA standard service names](http://www.iana.org/assignments/service-names) or
[IANA standard service names](https://www.iana.org/assignments/service-names) or
domain prefixed names such as `mycompany.com/my-custom-protocol`.
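As a sketch, `appProtocol` sits on each Service port; the Service name, selector, and port numbers below are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  selector:
    app: my-app           # hypothetical label
  ports:
  - name: https
    protocol: TCP
    appProtocol: https    # an IANA standard service name
    port: 443
    targetPort: 9377      # placeholder container port
```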
## Virtual IPs and service proxies

content/en/docs/concepts/storage/_index.md Executable file → Normal file

@ -32,7 +32,8 @@ different flags and/or different memory and cpu requests for different hardware
### Create a DaemonSet
You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
You can describe a DaemonSet in a YAML file. For example, the `daemonset.yaml` file below
describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
{{< codenew file="controllers/daemonset.yaml" >}}
@ -46,19 +47,23 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see
[running stateless applications](/docs/tasks/run-application/run-stateless-application-deployment/),
[configuring containers](/docs/tasks/), and [object management using kubectl](/docs/concepts/overview/working-with-objects/object-management/) documents.
[running stateless applications](/docs/tasks/run-application/run-stateless-application-deployment/)
and [object management using kubectl](/docs/concepts/overview/working-with-objects/object-management/).
The name of a DaemonSet object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
A DaemonSet also needs a [`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) section.
A DaemonSet also needs a
[`.spec`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
section.
### Pod Template
The `.spec.template` is one of the required fields in `.spec`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates).
It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}},
except it is nested and does not have an `apiVersion` or `kind`.
In addition to required fields for a Pod, a Pod template in a DaemonSet has to specify appropriate
labels (see [pod selector](#pod-selector)).
@ -79,20 +84,23 @@ unintentional orphaning of Pods, and it was found to be confusing to users.
The `.spec.selector` is an object consisting of two fields:
* `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/).
* `matchLabels` - works the same as the `.spec.selector` of a
[ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/).
* `matchExpressions` - allows you to build more sophisticated selectors by specifying a key,
a list of values, and an operator that relates the key and values.
When the two are specified, the result is ANDed.
If the `.spec.selector` is specified, it must match the `.spec.template.metadata.labels`. Config with these not matching will be rejected by the API.
If the `.spec.selector` is specified, it must match the `.spec.template.metadata.labels`.
Config with these not matching will be rejected by the API.
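For instance, in this trimmed-down DaemonSet sketch the selector and the template labels agree; the `name: example-ds` label and the image are placeholders.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds
spec:
  selector:
    matchLabels:
      name: example-ds    # must match the template labels below
  template:
    metadata:
      labels:
        name: example-ds  # identical key/value, otherwise the API rejects the object
    spec:
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.2
```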
### Running Pods on select Nodes
If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create Pods on nodes which match that [node
selector](/docs/concepts/scheduling-eviction/assign-pod-node/). Likewise if you specify a `.spec.template.spec.affinity`,
then DaemonSet controller will create Pods on nodes which match that [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/).
create Pods on nodes which match that [node selector](/docs/concepts/scheduling-eviction/assign-pod-node/).
Likewise, if you specify a `.spec.template.spec.affinity`,
then the DaemonSet controller will create Pods on nodes which match that
[node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/).
If you do not specify either, then the DaemonSet controller will create Pods on all nodes.
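As a sketch, the scheduling-related fragment of a DaemonSet's `.spec.template.spec` with a node selector looks like this; the `disktype: ssd` label is a placeholder you would have to set on the target nodes.

```yaml
# Fragment of a DaemonSet manifest, showing only the node selection part.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd    # placeholder node label
```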
## How Daemon Pods are scheduled
@ -106,18 +114,19 @@ node that a Pod runs on is selected by the Kubernetes scheduler. However,
DaemonSet pods are created and scheduled by the DaemonSet controller instead.
That introduces the following issues:
* Inconsistent Pod behavior: Normal Pods waiting to be scheduled are created
and in `Pending` state, but DaemonSet pods are not created in `Pending`
state. This is confusing to the user.
* [Pod preemption](/docs/concepts/configuration/pod-priority-preemption/)
is handled by default scheduler. When preemption is enabled, the DaemonSet controller
will make scheduling decisions without considering pod priority and preemption.
* Inconsistent Pod behavior: Normal Pods waiting to be scheduled are created
and in `Pending` state, but DaemonSet pods are not created in `Pending`
state. This is confusing to the user.
* [Pod preemption](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
is handled by the default scheduler. When preemption is enabled, the DaemonSet controller
will make scheduling decisions without considering pod priority and preemption.
`ScheduleDaemonSetPods` allows you to schedule DaemonSets using the default
scheduler instead of the DaemonSet controller, by adding the `NodeAffinity` term
to the DaemonSet pods, instead of the `.spec.nodeName` term. The default
scheduler is then used to bind the pod to the target host. If node affinity of
the DaemonSet pod already exists, it is replaced (the original node affinity was taken into account before selecting the target host). The DaemonSet controller only
the DaemonSet pod already exists, it is replaced (the original node affinity was
taken into account before selecting the target host). The DaemonSet controller only
performs these operations when creating or modifying DaemonSet pods, and no
changes are made to the `spec.template` of the DaemonSet.
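The node affinity term added in this way looks roughly like the sketch below, where `target-host-name` stands in for the node picked for the Pod.

```yaml
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name    # placeholder for the selected node's name
```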
@ -158,10 +167,12 @@ Some possible patterns for communicating with Pods in a DaemonSet are:
- **Push**: Pods in the DaemonSet are configured to send updates to another service, such
as a stats database. They do not have clients.
- **NodeIP and Known Port**: Pods in the DaemonSet can use a `hostPort`, so that the pods are reachable via the node IPs. Clients know the list of node IPs somehow, and know the port by convention.
- **DNS**: Create a [headless service](/docs/concepts/services-networking/service/#headless-services) with the same pod selector,
and then discover DaemonSets using the `endpoints` resource or retrieve multiple A records from
DNS.
- **NodeIP and Known Port**: Pods in the DaemonSet can use a `hostPort`, so that the pods
are reachable via the node IPs.
Clients know the list of node IPs somehow, and know the port by convention.
- **DNS**: Create a [headless service](/docs/concepts/services-networking/service/#headless-services)
with the same pod selector, and then discover DaemonSets using the `endpoints`
resource or retrieve multiple A records from DNS (see the sketch after this list).
- **Service**: Create a service with the same Pod selector, and use the service to reach a
daemon on a random node. (No way to reach specific node.)
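A sketch of the headless Service mentioned in the **DNS** pattern above; the Service name and port are placeholders, and the selector is assumed to match the DaemonSet's Pod labels.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: daemon-discovery            # hypothetical name
spec:
  clusterIP: None                   # headless: DNS returns the Pod IPs directly
  selector:
    name: fluentd-elasticsearch     # assumed to match the DaemonSet's Pod labels
  ports:
  - name: metrics
    port: 9102                      # placeholder port
    targetPort: 9102
```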


@ -68,7 +68,7 @@ is subject to deletion once all owners are verified absent.
Cluster-scoped dependents can only specify cluster-scoped owners.
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
it is treated as having an unresolveable owner reference, and is not able to be garbage collected.
it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
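For reference, a sketch of what an owner reference looks like in a namespaced dependent's metadata; the `uid` is a placeholder that the API server would normally fill in. A cluster-scoped dependent naming a namespaced owner like this one is treated as unresolvable.

```yaml
metadata:
  name: my-dependent-pod                        # hypothetical name
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: my-replicaset                         # hypothetical owner
    uid: d9607e19-f88f-11e6-a518-42010a800195   # placeholder UID
    controller: true
    blockOwnerDeletion: true
```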
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event


@ -304,7 +304,7 @@ cleaned up by CronJobs based on the specified capacity-based cleanup policy.
### TTL mechanism for finished Jobs
{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="beta" >}}
Another way to clean up finished Jobs (either `Complete` or `Failed`)
automatically is to use a TTL mechanism provided by a
@ -342,11 +342,6 @@ If the field is set to `0`, the Job will be eligible to be automatically deleted
immediately after it finishes. If the field is unset, this Job won't be cleaned
up by the TTL controller after it finishes.
Note that this TTL mechanism is alpha, with feature gate `TTLAfterFinished`. For
more information, see the documentation for
[TTL controller](/docs/concepts/workloads/controllers/ttlafterfinished/) for
finished resources.
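A minimal sketch of a Job that opts into this cleanup through `ttlSecondsAfterFinished`; the Job name and the container command are placeholders.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl              # hypothetical name
spec:
  ttlSecondsAfterFinished: 100   # delete the Job 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```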
## Job patterns
The Job object can be used to support reliable parallel execution of Pods. The Job object is not


@ -86,7 +86,7 @@ rolling out node software updates can cause voluntary disruptions. Also, some im
of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes.
Your cluster administrator or hosting provider should have documented what level of voluntary
disruptions, if any, to expect. Certain configuration options, such as
[using PriorityClasses](/docs/concepts/configuration/pod-priority-preemption/)
[using PriorityClasses](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
in your pod spec can also cause voluntary (and involuntary) disruptions.


@ -16,7 +16,7 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
In versions of Kubernetes before v1.18, you must enable the `EvenPodsSpread`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on
the [API server](/docs/concepts/overview/components/#kube-apiserver) and the
[scheduler](/docs/reference/generated/kube-scheduler/) in order to use Pod
[scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) in order to use Pod
topology spread constraints.
{{< /note >}}
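For example, a sketch of a Pod that spreads replicas across zones using a topology spread constraint; the `app: my-app` label and the image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example                       # hypothetical name
  labels:
    app: my-app                              # placeholder label
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone # spread across zones
    whenUnsatisfiable: DoNotSchedule         # treat the constraint as a hard requirement
    labelSelector:
      matchLabels:
        app: my-app
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
```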


@ -18,8 +18,8 @@ the build setup and generates the reference documentation for a release.
## Getting the docs repository
Make sure your `website` fork is up-to-date with the `kubernetes/website` master and clone
your `website` fork.
Make sure your `website` fork is up-to-date with the `kubernetes/website` remote on
GitHub (`main` branch), and clone your `website` fork.
```shell
mkdir github.com
@ -171,7 +171,7 @@ For example:
The `release.yml` configuration file contains instructions to fix relative links.
To fix relative links within your imported files, set the`gen-absolute-links`
property to `true`. You can find an example of this in
[`release.yml`](https://github.com/kubernetes/website/blob/master/update-imported-docs/release.yml).
[`release.yml`](https://github.com/kubernetes/website/blob/main/update-imported-docs/release.yml).
## Adding and committing changes in kubernetes/website


@ -98,7 +98,7 @@ Once you've opened a localization PR, you can become members of the Kubernetes G
### Add your localization team in GitHub
Next, add your Kubernetes localization team to [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml). For an example of adding a localization team, see the PR to add the [Spanish localization team](https://github.com/kubernetes/org/pull/685).
Next, add your Kubernetes localization team to [`sig-docs/teams.yaml`](https://github.com/kubernetes/org/blob/main/config/kubernetes/sig-docs/teams.yaml). For an example of adding a localization team, see the PR to add the [Spanish localization team](https://github.com/kubernetes/org/pull/685).
Members of `@kubernetes/sig-docs-**-owners` can approve PRs that change content within (and only within) your localization directory: `/content/**/`.
@ -117,7 +117,7 @@ For an example of adding a label, see the PR for adding the [Italian language la
### Modify the site configuration
The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/master/config.toml) file. To support a new localization, you'll need to modify `config.toml`.
The Kubernetes website uses Hugo as its web framework. The website's Hugo configuration resides in the [`config.toml`](https://github.com/kubernetes/website/tree/main/config.toml) file. To support a new localization, you'll need to modify `config.toml`.
Add a configuration block for the new language to `config.toml`, under the existing `[languages]` block. The German block, for example, looks like:
@ -136,7 +136,7 @@ For more information about Hugo's multilingual support, see "[Multilingual Mode]
### Add a new localization directory
Add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/master/content) folder in the repository. For example, the two-letter code for German is `de`:
Add a language-specific subdirectory to the [`content`](https://github.com/kubernetes/website/tree/main/content) folder in the repository. For example, the two-letter code for German is `de`:
```shell
mkdir content/de
@ -219,7 +219,7 @@ For an example of adding a new localization, see the PR to enable [docs in Frenc
### Add a localized README file
To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of k/website, where `**` is the two-letter language code. For example, a German README file would be `README-de.md`.
To guide other localization contributors, add a new [`README-**.md`](https://help.github.com/articles/about-readmes/) to the top level of [k/website](https://github.com/kubernetes/website/), where `**` is the two-letter language code. For example, a German README file would be `README-de.md`.
Provide guidance to localization contributors in the localized `README-**.md` file. Include the same information contained in `README.md` as well as:
@ -276,15 +276,15 @@ To find source files for your target version:
2. Select a branch for your target version from the following table:
Target version | Branch
-----|-----
Latest version | [`master`](https://github.com/kubernetes/website/tree/master)
Latest version | [`main`](https://github.com/kubernetes/website/tree/main)
Previous version | [`release-{{< skew prevMinorVersion >}}`](https://github.com/kubernetes/website/tree/release-{{< skew prevMinorVersion >}})
Next version | [`dev-{{< skew nextMinorVersion >}}`](https://github.com/kubernetes/website/tree/dev-{{< skew nextMinorVersion >}})
The `master` branch holds content for the current release `{{< latest-version >}}`. The release team will create a `{{< release-branch >}}` branch before the next release: v{{< skew nextMinorVersion >}}.
The `main` branch holds content for the current release `{{< latest-version >}}`. The release team will create a `{{< release-branch >}}` branch before the next release: v{{< skew nextMinorVersion >}}.
### Site strings in i18n
Localizations must include the contents of [`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/master/data/i18n/en/en.toml) in a new language-specific file. Using German as an example: `data/i18n/de/de.toml`.
Localizations must include the contents of [`data/i18n/en/en.toml`](https://github.com/kubernetes/website/blob/main/data/i18n/en/en.toml) in a new language-specific file. Using German as an example: `data/i18n/de/de.toml`.
Add a new localization directory and file to `data/i18n/`. For example, with German (`de`):
@ -339,14 +339,14 @@ Repeat steps 1-4 as needed until the localization is complete. For example, subs
Teams must merge localized content into the same branch from which the content was sourced.
For example:
- a localization branch sourced from `master` must be merged into `master`.
- a localization branch sourced from `release-1.19` must be merged into `release-1.19`.
- a localization branch sourced from `main` must be merged into `main`.
- a localization branch sourced from `release-{{ skew "prevMinorVersion" }}` must be merged into `release-{{ skew "prevMinorVersion" }}`.
{{< note >}}
If your localization branch was created from `master` branch but it is not merged into `master` before new release branch `{{< release-branch >}}` created, merge it into both `master` and new release branch `{{< release-branch >}}`. To merge your localization branch into new release branch `{{< release-branch >}}`, you need to switch upstream branch of your localization branch to `{{< release-branch >}}`.
If your localization branch was created from the `main` branch but is not merged into `main` before the new release branch `{{< release-branch >}}` is created, merge it into both `main` and the new release branch `{{< release-branch >}}`. To merge your localization branch into the new release branch `{{< release-branch >}}`, you need to switch the upstream branch of your localization branch to `{{< release-branch >}}`.
{{< /note >}}
At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/master/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/master/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.
At the beginning of every team milestone, it's helpful to open an issue comparing upstream changes between the previous localization branch and the current localization branch. There are two scripts for comparing upstream changes. [`upstream_changes.py`](https://github.com/kubernetes/website/tree/main/scripts#upstream_changespy) is useful for checking the changes made to a specific file. And [`diff_l10n_branches.py`](https://github.com/kubernetes/website/tree/main/scripts#diff_l10n_branchespy) is useful for creating a list of outdated files for a specific localization branch.
While only approvers can open a new localization branch and merge pull requests, anyone can open a pull request for a new localization branch. No special permissions are required.


@ -40,7 +40,7 @@ Anyone can write a blog post and submit it for review.
- Many CNCF projects have their own blog. These are often a better choice for posts. There are times when a major feature or milestone for a CNCF project is something that users would be interested in reading about on the Kubernetes blog.
- Blog posts should be original content
- The official blog is not for repurposing existing content from a third party as new content.
- The [license](https://github.com/kubernetes/website/blob/master/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around.
- The [license](https://github.com/kubernetes/website/blob/main/LICENSE) for the blog allows commercial use of the content for commercial purposes, but not the other way around.
- Blog posts should aim to be future proof
- Given the development velocity of the project, we want evergreen content that won't require updates to stay accurate for the reader.
- It can be a better choice to add a tutorial or update official documentation than to write a high level overview as a blog post.
@ -56,7 +56,7 @@ The SIG Docs [blog subproject](https://github.com/kubernetes/community/tree/mast
To submit a blog post follow these directions:
- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. New blog posts go under the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts) directory.
- [Open a pull request](/docs/contribute/new-content/open-a-pr/#fork-the-repo) with a new blog post. New blog posts go under the [`content/en/blog/_posts`](https://github.com/kubernetes/website/tree/main/content/en/blog/_posts) directory.
- Ensure that your blog post follows the correct naming conventions and the following frontmatter (metadata) information:
@ -90,6 +90,6 @@ Case studies highlight how organizations are using Kubernetes to solve
real-world problems. The Kubernetes marketing team and members of the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} collaborate with you on all case studies.
Have a look at the source for the
[existing case studies](https://github.com/kubernetes/website/tree/master/content/en/case-studies).
[existing case studies](https://github.com/kubernetes/website/tree/main/content/en/case-studies).
Refer to the [case study guidelines](https://github.com/cncf/foundation/blob/master/case-study-guidelines.md) and submit your request as outlined in the guidelines.


@ -127,7 +127,7 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi
upstream https://github.com/kubernetes/website.git (push)
```
6. Fetch commits from your fork's `origin/master` and `kubernetes/website`'s `upstream/master`:
6. Fetch commits from your fork's `origin/main` and `kubernetes/website`'s `upstream/main`:
```bash
git fetch origin
@ -137,15 +137,15 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi
This makes sure your local repository is up to date before you start making changes.
{{< note >}}
This workflow is different than the [Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). You do not need to merge your local copy of `master` with `upstream/master` before pushing updates to your fork.
This workflow is different than the [Kubernetes Community GitHub Workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md). You do not need to merge your local copy of `main` with `upstream/main` before pushing updates to your fork.
{{< /note >}}
### Create a branch
1. Decide which branch base to your work on:
- For improvements to existing content, use `upstream/master`.
- For new content about existing features, use `upstream/master`.
- For improvements to existing content, use `upstream/main`.
- For new content about existing features, use `upstream/main`.
- For localized content, use the localization's conventions. For more information, see [localizing Kubernetes documentation](/docs/contribute/localization/).
- For new features in an upcoming Kubernetes release, use the feature branch. For more information, see [documenting for a release](/docs/contribute/new-content/new-features/).
- For long-running efforts that multiple SIG Docs contributors collaborate on,
@ -154,10 +154,10 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi
If you need help choosing a branch, ask in the `#sig-docs` Slack channel.
2. Create a new branch based on the branch identified in step 1. This example assumes the base branch is `upstream/master`:
2. Create a new branch based on the branch identified in step 1. This example assumes the base branch is `upstream/main`:
```bash
git checkout -b <my_new_branch> upstream/master
git checkout -b <my_new_branch> upstream/main
```
3. Make your changes using a text editor.
@ -262,7 +262,7 @@ The commands below use Docker as default container engine. Set the `CONTAINER_EN
Alternately, install and use the `hugo` command on your computer:
1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml).
1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/main/netlify.toml).
2. If you have not updated your website repository, the `website/themes/docsy` directory is empty.
The site cannot build without a local copy of the theme. To update the website theme, run:
@ -370,11 +370,11 @@ If another contributor commits changes to the same file in another PR, it can cr
git push --force-with-lease origin <your-branch-name>
```
2. Fetch changes from `kubernetes/website`'s `upstream/master` and rebase your branch:
2. Fetch changes from `kubernetes/website`'s `upstream/main` and rebase your branch:
```bash
git fetch upstream
git rebase upstream/master
git rebase upstream/main
```
3. Inspect the results of the rebase:


@ -42,7 +42,7 @@ When opening a pull request, you need to know in advance which branch to base yo
Scenario | Branch
:---------|:------------
Existing or new English language content for the current release | `master`
Existing or new English language content for the current release | `main`
Content for a feature change release | The branch which corresponds to the major and minor version the feature change is in, using the pattern `dev-<version>`. For example, if a feature changes in the `v{{< skew nextMinorVersion >}}` release, then add documentation changes to the ``dev-{{< skew nextMinorVersion >}}`` branch.
Content in other languages (localizations) | Use the localization's convention. See the [Localization branching strategy](/docs/contribute/localization/#branching-strategy) for more information.
@ -60,6 +60,6 @@ Limit pull requests to one language per PR. If you need to make an identical cha
## Tools for contributors
The [doc contributors tools](https://github.com/kubernetes/website/tree/master/content/en/docs/doc-contributor-tools) directory in the `kubernetes/website` repository contains tools to help your contribution journey go more smoothly.
The [doc contributors tools](https://github.com/kubernetes/website/tree/main/content/en/docs/doc-contributor-tools) directory in the `kubernetes/website` repository contains tools to help your contribution journey go more smoothly.


@ -73,8 +73,8 @@ two [prow plugins](https://github.com/kubernetes/test-infra/tree/master/prow/plu
- approve
These two plugins use the
[OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) and
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES)
[OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) and
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
files in the top level of the `kubernetes/website` GitHub repository to control
how prow works within the repository.


@ -44,8 +44,8 @@ These queries exclude localization PRs. All queries are against the main branch
Lists PRs that need an LGTM from a member. If the PR needs technical review, loop in one of the reviewers suggested by the bot. If the content needs work, add suggestions and feedback in-line.
- [Has LGTM, needs docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge%2Fwork-in-progress+-label%3Ado-not-merge%2Fhold+label%3Alanguage%2Fen+label%3Algtm+):
Lists PRs that need an `/approve` comment to merge.
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]).
- [Not against the main branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amaster): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @<manager's_github-username>`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch.
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amain+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]).
- [Not against the primary branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amain): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @<manager's_github-username>`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch.
### Helpful Prow commands for wranglers


@ -147,7 +147,7 @@ separately for reviewer status in SIG Docs.
To apply:
1. Open a pull request that adds your GitHub user name to a section of the
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS) file
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) file
in the `kubernetes/website` repository.
{{< note >}}
@ -219,7 +219,7 @@ separately for approver status in SIG Docs.
To apply:
1. Open a pull request adding yourself to a section of the
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS)
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
file in the `kubernetes/website` repository.
{{< note >}}


@ -55,7 +55,7 @@ You can reference glossary terms with an inclusion that automatically updates an
As well as inclusions with tooltips, you can reuse the definitions from the glossary in
page content.
The raw data for glossary terms is stored at [https://github.com/kubernetes/website/tree/master/content/en/docs/reference/glossary](https://github.com/kubernetes/website/tree/master/content/en/docs/reference/glossary), with a content file for each glossary term.
The raw data for glossary terms is stored at [https://github.com/kubernetes/website/tree/main/content/en/docs/reference/glossary](https://github.com/kubernetes/website/tree/main/content/en/docs/reference/glossary), with a content file for each glossary term.
### Glossary demo


@ -30,7 +30,7 @@ glossary entries, tabs, and representing feature state.
## Language
Kubernetes documentation has been translated into multiple languages
(see [Localization READMEs](https://github.com/kubernetes/website/blob/master/README.md#localization-readmemds)).
(see [Localization READMEs](https://github.com/kubernetes/website/blob/main/README.md#localization-readmemds)).
The way of localizing the docs for a different language is described in [Localizing Kubernetes Documentation](/docs/contribute/localization/).


@ -10,7 +10,7 @@ card:
<!-- overview -->
If you notice an issue with Kubernetes documentation, or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser.
If you notice an issue with Kubernetes documentation or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser.
In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors
then review, categorize and tag issues as needed. Next, you or another member
@ -22,7 +22,7 @@ of the Kubernetes community open a pull request with changes to resolve the issu
## Opening an issue
If you want to suggest improvements to existing content, or notice an error, then open an issue.
If you want to suggest improvements to existing content or notice an error, then open an issue.
1. Click the **Create an issue** link on the right sidebar. This redirects you
to a GitHub issue page pre-populated with some headers.


@ -698,6 +698,8 @@ admission plugin, which allows preventing pods from running on specifically tain
### PodSecurityPolicy {#podsecuritypolicy}
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
This admission controller acts on creation and modification of the pod and determines if it should be admitted
based on the requested security context and the available Pod Security Policies.


@ -14,6 +14,48 @@ auto_generated: true
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options.
Refer to [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format flag specifies the structure of log messages.
The default value of format is `text`.</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled, prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.</td>
</tr>
</tbody>
</table>
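Assuming these options are exposed through the `logging` section of a kubelet configuration file (treat the exact field placement as an assumption and check the generated `KubeletConfiguration` reference below), a minimal sketch could look like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json         # defaults to "text" when unset
  sanitization: false  # experimental; enabling it adds runtime overhead
```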
## `KubeletConfiguration` {#kubelet-config-k8s-io-v1beta1-KubeletConfiguration}
@ -445,10 +487,10 @@ Default: "10s"</td>
status to master if node status does not change. Kubelet will ignore this
frequency and post node status immediately if any change is detected. It is
only used when node lease feature is enabled. nodeStatusReportFrequency's
default value is 1m. But if nodeStatusUpdateFrequency is set explicitly,
default value is 5m. But if nodeStatusUpdateFrequency is set explicitly,
nodeStatusReportFrequency's default value will be set to
nodeStatusUpdateFrequency for backward compatibility.
Default: "1m"</td>
Default: "5m"</td>
</tr>
@ -590,7 +632,7 @@ Default: "cgroupfs"</td>
Requires the CPUManager feature gate to be enabled.
Dynamic Kubelet Config (beta): This field should not be updated without a full node
reboot. It is safest to keep this value the same as the local config.
Default: "none"</td>
Default: "None"</td>
</tr>
@ -606,6 +648,18 @@ Default: "10s"</td>
</tr>
<tr><td><code>memoryManagerPolicy</code><br/>
<code>string</code>
</td>
<td>
MemoryManagerPolicy is the name of the policy to use by memory manager.
Requires the MemoryManager feature gate to be enabled.
Dynamic Kubelet Config (beta): This field should not be updated without a full node
reboot. It is safest to keep this value the same as the local config.
Default: "none"</td>
</tr>
<tr><td><code>topologyManagerPolicy</code><br/>
<code>string</code>
</td>
@ -1231,7 +1285,7 @@ Default: true</td>
</td>
<td>
ShutdownGracePeriod specifies the total duration that the node should delay the shutdown and total grace period for pod termination during a node shutdown.
Default: "30s"</td>
Default: "0s"</td>
</tr>
@ -1241,7 +1295,46 @@ Default: "30s"</td>
<td>
ShutdownGracePeriodCriticalPods specifies the duration used to terminate critical pods during a node shutdown. This should be less than ShutdownGracePeriod.
For example, if ShutdownGracePeriod=30s, and ShutdownGracePeriodCriticalPods=10s, during a node shutdown the first 20 seconds would be reserved for gracefully terminating normal pods, and the last 10 seconds would be reserved for terminating critical pods.
Default: "10s"</td>
Default: "0s"</td>
</tr>
<tr><td><code>reservedMemory</code><br/>
<a href="#kubelet-config-k8s-io-v1beta1-MemoryReservation"><code>[]MemoryReservation</code></a>
</td>
<td>
ReservedMemory specifies a comma-separated list of memory reservations for NUMA nodes.
The parameter makes sense only in the context of the memory manager feature. The memory manager will not allocate reserved memory for container workloads.
For example, if you have NUMA node 0 with 10Gi of memory and ReservedMemory is specified to reserve 1Gi of memory at NUMA node 0,
the memory manager will assume that only 9Gi is available for allocation.
You can specify different amounts for each NUMA node and memory type.
You can omit this parameter entirely, but you should be aware that the amount of reserved memory from all NUMA nodes
should be equal to the amount of memory specified by the node allocatable features (https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).
If at least one node allocatable parameter has a non-zero value, you will need to specify at least one NUMA node.
Also, avoid specifying:
1. Duplicates: the same NUMA node and memory type, but with a different value.
2. Zero limits for any memory type.
3. NUMA node IDs that do not exist on the machine.
4. Memory types other than memory and hugepages-<size>.
Default: nil</td>
</tr>
<tr><td><code>enableProfilingHandler</code><br/>
<code>bool</code>
</td>
<td>
enableProfilingHandler enables profiling via web interface host:port/debug/pprof/
Default: true</td>
</tr>
<tr><td><code>enableDebugFlagsHandler</code><br/>
<code>bool</code>
</td>
<td>
enableDebugFlagsHandler enables flags endpoint via web interface host:port/debug/flags/v
Default: true</td>
</tr>
@ -1544,6 +1637,47 @@ and groups corresponding to the Organization in the client certificate.</td>
## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
MemoryReservation specifies the memory reservation of different types for each NUMA node
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>numaNode</code> <B>[Required]</B><br/>
<code>int32</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>limits</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#resourcelist-v1-core"><code>core/v1.ResourceList</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>
## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy}
(Alias of `string`)
@ -1560,45 +1694,3 @@ managers (secret, configmap) are discovering object changes.
## `LoggingConfiguration` {#LoggingConfiguration}
**Appears in:**
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
LoggingConfiguration contains logging options
Refer [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Format Flag specifies the structure of log messages.
default value of format is `text`</td>
</tr>
<tr><td><code>sanitization</code> <B>[Required]</B><br/>
<code>bool</code>
</td>
<td>
[Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens).
Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.`)</td>
</tr>
</tbody>
</table>


@ -0,0 +1,22 @@
---
title: Affinity
id: affinity
date: 2019-01-11
full_link: /docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
short_description: >
Rules used by the scheduler to determine where to place pods
aka:
tags:
- fundamental
---
In Kubernetes, _affinity_ is a set of rules that give hints to the scheduler about where to place pods.
<!--more-->
There are two kinds of affinity:
* [node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)
* [pod-to-pod affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
The rules are defined using the Kubernetes {{< glossary_tooltip term_id="label" text="labels">}},
and {{< glossary_tooltip term_id="selector" text="selectors">}} specified in {{< glossary_tooltip term_id="pod" text="pods" >}},
and they can be either required or preferred, depending on how strictly you want the scheduler to enforce them.
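A sketch of how both kinds can appear in a Pod spec, with one required node affinity rule and one preferred pod-to-pod affinity rule; the zone value and the `app: cache` label are placeholders.

```yaml
# Fragment of a Pod spec (.spec.affinity), shown for illustration only.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:   # a "required" (hard) rule
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - antarctica-east1                          # placeholder zone
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:  # a "preferred" (soft) rule
    - weight: 50
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: cache                                # placeholder label
        topologyKey: kubernetes.io/hostname
```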

content/en/docs/reference/glossary/annotation.md Executable file → Normal file
content/en/docs/reference/glossary/approver.md Executable file → Normal file
content/en/docs/reference/glossary/certificate.md Executable file → Normal file
content/en/docs/reference/glossary/cla.md Executable file → Normal file
content/en/docs/reference/glossary/cloud-provider.md Executable file → Normal file
content/en/docs/reference/glossary/cluster-operator.md Executable file → Normal file
content/en/docs/reference/glossary/cluster.md Executable file → Normal file
content/en/docs/reference/glossary/cncf.md Executable file → Normal file
content/en/docs/reference/glossary/code-contributor.md Executable file → Normal file
content/en/docs/reference/glossary/configmap.md Executable file → Normal file
content/en/docs/reference/glossary/container.md Executable file → Normal file
content/en/docs/reference/glossary/contributor.md Executable file → Normal file
content/en/docs/reference/glossary/controller.md Executable file → Normal file
content/en/docs/reference/glossary/cronjob.md Executable file → Normal file
content/en/docs/reference/glossary/daemonset.md Executable file → Normal file
content/en/docs/reference/glossary/deployment.md Executable file → Normal file
content/en/docs/reference/glossary/developer.md Executable file → Normal file
content/en/docs/reference/glossary/docker.md Executable file → Normal file
content/en/docs/reference/glossary/downstream.md Executable file → Normal file
content/en/docs/reference/glossary/etcd.md Executable file → Normal file
content/en/docs/reference/glossary/helm-chart.md Executable file → Normal file
content/en/docs/reference/glossary/image.md Executable file → Normal file
content/en/docs/reference/glossary/index.md Executable file → Normal file
content/en/docs/reference/glossary/ingress.md Executable file → Normal file
content/en/docs/reference/glossary/init-container.md Executable file → Normal file
content/en/docs/reference/glossary/istio.md Executable file → Normal file
content/en/docs/reference/glossary/job.md Executable file → Normal file
content/en/docs/reference/glossary/kops.md Executable file → Normal file
content/en/docs/reference/glossary/kube-apiserver.md Executable file → Normal file
content/en/docs/reference/glossary/kube-proxy.md Executable file → Normal file
content/en/docs/reference/glossary/kube-scheduler.md Executable file → Normal file
content/en/docs/reference/glossary/kubeadm.md Executable file → Normal file
content/en/docs/reference/glossary/kubectl.md Executable file → Normal file
content/en/docs/reference/glossary/kubelet.md Executable file → Normal file
content/en/docs/reference/glossary/kubernetes-api.md Executable file → Normal file
content/en/docs/reference/glossary/label.md Executable file → Normal file
content/en/docs/reference/glossary/limitrange.md Executable file → Normal file
content/en/docs/reference/glossary/managed-service.md Executable file → Normal file
content/en/docs/reference/glossary/member.md Executable file → Normal file
content/en/docs/reference/glossary/minikube.md Executable file → Normal file
content/en/docs/reference/glossary/mirror-pod.md Executable file → Normal file
content/en/docs/reference/glossary/name.md Executable file → Normal file
content/en/docs/reference/glossary/namespace.md Executable file → Normal file
Some files were not shown because too many files have changed in this diff.