Official 1.18 Release Docs (#19116)
* Requesting Approve Permissions (#18550) As I will be part of the Kubernetes 1.18 docs release team, approve permissions will help me in approving the 1.18 docs enhancements. Signed-off-by: vineeth <vineethpothulapati@outlook.com> * Change config on dev-1.18 branch to prepare for v1.18 release (#18557) * Changed config.toml from v1.17 to v1.18 As a part of the Kubernetes 1.18 release work. Signed-off-by: vineeth <vineethpothulapati@outlook.com> * Removed the older prior version, i.e. v1.13 Signed-off-by: vineeth <vineethpothulapati@outlook.com> * Added missing v1.17 block into config.toml Signed-off-by: vineeth <vineethpothulapati@outlook.com> * add kubectl diff in common operations (#18665) * bootstrap-tokens: promote to GA in 1.18 (#18428) * Updated version for the HPA configurable scaling feature (#18965) Signed-off-by: Arjun Naik <arjun.rn@gmail.com> * kubeadm: add notes about deprecating kube-dns usage in 1.18 (#18851) * kubeadm: add notes about deprecating kube-dns usage in 1.18 * implementation-details: update notes about DNS * Sync up between dev-1.18 and master branches (#19055) * Fixed outdated ECR credential debug message (#18631) * Fixed outdated ECR credential debug message The log message for troubleshooting the kubelet auto-fetching ECR credentials issue has been changed (noticed since 1.14), and the new message reads like this when the verbose log level is set to 3: - `aws_credentials.go:109] unable to get ECR credentials from cache, checking ECR API` - `aws_credentials.go:116] Got ECR credentials from ECR API for <Your ECR AWS Account ID>.dkr.ecr.us-east-1.amazonaws.com` This is based on the kubelet source code: https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/credentialprovider/aws/aws_credentials.go#L91 This PR fixes the message and avoids confusion for people who are troubleshooting the kubelet ECR issue. * Update content/en/docs/concepts/containers/images.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Fix deployment name in docs/tasks/administer-cluster/dns-horizontal-autoscaling.md (#18772) * ru/docs/tutorials/hello-minikube.md: sync with English translation. (#18687) * content/ru/docs/concepts/_index.md: use English names for kinds.
(#18613) * Fix French typo in "when" section (#18786) * First Japanese l10n work for release-1.16 (#18790) * Translate concepts/services-networking/connect-applications-service/ into Japanese (#17710) * Translate concepts/services-networking/connect-applications-service/ into Japanese * Apply review * Translate content/ja/docs/tasks/_index.md into Japanese (#17789) * add task index * huge page * ja-docs: Update kops Installation Steps (#17804) * Update /ja/docs/tasks/tools/install-minikube/ (#17711) * Update /ja/docs/tasks/tools/install-minikube/ * Apply review * Apply review * Update content/ja/docs/tasks/tools/install-minikube.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/tools/install-minikube.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Translate tasks/configure-pod-container/assign-cpu-resource/ in Japanese (#16160) * copy from content/en/docs/tasks/configure-pod-container/ to ja * translate assign-cpu-resource.md in Japanese * Update content/ja/docs/tasks/configure-pod-container/assign-cpu-resource.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/configure-pod-container/assign-cpu-resource.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update assign-cpu-resource.md ここの *request* と *limit* はほかの文中の単語とは異なり、YAMLのfieldを表すため、訳さないでおく * fix translation "Pod scheduling is based on requests." の箇所。 requestsに基づいているのは事実だが、直訳されたときになにを指すのかあいまいなので、対象を具体的に記述 * Translate concepts/workloads/controllers/deployment/ in Japanese #14848 (#17794) * ja-trans: Translate concepts/workloads/controllers/deployment/ into Japanese (#14848) * ja-trans: Improve Japanese translation in concepts/workloads/controllers/deployment/ (#14848) * ja-trans: Improve Japanese translation in concepts/workloads/controllers/deployment/ (#14848) * ja-trans: Improve Japanese translation in concepts/workloads/controllers/deployment/ (#14848) * little fix (#18135) * update index (#18136) * Update /ja/docs/setup/_index.md (#18139) * Update /ja/docs/tasks/tools/install-kubectl/ (#18137) * update /docs/ja/tasks/tools/install-kubectl/ * fix mongon * apply reveiw * Update /ja/docs/reference/command-line-tools-reference/feature-gates/ (#18141) * Update feature agete * tidy up feature gates list * translate new lines * table caption * blank * する -> します * apply review * fix broken link * Update content/ja/docs/reference/command-line-tools-reference/feature-gates.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * update translation * remove line * Update content/ja/docs/reference/command-line-tools-reference/feature-gates.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * rollpack * Update /ja/docs/concepts/services-networking/service/ (#18138) * update /ja/docs/concepts/services-networking/service/ * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/services-networking/service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * consider Endpoints as a Kubernetes resource * full * Update content/ja/docs/concepts/_index.md (#18145) * Update concepts * control plane * apply review * fix bold (#18165) * Update /ja/docs/concepts/overview/components.md (#18153) * update /ja/docs/concepts/overview/components.md * 
some japanese docs are already there * translate prepend * apply upstream changes (#18278) * Translate concepts/services-networking/ingress into Japanese #17741 (#18234) * ja-trans: Translate concepts/services-networking/ingress into Japanese (#17741) * ja-trans: Improve Japanese translation in concepts/services-networking/ingress (#17741) * ja-trans: Improve Japanese translation in concepts/services-networking/ingress (#17741) * Update pod overview in Japanese (#18277) * Update pod-overview * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * ノード * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/concepts/workloads/pods/pod-overview.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> * Translate concepts/scheduling/scheduler-perf-tuning/ in Japanese #17119 (#17796) * ja-trans: Translate concepts/scheduling/scheduler-perf-tuning/ into Japanese (#17119) * ja-trans: Improve Japanese translation in concepts/scheduling/scheduler-perf-tuning/ (#17119) * ja-trans: Improve Japanese translation in concepts/scheduling/scheduler-perf-tuning/ (#17119) * ja-trans:conetent/ja/casestudies/nav (#18450) * Translate tasks/debug-application-cluster/debug-service/ in Japanese (#18395) * Translate tasks/debug-application-cluster/debug-service/ in Japanese * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Change all `Pods` to `Pod` and `Endpoints` to `Endpoint` * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update 
content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Updated content pointed out in review * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Apply suggestions from code review Co-Authored-By: inductor <kohei.ota@zozo.com> * Apply suggestions from review * Apply suggestions form review * Apply suggestions from review * Apply suggestions from review * Apply suggestions from code review Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/debug-application-cluster/debug-service.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: inductor <kohei.ota@zozo.com> Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> * Translate concepts/extend-kubernetes/api-extension/custom-resources/ into Japanese (#18200) * Translate concepts/extend-kubernetes/api-extension/custom-resources/ into Japanese * Apply suggestions from code review between L1 an L120 by oke-py Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Apply suggestions from code review by oke-py Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update CustomResourceDefinition not to localize into Japanese * Revert the link to customresourcedefinitions to English Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Apply suggestions from code review by oke-py and inductor Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> Co-Authored-By: inductor <kohei.ota@zozo.com> * Apply a suggestion from review by inductor * Apply a suggestion from code review by oke-py Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: inductor <kohei.ota@zozo.com> * Translate tasks/configure-pod-container/quality-service-pod/ into Japanese (#16173) * copy from content/en/docs/tasks/configure-pod-container/quality-service-pod.md to Ja * Translate tasks/configure-pod-container/quality-service-pod/ into Japanese Guaranteed, Burstable, BestEffortは用語として存在するので訳さない Signed-off-by: Takuma Hashimoto <takumaxd+github@gmail.com> * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * Update content/ja/docs/tasks/configure-pod-container/quality-service-pod.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> * Translate content/ja/docs/reference/kubectl/cheatsheet.md (#17739) (#18285) * Translate content/ja/docs/reference/kubectl/cheatsheet.md (#17739) * Translated kubectl cheet sheet. 
* Fix typos in content/ja/docs/reference/kubectl/cheatsheet.md (#17739) * Fix japanese style in content/ja/docs/reference/kubectl/cheatsheet.md * Fix typo in content/ja/docs/reference/kubectl/cheatsheet.md * Fix translation in content/ja/docs/reference/kubectl/cheatsheet.md * Fix typo in content/ja/docs/reference/kubectl/cheatsheet.md * Fix typo in content/ja/docs/reference/kubectl/cheatsheet.md * Modify translation for casestudies (#18767) * modify terminology * add ten * update translation * update * update * update * fix typo (#18769) * remove english comment (#18770) * ja-trans:conetent/ja/casestudies/spotify (#18451) * ja-trans: content/ja/case-studies/spotify * Update content/ja/case-studies/spotify/index.html Updated with the proposal from inductor Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Updated with inductor 's proposal Co-Authored-By: inductor <kohei.ota@zozo.com> * ja-trans: content/ja/case-studies/spotify * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> * Update content/ja/case-studies/spotify/index.html Co-Authored-By: inductor <kohei.ota@zozo.com> Co-authored-by: inductor <kohei.ota@zozo.com> * Translate Japanese headers (#18776) * translate headers * add index for references * Update content/ja/docs/setup/production-environment/tools/_index.md Co-Authored-By: Naoki Oketani <okepy.naoki@gmail.com> * translate controller Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> * ja-docs: translate install-kubeadm into Japanese (#18198) * ja-docs: translate install-kubeadm into Japanese * translate table title in install-kubeadm to Japanese * update kubeadm install doc * remove extra spaces * fix translation miss * translate url title into japanese * fix translation miss * remove line break in sentence and translate title * remove extra line break * remove extra line break * fix translation miss Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: Samuel Kihahu <kihahu@users.noreply.github.com> Co-authored-by: Takuma Hashimoto <takuma-hashimoto@freee.co.jp> Co-authored-by: Keita Akutsu <kakts.git@gmail.com> Co-authored-by: Masa Taniguchi <maabou512@gmail.com> Co-authored-by: Soto Sugita <sotoiwa@gmail.com> Co-authored-by: Kozzy Hasebe <48105562+hasebe@users.noreply.github.com> Co-authored-by: kazuaki harada <canhel.4suti50y.salamander@gmail.com> Co-authored-by: Shunsuke Miyoshi 
<s.miyoshi@jp.fujitsu.com> * delete zh SEE ALSO(51-54) (#18788) * Added missing brackets in markdown (#18783) * Fix broken links in api_changes doc (#18743) * fix jump (#18781) * fix redundant note (#18780) * Fix typo: default-manager -> default-scheduler (#18709) like #18649 #18708 * fix issue #18738 (#18773) Signed-off-by: Dominic Yin <yindongchao@inspur.com> * Correct description of kubectl (#18172) * Correct description of kubectl Given that `kubectl` is not a [command line interface (CLI)](https://en.wikipedia.org/wiki/Command-line_interface), I suggest calling it what it is -- a control utility (ctl = control). The term "tool" is commonly used in place of "utility," including the `kubectl` docs. A CLI presents the user with a command prompt at which the user can enter multiple command lines that a command-line interpreter interprets and processes. Think of `bash`, `emacs`, or a SQL shell. Since `kubectl` is not run in a shell, it is not a CLI. Here are related docs that correctly refer to `kubectl` as a "command-line tool": - https://kubernetes.io/docs/reference/tools/#kubectl - https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-kubectl - https://kubernetes.io/docs/tasks/tools/install-kubectl/ - https://kubernetes.io/docs/reference/kubectl/kubectl/ * Update content/en/docs/reference/kubectl/overview.md Co-Authored-By: Zach Corleissen <zacharysarah@users.noreply.github.com> Co-authored-by: Zach Corleissen <zacharysarah@users.noreply.github.com> * Add blog post: Reviewing 2019 in Docs (#18662) Tiny fix Feedback from onlydole Add missing link Incremental fixes Revise Jim's job title Update content/en/blog/_posts/2020-01-17-Docs-Review-2019.md Co-Authored-By: Celeste Horgan <celeste@cncf.io> Feedback from celeste, change date * Update OWNERS_ALIASES (#18803) * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md (#16869) * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * 
Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-authored-by: Bob Killen <killen.bob@gmail.com> Co-authored-by: Taylor Dolezal <onlydole@users.noreply.github.com> * blog: introduce CSI support for ephemeral inline volumes (#16832) * csi-ephemeral-inline-volumes: introduce CSI support for ephemeral inline volumes This was alpha in Kubernetes 1.15 and became beta in 1.16. Several CSI drivers already support it (soon...). * csi-ephemeral-inline-volumes: bump date and address feedback (NodeUnpublishVolume) * csi-ephemeral-inline-volumes: add examples and next steps * csi-ephemeral-inline-volumes: rename file, minor edits * csi-ephemeral-inline-volumes: include Docker example * Create 2019-12-10-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md (#18062) * Create 2019-12-10-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md * Update and rename 2019-12-10-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md to 2019-01-16-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md * Update 2019-01-16-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md * Update and rename 2019-01-16-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md to 2019-01-22-Gamified-Chaos-Engineering-Tool-for-Kubernetes.md Co-authored-by: Kaitlyn Barnard <kaitlynbarnard10@gmail.com> * Revert "Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md (#16869)" (#18805) This reverts commit 2c4545e105
. * add blog k8s on mips (#18795) * add blog k8s on mips * modify english title to chinese * modify some error * Remove user-journeys legacy content #18615 (#18779) * Use monospace for HostFolder and VM in the French Minikube setup guide. (#18749) * Add French version of persistent volume page concept page (#18706) * Add French version of persistent volume page concept page * Fix * Fix * Fix * Fix * sync content/zh/docs/reference/issues-security/ en zh (#18727) * update zh-translation: /docs/concepts/storage/volume-snapshots.md (#18650) * Clean up user journeys content for zh (#18815) * Followup fixes for: Add resource version section to api-concepts (#18069) * Followup fixes for: Add resource version section to api-concepts documentation * Apply feedback * Apply feedback * Switch paragraph to active voice * Add Community and Code of Conduct for ID (#18828) * Add additional ways to contribute part to update zh doc (#18762) * Add additional ways to contribute part to update zh doc * Add original English text * Update content/zh/docs/contribute/_index.md Co-Authored-By: chentanjun <tanjunchen20@gmail.com> Co-authored-by: chentanjun <tanjunchen20@gmail.com> * Clean up extensions/v1beta1 in docs (#18839) * fix an example path (#18848) * Translating network plugins (#17184) * Fix for a typo (#18822) * tą instalację -> tę instalację / (https://sjp.pwn.pl/poradnia/haslo/te-czy-ta;1598.html) (#18801) * Fix typo in Scalability section (#18866) The phrase `very larger` is not valid, it is supposed to be either `very large` or `larger`. Propose to have it `very large`. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Add Polish translation of Contribute index page (#18775) Co-Authored-By: Michał Sochoń <kaszpir@gmail.com> Co-authored-by: Michał Sochoń <kaszpir@gmail.com> * Clean up extensions/v1beta1 in docs (#18838) * Add Indonesian Manage Compute Resources page (#18468) * Add Indonesian Manage Compute Resources page * Updates to id Manage Compute Resources page * Add DaemonSet docs ID localization (#18632) Signed-off-by: giovanism <giovanism@outlook.co.id> * Fix typo in en/docs/contribute/style/content-guilde.md (#18862) * partial fix for SEE ALSO section under content/zh/docs/reference/setup-tools/kubeadm/generated/ need to be deleted #18411 (#18875) * See Also removed file 31 * see also removed file 32 * see also removed file 33 * see also removed file 34 * see also removed file 35 * Modify pod.md (#18818) website/content/ko/docs/concepts/workloads/pods/pod.md 23 line 쿠버네티스는는 -> 쿠버네티스는 modify * remove $ following the style guide (#18855) * Add Hyperlink to Kubernetes API (#18852) * Drive by copy edit of blog post (#18881) * Medium copy edit. 
* more fixes * Translate Events Calendar (#18860) * Adding Bahasa Indonesia translation for Device Plugin page #18676 (#18676) Co-Authored-By: Gede Wahyu Adi Pramana <tokekbesi@gmail.com> Co-authored-by: Gede Wahyu Adi Pramana <tokekbesi@gmail.com> * change escaped chars to markdown (#18858) Helps to keep doc clean for long term * Fix header layout on Safari (#18888) * Fix references to sig-docs-l10n-admins (#18661) * Add French deployment concept page (#18516) * Add French deployment concept page * Fix * Fix * Fix * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Fix * Fix * Fix * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/fr/docs/concepts/workloads/controllers/deployment.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Fix ZH security aliases (#18895) * disable simplytunde as an approver due to inactivity. (#18899) Always welcome to come back if able to become active again Signed-off-by: Brad Topol <btopol@us.ibm.com> * install container runtimes without prompts (#18893) In Kubernetes docs, all of the packages that are required to set up the Kubernetes are installed without requiring any prompts through the package manager (like apt or yum) except for the container runtimes. https://kubernetes.io/docs/setup/production-environment/container-runtimes/ So, it would be better to have these installations with prompts (yes) disabled. * Fix small typos (#18886) * Fix small typos Small typos noticed and fixed in: - configure-upgrade-etcd.md - reconfigure-kubelet.md Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Rephrase a paragraph on etcd upgrade en\docs\tasks\administer-cluster\configure-upgrade-etcd.md Following a suggestion in #18886, I've rephrased a sentence on etcd upgrade prerequisites. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Clean up extensions/v1beta1 in docs (#18841) * Update _index.md (#18825) * Run minikube docker-env in a shell-independent way (#18823) * doc: correct pv status for pv protection example. (#18816) * Small editorial fixes in glossary entries (#18807) * Small editorial fixes in glossary entries * Revert the wording in the glossary term for proxy * fix doc conflict regarding postStart (#18806) * kubeadm: improvements to the cert management documentation (#18397) - move the sections about custom certificates and external CA to the kubeadm-certs page - minor cleanups to the kubeadm-certs page, including updated output for the check-expiration command - link the implementation details page to the new locations for custom certs and external CA * fix doc conflict regarding postStart * Grammar (#18785) * grammar: 'to' distributes over 'or' * grammar: reword per app.grammarly.com * grammar: simplify from app.grammarly.com * spelling: etc. * feat: add ephermeral container approach inside pod debug page. (#18754) * doc: add pod security policy reference link to document. (#18729) * doc: add pod security policy reference link to document. * doc: add what's next for pod-security-policy ref. 
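For the container runtime entry above (#18893), a minimal sketch of what a prompt-free install looks like, assuming apt/yum as the package manager; the package names are illustrative, not the exact lines added to the docs:

```shell
# Debian/Ubuntu: -y answers the install prompt automatically
sudo apt-get update && sudo apt-get install -y containerd

# CentOS/RHEL: -y does the same for yum
sudo yum install -y containerd.io
```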
* Revise version requirements (#18688) Assume that the reader is running a version of Kubernetes that supports the Secret resource. * en: Remove kubectl duplicate example (#18656) With #16974 and the removal of --include-uninitialized flag, the second and third examples of kubectl delete become equal, thus leading to duplication and being confusing. Suggest to remove the duplicate and replace it with another example in the future if needed. Observed in v1.16 and v1.17 documentation. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Fix typo for tasks/access-kubernetes-api/configure-aggregation-layer.md (#18652) * Unify runtime references (#18493) - Use the glossary to correctly reference runtimes - Updated runtime class documentation for CRI-O - Removed rktlet from runtimes since its EOL Signed-off-by: Sascha Grunert <sgrunert@suse.com> * Clean up admission controller deprecation example (#18399) * sync zh-trans content/zh/docs/concepts/workloads/pods/ephemeral-containers.md (#18883) * Remove redundant information when deploy flannel on kubernetes include windows node (#18272) * sync zh-trans content/zh/docs/concepts/workloads/pods/pod-overview.md (#18882) * partial fix for for SEE ALSO section under content/zh/docs/reference/setup-tools/kubeadm/generated/ need to be deleted (#18879) * see also removed from file 36 * see also removed from file 37 * see also removed from file 38 * see also removed from file 39 * see also removed from file 40 * update zh content/zh/docs/contribute/style/write-new-topic.md (#18859) * sync zh-trans /docs/concepts/_index.md and /docs/concepts/example-concept-template.md (#18863) * See also removed file 56 & 57 (#18912) * see also removed file 56 * see also removed file 57 * Third Korean L10n Work For Release 1.17 (#18915) * Changed some words in the IPv4/IPv6 dual-stack korean doc. (#18668) * Update to Outdated files in dev-1.17-ko.3 branch. (#18580) * Translate content/ko/docs/concepts/services-networking/service in Korean (#18195) * Translate docs/tasks/access-application-cluster/port-forward-access-application-cluster.md in Korean (#18721) * Translate controllers/garbage-collection.md in Korean. (#18595) Co-Authored-by: Seokho Son <shsongist@gmail.com> Co-Authored-by: Lawrence Kay <lkay9495@hotmail.com> Co-Authored-by: Jesang Myung <jesang.myung@gmail.com> Co-Authored-by: Claudia J.Kang <claudiajkang@gmail.com> Co-Authored-by: Yuk, Yongsu <ysyukr@gmail.com> Co-Authored-By: June Yi <june.yi@samsung.com> Co-authored-by: Yuk, Yongsu <ysyukr@gmail.com> Co-authored-by: Seokho Son <shsongist@gmail.com> Co-authored-by: Lawrence Kay <me@lkaybob.pe.kr> Co-authored-by: Jesang Myung <jesang.myung@gmail.com> Co-authored-by: June Yi <june.yi@samsung.com> * clean up makefile, config (#18517) Added target for createversiondirs (shell script) in Makefile. updates for tagged release regenerate api ref, rm Makefile_temp add parens to pip check * Improve Russian translation of Home page (#17841) * Improve Russian translation of Home page * Update i18n/ru.toml Co-Authored-By: Slava Semushin <slava.semushin@gmail.com> * Update content/ru/_index.html Co-Authored-By: Slava Semushin <slava.semushin@gmail.com> * Update content/ru/_index.html Co-Authored-By: Slava Semushin <slava.semushin@gmail.com> Co-authored-by: Slava Semushin <slava.semushin@gmail.com> * update ref link for v1.16 (#18837) Related to issue #18820. 
remove links to prev API refs * Cleanup user journeys related configs and scripts (#18814) * See also removed file 81 to 85 (#18909) * see also removed file 81 * see also removed file 82 * see also removed file 83 * see also removed file 84 * see also removed file 85 * See also removed file 65 to 70 (#18908) * see also removed file 65 * see also removed file 66 * see also removed file 67 * see also removed file 68 * see also removed file 69 * see also removed file 70 * Translate Task index page into Polish (#18876) Co-Authored-By: Karol Pucyński <kpucynski@gmail.com> Co-Authored-By: Michał Sochoń <kaszpir@gmail.com> Co-authored-by: Karol Pucyński <9209870+kpucynski@users.noreply.github.com> Co-authored-by: Michał Sochoń <kaszpir@gmail.com> * Document dry-run authorization requirements (#18235) * Document dry-run write access requirement. - Add section on dry-run authorization - Refer to dry-run authorization for diff - Consistently hyphenate dry-run * Update content/en/docs/reference/using-api/api-concepts.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * reword storage release note to match the change in k/k PR #87090 (#18921) * sync zh-trans content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md (#18868) * See also removed file 60 to 63 (#18907) * see also removed file 60 * see also removed file 61 * see also removed file 62 * see also removed file 63 * See also removed file 91 to 95 (#18910) * see also removed file 91 * see also removed file 93 * see also removed file 94 * see also removed file 95 * content/zh/docs/concepts/workloads/pods/podpreset.md (#18870) * fix: fixed eating initial 2 spaces inside code. (#18914) * Update Calico section of kubeadm install guide (#18821) * Update Calico section of kubeadm install guide * Address review feedback * See also removed file 96 to 100 (#18911) * see also removed file 96 * see also removed file 97 * see also removed file 98 * see also removed file 99 * see also removed file 100 * repair zh docs in kubeadm (#18949) * repair zh docs about kubeadm (#18950) * Update apparmor.md (#18951) * Update basic-stateful-set.md (#18952) * Add missing hyperlink for pod-overhead (#18936) * Update service.md (#18480) make article reads more smoothly * zh-trans update content/zh/docs/concepts/workloads/controllers/deploy… (#18657) * zh-trans update content/zh/docs/concepts/workloads/controllers/deployment.md * zh-trans update content\zh\docs\concepts\workloads\controllers\deployment.md * Update source-ip documentation (#18760) * sync zh-trans /docs/concepts/workloads/pods/pod.md (#18880) * sync zh-trans /docs/concepts/workloads/controllers/cron-jobs.md and /docs/concepts/workloads/controllers/daemonset.md (#18864) * sync zh-trans content/zh/docs/concepts/workloads/controllers/ttlafterfinished.md (#18867) * Add a French version of Secret concept page (#18604) * Add a French version of Secret concept page * Fix * Fix * Update content/fr/docs/concepts/configuration/secret.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Fix * Update content/fr/docs/concepts/configuration/secret.md Co-Authored-By: Aurélien Perrier <aperrier@universe.sh> * Fix Co-authored-by: Tim Bannister <tim@scalefactory.com> Co-authored-by: Aurélien Perrier <aperrier@universe.sh> * (refactor): Corrections (grammatical) in service.md file (#18944) * Update service.md * Fixed the invaild changes Signed-off-by: Udit Gaurav <uditgaurav@gmail.com> * Update container-runtimes.md (#18608) for debian install of docker, 
also install gnupg2 for apt-key add to work * Fix that dual-stack does not require Kubenet specifically (#18924) * Fix that dual-stack does not require Kubenet specifically Rather it requires a network plugin that supports dual-stack, and others are available, including Calico. * Update content/en/docs/tasks/network/validate-dual-stack.md Added link to doc about network plugins Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Revert "Configurable Scaling for the HPA (#18157)" (#18963) This reverts commit 5dbfaafe1a.
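Regarding the container-runtimes change above (#18608), a hedged sketch of why gnupg2 matters on a minimal Debian image before `apt-key add` is run; the repository URL follows the Docker install flow that page describes:

```shell
# apt-key needs gpg, which minimal Debian images do not ship by default
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common

# with gnupg2 present, adding Docker's signing key succeeds
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
```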
* Update horizontal-pod-autoscale-walkthrough.md (#18960) Update command for creating php-apache deployment due to the following warning: `kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.` * doc: add link for type=LoadBalancer service in tutorial. (#18916) * Typo fix (#18830) * sync zh-trans content/zh/docs/concepts/workloads/controllers/statefulset.md (#18869) * Revise pull request template (#18744) * Revise pull request template * Reference compiled docs in PR template Refer readers to https://k8s.io/contribute/start/ This keeps the template short, and it lets Hugo use templating for the current version. * Update certificates.md (#18970) * Add web-ui-dashboard to French (#17974) * Add web-ui-dashboard to French * Update content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Fix * Fix * Fix * Update content/fr/docs/tasks/access-application-cluster/web-ui-dashboard.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix * Fix Co-authored-by: Tim Bannister <tim@scalefactory.com> * Added a translated code of conduct (#18981) * Added a translated code of conduct * fixed some minor mistakes and capitalization * Moved to informal speech * Translate the contribute advanced page to French (#13425) * Translate the contribute advanced page to French * Corrections * Correction * Correction * Correction * Correction * Correction * Fix typo in hello-minikube.md (#18991) * Add note for LB behaviour for cordoned nodes. (#18784) * Add note for LB behaviour for cordoned nodes. See also https://github.com/kubernetes/kubernetes/issues/65013 This is a reasonably common pitfall: `kubectl cordon <all nodes>` will also drop all LB traffic to the cluster, but this is not documented anywhere but in issues, and when found it is usually already too late. * Update with feedback * Add KIND as the options for spinning up a test kubernetes environment (#17860) * fix typo in /ja/docs/concepts/workloads/pods/init-containers (#18997) * hide some original comments in translate docs (#18986) * hide original comment * hide some original comments * Fix code of conduct title (#19006) * Added a note about built-in priority-classes (#18979) * Added a note about built-in priority-classes * Update content/en/docs/concepts/configuration/pod-priority-preemption.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Add description for TTL (#19001) * Fix whitespace on deployment page (#18990) * Add details to the API deprecations blog post (#19014) * Document list/map/structType and listMapKeys (#18977) These markers were introduced to describe topology of lists, maps, structs - primarily in support of server-side apply. Secondarily, a small typo fix :) * Remove "Unschedulable" pod condition type from the pod lifecycle docs (#18956) The pod lifecycle documentation erroneously indicated `Unschedulable` as a possible `type` of pod condition. That's not true. Only four condition types exist. The `Unschedulable` value is not a type, but one of the possible reasons of the `PodScheduled` condition type.
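For the HPA walkthrough change above (#18960), a sketch of one possible replacement for the deprecated `kubectl run --generator` invocation; the image name is the one the walkthrough uses and is assumed here for illustration:

```shell
# create the php-apache Deployment without the deprecated generator flag
kubectl create deployment php-apache --image=k8s.gcr.io/hpa-example

# autoscale it as in the rest of the walkthrough
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```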
* Revise “Encrypting Secret Data at Rest” (#18810) * Drop reference to old Kubernetes versions At the time of writing, Kubernetes v1.13 is the oldest supported version, and encryption-at-rest is no longer alpha. * Tidy whitespace * Add table caption * Set metadata for required Kubernetes version * maintain the current relative path when switching to other site versions (#18871) * Update kubectl create configmap section (#18885) * Add common examples to Service Topology documentation (#18712) * service topology: add missing 'enabling service topology' page Signed-off-by: Andrew Sy Kim <kiman@vmware.com> * service topology: add common examples Signed-off-by: Andrew Sy Kim <kiman@vmware.com> * updating contrib for ref docs (#18787) more cleanup * fix translate docs format (#19018) * Update nodes.md (#19019) * Translate Contribute index page into Russian (#19022) * Added german translation for Addons page (#19010) * Added german translation for Addons page * Smaller adjustments * removed a english leftover-sentence * consistent spelling of "Add-Ons" * Removed english entry for CoreDNS * Update content/de/docs/concepts/cluster-administration/addons.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Translated a heading Co-authored-by: Tim Bannister <tim@scalefactory.com> * (fix) Removed `-n test` from `kubectl get pv` command (#18877) - PV are cluster scoped rather than namespaced scope - So, there is no need to list it by namespace Signed-off-by: Aman Gupta <aman.gupta@mayadata.io> * Link to setup page about Kind (#18996) Link from /docs/setup/ to /docs/setup/learning-environment/kind/ now that the target page exists. * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md (#18808) * Create Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * 
Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update 
content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Taylor Dolezal <onlydole@users.noreply.github.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update content/en/blog/_posts/Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-Authored-By: Bob Killen <killen.bob@gmail.com> * Update Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md * Update and rename Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md to 2020-02-07-Deploying-External-OpenStack-Cloud-Provider-With-Kubeadm.md Co-authored-by: Bob Killen <killen.bob@gmail.com> Co-authored-by: Taylor Dolezal <onlydole@users.noreply.github.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> Co-authored-by: Kaitlyn Barnard <kaitlynbarnard10@gmail.com> * Revise glossary entry for Device Plugin (#16291) * Document control plane monitoring (#17578) * Document control plane monitoring * Update content/en/docs/concepts/cluster-administration/monitoring.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/docs/concepts/cluster-administration/monitoring.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Merge controller-metrics.md into monitoring.md Co-authored-by: Tim Bannister <tim@scalefactory.com> * Document none driver compatibility with non docker runtime. 
(#17952) * Refined unclear sentence on 3rd party dependencies (#18015) * Refined unclear sentence on 3rd party dependencies I reworded the sentence on third party dependencies a bit in order to make it more sound * Update content/en/docs/concepts/security/overview.md Sounds much better Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Improve network policies concept (#18091) * Adopt website style guidelines * Tweak wording Co-Authored-By: cmluciano <cmluciano@cruznet.org> * Make sample NetworkPolicies downloadable Co-authored-by: cmluciano <cmluciano@cruznet.org> * clean up secret generators (#18320) * Use built-in version check & metadata (#18542) * Reword kubelet live reconfiguration task (#18629) - Revise version requirements - Use glossary tooltips in summary - Use sentence case for headings - Write kubelet in lowercase where appropriate - Add “What's next” section * fix: add dns search record limit note. (#18913) * Remove duplicate content: Roles & Responsibilities (#18920) * Remove duplicate content: Roles & Responsibilities Signed-off-by: Celeste <celeste@cncf.io> Address feedback Signed-off-by: Celeste <celeste@cncf.io> * Apply suggestions from review Co-Authored-By: Zach Corleissen <zacharysarah@users.noreply.github.com> * Link to contribution guidelines Signed-off-by: Celeste Horgan <celeste@cncf.io> * Address PR feedback Signed-off-by: Celeste Horgan <celeste@cncf.io> Co-authored-by: Zach Corleissen <zacharysarah@users.noreply.github.com> * Fix of pull request #18960 (#18974) * Fix of pull request #18960 * Add yaml configuration file snippets * Remove redundant code snippet for command * Update cheatsheet.md (#18975) * Update cheatsheet.md "List all pods in the namespace, with more details" command corrected by adding --all-namespaces * Update content/en/docs/reference/kubectl/cheatsheet.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Correct description of Knitter CNI plugin (#18983) * Add Elastic metricbeat to examples of DaemonSets and rename logstash (#19024) * Add Elastic metricbeat to examples of DaemonSets The URL points to the docs related to how to configure metricbeat on k8s * Filebeat is the next thing * Separated commands from output (#19023) * Update KubeCon URLs (#19027) The URLs had changed (and were being redirected). Also, added parameters to better identify the traffic source. * remove see also and close issue (#19032) * sync zh-trans content/zh/docs/concepts/workloads/controllers/garbage-collection.md (#18865) * zh trans /docs/reference/access-authn-authz/extensible-admission-controllers.md (#18856) * Update zh/docs/concepts/services-networking/dns-pod-service.md#pods (#18992) * Adding contribution best practice in contribute docs (#18059) * Add kubectl patch example with quotes on Windows (#18853) * Add kubectl patch example with quotes on Windows When running the `kubectl patch` example, on Windows systems you get an error when passing the patch request in single quotes. Passing it in double quotes with the inner ones escaped produced the desired behavior as is in the example given for Linux systems. I've added a small note for Windows users to have that in mind. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Use Hugo note shortcode Windows note is placed inside a [shortcode](https://kubernetes.io/docs/contribute/style/style-guide/#shortcodes) to be consistent with the style guide. 
Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Remove shell Markdown syntax I've removed the shell syntax from the Windows example and have changed the description to be the same as the one used in the [jsonpath](https://kubernetes.io/docs/reference/kubectl/jsonpath/) document to be more consistent. The jsonpath example uses cmd syntax, though it is not inside a note shortcode, therefore I've opted out of using any syntax as it seems to break rendering inside the shortcode. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Add cmd markdown syntax and fix ordered list I've tested this locally with `make docker-serve` on my Linux machine and finally things are looking better; I've managed to address these two issues: - the Windows example is now inside the `note` shortcode and the cmd syntax renders correctly on the page - the list of steps broke after the first one, so I've indented a paragraph and now the steps are in the expected order Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Remove command prompt from example According to the [style guide](https://kubernetes.io/docs/contribute/style/style-guide/#don-t-include-the-command-prompt), the command prompt should not be included when showing an example. This commit removes it for consistency with the style guide. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * cleanup /docs/concepts/workloads/pods/pod-lifecycle/ (#19009) * update nodes.md (#18987) Changed “用量低” to “可用量低” to avoid ambiguity * Remove command prompt from Windows example (#18906) * Remove command prompt from Windows example According to the [style guide](https://kubernetes.io/docs/contribute/style/style-guide/#don-t-include-the-command-prompt), the command prompt should not be included in the examples. Removing the Windows command prompt from the jsonpath example. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Put Windows example inside note shortcode I'm putting the Windows example in a Hugo note shortcode to be consistent with the rest of the documentation. Signed-off-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> * Updated CHANGELOG-11 link (#19036) * update command used to create deployment (#19005) The previous one was showing a deprecation warning when used.
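For the Windows patch-quoting entries above (#18853, #18906), a small sketch of the quoting difference; the deployment, container, and image names are illustrative:

```shell
# Linux/macOS shells: single quotes keep the JSON intact
kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo-ctr-2", "image": "redis"}]}}}}'

# Windows cmd: wrap the patch in double quotes and escape the inner ones
kubectl patch deployment patch-demo --patch "{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"patch-demo-ctr-2\", \"image\": \"redis\"}]}}}}"
```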
* Update Korean localization guide (#19004) rev1-Update Korean localization guide * docs: fix broken etcd's official documents link (#19021) * Update automated-tasks-with-cron-jobs.md (#19043) Co-authored-by: Xin Chen <xchen@opq.com.au> Co-authored-by: Tim Bannister <tim@scalefactory.com> Co-authored-by: lemon <lemonli@users.noreply.github.com> Co-authored-by: Slava Semushin <slava.semushin@gmail.com> Co-authored-by: Olivier Cloirec <5033885+clook@users.noreply.github.com> Co-authored-by: inductor <kohei.ota@zozo.com> Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: Samuel Kihahu <kihahu@users.noreply.github.com> Co-authored-by: Takuma Hashimoto <takuma-hashimoto@freee.co.jp> Co-authored-by: Keita Akutsu <kakts.git@gmail.com> Co-authored-by: Masa Taniguchi <maabou512@gmail.com> Co-authored-by: Soto Sugita <sotoiwa@gmail.com> Co-authored-by: Kozzy Hasebe <48105562+hasebe@users.noreply.github.com> Co-authored-by: kazuaki harada <canhel.4suti50y.salamander@gmail.com> Co-authored-by: Shunsuke Miyoshi <s.miyoshi@jp.fujitsu.com> Co-authored-by: hato wang <26351545+wyyxd2017@users.noreply.github.com> Co-authored-by: xieyanker <xjsisnice@gmail.com> Co-authored-by: zhouya0 <50729202+zhouya0@users.noreply.github.com> Co-authored-by: littleboy <zhaoze01@inspur.com> Co-authored-by: camper42 <camper.xlii@gmail.com> Co-authored-by: Dominic Yin <hi@ydcool.me> Co-authored-by: Steve Bang <stevebang@gmail.com> Co-authored-by: Zach Corleissen <zacharysarah@users.noreply.github.com> Co-authored-by: Ryan McGinnis <ryanmcginnis@users.noreply.github.com> Co-authored-by: Shunde Zhang <shunde.p.zhang@gmail.com> Co-authored-by: Bob Killen <killen.bob@gmail.com> Co-authored-by: Taylor Dolezal <onlydole@users.noreply.github.com> Co-authored-by: Patrick Ohly <patrick.ohly@intel.com> Co-authored-by: Eugenio Marzo <eugenio.marzo@yahoo.it> Co-authored-by: Kaitlyn Barnard <kaitlynbarnard10@gmail.com> Co-authored-by: TimYin <shiguangyin@inspur.com> Co-authored-by: Shivang Goswami <shivang.goswami@infosys.com> Co-authored-by: Fabian Baumanis <fabian.baumanis@gmx.de> Co-authored-by: Rémy Léone <remy.leone@gmail.com> Co-authored-by: chentanjun <tanjunchen20@gmail.com> Co-authored-by: helight <helight@helight.info> Co-authored-by: Jie Shen <drfish.me@gmail.com> Co-authored-by: Joe Betz <jpbetz@google.com> Co-authored-by: Danni Setiawan <danninov@users.noreply.github.com> Co-authored-by: GoodGameZoo <gaoguangze111@gmail.com> Co-authored-by: makocchi <makocchi@gmail.com> Co-authored-by: babang <prabangkoro@users.noreply.github.com> Co-authored-by: Sharjeel Aziz <sharjeel.aziz@gmail.com> Co-authored-by: Wojtek Cichoń <wojtek.cichon@protonmail.com> Co-authored-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> Co-authored-by: Maciej Filocha <12587791+mfilocha@users.noreply.github.com> Co-authored-by: Michał Sochoń <kaszpir@gmail.com> Co-authored-by: Yudi A Phanama <11147376+phanama@users.noreply.github.com> Co-authored-by: Giovan Isa Musthofa <giovanism@outlook.co.id> Co-authored-by: Park Sung Taek <tjdxor8223@gmail.com> Co-authored-by: Kyle Smith <kylessmith@protonmail.com> Co-authored-by: craigbox <craig.box@gmail.com> Co-authored-by: Afrizal Fikri <laser.survivor@gmail.com> Co-authored-by: Gede Wahyu Adi Pramana <tokekbesi@gmail.com> Co-authored-by: Anshu Prateek <333902+anshprat@users.noreply.github.com> Co-authored-by: Sergei Zyubin <sergei@crate.io> Co-authored-by: Christoph Blecker <admin@toph.ca> Co-authored-by: Brad Topol <btopol@us.ibm.com> Co-authored-by: Venkata Harshavardhan Reddy Allu 
<venkataharshavardhan_ven@srmuniv.edu.in> Co-authored-by: KYamani <yamani.kamel@gmail.com> Co-authored-by: Trishank Karthik Kuppusamy <33133073+trishankatdatadog@users.noreply.github.com> Co-authored-by: Jacky Wu <Colstuwjx@gmail.com> Co-authored-by: Gerasimos Dimitriadis <gedimitr@gmail.com> Co-authored-by: Rajat Toshniwal <rnt.rajat@gmail.com> Co-authored-by: Josh Soref <jsoref@users.noreply.github.com> Co-authored-by: Sascha Grunert <sgrunert@suse.com> Co-authored-by: wawa <xiaozhang0210@hotmail.com> Co-authored-by: Claudia J.Kang <claudiajkang@gmail.com> Co-authored-by: Yuk, Yongsu <ysyukr@gmail.com> Co-authored-by: Seokho Son <shsongist@gmail.com> Co-authored-by: Lawrence Kay <me@lkaybob.pe.kr> Co-authored-by: Jesang Myung <jesang.myung@gmail.com> Co-authored-by: June Yi <june.yi@samsung.com> Co-authored-by: Karen Bradshaw <kbhawkey@gmail.com> Co-authored-by: Alexey Pyltsyn <lex61rus@gmail.com> Co-authored-by: Karol Pucyński <9209870+kpucynski@users.noreply.github.com> Co-authored-by: Julian V. Modesto <julianvmodesto@gmail.com> Co-authored-by: Jeremy L. Morris <jeremylevanmorris@gmail.com> Co-authored-by: Casey Davenport <caseydavenport@users.noreply.github.com> Co-authored-by: zhanwang <zhanw15@gmail.com> Co-authored-by: wwgfhf <51694849+wwgfhf@users.noreply.github.com> Co-authored-by: harleyliao <357857613@qq.com> Co-authored-by: ten2ton <50288981+ten2ton@users.noreply.github.com> Co-authored-by: Aurélien Perrier <aperrier@universe.sh> Co-authored-by: UDIT GAURAV <35391335+uditgaurav@users.noreply.github.com> Co-authored-by: Rene Luria <rene@luria.ch> Co-authored-by: Neil Jerram <neiljerram@gmail.com> Co-authored-by: Arjun <arjunrn@users.noreply.github.com> Co-authored-by: Katarzyna Kańska <katarzyna.m.kanska@gmail.com> Co-authored-by: Laurens Versluis <lfdversluis@users.noreply.github.com> Co-authored-by: Ray76 <rayfoo55@gmail.com> Co-authored-by: Alexander Zimmermann <7714821+alexzimmer96@users.noreply.github.com> Co-authored-by: Christian Meter <cmeter@googlemail.com> Co-authored-by: MMeent <boekewurm@gmail.com> Co-authored-by: RA489 <rohit.anand@india.nec.com> Co-authored-by: Akira Tanimura <autopp.inc@gmail.com> Co-authored-by: Patouche <Patouche@users.noreply.github.com> Co-authored-by: Jordan Liggitt <jordan@liggitt.net> Co-authored-by: Maria Ntalla <maria.ntalla@gmail.com> Co-authored-by: Marko Lukša <marko.luksa@gmail.com> Co-authored-by: John Morrissey <jwm@horde.net> Co-authored-by: Andrew Sy Kim <kim.andrewsy@gmail.com> Co-authored-by: ngsw <ngsw@ngsw.jp> Co-authored-by: Aman Gupta <aman.gupta@mayadata.io> Co-authored-by: Marek Siarkowicz <marek.siarkowicz@protonmail.com> Co-authored-by: tom1299 <tom1299@users.noreply.github.com> Co-authored-by: cmluciano <cmluciano@cruznet.org> Co-authored-by: Celeste Horgan <celeste@cncf.io> Co-authored-by: Prasad Honavar <prasadhonavar@gmail.com> Co-authored-by: Sam <sammcj@users.noreply.github.com> Co-authored-by: Victor Martinez <victormartinezrubio@gmail.com> Co-authored-by: Dan Kohn <dan@linuxfoundation.org> Co-authored-by: vishakha <54327666+vishakhanihore@users.noreply.github.com> Co-authored-by: liyinda246 <liyinda0000@163.com> Co-authored-by: Kabir Kwatra <kabir@kwatra.me> Co-authored-by: Armand Grillet <2117580+armandgrillet@users.noreply.github.com> Co-authored-by: Junwoo Ji <jydrogen@gmail.com> Co-authored-by: rm <rajib.jolite@gmail.com> * Revert "Repair and sync the dev-1.18 branch" (#19228) * Add capture statement (#19237) * Update hugepages documentation (#19008) * Update hugepages documentation - described support for 
multiple huge page sizes - described container isolation of the huge pages * Add HugePageStorageMediumSize description * update description for container isolation of hugepages Signed-off-by: Byonggon Chun <bg.chun@samsung.com> Co-authored-by: Byonggon Chun <bg.chun@samsung.com> * User documentation for Priority and Fairness (/flowcontrol) API (#19319) Adds an entry to the Concepts section that gives an overview of the feature and builds upon the generated API documentation. Also adds a Glossary entry for shuffle sharding. Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-Authored-By: Mike Spreitzer <mspreitz@us.ibm.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> Co-authored-by: Mike Spreitzer <mspreitz@us.ibm.com> * doc: ContainerD on Windows alpha (#19208) * Taint based eviction promoted to GA in 1.18 (#19302) * Updating Windows RunAsUserName page to reflect feature going GA (#19016) * Document server-side dry-run is GA in 1.18 (#19549) * Update dry-run docs. * Promote server-side dry-run feature state to stable in 1.18 * Replace --dry-run with --dry-run=server|client|none accordingly * Update server-side dry-run blog post * Update content/en/docs/reference/using-api/api-concepts.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Address comments * Add note at end of server-side dry-run blogpost Co-authored-by: Tim Bannister <tim@scalefactory.com> * ServiceAccountIssuerDiscovery: Add user facing documentation (#19328) Extends the documentation on configuring service accounts to include details on the ServiceAccountIssuerDiscovery feature. * Immutable secrets doc (#19297) * Placeholder PR for CSIDriver promotion to GA (#19354) * pod-overhead: updates for beta (#19059) * pod-overhead: updates for beta Signed-off-by: Eric Ernst <eric.ernst@intel.com> * pod-overhead: update documentation for beta Signed-off-by: Eric Ernst <eric@amperecomputing.com> * Update Topology Manager for 1.18 (#19050) * [WIP] Update Topology Manager for 1.18 Move from Alpha to Beta. Remove Known Limitation - fixed in 1.18. * Nit: Update content/en/docs/tasks/administer-cluster/topology-manager.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Add section on feature gate with version guidance. * Add Known Limitation on Memory/Hugepages * Update content/en/docs/tasks/administer-cluster/topology-manager.md Clean up wording of known limitations Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Edits to feature gate description based on review * Up date feature gates table. Co-authored-by: Tim Bannister <tim@scalefactory.com> * Updating docs for EndpointSlice changes in 1.18 (#19316) * Adding documentation to cover ingress v1beta updates (#19231) Co-authored-by: Rob Scott <robertjscott@google.com> * Raw block volume GA (#19338) * Raw block volume GA Feature BlockVolume reaches GA in 1.17 * Add volumeMode overview * Promote Pod Topology Spread to Beta (#18969) * Promote PodTopologySpread (a.k.a EvenPodsSpread) to beta in 1.18 * address comments * Default Topology Spread constraints (#19170) * Add usage of default constraints for Pod Topology Spread. 
Signed-off-by: Aldo Culquicondor <acondor@google.com> * Add apiVersion and kind Signed-off-by: Aldo Culquicondor <acondor@google.com> * Unify debug pod docs and update for `kubectl alpha debug` (#19093) Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Adding AppProtocol documentation for 1.18 (#19317) * Update kubeadm for Windows (#19217) * Start updates for DaemonSet approach * Rough first draft * Remove old image assets * Document how to fetch the correct version of kube-proxy * Remove InstallNSSM script It was merged into the PrepareNode script * Update links to sig-windows-tools release files Using the latest tag so we can continue to update things as needed without needing to keep updating the docs * Add upgrade tutorial for Windows kubeadm nodes * Clarify which machine each command should be run from * Add link from main tutorial, small edits - Rename file / title - Remove reviewers * Remove IP management section User can just rely on the defaults from kube-proxy and flannel now, or change them in the same manner as on Linux * Apply suggestions from code review Co-Authored-By: Tim Bannister <tim@scalefactory.com> * More review updates * Switch to task template * Apply suggestions from code review Co-Authored-By: Tim Bannister <tim@scalefactory.com> * More review feedback * Adjust upgrading title * Additional edits * Move to kubeadm directory * Remove stale link This guide never had any real content about the pause image * Add redirect from old url * Fix weights * Update Windows intro * Apply suggestions from code review Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Fix typo * Add beta markers Co-authored-by: Tim Bannister <tim@scalefactory.com> * Document using multiple scheduler profiles (#19172) * Add instructions for using multiple scheduling profiles. Signed-off-by: Aldo Culquicondor <acondor@google.com> * Move scheduling policies and profiles to Reference Signed-off-by: Aldo Culquicondor <acondor@google.com> * Renames and nits Signed-off-by: Aldo Culquicondor <acondor@google.com> * Fix links and grammar Signed-off-by: Aldo Culquicondor <acondor@google.com> * Fix link and flag usage Signed-off-by: Aldo Culquicondor <acondor@google.com> * Update PVCDataSource to GA (#19318) Update docs to reflect GA status for PVCDataSources in the 1.18 release. 
* Update scheduler framework document (#19606) * Update scheduler framework document * address comments * Add page redirect and keep old link anchors * address comment * Document recursive chown feature (#19391) * Add placeholder docs for recursive chown feature * Update content/en/docs/concepts/policy/pod-security-policy.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/docs/concepts/policy/pod-security-policy.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Address feedback * Move recursive chown docs to correct place * Address review feedback * Apply suggestions from code review Co-Authored-By: Tim Bannister <tim@scalefactory.com> * move bit about ephemental volumes to note * Update content/en/docs/tasks/configure-pod-container/security-context.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Doc update for GMSA promotion to Stable (#19349) Signed-off-by: Deep Debroy <ddebroy@docker.com> * Document generic data sources feature (#19315) * Document AnyVolumeDataSource feature gate * Update content/en/docs/reference/command-line-tools-reference/feature-gates.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Remove link to deprecated cluster monitoring (#18193) * CertificateSigningRequest doc updates (#19290) * Add CSR doc outline * Add CertificateSigningRequest document content Co-authored-by: James Munnelly <james.munnelly@jetstack.io> * Update Docker installation instructions to use v19.03.8 (#19643) * kubeadm: update upgrade documentation for 1.18 (#19551) * 1.18 Server-side apply (#19286) * Clarify content-type and tracking of all objects * Update content/en/docs/reference/using-api/api-concepts.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * SSA: Improve a couple of sentences Co-authored-by: Kevin Wiesmueller <kwiesmueller@seibert-media.net> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Drop feature availability from Reserve Compute Resources (#19593) * Drop feature availability section This section does not fit with the website style guide. * State task version requirements Follow website style guide for stating these. * Revise CertificateSigningRequest documentation (#19698) * Revise CertificateSigningRequest approval & rejection details API clients can deny CSRs. Document this and tidy the page. "There is currently not a mechanism for a signer implementation to report its inability to sign a request." seemed misleading; clients can set status. * Tidy page Co-Authored-By: Jordan Liggitt <jordan@liggitt.net> Co-authored-by: Jordan Liggitt <jordan@liggitt.net> * Remove trailing spaces from it documents (#16790) * fix format error (#19118) * Add a theme-color (#18976) * Fourth Korean L10n Work For Release 1.17 (#19127) * translate assign-pod-node.md (#18955) * correct the misspelling (#19063) * Update to Outdated files in dev-1.17-ko.4 branch. 
(#19002) Co-Authored-By: KimMJ <a01083612486@gmail.com> Co-Authored-By: forybm <forybm1@naver.com> Co-Authored-By: Yuk, Yongsu <ysyukr@gmail.com> Co-authored-by: KimMJ <a01083612486@gmail.com> Co-authored-by: forybm <forybm1@naver.com> Co-authored-by: Yuk, Yongsu <ysyukr@gmail.com> * hidden original annotation (#19126) * Correct note shortcode for run-replicated-stateful-application.md (#19132) * Update Korean glossary for l10n (#19094) * Update Korean glossary for l10n * Update content/ko/docs/contribute/localization_ko.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Issue with k8s.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/ (#18560) * Add localization guide specific to Polish language (#18985) Co-Authored-By: Karol Pucyński <kpucynski@gmail.com> Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-Authored-By: Michał Sochoń <kaszpir@gmail.com> Co-authored-by: Karol Pucyński <9209870+kpucynski@users.noreply.github.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> Co-authored-by: Michał Sochoń <kaszpir@gmail.com> * Polish zh translation for taint-and-toleration (#19133) * Add missing "in" for DeploymentSpec link (#19135) * Add latest-semver shortcode (#19110) * Remove extra note in note shortcode (#19131) * Remove extra note in note shortcode * Translate Inline Markdown and HTML section * update zh-trans content/zh/docs/concepts/policy/resource-quotas.md (#19037) * A few tweaks to wording (#19136) Just a few tweaks where words were missing, etc. * Prune Bradamant3 from permissions (#19084) * Translate Start contributing page into Russian (#19124) * Upgrade command of homebrew installs minikube (#18281) * Update installation of minikube on macOS (#19143) Installation through brew cask install minikube is no longer supported, as there is not such a binary. Corrected to brew install minikube works. * Tidy “Participating in SIG Docs” (#19151) - Fix broken link - Move localization work from Reviewers group to Anyone - Minor other tidying * Change Kontainer to Node (#19105) * Translate Participating in SIG Docs page into Russian (#19150) * Translate Participating in SIG Docs page into Russian * Fixes * Second Japanese l10n work for release-1.16 (#19157) * update install kubeadm related doc (#18826) * modify terminology mechanism (#19148) * latin phrase removed (#19180) * kubeadm: add TS guide note about CoreOS read-only /usr (#19166) * kubeadm: add TS guide note about CoreOS read-only /usr * Update content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Latin Abbreviations "vs" Updated to "versus" (#19181) * grep -lR ' vs ' ./content/en/docs | xargs sed -i '' -e 's/ vs / versus /g' * Update content/en/docs/concepts/configuration/overview.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Update content/en/docs/concepts/policy/resource-quotas.md Co-Authored-By: Tim Bannister <tim@scalefactory.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Announce the contributor summit schedule (#19174) * Announce the contributor summit schedule Signed-off-by: Jorge O. Castro <jorgec@vmware.com> * Fix minor nits found during review Signed-off-by: Jorge O. Castro <jorgec@vmware.com> * Update content/en/blog/_posts/2020-02-18-Contributor-Summit-Amsterdam-Schedule-Announced.md Make the map URL easier to read. 
Co-Authored-By: Tim Bannister <tim@scalefactory.com> * Add contributor summit image Signed-off-by: Jorge O. Castro <jorgec@vmware.com> Co-authored-by: Tim Bannister <tim@scalefactory.com> * Fix Invalid link in readme section (#19178) * Fix inital alpha version of ValidateProxyRedirects (#19192) See: https://github.com/kubernetes/kubernetes/pull/88260 * Add missing space in documentation line (#19013) * Reword “Creating a single control-plane cluster with kubeadm” (#18939) * Consolidate words of caution about Pod network * Tweak wording - use tooltips - fix a TODO hyperlink - adopt style guidelines * Revise prerequisites for kubeadm * Rework page structure - Replace some headings with anchor elements (preserving inbound links) - Use a "discussion" section for the discussion part of the page. - Make Feedback be a part of the What's Next section - Skip mentioning Docker in a logging context; provide generic signposting instead. - Update overview - Document limitations and fix link to HA topology - Fixes for styling * Redo network plugin info * Use glossary tooltips to introduce terms * Kill all federation v1 related tasks pages. (#17949) * Resource name constraints (1) (#19106) xref: #17969, #19099, #18746 * doc: remove tasks federation index from tasks main page. (#19205) * Update output of creating replicaset in controllers/replicaset (#19088) * Fix Invalid link in readme (en) (#19204) * Added 'ClaimRef' to make documentation clearer (#17914) * added final period (.) * removed backticks Co-authored-by: Leonardo Di Donato <leodidonato@gmail.com> Signed-off-by: Dan POP <dan.papandrea@sysdig.com> Co-authored-by: Leo Di Donato <leodidonato@gmail.com> * Change back cronjob timezone note to UTC (#18715) * Fix ambiguous translation in kubeadm.md (#19201) * Fix tooltip (#19794) * Add kubectl diff in kustomize (#19586) * Tidy RuntimeClass page for v1.18 (#19578) * Tidy RuntimeClass page for v1.18 * Reword heading Co-authored-by: Irvi Firqotul Aini <irvi.fa@gmail.com> Co-authored-by: zhouya0 <50729202+zhouya0@users.noreply.github.com> Co-authored-by: Lubomir I. Ivanov <lubomirivanov@vmware.com> Co-authored-by: Kubernetes Prow Robot <k8s-ci-robot@users.noreply.github.com> Co-authored-by: Arjun <arjunrn@users.noreply.github.com> Co-authored-by: Lubomir I. 
Ivanov <neolit123@gmail.com> Co-authored-by: Xin Chen <xchen@opq.com.au> Co-authored-by: Tim Bannister <tim@scalefactory.com> Co-authored-by: lemon <lemonli@users.noreply.github.com> Co-authored-by: Slava Semushin <slava.semushin@gmail.com> Co-authored-by: Olivier Cloirec <5033885+clook@users.noreply.github.com> Co-authored-by: inductor <kohei.ota@zozo.com> Co-authored-by: Naoki Oketani <okepy.naoki@gmail.com> Co-authored-by: Samuel Kihahu <kihahu@users.noreply.github.com> Co-authored-by: Takuma Hashimoto <takuma-hashimoto@freee.co.jp> Co-authored-by: Keita Akutsu <kakts.git@gmail.com> Co-authored-by: Masa Taniguchi <maabou512@gmail.com> Co-authored-by: Soto Sugita <sotoiwa@gmail.com> Co-authored-by: Kozzy Hasebe <48105562+hasebe@users.noreply.github.com> Co-authored-by: kazuaki harada <canhel.4suti50y.salamander@gmail.com> Co-authored-by: Shunsuke Miyoshi <s.miyoshi@jp.fujitsu.com> Co-authored-by: hato wang <26351545+wyyxd2017@users.noreply.github.com> Co-authored-by: xieyanker <xjsisnice@gmail.com> Co-authored-by: littleboy <zhaoze01@inspur.com> Co-authored-by: camper42 <camper.xlii@gmail.com> Co-authored-by: Dominic Yin <hi@ydcool.me> Co-authored-by: Steve Bang <stevebang@gmail.com> Co-authored-by: Zach Corleissen <zacharysarah@users.noreply.github.com> Co-authored-by: Ryan McGinnis <ryanmcginnis@users.noreply.github.com> Co-authored-by: Shunde Zhang <shunde.p.zhang@gmail.com> Co-authored-by: Bob Killen <killen.bob@gmail.com> Co-authored-by: Taylor Dolezal <onlydole@users.noreply.github.com> Co-authored-by: Patrick Ohly <patrick.ohly@intel.com> Co-authored-by: Eugenio Marzo <eugenio.marzo@yahoo.it> Co-authored-by: Kaitlyn Barnard <kaitlynbarnard10@gmail.com> Co-authored-by: TimYin <shiguangyin@inspur.com> Co-authored-by: Shivang Goswami <shivang.goswami@infosys.com> Co-authored-by: Fabian Baumanis <fabian.baumanis@gmx.de> Co-authored-by: Rémy Léone <remy.leone@gmail.com> Co-authored-by: chentanjun <tanjunchen20@gmail.com> Co-authored-by: helight <helight@helight.info> Co-authored-by: Jie Shen <drfish.me@gmail.com> Co-authored-by: Joe Betz <jpbetz@google.com> Co-authored-by: Danni Setiawan <danninov@users.noreply.github.com> Co-authored-by: GoodGameZoo <gaoguangze111@gmail.com> Co-authored-by: makocchi <makocchi@gmail.com> Co-authored-by: babang <prabangkoro@users.noreply.github.com> Co-authored-by: Sharjeel Aziz <sharjeel.aziz@gmail.com> Co-authored-by: Wojtek Cichoń <wojtek.cichon@protonmail.com> Co-authored-by: Mariyan Dimitrov <mariyan.dimitrov@gmail.com> Co-authored-by: Maciej Filocha <12587791+mfilocha@users.noreply.github.com> Co-authored-by: Michał Sochoń <kaszpir@gmail.com> Co-authored-by: Yudi A Phanama <11147376+phanama@users.noreply.github.com> Co-authored-by: Giovan Isa Musthofa <giovanism@outlook.co.id> Co-authored-by: Park Sung Taek <tjdxor8223@gmail.com> Co-authored-by: Kyle Smith <kylessmith@protonmail.com> Co-authored-by: craigbox <craig.box@gmail.com> Co-authored-by: Afrizal Fikri <laser.survivor@gmail.com> Co-authored-by: Gede Wahyu Adi Pramana <tokekbesi@gmail.com> Co-authored-by: Anshu Prateek <333902+anshprat@users.noreply.github.com> Co-authored-by: Sergei Zyubin <sergei@crate.io> Co-authored-by: Christoph Blecker <admin@toph.ca> Co-authored-by: Brad Topol <btopol@us.ibm.com> Co-authored-by: Venkata Harshavardhan Reddy Allu <venkataharshavardhan_ven@srmuniv.edu.in> Co-authored-by: KYamani <yamani.kamel@gmail.com> Co-authored-by: Trishank Karthik Kuppusamy <33133073+trishankatdatadog@users.noreply.github.com> Co-authored-by: Jacky Wu <Colstuwjx@gmail.com> 
Co-authored-by: Gerasimos Dimitriadis <gedimitr@gmail.com> Co-authored-by: Rajat Toshniwal <rnt.rajat@gmail.com> Co-authored-by: Josh Soref <jsoref@users.noreply.github.com> Co-authored-by: Sascha Grunert <sgrunert@suse.com> Co-authored-by: wawa <xiaozhang0210@hotmail.com> Co-authored-by: Claudia J.Kang <claudiajkang@gmail.com> Co-authored-by: Yuk, Yongsu <ysyukr@gmail.com> Co-authored-by: Seokho Son <shsongist@gmail.com> Co-authored-by: Lawrence Kay <me@lkaybob.pe.kr> Co-authored-by: Jesang Myung <jesang.myung@gmail.com> Co-authored-by: June Yi <june.yi@samsung.com> Co-authored-by: Karen Bradshaw <kbhawkey@gmail.com> Co-authored-by: Alexey Pyltsyn <lex61rus@gmail.com> Co-authored-by: Karol Pucyński <9209870+kpucynski@users.noreply.github.com> Co-authored-by: Julian V. Modesto <julianvmodesto@gmail.com> Co-authored-by: Jeremy L. Morris <jeremylevanmorris@gmail.com> Co-authored-by: Casey Davenport <caseydavenport@users.noreply.github.com> Co-authored-by: zhanwang <zhanw15@gmail.com> Co-authored-by: wwgfhf <51694849+wwgfhf@users.noreply.github.com> Co-authored-by: harleyliao <357857613@qq.com> Co-authored-by: ten2ton <50288981+ten2ton@users.noreply.github.com> Co-authored-by: Aurélien Perrier <aperrier@universe.sh> Co-authored-by: UDIT GAURAV <35391335+uditgaurav@users.noreply.github.com> Co-authored-by: Rene Luria <rene@luria.ch> Co-authored-by: Neil Jerram <neiljerram@gmail.com> Co-authored-by: Katarzyna Kańska <katarzyna.m.kanska@gmail.com> Co-authored-by: Laurens Versluis <lfdversluis@users.noreply.github.com> Co-authored-by: Ray76 <rayfoo55@gmail.com> Co-authored-by: Alexander Zimmermann <7714821+alexzimmer96@users.noreply.github.com> Co-authored-by: Christian Meter <cmeter@googlemail.com> Co-authored-by: MMeent <boekewurm@gmail.com> Co-authored-by: RA489 <rohit.anand@india.nec.com> Co-authored-by: Akira Tanimura <autopp.inc@gmail.com> Co-authored-by: Patouche <Patouche@users.noreply.github.com> Co-authored-by: Jordan Liggitt <jordan@liggitt.net> Co-authored-by: Maria Ntalla <maria.ntalla@gmail.com> Co-authored-by: Marko Lukša <marko.luksa@gmail.com> Co-authored-by: John Morrissey <jwm@horde.net> Co-authored-by: Andrew Sy Kim <kim.andrewsy@gmail.com> Co-authored-by: ngsw <ngsw@ngsw.jp> Co-authored-by: Aman Gupta <aman.gupta@mayadata.io> Co-authored-by: Marek Siarkowicz <marek.siarkowicz@protonmail.com> Co-authored-by: tom1299 <tom1299@users.noreply.github.com> Co-authored-by: cmluciano <cmluciano@cruznet.org> Co-authored-by: Celeste Horgan <celeste@cncf.io> Co-authored-by: Prasad Honavar <prasadhonavar@gmail.com> Co-authored-by: Sam <sammcj@users.noreply.github.com> Co-authored-by: Victor Martinez <victormartinezrubio@gmail.com> Co-authored-by: Dan Kohn <dan@linuxfoundation.org> Co-authored-by: vishakha <54327666+vishakhanihore@users.noreply.github.com> Co-authored-by: liyinda246 <liyinda0000@163.com> Co-authored-by: Kabir Kwatra <kabir@kwatra.me> Co-authored-by: Armand Grillet <2117580+armandgrillet@users.noreply.github.com> Co-authored-by: Junwoo Ji <jydrogen@gmail.com> Co-authored-by: rm <rajib.jolite@gmail.com> Co-authored-by: zacharysarah <zach@corleissen.com> Co-authored-by: Byonggon Chun <bg.chun@samsung.com> Co-authored-by: Jonathan Klabunde Tomer <jktomer@google.com> Co-authored-by: Mike Spreitzer <mspreitz@us.ibm.com> Co-authored-by: Patrick Lang <PatrickLang@users.noreply.github.com> Co-authored-by: Jan Chaloupka <jchaloup@redhat.com> Co-authored-by: Mark Rossetti <marosset@microsoft.com> Co-authored-by: Julian V. 
Modesto <julian.modesto@liveramp.com> Co-authored-by: Michael Taufen <mtaufen@users.noreply.github.com> Co-authored-by: Wojciech Tyczynski <wojtekt@google.com> Co-authored-by: Christian Huffman <chuffman@redhat.com> Co-authored-by: Eric Ernst <eric@amperecomputing.com> Co-authored-by: Conor Nolan <conor.nolan@intel.com> Co-authored-by: Rob Scott <robertjscott@google.com> Co-authored-by: cmluciano <cmluciano@us.ibm.com> Co-authored-by: Jan Šafránek <jsafrane@redhat.com> Co-authored-by: Wei Huang <wei.huang1@ibm.com> Co-authored-by: Aldo Culquicondor <1299064+alculquicondor@users.noreply.github.com> Co-authored-by: Lee Verberne <verb@google.com> Co-authored-by: Ben Moss <moss.127@gmail.com> Co-authored-by: John Griffith <john.griffith8@gmail.com> Co-authored-by: Hemant Kumar <gnufied@users.noreply.github.com> Co-authored-by: Deep Debroy <ddebroy@docker.com> Co-authored-by: Ben Swartzlander <ben@swartzlander.org> Co-authored-by: Jordan Liggitt <liggitt@google.com> Co-authored-by: James Munnelly <james.munnelly@jetstack.io> Co-authored-by: Ciprian Hacman <ciprianhacman@gmail.com> Co-authored-by: Antoine Pelisse <apelisse@google.com> Co-authored-by: Kevin Wiesmueller <kwiesmueller@seibert-media.net> Co-authored-by: Yushiro FURUKAWA <y.furukawa_2@fujitsu.com> Co-authored-by: June Yi <gochist@gmail.com> Co-authored-by: KimMJ <a01083612486@gmail.com> Co-authored-by: forybm <forybm1@naver.com> Co-authored-by: Kartik Sharma <52158641+Kartik494@users.noreply.github.com> Co-authored-by: Kirk Larkin <6025110+serpent5@users.noreply.github.com> Co-authored-by: chentanjun <2799194073@qq.com> Co-authored-by: Felix Geelhaar <felix@felixgeelhaar.de> Co-authored-by: Eko Simanjuntak <ecojuntak@gmail.com> Co-authored-by: Andrew Allbright <aallbrig@gmail.com> Co-authored-by: Jorge O. Castro <jorgec@vmware.com> Co-authored-by: Ihor Sychevskyi <26163841+Arhell@users.noreply.github.com> Co-authored-by: Fabian Ruff <fabian@progra.de> Co-authored-by: Jamie Luckett <jamieluckett@googlemail.com> Co-authored-by: Qiming Teng <tengqim@cn.ibm.com> Co-authored-by: Kohei Toyoda <k-toyoda@pi.jp.nec.com> Co-authored-by: Dan POP <dan.papandrea@sysdig.com> Co-authored-by: Leo Di Donato <leodidonato@gmail.com> Co-authored-by: Xiaokang An <anxiaokang@hotmail.com>
parent 92d63939d3
commit 998fba98d7
@@ -51,6 +51,7 @@ aliases:
- sftim
- steveperry-53
- tengqm
- vineethreddy02
- xiangpengzhao
- zacharysarah
- zparnold

@@ -155,7 +156,6 @@ aliases:
- gochist
- ianychoi
- seokho-son
- ysyukr
sig-docs-maintainers: # Website maintainers
- jimangel
- kbarnard10
config.toml (33 lines changed)
@@ -66,10 +66,10 @@ time_format_blog = "Monday, January 02, 2006"
description = "Production-Grade Container Orchestration"
showedit = true

latest = "v1.17"
latest = "v1.18"

fullversion = "v1.17.0"
version = "v1.17"
fullversion = "v1.18.0"
version = "v1.18"
githubbranch = "master"
docsbranch = "master"
deprecated = false

@@ -83,12 +83,6 @@ announcement = false
# announcement_message is only displayed when announcement = true; update with your specific message
announcement_message = "The Kubernetes Documentation team would like your feedback! Please take a <a href='https://www.surveymonkey.com/r/8R237FN' target='_blank'>short survey</a> so we can improve the Kubernetes online documentation."

[[params.versions]]
fullversion = "v1.17.0"
version = "v1.17"
githubbranch = "v1.17.0"
docsbranch = "release-1.17"
url = "https://kubernetes.io"

[params.pushAssets]
css = [

@@ -101,6 +95,20 @@ js = [
"script"
]

[[params.versions]]
fullversion = "v1.18.0"
version = "v1.18"
githubbranch = "v1.18.0"
docsbranch = "release-1.18"
url = "https://kubernetes.io"

[[params.versions]]
fullversion = "v1.17.0"
version = "v1.17"
githubbranch = "v1.17.0"
docsbranch = "release-1.17"
url = "https://v1-17.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.16.3"
version = "v1.16"

@@ -122,13 +130,6 @@ githubbranch = "v1.14.9"
docsbranch = "release-1.14"
url = "https://v1-14.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.13.12"
version = "v1.13"
githubbranch = "v1.13.12"
docsbranch = "release-1.13"
url = "https://v1-13.docs.kubernetes.io"

# Language definitions.

[languages]
@@ -97,3 +97,11 @@ semantics to fields! It's also going to improve support for CRDs and unions!
- Some kubectl apply features are missing from diff and could be useful, like the ability
to filter by label, or to display pruned resources.
- Eventually, kubectl diff will use server-side apply!

{{< note >}}

The flag `kubectl apply --server-dry-run` is deprecated in v1.18.
Use the flag `--dry-run=server` for using server-side dry-run in
`kubectl apply` and other subcommands.

{{< /note >}}
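For example, the server-side equivalent of the old flag can be invoked like this (the manifest path here is only a placeholder):

```shell
# Server-side dry run: the request is processed by the API server's
# validation and admission machinery, but nothing is persisted.
kubectl apply -f ./my-app.yaml --dry-run=server
```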
@@ -0,0 +1,377 @@
---
title: API Priority and Fairness
content_template: templates/concept
min-kubernetes-server-version: v1.18
---

{{% capture overview %}}

{{< feature-state state="alpha" for_k8s_version="v1.18" >}}

Controlling the behavior of the Kubernetes API server in an overload situation
is a key task for cluster administrators. The {{< glossary_tooltip
term_id="kube-apiserver" text="kube-apiserver" >}} has some controls available
(i.e. the `--max-requests-inflight` and `--max-mutating-requests-inflight`
command-line flags) to limit the amount of outstanding work that will be
accepted, preventing a flood of inbound requests from overloading and
potentially crashing the API server, but these flags are not enough to ensure
that the most important requests get through in a period of high traffic.

The API Priority and Fairness feature (APF) is an alternative that improves upon
the aforementioned max-inflight limitations. APF classifies
and isolates requests in a more fine-grained way. It also introduces
a limited amount of queuing, so that no requests are rejected in cases
of very brief bursts. Requests are dispatched from queues using a
fair queuing technique so that, for example, a poorly-behaved {{<
glossary_tooltip text="controller" term_id="controller" >}} need not
starve others (even at the same priority level).

{{< caution >}}
Requests classified as "long-running" — primarily watches — are not
subject to the API Priority and Fairness filter. This is also true for
the `--max-requests-inflight` flag without the API Priority and
Fairness feature enabled.
{{< /caution >}}

{{% /capture %}}

{{% capture body %}}

## Enabling API Priority and Fairness

The API Priority and Fairness feature is controlled by a feature gate
and is not enabled by default. See
[Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)
for a general explanation of feature gates and how to enable and disable them. The
name of the feature gate for APF is "APIPriorityAndFairness". This
feature also involves an {{< glossary_tooltip term_id="api-group"
text="API Group" >}} that must be enabled. You can do these
things by adding the following command-line flags to your
`kube-apiserver` invocation:

```shell
kube-apiserver \
--feature-gates=APIPriorityAndFairness=true \
--runtime-config=flowcontrol.apiserver.k8s.io/v1alpha1=true \
# …and other flags as usual
```

The command-line flag `--enable-priority-and-fairness=false` will disable the
API Priority and Fairness feature, even if other flags have enabled it.

## Concepts
There are several distinct features involved in the API Priority and Fairness
feature. Incoming requests are classified by attributes of the request using
_FlowSchemas_, and assigned to priority levels. Priority levels add a degree of
isolation by maintaining separate concurrency limits, so that requests assigned
to different priority levels cannot starve each other. Within a priority level,
a fair-queuing algorithm prevents requests from different _flows_ from starving
each other, and allows for requests to be queued to prevent bursty traffic from
causing failed requests when the average load is acceptably low.

### Priority Levels
Without APF enabled, overall concurrency in
the API server is limited by the `kube-apiserver` flags
`--max-requests-inflight` and `--max-mutating-requests-inflight`. With APF
enabled, the concurrency limits defined by these flags are summed and then the sum is divided up
among a configurable set of _priority levels_. Each incoming request is assigned
to a single priority level, and each priority level will only dispatch as many
concurrent requests as its configuration allows.

The default configuration, for example, includes separate priority levels for
leader-election requests, requests from built-in controllers, and requests from
Pods. This means that an ill-behaved Pod that floods the API server with
requests cannot prevent leader election or actions by the built-in controllers
from succeeding.

### Queuing
Even within a priority level there may be a large number of distinct sources of
traffic. In an overload situation, it is valuable to prevent one stream of
requests from starving others (in particular, in the relatively common case of a
single buggy client flooding the kube-apiserver with requests, that buggy client
would ideally not have much measurable impact on other clients at all). This is
handled by use of a fair-queuing algorithm to process requests that are assigned
the same priority level. Each request is assigned to a _flow_, identified by the
name of the matching FlowSchema plus a _flow distinguisher_ — which
is either the requesting user, the target resource's namespace, or nothing — and the
system attempts to give approximately equal weight to requests in different
flows of the same priority level.

After classifying a request into a flow, the API Priority and Fairness
feature then may assign the request to a queue. This assignment uses
a technique known as {{< glossary_tooltip term_id="shuffle-sharding"
text="shuffle sharding" >}}, which makes relatively efficient use of
queues to insulate low-intensity flows from high-intensity flows.

The details of the queuing algorithm are tunable for each priority level, and
allow administrators to trade off memory use, fairness (the property that
independent flows will all make progress when total traffic exceeds capacity),
tolerance for bursty traffic, and the added latency induced by queuing.

### Exempt requests
Some requests are considered sufficiently important that they are not subject to
any of the limitations imposed by this feature. These exemptions prevent an
improperly-configured flow control configuration from totally disabling an API
server.

## Defaults
The Priority and Fairness feature ships with a suggested configuration that
should suffice for experimentation; if your cluster is likely to
experience heavy load then you should consider what configuration will work best.
The suggested configuration groups requests into five priority classes:

* The `system` priority level is for requests from the `system:nodes` group,
i.e. Kubelets, which must be able to contact the API server in order for
workloads to be able to schedule on them.

* The `leader-election` priority level is for leader election requests from
built-in controllers (in particular, requests for `endpoints`, `configmaps`,
or `leases` coming from the `system:kube-controller-manager` or
`system:kube-scheduler` users and service accounts in the `kube-system`
namespace). These are important to isolate from other traffic because failures
in leader election cause their controllers to fail and restart, which in turn
causes more expensive traffic as the new controllers sync their informers.

* The `workload-high` priority level is for other requests from built-in
controllers.

* The `workload-low` priority level is for requests from any other service
account, which will typically include all requests from controllers running in
Pods.

* The `global-default` priority level handles all other traffic, e.g.
interactive `kubectl` commands run by nonprivileged users.

Additionally, there are two PriorityLevelConfigurations and two FlowSchemas that
are built in and may not be overwritten:

* The special `exempt` priority level is used for requests that are not subject
to flow control at all: they will always be dispatched immediately. The
special `exempt` FlowSchema classifies all requests from the `system:masters`
group into this priority level. You may define other FlowSchemas that direct
other requests to this priority level, if appropriate.

* The special `catch-all` priority level is used in combination with the special
`catch-all` FlowSchema to make sure that every request gets some kind of
classification. Typically you should not rely on this catch-all configuration,
and should create your own catch-all FlowSchema and PriorityLevelConfiguration
(or use the `global-default` configuration that is installed by default) as
appropriate. To help catch configuration errors that miss classifying some
requests, the mandatory `catch-all` priority level only allows one concurrency
share and does not queue requests, making it relatively likely that traffic
that only matches the `catch-all` FlowSchema will be rejected with an HTTP 429
error.
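With the feature gate and the `flowcontrol.apiserver.k8s.io/v1alpha1` API group
enabled as described above, you can inspect the suggested configuration objects
that the API server maintains, for example:

```shell
# List the built-in and suggested FlowSchemas and priority levels
kubectl get flowschemas
kubectl get prioritylevelconfigurations
```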
## Resources
The flow control API involves two kinds of resources.
[PriorityLevelConfigurations](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#prioritylevelconfiguration-v1alpha1-flowcontrol)
define the available isolation classes, the share of the available concurrency
budget that each can handle, and allow for fine-tuning queuing behavior.
[FlowSchemas](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#flowschema-v1alpha1-flowcontrol)
are used to classify individual inbound requests, matching each to a single
PriorityLevelConfiguration.

### PriorityLevelConfiguration
A PriorityLevelConfiguration represents a single isolation class. Each
PriorityLevelConfiguration has an independent limit on the number of outstanding
requests, and limitations on the number of queued requests.

Concurrency limits for PriorityLevelConfigurations are not specified in absolute
number of requests, but rather in "concurrency shares." The total concurrency
limit for the API Server is distributed among the existing
PriorityLevelConfigurations in proportion with these shares. This allows a
cluster administrator to scale up or down the total amount of traffic to a
server by restarting `kube-apiserver` with a different value for
`--max-requests-inflight` (or `--max-mutating-requests-inflight`), and all
PriorityLevelConfigurations will see their maximum allowed concurrency go up (or
down) by the same fraction.
{{< caution >}}
With the Priority and Fairness feature enabled, the total concurrency limit for
the server is set to the sum of `--max-requests-inflight` and
`--max-mutating-requests-inflight`. There is no longer any distinction made
between mutating and non-mutating requests; if you want to treat them
separately for a given resource, make separate FlowSchemas that match the
mutating and non-mutating verbs respectively.
{{< /caution >}}

When the volume of inbound requests assigned to a single
PriorityLevelConfiguration is more than its permitted concurrency level, the
`type` field of its specification determines what will happen to extra requests.
A type of `Reject` means that excess traffic will immediately be rejected with
an HTTP 429 (Too Many Requests) error. A type of `Queue` means that requests
above the threshold will be queued, with the shuffle sharding and fair queuing techniques used
to balance progress between request flows.

The queuing configuration allows tuning the fair queuing algorithm for a
priority level. Details of the algorithm can be read in the [enhancement
proposal](#what-s-next), but in short:

* Increasing `queues` reduces the rate of collisions between different flows, at
the cost of increased memory usage. A value of 1 here effectively disables the
fair-queuing logic, but still allows requests to be queued.

* Increasing `queueLengthLimit` allows larger bursts of traffic to be
sustained without dropping any requests, at the cost of increased
latency and memory usage.

* Changing `handSize` allows you to adjust the probability of collisions between
different flows and the overall concurrency available to a single flow in an
overload situation.
{{< note >}}
A larger `handSize` makes it less likely for two individual flows to collide
(and therefore for one to be able to starve the other), but more likely that
a small number of flows can dominate the apiserver. A larger `handSize` also
potentially increases the amount of latency that a single high-traffic flow
can cause. The maximum number of queued requests possible from a
single flow is `handSize * queueLengthLimit`.
{{< /note >}}
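As a minimal sketch of how these knobs fit together (the name and all numbers
here are illustrative only, and the schema assumed is the
`flowcontrol.apiserver.k8s.io/v1alpha1` API group enabled earlier), a
`Queue`-type priority level could look like:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level      # hypothetical name
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 20    # share of the server's total concurrency limit
    limitResponse:
      type: Queue                   # queue excess requests instead of rejecting them
      queuing:
        queues: 64
        handSize: 6
        queueLengthLimit: 50
```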
Following is a table showing an interesting collection of shuffle-sharding
configurations; for each, it shows the probability that a given mouse
(low-intensity flow) is squished by the elephants (high-intensity flows), for
an illustrative range of numbers of elephants. See
https://play.golang.org/p/Gi0PLgVHiUg, which computes this table.

{{< table caption="Example Shuffle Sharding Configurations" >}}
|HandSize| Queues| 1 elephant| 4 elephants| 16 elephants|
|--------|-----------|------------|----------------|--------------------|
| 12| 32| 4.428838398950118e-09| 0.11431348830099144| 0.9935089607656024|
| 10| 32| 1.550093439632541e-08| 0.0626479840223545| 0.9753101519027554|
| 10| 64| 6.601827268370426e-12| 0.00045571320990370776| 0.49999929150089345|
| 9| 64| 3.6310049976037345e-11| 0.00045501212304112273| 0.4282314876454858|
| 8| 64| 2.25929199850899e-10| 0.0004886697053040446| 0.35935114681123076|
| 8| 128| 6.994461389026097e-13| 3.4055790161620863e-06| 0.02746173137155063|
| 7| 128| 1.0579122850901972e-11| 6.960839379258192e-06| 0.02406157386340147|
| 7| 256| 7.597695465552631e-14| 6.728547142019406e-08| 0.0006709661542533682|
| 6| 256| 2.7134626662687968e-12| 2.9516464018476436e-07| 0.0008895654642000348|
| 6| 512| 4.116062922897309e-14| 4.982983350480894e-09| 2.26025764343413e-05|
| 6| 1024| 6.337324016514285e-16| 8.09060164312957e-11| 4.517408062903668e-07|
{{< /table >}}
### FlowSchema

A FlowSchema matches some inbound requests and assigns them to a
priority level. Every inbound request is tested against every
FlowSchema in turn, starting with those with numerically lowest ---
which we take to be the logically highest --- `matchingPrecedence` and
working onward. The first match wins.

{{< caution >}}
Only the first matching FlowSchema for a given request matters. If multiple
FlowSchemas match a single inbound request, it will be assigned based on the one
with the logically highest `matchingPrecedence` (that is, the numerically lowest
value). If multiple FlowSchemas with equal
`matchingPrecedence` match the same request, the one with lexicographically
smaller `name` will win, but it's better not to rely on this, and instead to
ensure that no two FlowSchemas have the same `matchingPrecedence`.
{{< /caution >}}

A FlowSchema matches a given request if at least one of its `rules`
matches. A rule matches if at least one of its `subjects` *and* at least
one of its `resourceRules` or `nonResourceRules` (depending on whether the
incoming request is for a resource or non-resource URL) matches the request.

For the `name` field in subjects, and the `verbs`, `apiGroups`, `resources`,
`namespaces`, and `nonResourceURLs` fields of resource and non-resource rules,
the wildcard `*` may be specified to match all values for the given field,
effectively removing it from consideration.

A FlowSchema's `distinguisherMethod.type` determines how requests matching that
schema will be separated into flows. It may be
either `ByUser`, in which case one requesting user will not be able to starve
other users of capacity, or `ByNamespace`, in which case requests for resources
in one namespace will not be able to starve requests for resources in other
namespaces of capacity, or it may be blank (or `distinguisherMethod` may be
omitted entirely), in which case all requests matched by this FlowSchema will be
considered part of a single flow. The correct choice for a given FlowSchema
depends on the resource and your particular environment.
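Putting these fields together, a minimal illustrative FlowSchema could look like
the following sketch; every name and value here is a placeholder, and the
referenced priority level is the hypothetical one sketched earlier:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: FlowSchema
metadata:
  name: example-flow-schema          # hypothetical name
spec:
  priorityLevelConfiguration:
    name: example-priority-level     # hypothetical PriorityLevelConfiguration from the sketch above
  matchingPrecedence: 1000           # numerically lower values are evaluated first
  distinguisherMethod:
    type: ByUser                     # each requesting user becomes its own flow
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: example-controller     # hypothetical subject
        namespace: example-ns
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      namespaces: ["*"]
```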
## Diagnostics
Every HTTP response from an API server with the priority and fairness feature
enabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and
`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request
and the priority level to which it was assigned, respectively. The API objects'
names are not included in these headers in case the requesting user does not
have permission to view them, so when debugging you can use a command like

```shell
kubectl get flowschemas -o custom-columns="uid:{metadata.uid},name:{metadata.name}"
kubectl get prioritylevelconfigurations -o custom-columns="uid:{metadata.uid},name:{metadata.name}"
```

to get a mapping of UIDs to names for both FlowSchemas and
PriorityLevelConfigurations.

## Observability
When you enable the API Priority and Fairness feature, the kube-apiserver
exports additional metrics. Monitoring these can help you determine whether your
configuration is inappropriately throttling important traffic, or find
poorly-behaved workloads that may be harming system health.

* `apiserver_flowcontrol_rejected_requests_total` counts requests that
were rejected, grouped by the name of the assigned priority level,
the name of the assigned FlowSchema, and the reason for rejection.
The reason will be one of the following:
  * `queue-full`, indicating that too many requests were already queued,
  * `concurrency-limit`, indicating that the
PriorityLevelConfiguration is configured to reject rather than
queue excess requests, or
  * `time-out`, indicating that the request was still in the queue
when its queuing time limit expired.

* `apiserver_flowcontrol_dispatched_requests_total` counts requests
that began executing, grouped by the name of the assigned priority
level and the name of the assigned FlowSchema.

* `apiserver_flowcontrol_current_inqueue_requests` gives the
instantaneous total number of queued (not executing) requests,
grouped by priority level and FlowSchema.

* `apiserver_flowcontrol_current_executing_requests` gives the instantaneous
total number of executing requests, grouped by priority level and FlowSchema.

* `apiserver_flowcontrol_request_queue_length_after_enqueue` gives a
histogram of queue lengths for the queues, grouped by priority level
and FlowSchema, as sampled by the enqueued requests. Each request
that gets queued contributes one sample to its histogram, reporting
the length of the queue just after the request was added. Note that
this produces different statistics than an unbiased survey would.
{{< note >}}
An outlier value in a histogram here means it is likely that a single flow
(i.e., requests by one user or for one namespace, depending on
configuration) is flooding the API server, and being throttled. By contrast,
if one priority level's histogram shows that all queues for that priority
level are longer than those for other priority levels, it may be appropriate
to increase that PriorityLevelConfiguration's concurrency shares.
{{< /note >}}

* `apiserver_flowcontrol_request_concurrency_limit` gives the computed
concurrency limit (based on the API server's total concurrency limit and PriorityLevelConfigurations'
concurrency shares) for each PriorityLevelConfiguration.

* `apiserver_flowcontrol_request_wait_duration_seconds` gives a histogram of how
long requests spent queued, grouped by the FlowSchema that matched the
request, the PriorityLevel to which it was assigned, and whether or not the
request successfully executed.
{{< note >}}
Since each FlowSchema always assigns requests to a single
PriorityLevelConfiguration, you can add the histograms for all the
FlowSchemas for one priority level to get the effective histogram for
requests assigned to that priority level.
{{< /note >}}

* `apiserver_flowcontrol_request_execution_seconds` gives a histogram of how
long requests took to actually execute, grouped by the FlowSchema that matched the
request and the PriorityLevel to which it was assigned.
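If you have direct access to the API server, one quick way to look at these
metrics is to scrape its `/metrics` endpoint, for example:

```shell
# Dump the flow-control metrics exposed by the kube-apiserver
kubectl get --raw /metrics | grep apiserver_flowcontrol
```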
{{% /capture %}}

{{% capture whatsnext %}}

For background information on design details for API priority and fairness, see
the [enhancement proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190228-priority-and-fairness.md).
You can make suggestions and feature requests via [SIG API
Machinery](https://github.com/kubernetes/community/tree/master/sig-api-machinery).

{{% /capture %}}
@@ -185,9 +185,10 @@ resource limits, see the

The resource usage of a Pod is reported as part of the Pod status.

If [optional monitoring](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)
is configured for your cluster, then Pod resource usage can be retrieved from
the monitoring system.
If optional [tools for monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
are available in your cluster, then Pod resource usage can be retrieved either
from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
directly or from your monitoring tools.
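For example, if a metrics pipeline such as metrics-server is running in the
cluster, recent usage for a Pod can be read straight from the Metrics API via
kubectl (the Pod name here is a placeholder):

```shell
# Requires a metrics pipeline (for example, metrics-server) in the cluster
kubectl top pod my-app-pod --namespace=default
```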
## Troubleshooting

@@ -385,7 +386,7 @@ spec:
### How Pods with ephemeral-storage requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
run on. Each node has a maximum amount of local ephemeral storage it can provide for Pods. For more information, see ["Node Allocatable"](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).

The scheduler ensures that the sum of the resource requests of the scheduled Containers is less than the capacity of the node.
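As a purely illustrative sketch, a container asks the scheduler to reserve local
ephemeral storage through its resource requests, for example (names and sizes
are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend                     # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx                     # placeholder image
    resources:
      requests:
        ephemeral-storage: "2Gi"     # counted against the node's allocatable ephemeral storage
      limits:
        ephemeral-storage: "4Gi"
```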
@ -10,12 +10,12 @@ weight: 20
|
|||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
|
||||
When you run a Pod on a Node, the Pod itself takes an amount of system resources. These
|
||||
resources are additional to the resources needed to run the container(s) inside the Pod.
|
||||
_Pod Overhead_ is a feature for accounting for the resources consumed by the pod infrastructure
|
||||
_Pod Overhead_ is a feature for accounting for the resources consumed by the Pod infrastructure
|
||||
on top of the container requests & limits.
|
||||
|
||||
|
||||
|
@ -24,33 +24,169 @@ on top of the container requests & limits.
|
|||
|
||||
{{% capture body %}}
|
||||
|
||||
## Pod Overhead
|
||||
|
||||
In Kubernetes, the pod's overhead is set at
|
||||
In Kubernetes, the Pod's overhead is set at
|
||||
[admission](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
|
||||
time according to the overhead associated with the pod's
|
||||
time according to the overhead associated with the Pod's
|
||||
[RuntimeClass](/docs/concepts/containers/runtime-class/).
|
||||
|
||||
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
|
||||
resource requests when scheduling a pod. Similarly, Kubelet will include the pod overhead when sizing
|
||||
the pod cgroup, and when carrying out pod eviction ranking.
|
||||
resource requests when scheduling a Pod. Similarly, Kubelet will include the Pod overhead when sizing
|
||||
the Pod cgroup, and when carrying out Pod eviction ranking.
|
||||
|
||||
### Set Up
|
||||
## Enabling Pod Overhead {#set-up}
|
||||
|
||||
You need to make sure that the `PodOverhead`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is off by default)
|
||||
across your cluster. This means:
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (it is on by default as of 1.18)
|
||||
across your cluster, and a `RuntimeClass` is utilized which defines the `overhead` field.
|
||||
|
||||
- in {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
|
||||
- in {{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}}
|
||||
- in the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} on each Node
|
||||
- in any custom API servers that use feature gates
|
||||
## Usage example
|
||||
|
||||
{{< note >}}
|
||||
Users who can write to RuntimeClass resources are able to have cluster-wide impact on
|
||||
workload performance. You can limit access to this ability using Kubernetes access controls.
|
||||
See [Authorization Overview](/docs/reference/access-authn-authz/authorization/) for more details.
|
||||
{{< /note >}}
|
||||
To use the PodOverhead feature, you need a RuntimeClass that defines the `overhead` field. As
|
||||
an example, you could use the following RuntimeClass definition with a virtualizing container runtime
|
||||
that uses around 120MiB per Pod for the virtual machine and the guest OS:
|
||||
|
||||
```yaml
|
||||
---
|
||||
kind: RuntimeClass
|
||||
apiVersion: node.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: kata-fc
|
||||
handler: kata-fc
|
||||
overhead:
|
||||
podFixed:
|
||||
memory: "120Mi"
|
||||
cpu: "250m"
|
||||
```
|
||||
|
||||
Workloads that specify the `kata-fc` RuntimeClass handler will take the memory and
|
||||
CPU overheads into account for resource quota calculations, node scheduling, and Pod cgroup sizing.
|
||||
|
||||
Consider running the given example workload, test-pod:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: test-pod
|
||||
spec:
|
||||
runtimeClassName: kata-fc
|
||||
containers:
|
||||
- name: busybox-ctr
|
||||
image: busybox
|
||||
stdin: true
|
||||
tty: true
|
||||
resources:
|
||||
limits:
|
||||
cpu: 500m
|
||||
memory: 100Mi
|
||||
- name: nginx-ctr
|
||||
image: nginx
|
||||
resources:
|
||||
limits:
|
||||
cpu: 1500m
|
||||
memory: 100Mi
|
||||
```
|
||||
|
||||
At admission time the RuntimeClass [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)
|
||||
updates the workload's PodSpec to include the `overhead` as described in the RuntimeClass. If the PodSpec already has this field defined,
|
||||
the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod
|
||||
to include an `overhead`.
|
||||
|
||||
After the RuntimeClass admission controller, you can check the updated PodSpec:
|
||||
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.overhead}'
|
||||
```
|
||||
|
||||
The output is:
|
||||
```
|
||||
map[cpu:250m memory:120Mi]
|
||||
```
|
||||
|
||||
If a ResourceQuota is defined, the sum of container requests as well as the
|
||||
`overhead` field are counted.
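
To make this concrete, here is a minimal sketch of a ResourceQuota (the quota name and limits are assumptions for illustration) in the namespace where `test-pod` runs. Because the containers only declare limits, their requests default to the same values, so the quota usage for this Pod would be 2250m CPU and 320Mi of memory, overhead included:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-overhead-quota   # hypothetical name
  namespace: default
spec:
  hard:
    requests.cpu: "4"        # test-pod counts as 2250m against this, including the 250m overhead
    requests.memory: 4Gi     # test-pod counts as 320Mi against this, including the 120Mi overhead
```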
|
||||
|
||||
When the kube-scheduler is deciding which node should run a new Pod, the scheduler considers that Pod's
|
||||
`overhead` as well as the sum of container requests for that Pod. For this example, the scheduler adds the
|
||||
requests and the overhead, then looks for a node that has 2.25 CPU and 320 MiB of memory available.
|
||||
|
||||
Once a Pod is scheduled to a node, the kubelet on that node creates a new {{< glossary_tooltip text="cgroup" term_id="cgroup" >}}
|
||||
for the Pod. It is within this cgroup that the underlying container runtime will create containers.
|
||||
|
||||
If a resource has a limit defined for each container (Guaranteed QoS or Burstable QoS with limits defined),
|
||||
the kubelet will set an upper limit for the pod cgroup associated with that resource (`cpu.cfs_quota_us` for CPU
|
||||
and `memory.limit_in_bytes` for memory). This upper limit is based on the sum of the container limits plus the `overhead`
|
||||
defined in the PodSpec.
|
||||
|
||||
For CPU, if the Pod is Guaranteed or Burstable QoS, the kubelet will set `cpu.shares` based on the sum of container
|
||||
requests plus the `overhead` defined in the PodSpec.
|
||||
|
||||
Looking at our example, verify the container requests for the workload:
|
||||
```bash
|
||||
kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}'
|
||||
```
|
||||
|
||||
The total container requests are 2000m CPU and 200MiB of memory:
|
||||
```
|
||||
map[cpu:500m memory:100Mi] map[cpu:1500m memory:100Mi]
|
||||
```
|
||||
|
||||
Check this against what is observed by the node:
|
||||
```bash
|
||||
kubectl describe node | grep test-pod -B2
|
||||
```
|
||||
|
||||
The output shows 2250m CPU and 320MiB of memory are requested, which includes PodOverhead:
|
||||
```
|
||||
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
|
||||
--------- ---- ------------ ---------- --------------- ------------- ---
|
||||
default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m
|
||||
```
|
||||
|
||||
## Verify Pod cgroup limits
|
||||
|
||||
Check the Pod's memory cgroups on the node where the workload is running. In the following example, [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)
|
||||
is used on the node, which provides a CLI for CRI-compatible container runtimes. This is an
|
||||
advanced example to show PodOverhead behavior, and it is not expected that users should need to check
|
||||
cgroups directly on the node.
|
||||
|
||||
First, on the particular node, determine the Pod identifier:
|
||||
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled
|
||||
POD_ID="$(sudo crictl pods --name test-pod -q)"
|
||||
```
|
||||
|
||||
From this, you can determine the cgroup path for the Pod:
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled
|
||||
sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath
|
||||
```
|
||||
|
||||
The resulting cgroup path includes the Pod's `pause` container. The Pod level cgroup is one directory above.
|
||||
```
|
||||
"cgroupsPath": "/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a"
|
||||
```
|
||||
|
||||
In this specific case, the pod cgroup path is `kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`. Verify the Pod level cgroup setting for memory:
|
||||
```bash
|
||||
# Run this on the node where the Pod is scheduled.
|
||||
# Also, change the name of the cgroup to match the cgroup allocated for your pod.
|
||||
cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes
|
||||
```
|
||||
|
||||
This is 320 MiB, as expected:
|
||||
```
|
||||
335544320
|
||||
```
|
||||
|
||||
### Observability
|
||||
|
||||
A `kube_pod_overhead` metric is available in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
|
||||
to help identify when PodOverhead is being utilized and to help observe stability of workloads
|
||||
running with a defined Overhead. This functionality is not available in the 1.9 release of
|
||||
kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
|
||||
from source in the meantime.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
|
|
@ -676,6 +676,37 @@ A container using a Secret as a
|
|||
Secret updates.
|
||||
{{< /note >}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
|
||||
|
||||
The Kubernetes alpha feature _Immutable Secrets and ConfigMaps_ provides an option to set
|
||||
individual Secrets and ConfigMaps as immutable. For clusters that extensively use Secrets
|
||||
(at least tens of thousands of unique Secret to Pod mounts), preventing changes to their
|
||||
data has the following advantages:
|
||||
|
||||
- protects you from accidental (or unwanted) updates that could cause application outages
|
||||
- improves performance of your cluster by significantly reducing load on kube-apiserver, by
|
||||
closing watches for secrets marked as immutable.
|
||||
|
||||
To use this feature, enable the `ImmutableEphemeralVolumes`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and set
|
||||
your Secret or ConfigMap `immutable` field to `true`. For example:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
...
|
||||
data:
|
||||
...
|
||||
immutable: true
|
||||
```
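
The `immutable` field works the same way for ConfigMaps. A minimal sketch, with a made-up name and key, could look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # hypothetical name
data:
  log-level: "info"       # example key/value pair
immutable: true           # once applied, the data can no longer be changed
```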
|
||||
|
||||
{{< note >}}
|
||||
Once a Secret or ConfigMap is marked as immutable, it is _not_ possible to revert this change
|
||||
nor to mutate the contents of the `data` field. You can only delete and recreate the Secret.
|
||||
Existing Pods maintain a mount point to the deleted Secret; it is recommended to recreate
|
||||
these Pods.
|
||||
{{< /note >}}
|
||||
|
||||
### Using Secrets as environment variables
|
||||
|
||||
To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
|
||||
|
|
|
@ -197,11 +197,13 @@ on the special hardware nodes. This will make sure that these special hardware
|
|||
nodes are dedicated for pods requesting such hardware and you don't have to
|
||||
manually add tolerations to your pods.
|
||||
|
||||
* **Taint based Evictions (beta feature)**: A per-pod-configurable eviction behavior
|
||||
* **Taint based Evictions**: A per-pod-configurable eviction behavior
|
||||
when there are node problems, which is described in the next section.
|
||||
|
||||
## Taint based Evictions
|
||||
|
||||
{{< feature-state for_k8s_version="1.18" state="stable" >}}
|
||||
|
||||
Earlier we mentioned the `NoExecute` taint effect, which affects pods that are already
|
||||
running on the node as follows
|
||||
|
||||
|
@ -229,9 +231,9 @@ certain condition is true. The following taints are built in:
|
|||
as unusable. After a controller from the cloud-controller-manager initializes
|
||||
this node, the kubelet removes this taint.
|
||||
|
||||
In version 1.13, the `TaintBasedEvictions` feature is promoted to beta and enabled by default, hence the taints are automatically
|
||||
added by the NodeController (or kubelet) and the normal logic for evicting pods from nodes
|
||||
based on the Ready NodeCondition is disabled.
|
||||
In case a node is to be evicted, the node controller or the kubelet adds relevant taints
|
||||
with `NoExecute` effect. If the fault condition returns to normal the kubelet or node
|
||||
controller can remove the relevant taint(s).
|
||||
|
||||
{{< note >}}
|
||||
To maintain the existing [rate limiting](/docs/concepts/architecture/nodes/)
|
||||
|
@ -240,7 +242,7 @@ in a rate-limited way. This prevents massive pod evictions in scenarios such
|
|||
as the master becoming partitioned from the nodes.
|
||||
{{< /note >}}
|
||||
|
||||
This beta feature, in combination with `tolerationSeconds`, allows a pod
|
||||
The feature, in combination with `tolerationSeconds`, allows a pod
|
||||
to specify how long it should stay bound to a node that has one or both of these problems.
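
For instance, a Pod that should be evicted five minutes after its node becomes unreachable could carry a toleration like this sketch (the 300-second value is an arbitrary choice for illustration):

```yaml
tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300   # stay bound for up to 5 minutes after the taint is added
```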
|
||||
|
||||
For example, an application with a lot of local state might want to stay
|
||||
|
@ -277,15 +279,13 @@ admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/default
|
|||
* `node.kubernetes.io/unreachable`
|
||||
* `node.kubernetes.io/not-ready`
|
||||
|
||||
This ensures that DaemonSet pods are never evicted due to these problems,
|
||||
which matches the behavior when this feature is disabled.
|
||||
This ensures that DaemonSet pods are never evicted due to these problems.
|
||||
|
||||
## Taint Nodes by Condition
|
||||
|
||||
The node lifecycle controller automatically creates taints corresponding to
|
||||
Node conditions.
|
||||
Node conditions with `NoSchedule` effect.
|
||||
Similarly, the scheduler does not check Node conditions; instead the scheduler checks taints. This ensures that Node conditions don't affect what's scheduled onto the Node. The user can choose to ignore some of the Node's problems (represented as Node conditions) by adding appropriate Pod tolerations.
|
||||
Note that `TaintNodesByCondition` only taints nodes with `NoSchedule` effect. `NoExecute` effect is controlled by `TaintBasedEvictions`, which is a beta feature and enabled by default since version 1.13.
|
||||
|
||||
Starting in Kubernetes 1.8, the DaemonSet controller automatically adds the
|
||||
following `NoSchedule` tolerations to all daemons, to prevent DaemonSets from
|
||||
|
|
|
@ -13,22 +13,14 @@ weight: 20
|
|||
|
||||
This page describes the RuntimeClass resource and runtime selection mechanism.
|
||||
|
||||
{{< warning >}}
|
||||
RuntimeClass includes *breaking* changes in the beta upgrade in v1.14. If you were using
|
||||
RuntimeClass prior to v1.14, see [Upgrading RuntimeClass from Alpha to
|
||||
Beta](#upgrading-runtimeclass-from-alpha-to-beta).
|
||||
{{< /warning >}}
|
||||
RuntimeClass is a feature for selecting the container runtime configuration. The container runtime
|
||||
configuration is used to run a Pod's containers.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Runtime Class
|
||||
|
||||
RuntimeClass is a feature for selecting the container runtime configuration. The container runtime
|
||||
configuration is used to run a Pod's containers.
|
||||
|
||||
## Motivation
|
||||
|
||||
You can set a different RuntimeClass between different Pods to provide a balance of
|
||||
|
@ -41,7 +33,7 @@ additional overhead.
|
|||
You can also use RuntimeClass to run different Pods with the same container runtime
|
||||
but with different settings.
|
||||
|
||||
### Set Up
|
||||
## Setup
|
||||
|
||||
Ensure the RuntimeClass feature gate is enabled (it is by default). See [Feature
|
||||
Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling
|
||||
|
@ -50,7 +42,7 @@ feature gates. The `RuntimeClass` feature gate must be enabled on apiservers _an
|
|||
1. Configure the CRI implementation on nodes (runtime dependent)
|
||||
2. Create the corresponding RuntimeClass resources
|
||||
|
||||
#### 1. Configure the CRI implementation on nodes
|
||||
### 1. Configure the CRI implementation on nodes
|
||||
|
||||
The configurations available through RuntimeClass are Container Runtime Interface (CRI)
|
||||
implementation dependent. See the corresponding documentation ([below](#cri-configuration)) for your
|
||||
|
@ -65,7 +57,7 @@ heterogenous node configurations, see [Scheduling](#scheduling) below.
|
|||
The configurations have a corresponding `handler` name, referenced by the RuntimeClass. The
|
||||
handler must be a valid DNS 1123 label (alpha-numeric + `-` characters).
|
||||
|
||||
#### 2. Create the corresponding RuntimeClass resources
|
||||
### 2. Create the corresponding RuntimeClass resources
|
||||
|
||||
The configurations setup in step 1 should each have an associated `handler` name, which identifies
|
||||
the configuration. For each handler, create a corresponding RuntimeClass object.
|
||||
|
@ -91,7 +83,7 @@ restricted to the cluster administrator. This is typically the default. See [Aut
|
|||
Overview](/docs/reference/access-authn-authz/authorization/) for more details.
|
||||
{{< /note >}}
|
||||
|
||||
### Usage
|
||||
## Usage
|
||||
|
||||
Once RuntimeClasses are configured for the cluster, using them is very simple. Specify a
|
||||
`runtimeClassName` in the Pod spec. For example:
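
A minimal sketch (assuming a RuntimeClass named `myclass` already exists in the cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass  # selects the handler configured for the "myclass" RuntimeClass
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```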
|
||||
|
@ -150,14 +142,14 @@ See CRI-O's [config documentation][100] for more details.
|
|||
|
||||
[100]: https://raw.githubusercontent.com/cri-o/cri-o/9f11d1d/docs/crio.conf.5.md
|
||||
|
||||
### Scheduling
|
||||
## Scheduling
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
|
||||
|
||||
As of Kubernetes v1.16, RuntimeClass includes support for heterogeneous clusters through its
|
||||
`scheduling` fields. Through the use of these fields, you can ensure that pods running with this
|
||||
RuntimeClass are scheduled to nodes that support it. To use the scheduling support, you must have
|
||||
the RuntimeClass [admission controller][] enabled (the default, as of 1.16).
|
||||
the [RuntimeClass admission controller][] enabled (the default, as of 1.16).
|
||||
|
||||
To ensure pods land on nodes supporting a specific RuntimeClass, that set of nodes should have a
|
||||
common label which is then selected by the `runtimeclass.scheduling.nodeSelector` field. The
|
||||
|
@ -173,50 +165,23 @@ by each.
|
|||
To learn more about configuring the node selector and tolerations, see [Assigning Pods to
|
||||
Nodes](/docs/concepts/configuration/assign-pod-node/).
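
As a sketch of how the pieces fit together (the label key and value are assumptions, not a convention defined by Kubernetes), a RuntimeClass could restrict its Pods to nodes labelled for its handler and tolerate a matching taint:

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-fc
handler: kata-fc
scheduling:
  nodeSelector:
    runtime: kata-fc        # only nodes carrying this (assumed) label are considered
  tolerations:
  - key: runtime
    operator: Equal
    value: kata-fc
    effect: NoSchedule      # lets these Pods run on nodes tainted for the runtime
```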
|
||||
|
||||
[admission controller]: /docs/reference/access-authn-authz/admission-controllers/
|
||||
[RuntimeClass admission controller]: /docs/reference/access-authn-authz/admission-controllers/#runtimeclass
|
||||
|
||||
### Pod Overhead
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
As of Kubernetes v1.16, RuntimeClass includes support for specifying overhead associated with
|
||||
running a pod, as part of the [`PodOverhead`](/docs/concepts/configuration/pod-overhead/) feature.
|
||||
To use `PodOverhead`, you must have the PodOverhead [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
enabled (it is off by default).
|
||||
You can specify _overhead_ resources that are associated with running a Pod. Declaring overhead allows
|
||||
the cluster (including the scheduler) to account for it when making decisions about Pods and resources.
|
||||
To use Pod overhead, you must have the PodOverhead [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
enabled (it is on by default).
|
||||
|
||||
|
||||
Pod overhead is defined in RuntimeClass through the `Overhead` fields. Through the use of these fields,
|
||||
Pod overhead is defined in RuntimeClass through the `overhead` fields. Through the use of these fields,
|
||||
you can specify the overhead of running pods utilizing this RuntimeClass and ensure these overheads
|
||||
are accounted for in Kubernetes.
|
||||
|
||||
### Upgrading RuntimeClass from Alpha to Beta
|
||||
|
||||
The RuntimeClass Beta feature includes the following changes:
|
||||
|
||||
- The `node.k8s.io` API group and `runtimeclasses.node.k8s.io` resource have been migrated to a
|
||||
built-in API from a CustomResourceDefinition.
|
||||
- The `spec` has been inlined in the RuntimeClass definition (i.e. there is no more
|
||||
RuntimeClassSpec).
|
||||
- The `runtimeHandler` field has been renamed `handler`.
|
||||
- The `handler` field is now required in all API versions. This means the `runtimeHandler` field in
|
||||
the Alpha API is also required.
|
||||
- The `handler` field must be a valid DNS label ([RFC 1123](https://tools.ietf.org/html/rfc1123)),
|
||||
meaning it can no longer contain `.` characters (in all versions). Valid handlers match the
|
||||
following regular expression: `^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`.
|
||||
|
||||
**Action Required:** The following actions are required to upgrade from the alpha version of the
|
||||
RuntimeClass feature to the beta version:
|
||||
|
||||
- RuntimeClass resources must be recreated *after* upgrading to v1.14, and the
|
||||
`runtimeclasses.node.k8s.io` CRD should be manually deleted:
|
||||
```
|
||||
kubectl delete customresourcedefinitions.apiextensions.k8s.io runtimeclasses.node.k8s.io
|
||||
```
|
||||
- Alpha RuntimeClasses with an unspecified or empty `runtimeHandler` or those using a `.` character
|
||||
in the handler are no longer valid, and must be migrated to a valid handler configuration (see
|
||||
above).
|
||||
|
||||
### Further Reading
|
||||
{{% /capture %}}
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
- [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class.md)
|
||||
- [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/runtime-class-scheduling.md)
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: Kubernetes Scheduler
|
||||
content_template: templates/concept
|
||||
weight: 60
|
||||
weight: 50
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
@ -54,14 +54,12 @@ individual and collective resource requirements, hardware / software /
|
|||
policy constraints, affinity and anti-affinity specifications, data
|
||||
locality, inter-workload interference, and so on.
|
||||
|
||||
## Scheduling with kube-scheduler {#kube-scheduler-implementation}
|
||||
### Node selection in kube-scheduler {#kube-scheduler-implementation}
|
||||
|
||||
kube-scheduler selects a node for the pod in a 2-step operation:
|
||||
|
||||
1. Filtering
|
||||
|
||||
2. Scoring
|
||||
|
||||
1. Scoring
|
||||
|
||||
The _filtering_ step finds the set of Nodes where it's feasible to
|
||||
schedule the Pod. For example, the PodFitsResources filter checks whether a
|
||||
|
@ -78,105 +76,15 @@ Finally, kube-scheduler assigns the Pod to the Node with the highest ranking.
|
|||
If there is more than one node with equal scores, kube-scheduler selects
|
||||
one of these at random.
|
||||
|
||||
There are two supported ways to configure the filtering and scoring behavior
|
||||
of the scheduler:
|
||||
|
||||
### Default policies
|
||||
|
||||
kube-scheduler has a default set of scheduling policies.
|
||||
|
||||
### Filtering
|
||||
|
||||
- `PodFitsHostPorts`: Checks if a Node has free ports (the network protocol kind)
|
||||
for the Pod ports the Pod is requesting.
|
||||
|
||||
- `PodFitsHost`: Checks if a Pod specifies a specific Node by its hostname.
|
||||
|
||||
- `PodFitsResources`: Checks if the Node has free resources (eg, CPU and Memory)
|
||||
to meet the requirement of the Pod.
|
||||
|
||||
- `PodMatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
|
||||
matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}.
|
||||
|
||||
- `NoVolumeZoneConflict`: Evaluate if the {{< glossary_tooltip text="Volumes" term_id="volume" >}}
|
||||
that a Pod requests are available on the Node, given the failure zone restrictions for
|
||||
that storage.
|
||||
|
||||
- `NoDiskConflict`: Evaluates if a Pod can fit on a Node due to the volumes it requests,
|
||||
and those that are already mounted.
|
||||
|
||||
- `MaxCSIVolumeCount`: Decides how many {{< glossary_tooltip text="CSI" term_id="csi" >}}
|
||||
volumes should be attached, and whether that's over a configured limit.
|
||||
|
||||
- `CheckNodeMemoryPressure`: If a Node is reporting memory pressure, and there's no
|
||||
configured exception, the Pod won't be scheduled there.
|
||||
|
||||
- `CheckNodePIDPressure`: If a Node is reporting that process IDs are scarce, and
|
||||
there's no configured exception, the Pod won't be scheduled there.
|
||||
|
||||
- `CheckNodeDiskPressure`: If a Node is reporting storage pressure (a filesystem that
|
||||
is full or nearly full), and there's no configured exception, the Pod won't be
|
||||
scheduled there.
|
||||
|
||||
- `CheckNodeCondition`: Nodes can report that they have a completely full filesystem,
|
||||
that networking isn't available or that kubelet is otherwise not ready to run Pods.
|
||||
If such a condition is set for a Node, and there's no configured exception, the Pod
|
||||
won't be scheduled there.
|
||||
|
||||
- `PodToleratesNodeTaints`: checks if a Pod's {{< glossary_tooltip text="tolerations" term_id="toleration" >}}
|
||||
can tolerate the Node's {{< glossary_tooltip text="taints" term_id="taint" >}}.
|
||||
|
||||
- `CheckVolumeBinding`: Evaluates if a Pod can fit due to the volumes it requests.
|
||||
This applies for both bound and unbound
|
||||
{{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}.
|
||||
|
||||
### Scoring
|
||||
|
||||
- `SelectorSpreadPriority`: Spreads Pods across hosts, considering Pods that
|
||||
belong to the same {{< glossary_tooltip text="Service" term_id="service" >}},
|
||||
{{< glossary_tooltip term_id="statefulset" >}} or
|
||||
{{< glossary_tooltip term_id="replica-set" >}}.
|
||||
|
||||
- `InterPodAffinityPriority`: Computes a sum by iterating through the elements
|
||||
of weightedPodAffinityTerm and adding “weight” to the sum if the corresponding
|
||||
PodAffinityTerm is satisfied for that node; the node(s) with the highest sum
|
||||
are the most preferred.
|
||||
|
||||
- `LeastRequestedPriority`: Favors nodes with fewer requested resources. In other
|
||||
words, the more Pods that are placed on a Node, and the more resources those
|
||||
Pods use, the lower the ranking this policy will give.
|
||||
|
||||
- `MostRequestedPriority`: Favors nodes with most requested resources. This policy
|
||||
will fit the scheduled Pods onto the smallest number of Nodes needed to run your
|
||||
overall set of workloads.
|
||||
|
||||
- `RequestedToCapacityRatioPriority`: Creates a requestedToCapacity based ResourceAllocationPriority using default resource scoring function shape.
|
||||
|
||||
- `BalancedResourceAllocation`: Favors nodes with balanced resource usage.
|
||||
|
||||
- `NodePreferAvoidPodsPriority`: Prioritizes nodes according to the node annotation
|
||||
`scheduler.alpha.kubernetes.io/preferAvoidPods`. You can use this to hint that
|
||||
two different Pods shouldn't run on the same Node.
|
||||
|
||||
- `NodeAffinityPriority`: Prioritizes nodes according to node affinity scheduling
|
||||
preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution.
|
||||
You can read more about this in [Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/).
|
||||
|
||||
- `TaintTolerationPriority`: Prepares the priority list for all the nodes, based on
|
||||
the number of intolerable taints on the node. This policy adjusts a node's rank
|
||||
taking that list into account.
|
||||
|
||||
- `ImageLocalityPriority`: Favors nodes that already have the
|
||||
{{< glossary_tooltip text="container images" term_id="image" >}} for that
|
||||
Pod cached locally.
|
||||
|
||||
- `ServiceSpreadingPriority`: For a given Service, this policy aims to make sure that
|
||||
the Pods for the Service run on different nodes. It favours scheduling onto nodes
|
||||
that don't have Pods for the service already assigned there. The overall outcome is
|
||||
that the Service becomes more resilient to a single Node failure.
|
||||
|
||||
- `CalculateAntiAffinityPriorityMap`: This policy helps implement
|
||||
[pod anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
|
||||
|
||||
- `EqualPriorityMap`: Gives an equal weight of one to all nodes.
|
||||
1. [Scheduling Policies](/docs/reference/scheduling/policies) allow you to
|
||||
configure _Predicates_ for filtering and _Priorities_ for scoring.
|
||||
1. [Scheduling Profiles](/docs/reference/scheduling/profiles) allow you to
|
||||
configure Plugins that implement different scheduling stages, including:
|
||||
`QueueSort`, `Filter`, `Score`, `Bind`, `Reserve`, `Permit`, and others. You
|
||||
can also configure the kube-scheduler to run different profiles.
|
||||
|
||||
{{% /capture %}}
|
||||
{{% capture whatsnext %}}
|
||||
|
|
|
@ -3,14 +3,14 @@ reviewers:
|
|||
- ahg-g
|
||||
title: Scheduling Framework
|
||||
content_template: templates/concept
|
||||
weight: 70
|
||||
weight: 60
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="1.15" state="alpha" >}}
|
||||
|
||||
The scheduling framework is a new pluggable architecture for Kubernetes Scheduler
|
||||
The scheduling framework is a pluggable architecture for Kubernetes Scheduler
|
||||
that makes scheduler customizations easy. It adds a new set of "plugin" APIs to
|
||||
the existing scheduler. Plugins are compiled into the scheduler. The APIs
|
||||
allow most scheduling features to be implemented as plugins, while keeping the
|
||||
|
@ -56,16 +56,16 @@ stateful tasks.
|
|||
|
||||
{{< figure src="/images/docs/scheduling-framework-extensions.png" title="scheduling framework extension points" >}}
|
||||
|
||||
### Queue sort
|
||||
### QueueSort {#queue-sort}
|
||||
|
||||
These plugins are used to sort Pods in the scheduling queue. A queue sort plugin
|
||||
essentially will provide a "less(Pod1, Pod2)" function. Only one queue sort
|
||||
essentially provides a `Less(Pod1, Pod2)` function. Only one queue sort
|
||||
plugin may be enabled at a time.
|
||||
|
||||
### Pre-filter
|
||||
### PreFilter {#pre-filter}
|
||||
|
||||
These plugins are used to pre-process info about the Pod, or to check certain
|
||||
conditions that the cluster or the Pod must meet. If a pre-filter plugin returns
|
||||
conditions that the cluster or the Pod must meet. If a PreFilter plugin returns
|
||||
an error, the scheduling cycle is aborted.
|
||||
|
||||
### Filter
|
||||
|
@ -75,28 +75,25 @@ node, the scheduler will call filter plugins in their configured order. If any
|
|||
filter plugin marks the node as infeasible, the remaining plugins will not be
|
||||
called for that node. Nodes may be evaluated concurrently.
|
||||
|
||||
### Post-filter
|
||||
### PreScore {#pre-score}
|
||||
|
||||
This is an informational extension point. Plugins will be called with a list of
|
||||
nodes that passed the filtering phase. A plugin may use this data to update
|
||||
internal state or to generate logs/metrics.
|
||||
These plugins are used to perform "pre-scoring" work, which generates a sharable
|
||||
state for Score plugins to use. If a PreScore plugin returns an error, the
|
||||
scheduling cycle is aborted.
|
||||
|
||||
**Note:** Plugins wishing to perform "pre-scoring" work should use the
|
||||
post-filter extension point.
|
||||
|
||||
### Scoring
|
||||
### Score {#scoring}
|
||||
|
||||
These plugins are used to rank nodes that have passed the filtering phase. The
|
||||
scheduler will call each scoring plugin for each node. There will be a well
|
||||
defined range of integers representing the minimum and maximum scores. After the
|
||||
[normalize scoring](#normalize-scoring) phase, the scheduler will combine node
|
||||
[NormalizeScore](#normalize-scoring) phase, the scheduler will combine node
|
||||
scores from all plugins according to the configured plugin weights.
|
||||
|
||||
### Normalize scoring
|
||||
### NormalizeScore {#normalize-scoring}
|
||||
|
||||
These plugins are used to modify scores before the scheduler computes a final
|
||||
ranking of Nodes. A plugin that registers for this extension point will be
|
||||
called with the [scoring](#scoring) results from the same plugin. This is called
|
||||
called with the [Score](#scoring) results from the same plugin. This is called
|
||||
once per plugin per scheduling cycle.
|
||||
|
||||
For example, suppose a plugin `BlinkingLightScorer` ranks Nodes based on how
|
||||
|
@ -104,7 +101,7 @@ many blinking lights they have.
|
|||
|
||||
```go
|
||||
func ScoreNode(_ *v1.pod, n *v1.Node) (int, error) {
|
||||
return getBlinkingLightCount(n)
|
||||
return getBlinkingLightCount(n)
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -114,21 +111,23 @@ extension point.
|
|||
|
||||
```go
|
||||
func NormalizeScores(scores map[string]int) {
|
||||
highest := 0
|
||||
for _, score := range scores {
|
||||
highest = max(highest, score)
|
||||
}
|
||||
for node, score := range scores {
|
||||
scores[node] = score*NodeScoreMax/highest
|
||||
}
|
||||
highest := 0
|
||||
for _, score := range scores {
|
||||
highest = max(highest, score)
|
||||
}
|
||||
for node, score := range scores {
|
||||
scores[node] = score*NodeScoreMax/highest
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If any normalize-scoring plugin returns an error, the scheduling cycle is
|
||||
If any NormalizeScore plugin returns an error, the scheduling cycle is
|
||||
aborted.
|
||||
|
||||
**Note:** Plugins wishing to perform "pre-reserve" work should use the
|
||||
normalize-scoring extension point.
|
||||
{{< note >}}
|
||||
Plugins wishing to perform "pre-reserve" work should use the
|
||||
NormalizeScore extension point.
|
||||
{{< /note >}}
|
||||
|
||||
### Reserve
|
||||
|
||||
|
@ -140,53 +139,53 @@ to prevent race conditions while the scheduler waits for the bind to succeed.
|
|||
|
||||
This is the last step in a scheduling cycle. Once a Pod is in the reserved
|
||||
state, it will either trigger [Unreserve](#unreserve) plugins (on failure) or
|
||||
[Post-bind](#post-bind) plugins (on success) at the end of the binding cycle.
|
||||
|
||||
*Note: This concept used to be referred to as "assume".*
|
||||
[PostBind](#post-bind) plugins (on success) at the end of the binding cycle.
|
||||
|
||||
### Permit
|
||||
|
||||
These plugins are used to prevent or delay the binding of a Pod. A permit plugin
|
||||
can do one of three things.
|
||||
_Permit_ plugins are invoked at the end of the scheduling cycle for each Pod, to
|
||||
prevent or delay the binding to the candidate node. A permit plugin can do one of
|
||||
three things:
|
||||
|
||||
1. **approve** \
|
||||
Once all permit plugins approve a Pod, it is sent for binding.
|
||||
Once all Permit plugins approve a Pod, it is sent for binding.
|
||||
|
||||
1. **deny** \
|
||||
If any permit plugin denies a Pod, it is returned to the scheduling queue.
|
||||
If any Permit plugin denies a Pod, it is returned to the scheduling queue.
|
||||
This will trigger [Unreserve](#unreserve) plugins.
|
||||
|
||||
1. **wait** (with a timeout) \
|
||||
If a permit plugin returns "wait", then the Pod is kept in the permit phase
|
||||
until a [plugin approves it](#frameworkhandle). If a timeout occurs, **wait**
|
||||
becomes **deny** and the Pod is returned to the scheduling queue, triggering
|
||||
[Unreserve](#unreserve) plugins.
|
||||
If a Permit plugin returns "wait", then the Pod is kept in an internal "waiting"
|
||||
Pods list, and the binding cycle of this Pod starts but directly blocks until it
|
||||
gets [approved](#frameworkhandle). If a timeout occurs, **wait** becomes **deny**
|
||||
and the Pod is returned to the scheduling queue, triggering [Unreserve](#unreserve)
|
||||
plugins.
|
||||
|
||||
**Approving a Pod binding**
|
||||
{{< note >}}
|
||||
While any plugin can access the list of "waiting" Pods and approve them
|
||||
(see [`FrameworkHandle`](#frameworkhandle)), we expect only the permit
|
||||
plugins to approve binding of reserved Pods that are in "waiting" state. Once a Pod
|
||||
is approved, it is sent to the [PreBind](#pre-bind) phase.
|
||||
{{< /note >}}
|
||||
|
||||
While any plugin can access the list of "waiting" Pods from the cache and
|
||||
approve them (see [`FrameworkHandle`](#frameworkhandle)) we expect only the permit
|
||||
plugins to approve binding of reserved Pods that are in "waiting" state. Once a
|
||||
Pod is approved, it is sent to the pre-bind phase.
|
||||
|
||||
### Pre-bind
|
||||
### PreBind {#pre-bind}
|
||||
|
||||
These plugins are used to perform any work required before a Pod is bound. For
|
||||
example, a pre-bind plugin may provision a network volume and mount it on the
|
||||
target node before allowing the Pod to run there.
|
||||
|
||||
If any pre-bind plugin returns an error, the Pod is [rejected](#unreserve) and
|
||||
If any PreBind plugin returns an error, the Pod is [rejected](#unreserve) and
|
||||
returned to the scheduling queue.
|
||||
|
||||
### Bind
|
||||
|
||||
These plugins are used to bind a Pod to a Node. Bind plugins will not be called
|
||||
until all pre-bind plugins have completed. Each bind plugin is called in the
|
||||
until all PreBind plugins have completed. Each bind plugin is called in the
|
||||
configured order. A bind plugin may choose whether or not to handle the given
|
||||
Pod. If a bind plugin chooses to handle a Pod, **the remaining bind plugins are
|
||||
skipped**.
|
||||
|
||||
### Post-bind
|
||||
### PostBind {#post-bind}
|
||||
|
||||
This is an informational extension point. Post-bind plugins are called after a
|
||||
Pod is successfully bound. This is the end of a binding cycle, and can be used
|
||||
|
@ -209,88 +208,35 @@ interfaces have the following form.
|
|||
|
||||
```go
|
||||
type Plugin interface {
|
||||
Name() string
|
||||
Name() string
|
||||
}
|
||||
|
||||
type QueueSortPlugin interface {
|
||||
Plugin
|
||||
Less(*v1.pod, *v1.pod) bool
|
||||
Plugin
|
||||
Less(*v1.pod, *v1.pod) bool
|
||||
}
|
||||
|
||||
type PreFilterPlugin interface {
|
||||
Plugin
|
||||
PreFilter(PluginContext, *v1.pod) error
|
||||
Plugin
|
||||
PreFilter(context.Context, *framework.CycleState, *v1.pod) error
|
||||
}
|
||||
|
||||
// ...
|
||||
```
|
||||
|
||||
# Plugin Configuration
|
||||
## Plugin configuration
|
||||
|
||||
Plugins can be enabled in the scheduler configuration. Also, default plugins can
|
||||
be disabled in the configuration. In 1.15, there are no default plugins for the
|
||||
scheduling framework.
|
||||
You can enable or disable plugins in the scheduler configuration. If you are using
|
||||
Kubernetes v1.18 or later, most scheduling
|
||||
[plugins](/docs/reference/scheduling/profiles/#scheduling-plugins) are in use and
|
||||
enabled by default.
|
||||
|
||||
The scheduler configuration can include configuration for plugins as well. Such
|
||||
configurations are passed to the plugins at the time the scheduler initializes
|
||||
them. The configuration is an arbitrary value. The receiving plugin should
|
||||
decode and process the configuration.
|
||||
In addition to default plugins, you can also implement your own scheduling
|
||||
plugins and get them configured along with default plugins. You can visit
|
||||
[scheduler-plugins](https://github.com/kubernetes-sigs/scheduler-plugins) for more details.
|
||||
|
||||
The following example shows a scheduler configuration that enables some
|
||||
plugins at `reserve` and `preBind` extension points and disables a plugin. It
|
||||
also provides a configuration to plugin `foo`.
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1alpha1
|
||||
kind: KubeSchedulerConfiguration
|
||||
|
||||
...
|
||||
|
||||
plugins:
|
||||
reserve:
|
||||
enabled:
|
||||
- name: foo
|
||||
- name: bar
|
||||
disabled:
|
||||
- name: baz
|
||||
preBind:
|
||||
enabled:
|
||||
- name: foo
|
||||
disabled:
|
||||
- name: baz
|
||||
|
||||
pluginConfig:
|
||||
- name: foo
|
||||
args: >
|
||||
Arbitrary set of args to plugin foo
|
||||
```
|
||||
|
||||
When an extension point is omitted from the configuration, default plugins for
|
||||
that extension point are used. When an extension point exists and `enabled` is
|
||||
provided, the `enabled` plugins are called in addition to default plugins.
|
||||
Default plugins are called first and then the additional enabled plugins are
|
||||
called in the same order specified in the configuration. If a different order of
|
||||
calling default plugins is desired, default plugins must be `disabled` and
|
||||
`enabled` in the desired order.
|
||||
|
||||
Assuming there is a default plugin called `foo` at `reserve` and we are adding
|
||||
plugin `bar` that we want to be invoked before `foo`, we should disable `foo`
|
||||
and enable `bar` and `foo` in order. The following example shows the
|
||||
configuration that achieves this:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1alpha1
|
||||
kind: KubeSchedulerConfiguration
|
||||
|
||||
...
|
||||
|
||||
plugins:
|
||||
reserve:
|
||||
enabled:
|
||||
- name: bar
|
||||
- name: foo
|
||||
disabled:
|
||||
- name: foo
|
||||
```
|
||||
If you are using Kubernetes v1.18 or later, you can configure a set of plugins as
|
||||
a scheduler profile and then define multiple profiles to fit various kinds of workload.
|
||||
Learn more at [multiple profiles](/docs/reference/scheduling/profiles/#multiple-profiles).
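
As a rough sketch of what that can look like (the second profile name is made up), a single kube-scheduler could expose a default profile and a second profile that skips scoring entirely:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler   # hypothetical profile name
  plugins:
    preScore:
      disabled:
      - name: '*'
    score:
      disabled:
      - name: '*'
```

Pods then opt into a profile by setting `.spec.schedulerName`.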
|
||||
|
||||
{{% /capture %}}
|
|
@ -73,6 +73,7 @@ spec:
|
|||
- http:
|
||||
paths:
|
||||
- path: /testpath
|
||||
pathType: Prefix
|
||||
backend:
|
||||
serviceName: test
|
||||
servicePort: 80
|
||||
|
@ -117,6 +118,84 @@ backend is typically a configuration option of the [Ingress controller](/docs/co
|
|||
If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is
|
||||
routed to your default backend.
|
||||
|
||||
### Path Types
|
||||
|
||||
Each path in an Ingress has a corresponding path type. There are three supported
|
||||
path types:
|
||||
|
||||
* _`ImplementationSpecific`_ (default): With this path type, matching is up to
|
||||
the IngressClass. Implementations can treat this as a separate `pathType` or
|
||||
treat it identically to `Prefix` or `Exact` path types.
|
||||
|
||||
* _`Exact`_: Matches the URL path exactly and with case sensitivity.
|
||||
|
||||
* _`Prefix`_: Matches based on a URL path prefix split by `/`. Matching is case
|
||||
sensitive and done on a path element by element basis. A path element refers
|
||||
to the list of labels in the path split by the `/` separator. A request is a
|
||||
match for path _p_ if every element of _p_ is an element-wise prefix of the
|
||||
request path.
|
||||
{{< note >}}
|
||||
If the last element of the path is a substring of the
|
||||
last element of the request path, it is not a match (for example:
|
||||
`/foo/bar` matches `/foo/bar/baz`, but does not match `/foo/barbaz`).
|
||||
{{< /note >}}
|
||||
|
||||
#### Multiple Matches
|
||||
In some cases, multiple paths within an Ingress will match a request. In those
|
||||
cases precedence will be given first to the longest matching path. If two paths
|
||||
are still equally matched, precedence will be given to paths with an exact path
|
||||
type over prefix path type.
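
As an illustration of these rules (the Service names are hypothetical), the following Ingress combines a `Prefix` and an `Exact` path. A request for `/foo/bar/baz` matches both, and it is routed to `service2` because the exact, longer path takes precedence:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-type-example
spec:
  rules:
  - http:
      paths:
      - path: /foo/bar
        pathType: Prefix        # matches /foo/bar and anything below it
        backend:
          serviceName: service1
          servicePort: 80
      - path: /foo/bar/baz
        pathType: Exact         # matches only /foo/bar/baz
        backend:
          serviceName: service2
          servicePort: 80
```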
|
||||
|
||||
## Ingress Class
|
||||
|
||||
Ingresses can be implemented by different controllers, often with different
|
||||
configuration. Each Ingress should specify a class, a reference to an
|
||||
IngressClass resource that contains additional configuration including the name
|
||||
of the controller that should implement the class.
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1beta1
|
||||
kind: IngressClass
|
||||
metadata:
|
||||
name: external-lb
|
||||
spec:
|
||||
controller: example.com/ingress-controller
|
||||
parameters:
|
||||
apiGroup: k8s.example.com/v1alpha
|
||||
kind: IngressParameters
|
||||
name: external-lb
|
||||
```
|
||||
|
||||
IngressClass resources contain an optional parameters field. This can be used to
|
||||
reference additional configuration for this class.
|
||||
|
||||
### Deprecated Annotation
|
||||
|
||||
Before the IngressClass resource and `ingressClassName` field were added in
|
||||
Kubernetes 1.18, Ingress classes were specified with a
|
||||
`kubernetes.io/ingress.class` annotation on the Ingress. This annotation was
|
||||
never formally defined, but was widely supported by Ingress controllers.
|
||||
|
||||
The newer `ingressClassName` field on Ingresses is a replacement for that
|
||||
annotation, but is not a direct equivalent. While the annotation was generally
|
||||
used to reference the name of the Ingress controller that should implement the
|
||||
Ingress, the field is a reference to an IngressClass resource that contains
|
||||
additional Ingress configuration, including the name of the Ingress controller.
|
||||
|
||||
### Default Ingress Class
|
||||
|
||||
You can mark a particular IngressClass as default for your cluster. Setting the
|
||||
`ingressclass.kubernetes.io/is-default-class` annotation to `true` on an
|
||||
IngressClass resource will ensure that new Ingresses without an
|
||||
`ingressClassName` field specified will be assigned this default IngressClass.
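
A sketch of an IngressClass marked as the cluster default, reusing the controller name from the earlier example:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # new Ingresses without ingressClassName get this class
spec:
  controller: example.com/ingress-controller
```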
|
||||
|
||||
{{< caution >}}
|
||||
If you have more than one IngressClass marked as the default for your cluster,
|
||||
the admission controller prevents creating new Ingress objects that don't have
|
||||
an `ingressClassName` specified. You can resolve this by ensuring that at most one
|
||||
IngressClass is marked as default in your cluster.
|
||||
{{< /caution >}}
|
||||
|
||||
## Types of Ingress
|
||||
|
||||
### Single Service Ingress
|
||||
|
|
|
@ -207,7 +207,7 @@ This ensures that even pods that aren't selected by any other NetworkPolicy will
|
|||
|
||||
{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
|
||||
|
||||
To use this feature, you (or your cluster administrator) will need to enable the `SCTPSupport` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the API server with `--feature-gates=SCTPSupport=true,…`.
|
||||
When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`.
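
With the gate enabled, a sketch of an ingress rule allowing SCTP traffic might look like the following (the label selector and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-ingress
spec:
  podSelector:
    matchLabels:
      app: my-sctp-app        # hypothetical label
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: SCTP          # requires the SCTPSupport feature gate; enforcement depends on the network plugin
      port: 7777
```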
|
||||
|
||||
{{< note >}}
|
||||
|
|
|
@ -202,6 +202,17 @@ endpoints.
|
|||
EndpointSlices provide additional attributes and functionality which is
|
||||
described in detail in [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/).
|
||||
|
||||
### Application protocol
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
|
||||
|
||||
The AppProtocol field provides a way to specify an application protocol to be
|
||||
used for each Service port.
|
||||
|
||||
As an alpha feature, this field is not enabled by default. To use this field,
|
||||
enable the `ServiceAppProtocol` [feature
|
||||
gate](/docs/reference/command-line-tools-reference/feature-gates/).
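
With the gate enabled, a Service port can carry an application protocol hint, as in this sketch (names and the protocol value are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: https
    protocol: TCP           # the L4 protocol is still set separately
    appProtocol: https      # the application protocol associated with this port
    port: 443
    targetPort: 8443
```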
|
||||
|
||||
## Virtual IPs and service proxies
|
||||
|
||||
Every node in a Kubernetes cluster runs a `kube-proxy`. `kube-proxy` is
|
||||
|
|
|
@ -324,12 +324,24 @@ Currently, storage size is the only resource that can be set or requested. Futu
|
|||
|
||||
### Volume Mode
|
||||
|
||||
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
Prior to Kubernetes 1.9, all volume plugins created a filesystem on the persistent volume.
|
||||
Now, you can set the value of `volumeMode` to `block` to use a raw block device, or `filesystem`
|
||||
to use a filesystem. `filesystem` is the default if the value is omitted. This is an optional API
|
||||
parameter.
|
||||
Kubernetes supports two `volumeModes` of PersistentVolumes: `Filesystem` and `Block`.
|
||||
|
||||
`volumeMode` is an optional API parameter.
|
||||
`Filesystem` is the default mode used when `volumeMode` parameter is omitted.
|
||||
|
||||
A volume with `volumeMode: Filesystem` is *mounted* into a directory inside Pods. If the volume
|
||||
is backed by a block device and the device is empty, Kubernetes creates a filesystem
|
||||
on the device before mounting it for the first time.
|
||||
|
||||
You can set the value of `volumeMode` to `Block` to use a volume as a raw block device.
|
||||
Such a volume is presented to a Pod as a block device, without any filesystem on it.
|
||||
This mode is useful to provide a Pod the fastest possible way to access a volume, without
|
||||
any filesystem layer between the Pod and the volume. On the other hand, the application
|
||||
running in the Pod must know how to handle a raw block device.
|
||||
See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)
|
||||
for an example on how to use a volume with `volumeMode: Block` in a Pod.
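
A minimal sketch of a claim for a raw block volume (the size is an arbitrary choice, and the default storage class is assumed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block       # the bound volume is exposed as a raw device, no filesystem is created
  resources:
    requests:
      storage: 10Gi
```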
|
||||
|
||||
### Access Modes
|
||||
|
||||
|
@ -564,24 +576,21 @@ spec:
|
|||
|
||||
## Raw Block Volume Support
|
||||
|
||||
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
The following volume plugins support raw block volumes, including dynamic provisioning where
|
||||
applicable:
|
||||
|
||||
* AWSElasticBlockStore
|
||||
* AzureDisk
|
||||
* CSI
|
||||
* FC (Fibre Channel)
|
||||
* GCEPersistentDisk
|
||||
* iSCSI
|
||||
* Local volume
|
||||
* OpenStack Cinder
|
||||
* RBD (Ceph Block Device)
|
||||
* VsphereVolume (alpha)
|
||||
|
||||
{{< note >}}
|
||||
Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9.
|
||||
Support for the additional plugins was added in 1.10.
|
||||
{{< /note >}}
|
||||
* VsphereVolume
|
||||
|
||||
### Persistent Volumes using a Raw Block Volume
|
||||
|
||||
|
@ -697,12 +706,7 @@ spec:
|
|||
|
||||
## Volume Cloning
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
|
||||
|
||||
Volume clone feature was added to support CSI Volume Plugins only. For details, see [volume cloning](/docs/concepts/storage/volume-pvc-datasource/).
|
||||
|
||||
To enable support for cloning a volume from a PVC data source, enable the
|
||||
`VolumePVCDataSource` feature gate on the apiserver and controller-manager.
|
||||
[Volume Cloning](/docs/concepts/storage/volume-pvc-datasource/) is only available for CSI volume plugins.
|
||||
|
||||
### Create Persistent Volume Claim from an existing PVC
|
||||
|
||||
|
|
|
@ -11,7 +11,6 @@ weight: 30
|
|||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
|
||||
This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
|
||||
|
||||
{{% /capture %}}
|
||||
|
@ -36,6 +35,7 @@ Users need to be aware of the following when using this feature:
|
|||
* Cloning is only supported within the same Storage Class.
|
||||
    - Destination volume must be the same storage class as the source
|
||||
    - Default storage class can be used and storageClassName omitted in the spec
|
||||
* Cloning can only be performed between two volumes that use the same VolumeMode setting (if you request a block mode volume, the source MUST also be block mode)
|
||||
|
||||
|
||||
## Provisioning
|
||||
|
|
|
@ -1334,19 +1334,13 @@ persistent volume:
|
|||
|
||||
#### CSI raw block volume support
|
||||
|
||||
{{< feature-state for_k8s_version="v1.14" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
Starting with version 1.11, CSI introduced support for raw block volumes, which
|
||||
relies on the raw block volume feature that was introduced in a previous version of
|
||||
Kubernetes. This feature will make it possible for vendors with external CSI drivers to
|
||||
implement raw block volumes support in Kubernetes workloads.
|
||||
Vendors with external CSI drivers can implement raw block volumes support
|
||||
in Kubernetes workloads.
|
||||
|
||||
CSI block volume support is feature-gated, but enabled by default. The two
|
||||
feature gates which must be enabled for this feature are `BlockVolume` and
|
||||
`CSIBlockVolume`.
|
||||
|
||||
Learn how to
|
||||
[setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support).
|
||||
You can [setup your PV/PVC with raw block volume support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support)
|
||||
as usual, without any CSI specific changes.
|
||||
|
||||
#### CSI ephemeral volumes
|
||||
|
||||
|
|
|
@ -17,14 +17,9 @@ A _Cron Job_ creates [Jobs](/docs/concepts/workloads/controllers/jobs-run-to-com
|
|||
One CronJob object is like one line of a _crontab_ (cron table) file. It runs a job periodically
|
||||
on a given schedule, written in [Cron](https://en.wikipedia.org/wiki/Cron) format.
|
||||
|
||||
{{< caution >}}
|
||||
All **CronJob** `schedule:` times are based on the timezone of the
|
||||
{{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}.
|
||||
|
||||
If your control plane runs the kube-controller-manager in Pods or bare
|
||||
containers, the timezone set for the kube-controller-manager container determines the timezone
|
||||
that the cron job controller uses.
|
||||
{{< /caution >}}
|
||||
{{< note >}}
|
||||
All **CronJob** `schedule:` times are denoted in UTC.
|
||||
{{< /note >}}
|
||||
|
||||
When creating the manifest for a CronJob resource, make sure the name you provide
|
||||
is a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
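
A minimal sketch of such a manifest, with a placeholder image and command, running once a minute:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello               # must be a valid DNS subdomain name
spec:
  schedule: "*/1 * * * *"   # standard cron format: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure
```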
|
||||
|
|
|
@ -12,16 +12,15 @@ weight: 80
|
|||
{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
|
||||
|
||||
This page provides an overview of ephemeral containers: a special type of container
|
||||
that runs temporarily in an existing {{< glossary_tooltip term_id="pod" >}} to accomplish user-initiated actions such
|
||||
as troubleshooting. You use ephemeral containers to inspect services rather than
|
||||
to build applications.
|
||||
that runs temporarily in an existing {{< glossary_tooltip term_id="pod" >}} to
|
||||
accomplish user-initiated actions such as troubleshooting. You use ephemeral
|
||||
containers to inspect services rather than to build applications.
|
||||
|
||||
{{< warning >}}
|
||||
Ephemeral containers are in early alpha state and are not suitable for production
|
||||
clusters. You should expect the feature not to work in some situations, such as
|
||||
when targeting the namespaces of a container. In accordance with the [Kubernetes
|
||||
Deprecation Policy](/docs/reference/using-api/deprecation-policy/), this alpha
|
||||
feature could change significantly in the future or be removed entirely.
|
||||
clusters. In accordance with the [Kubernetes Deprecation Policy](
|
||||
/docs/reference/using-api/deprecation-policy/), this alpha feature could change
|
||||
significantly in the future or be removed entirely.
|
||||
{{< /warning >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
@ -78,7 +77,11 @@ When using ephemeral containers, it's helpful to enable [process namespace
|
|||
sharing](/docs/tasks/configure-pod-container/share-process-namespace/) so
|
||||
you can view processes in other containers.
|
||||
|
||||
### Examples
|
||||
See [Debugging with Ephemeral Debug Container](
|
||||
/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-with-ephemeral-debug-container)
|
||||
for examples of troubleshooting using ephemeral containers.
|
||||
|
||||
## Ephemeral containers API
|
||||
|
||||
{{< note >}}
|
||||
The examples in this section require the `EphemeralContainers` [feature
|
||||
|
@ -87,8 +90,9 @@ enabled, and Kubernetes client and server version v1.16 or later.
|
|||
{{< /note >}}
|
||||
|
||||
The examples in this section demonstrate how ephemeral containers appear in
|
||||
the API. You would normally use a `kubectl` plugin for troubleshooting that
|
||||
automates these steps.
|
||||
the API. You would normally use `kubectl alpha debug` or another `kubectl`
|
||||
[plugin](/docs/tasks/extend-kubectl/kubectl-plugins/) to automate these steps
|
||||
rather than invoking the API directly.
|
||||
|
||||
Ephemeral containers are created using the `ephemeralcontainers` subresource
|
||||
of Pod, which can be demonstrated using `kubectl --raw`. First describe
|
||||
|
@ -180,35 +184,12 @@ Ephemeral Containers:
|
|||
...
|
||||
```
|
||||
|
||||
You can attach to the new ephemeral container using `kubectl attach`:
|
||||
You can interact with the new ephemeral container in the same way as other
|
||||
containers using `kubectl attach`, `kubectl exec`, and `kubectl logs`, for
|
||||
example:
|
||||
|
||||
```shell
|
||||
kubectl attach -it example-pod -c debugger
|
||||
```
|
||||
|
||||
If process namespace sharing is enabled, you can see processes from all the containers in that Pod.
|
||||
For example, after attaching, you run `ps` in the debugger container:
|
||||
|
||||
```shell
|
||||
# Run this in a shell inside the "debugger" ephemeral container
|
||||
ps auxww
|
||||
```
|
||||
The output is similar to:
|
||||
```
|
||||
PID USER TIME COMMAND
|
||||
1 root 0:00 /pause
|
||||
6 root 0:00 nginx: master process nginx -g daemon off;
|
||||
11 101 0:00 nginx: worker process
|
||||
12 101 0:00 nginx: worker process
|
||||
13 101 0:00 nginx: worker process
|
||||
14 101 0:00 nginx: worker process
|
||||
15 101 0:00 nginx: worker process
|
||||
16 101 0:00 nginx: worker process
|
||||
17 101 0:00 nginx: worker process
|
||||
18 101 0:00 nginx: worker process
|
||||
19 root 0:00 /pause
|
||||
24 root 0:00 sh
|
||||
29 root 0:00 ps auxww
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
|
|
@ -6,7 +6,7 @@ weight: 50
|
|||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
You can use _topology spread constraints_ to control how {{< glossary_tooltip text="Pods" term_id="Pod" >}} are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
|
||||
|
||||
|
@ -18,9 +18,8 @@ You can use _topology spread constraints_ to control how {{< glossary_tooltip te
|
|||
|
||||
### Enable Feature Gate
|
||||
|
||||
Ensure the `EvenPodsSpread` feature gate is enabled (it is disabled by default
|
||||
in 1.16). See [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
for an explanation of enabling feature gates. The `EvenPodsSpread` feature gate must be enabled for the
|
||||
The `EvenPodsSpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
must be enabled for the
|
||||
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} **and**
|
||||
{{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}}.
|
||||
|
||||
|
@ -183,6 +182,46 @@ There are some implicit conventions worth noting here:
|
|||
and you know that "zoneC" must be excluded. In this case, you can compose the yaml as below, so that "mypod" will be placed onto "zoneB" instead of "zoneC". Similarly `spec.nodeSelector` is also respected.
|
||||
|
||||
{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
|
||||
|
||||
### Cluster-level default constraints
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
|
||||
|
||||
It is possible to set default topology spread constraints for a cluster. Default
|
||||
topology spread constraints are applied to a Pod if, and only if:
|
||||
|
||||
- It doesn't define any constraints in its `.spec.topologySpreadConstraints`.
|
||||
- It belongs to a service, replication controller, replica set or stateful set.
|
||||
|
||||
Default constraints can be set as part of the `PodTopologySpread` plugin args
|
||||
in a [scheduling profile](/docs/reference/scheduling/profiles).
|
||||
The constraints are specified with the same [API above](#api), except that
|
||||
`labelSelector` must be empty. The selectors are calculated from the services,
|
||||
replication controllers, replica sets or stateful sets that the Pod belongs to.
|
||||
|
||||
An example configuration might look like follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubescheduler.config.k8s.io/v1alpha2
|
||||
kind: KubeSchedulerConfiguration
|
||||
|
||||
profiles:
|
||||
pluginConfig:
|
||||
- name: PodTopologySpread
|
||||
args:
|
||||
defaultConstraints:
|
||||
- maxSkew: 1
|
||||
topologyKey: failure-domain.beta.kubernetes.io/zone
|
||||
whenUnsatisfiable: ScheduleAnyway
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
The score produced by default scheduling constraints might conflict with the
|
||||
score produced by the
|
||||
[`DefaultPodTopologySpread` plugin](/docs/reference/scheduling/profiles/#scheduling-plugins).
|
||||
It is recommended that you disable this plugin in the scheduling profile when
|
||||
using default constraints for `PodTopologySpread`.
|
||||
{{< /note >}}
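
For instance, a minimal sketch of a profile that sets the default constraints and disables `DefaultPodTopologySpread` (assuming the `v1alpha2` plugin configuration layout; this exact profile is illustrative, not prescribed by this page):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration

profiles:
  - pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:
            - maxSkew: 1
              topologyKey: failure-domain.beta.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
    plugins:
      score:
        disabled:
          - name: DefaultPodTopologySpread  # avoid conflicting spreading scores
```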

## Comparison with PodAffinity/PodAntiAffinity

@@ -201,9 +240,9 @@ See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig

## Known Limitations

As of 1.18, at which this feature is Beta, there are some known limitations:

- Scaling down a Deployment may result in imbalanced Pods distribution.
- Pods matched on tainted nodes are respected. See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)

{{% /capture %}}

@@ -38,13 +38,15 @@ client libraries:

* [JSONPath](/docs/reference/kubectl/jsonpath/) - Syntax guide for using [JSONPath expressions](http://goessner.net/articles/JsonPath/) with kubectl.
* [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) - CLI tool to easily provision a secure Kubernetes cluster.

## Components Reference

* [kubelet](/docs/reference/command-line-tools-reference/kubelet/) - The primary *node agent* that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) - REST API that validates and configures data for API objects such as pods, services, replication controllers.
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Daemon that embeds the core control loops shipped with Kubernetes.
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends.
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - Scheduler that manages availability, performance, and capacity.
  * [kube-scheduler Policies](/docs/reference/scheduling/policies)
  * [kube-scheduler Profiles](/docs/reference/scheduling/profiles)

## Design Docs

@@ -115,6 +115,30 @@ required.

Rejects all requests. AlwaysDeny is DEPRECATED as it has no real meaning.

### CertificateApproval {#certificateapproval}

This admission controller observes requests to 'approve' CertificateSigningRequest resources and performs additional
authorization checks to ensure the approving user has permission to `approve` certificate requests with the
`spec.signerName` requested on the CertificateSigningRequest resource.

See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
information on the permissions required to perform different actions on CertificateSigningRequest resources.

### CertificateSigning {#certificatesigning}

This admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources
and performs additional authorization checks to ensure the signing user has permission to `sign` certificate
requests with the `spec.signerName` requested on the CertificateSigningRequest resource.

See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for more
information on the permissions required to perform different actions on CertificateSigningRequest resources.

### CertificateSubjectRestrictions {#certificatesubjectrestrictions}

This admission controller observes creation of CertificateSigningRequest resources that have a `spec.signerName`
of `kubernetes.io/kube-apiserver-client`. It rejects any request that specifies a 'group' (or 'organization attribute')
of `system:masters`.

### DefaultStorageClass {#defaultstorageclass}

This admission controller observes creation of `PersistentVolumeClaim` objects that do not request any specific storage class

@@ -120,7 +120,7 @@ Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269

### Bootstrap Tokens

{{< feature-state for_k8s_version="v1.18" state="stable" >}}

To allow for streamlined bootstrapping for new clusters, Kubernetes includes a
dynamically-managed Bearer token type called a *Bootstrap Token*. These tokens

@@ -7,6 +7,9 @@ weight: 20

---

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.18" state="stable" >}}

Bootstrap tokens are a simple bearer token that is meant to be used when
creating new clusters or joining new nodes to an existing cluster. It was built
to support [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/), but can be used in other contexts

@@ -26,8 +29,6 @@ Controller Manager. The tokens are also used to create a signature for a

specific ConfigMap used in a "discovery" process through a BootstrapSigner
controller.

## Token Format

Bootstrap Tokens take the form of `abcdef.0123456789abcdef`. More formally,

@@ -115,7 +116,7 @@ authenticate to the API server as a bearer token.

`cluster-info` ConfigMap as described below.

The `expiration` field controls the expiry of the token. Expired tokens are
rejected when used for authentication and ignored during ConfigMap signing.
The expiry value is encoded as an absolute UTC time using RFC3339. Enable the
`tokencleaner` controller to automatically delete expired tokens.
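
As an illustration, the `expiration` field sits alongside the other fields of the token Secret; a minimal sketch, assuming the conventional `bootstrap.kubernetes.io/token` Secret layout (the token ID, secret, and timestamp below are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef      # name is expected to be "bootstrap-token-<token id>"
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  expiration: "2020-03-10T03:22:11Z"   # RFC3339, absolute UTC time
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```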

@@ -0,0 +1,330 @@

---
reviewers:
- liggitt
- mikedanese
- munnerz
title: Certificate Signing Requests
content_template: templates/concept
weight: 20
---

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.18" state="beta" >}}

The Certificates API enables automation of
[X.509](https://www.itu.int/rec/T-REC-X.509) credential provisioning by providing
a programmatic interface for clients of the Kubernetes API to request and obtain
X.509 {{< glossary_tooltip term_id="certificate" text="certificates" >}} from a Certificate Authority (CA).

A CertificateSigningRequest (CSR) resource is used to request that a certificate be signed
by a denoted signer, after which the request may be approved or denied before
finally being signed.

{{% /capture %}}

{{% capture body %}}

## Request signing process

The _CertificateSigningRequest_ resource type allows a client to ask for an X.509 certificate
to be issued, based on a signing request.
The CertificateSigningRequest object includes a PEM-encoded PKCS#10 signing request in
the `spec.request` field. The CertificateSigningRequest denotes the _signer_ (the
recipient that the request is being made to) using the `spec.signerName` field.
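
As an illustration, a CertificateSigningRequest manifest might look like the following minimal sketch (the name, base64-encoded request, signer, and usages here are placeholders, not values taken from this page):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace             # hypothetical name
spec:
  request: LS0tLS1CRUdJTi...            # base64-encoded PKCS#10 signing request (truncated placeholder)
  signerName: example.com/my-signer-name
  usages:
  - digital signature
  - key encipherment
  - server auth
```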

Once created, a CertificateSigningRequest must be approved before it can be signed.
Depending on the signer selected, a CertificateSigningRequest may be automatically approved
by a {{< glossary_tooltip text="controller" term_id="controller" >}}.
Otherwise, a CertificateSigningRequest must be manually approved either via the REST API (or client-go)
or by running `kubectl certificate approve`. Likewise, a CertificateSigningRequest may also be denied,
which tells the configured signer that it must not sign the request.

For certificates that have been approved, the next step is signing. The relevant signing controller
first validates that the signing conditions are met and then creates a certificate.
The signing controller then updates the CertificateSigningRequest, storing the new certificate into
the `status.certificate` field of the existing CertificateSigningRequest object. The
`status.certificate` field is either empty or contains an X.509 certificate, encoded in PEM format.
The CertificateSigningRequest `status.certificate` field is empty until the signer does this.

Once the `status.certificate` field has been populated, the request has been completed and clients can now
fetch the signed certificate PEM data from the CertificateSigningRequest resource.
Signers can instead deny certificate signing if the approval conditions are not met.

In order to reduce the number of old CertificateSigningRequest resources left in a cluster, a garbage collection
controller runs periodically. The garbage collection removes CertificateSigningRequests that have not changed
state for some duration:

* Approved requests: automatically deleted after 1 hour
* Denied requests: automatically deleted after 1 hour
* Pending requests: automatically deleted after 1 hour

## Signers

All signers should provide information about how they work so that clients can predict what will happen to their CSRs.
This includes:

1. **Trust distribution**: how trust (CA bundles) is distributed.
1. **Permitted subjects**: any restrictions on and behavior when a disallowed subject is requested.
1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames, Email subjectAltNames, URI subjectAltNames etc., and behavior when a disallowed extension is requested.
1. **Permitted key usages / extended key usages**: any restrictions on and behavior when usages different than the signer-determined usages are specified in the CSR.
1. **Expiration/certificate lifetime**: whether it is fixed by the signer, configurable by the admin, determined by the CSR object etc., and behavior if an expiration different than the signer-determined expiration is specified in the CSR.
1. **CA bit allowed/disallowed**: and behavior if a CSR contains a request for a CA certificate when the signer does not permit it.

Commonly, the `status.certificate` field contains a single PEM-encoded X.509 certificate once the CSR is approved and the certificate is issued. Some signers store multiple certificates into the `status.certificate` field. In that case, the documentation for the signer should specify the meaning of additional certificates; for example, this might be the certificate plus intermediates to be presented during TLS handshakes.

### Kubernetes signers

Kubernetes provides built-in signers that each have a well-known `signerName`:

1. `kubernetes.io/kube-apiserver-client`: signs certificates that will be honored as client-certs by the kube-apiserver.
   Never auto-approved by {{< glossary_tooltip term_id="kube-controller-manager" >}}.
   1. Trust distribution: signed certificates must be honored as client-certificates by the kube-apiserver. The CA bundle is not distributed by any other means.
   1. Permitted subjects - no subject restrictions, but approvers and signers may choose not to approve or sign. Certain subjects like cluster-admin level users or groups vary between distributions and installations, but deserve additional scrutiny before approval and signing. The `CertificateSubjectRestriction` admission plugin is available and enabled by default to restrict `system:masters`, but it is often not the only cluster-admin subject in a cluster.
   1. Permitted x509 extensions - honors subjectAltName and key usage extensions and discards other extensions.
   1. Permitted key usages - must include `[]string{"client auth"}`. Must not include key usages beyond `[]string{"digital signature", "key encipherment", "client auth"}`.
   1. Expiration/certificate lifetime - minimum of CSR signer or request. The signer is responsible for checking that the certificate lifetime is valid and permissible.
   1. CA bit allowed/disallowed - not allowed.

1. `kubernetes.io/kube-apiserver-client-kubelet`: signs client certificates that will be honored as client-certs by the
   kube-apiserver.
   May be auto-approved by {{< glossary_tooltip term_id="kube-controller-manager" >}}.
   1. Trust distribution: signed certificates must be honored as client-certificates by the kube-apiserver. The CA bundle
      is not distributed by any other means.
   1. Permitted subjects - organizations are exactly `[]string{"system:nodes"}`, common name starts with `"system:node:"`
   1. Permitted x509 extensions - honors key usage extensions, forbids subjectAltName extensions, drops other extensions.
   1. Permitted key usages - exactly `[]string{"key encipherment", "digital signature", "client auth"}`
   1. Expiration/certificate lifetime - minimum of CSR signer or request. Sanity of the time is the concern of the signer.
   1. CA bit allowed/disallowed - not allowed.

1. `kubernetes.io/kubelet-serving`: signs serving certificates that are honored as a valid kubelet serving certificate
   by the kube-apiserver, but has no other guarantees.
   Never auto-approved by {{< glossary_tooltip term_id="kube-controller-manager" >}}.
   1. Trust distribution: signed certificates must be honored by the kube-apiserver as valid to terminate connections to a kubelet.
      The CA bundle is not distributed by any other means.
   1. Permitted subjects - organizations are exactly `[]string{"system:nodes"}`, common name starts with `"system:node:"`
   1. Permitted x509 extensions - honors key usage and DNSName/IPAddress subjectAltName extensions, forbids EmailAddress and URI subjectAltName extensions, drops other extensions. At least one DNS or IP subjectAltName must be present.
   1. Permitted key usages - exactly `[]string{"key encipherment", "digital signature", "server auth"}`
   1. Expiration/certificate lifetime - minimum of CSR signer or request.
   1. CA bit allowed/disallowed - not allowed.

1. `kubernetes.io/legacy-unknown`: has no guarantees for trust at all. Some distributions may honor these as client
   certs, but that behavior is not standard Kubernetes behavior.
   Never auto-approved by {{< glossary_tooltip term_id="kube-controller-manager" >}}.
   1. Trust distribution: None. There is no standard trust or distribution for this signer in a Kubernetes cluster.
   1. Permitted subjects - any
   1. Permitted x509 extensions - honors subjectAltName and key usage extensions and discards other extensions.
   1. Permitted key usages - any
   1. Expiration/certificate lifetime - minimum of CSR signer or request. Sanity of the time is the concern of the signer.
   1. CA bit allowed/disallowed - not allowed.

{{< note >}}
Failures for all of these are only reported in kube-controller-manager logs.
{{< /note >}}

Distribution of trust happens out of band for these signers. Any trust outside of those described above is strictly
coincidental. For instance, some distributions may honor `kubernetes.io/legacy-unknown` as client certificates for the
kube-apiserver, but this is not a standard.
None of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only
guaranteed to verify a connection to the kube-apiserver using the default service (`kubernetes.default.svc`).

## Authorization

To allow creating a CertificateSigningRequest and retrieving any CertificateSigningRequest:

* Verbs: `create`, `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`

For example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-creator
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - create
  - get
  - list
  - watch
```

To allow approving a CertificateSigningRequest:

* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/approval`
* Verbs: `approve`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`

For example:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-approver
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/approval
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain
  verbs:
  - approve
```

To allow signing a CertificateSigningRequest:

* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`
* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests/status`
* Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>/<signerNamePath>` or `<signerNameDomain>/*`

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-signer
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/status
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain
  verbs:
  - sign
```

## Approval & rejection

### Control plane automated approval {#approval-rejection-control-plane}

The kube-controller-manager ships with a built-in approver for certificates with
a signerName of `kubernetes.io/kube-apiserver-client-kubelet` that delegates various
permissions on CSRs for node credentials to authorization.
The kube-controller-manager POSTs SubjectAccessReview resources to the API server
in order to check authorization for certificate approval.

### Approval & rejection using `kubectl` {#approval-rejection-kubectl}

A Kubernetes administrator (with appropriate permissions) can manually approve
(or deny) CertificateSigningRequests by using the `kubectl certificate
approve` and `kubectl certificate deny` commands.

To approve a CSR with kubectl:

```bash
kubectl certificate approve <certificate-signing-request-name>
```

Likewise, to deny a CSR:

```bash
kubectl certificate deny <certificate-signing-request-name>
```

### Approval & rejection using the Kubernetes API {#approval-rejection-api-client}

Users of the REST API can approve CSRs by submitting an UPDATE request to the `approval`
subresource of the CSR to be approved. For example, you could write an
{{< glossary_tooltip term_id="operator-pattern" text="operator" >}} that watches for a particular
kind of CSR and then sends an UPDATE to approve them.

When you make an approval or rejection request, set either the `Approved` or `Denied`
status condition based on the state you determine:

For `Approved` CSRs:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
...
status:
  conditions:
  - lastUpdateTime: "2020-02-08T11:37:35Z"
    message: Approved by my custom approver controller
    reason: ApprovedByMyPolicy # You can set this to any string
    type: Approved
```

For `Denied` CSRs:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
...
status:
  conditions:
  - lastUpdateTime: "2020-02-08T11:37:35Z"
    message: Denied by my custom approver controller
    reason: DeniedByMyPolicy # You can set this to any string
    type: Denied
```

It's usual to set `status.conditions.reason` to a machine-friendly reason
code using TitleCase; this is a convention but you can set it to anything
you like. If you want to add a note just for human consumption, use the
`status.conditions.message` field.

## Signing

### Control plane signer {#signer-control-plane}

The Kubernetes control plane implements each of the [Kubernetes signers](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers),
as part of the kube-controller-manager.

{{< note >}}
Prior to Kubernetes v1.18, the kube-controller-manager would sign any CSRs that
were marked as approved.
{{< /note >}}

### API-based signers {#signer-api}

Users of the REST API can sign CSRs by submitting an UPDATE request to the `status`
subresource of the CSR to be signed.

As part of this request, the `status.certificate` field should be set to contain the
signed certificate.

{{% /capture %}}

{{% capture whatsnext %}}

* Read [Manage TLS Certificates in a Cluster](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/)
* View the source code for the kube-controller-manager built-in [signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
* View the source code for the kube-controller-manager built-in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
* For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)

{{% /capture %}}

@@ -85,7 +85,7 @@ Because ClusterRoles are cluster-scoped, you can also use them to grant access t

* cluster-scoped resources (like {{< glossary_tooltip text="nodes" term_id="node" >}})
* non-resource endpoints (like `/healthz`)
* namespaced resources (like Pods), across all namespaces

For example: you can use a ClusterRole to allow a particular user to run
`kubectl get pods --all-namespaces`.

@@ -215,7 +215,7 @@ There are two reasons for this restriction:

1. Making `roleRef` immutable allows granting someone `update` permission on an existing binding
   object, so that they can manage the list of subjects, without being able to change
   the role that is granted to those subjects.
1. A binding to a different role is a fundamentally different binding.
   Requiring a binding to be deleted/recreated in order to change the `roleRef`
   ensures the full list of subjects in the binding is intended to be granted
   the new role (as opposed to enabling accidentally modifying just the roleRef

@@ -223,7 +223,7 @@ without verifying all of the existing subjects should be given the new role's

permissions).

The `kubectl auth reconcile` command-line utility creates or updates a manifest file containing RBAC objects,
and handles deleting and recreating binding objects if required to change the role they refer to.
See [command usage and examples](#kubectl-auth-reconcile) for more information.

### Referring to resources

@@ -769,7 +769,7 @@ This is commonly used by add-on API servers for unified authentication and autho

   <td><b>system:kubelet-api-admin</b></td>
   <td>None</td>
   <td>Allows full access to the kubelet API.</td>
</tr>
<tr>
   <td><b>system:node-bootstrapper</b></td>
   <td>None</td>

@@ -1034,8 +1034,8 @@ Examples:

* Test applying a manifest file of RBAC objects, displaying changes that would be made:

```shell
kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client
```

* Apply a manifest file of RBAC objects, preserving any extra permissions (in roles) and any extra subjects (in bindings):

@@ -48,23 +48,18 @@ different Kubernetes components.

| Feature | Default | Stage | Since | Until |
|---------|---------|-------|-------|-------|
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
| `APIListChunking` | `false` | Alpha | 1.8 | 1.8 |
| `APIListChunking` | `true` | Beta | 1.9 | |
| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | |
| `AppArmor` | `true` | Beta | 1.4 | |
| `BalanceAttachedNodeVolumes` | `false` | Alpha | 1.11 | |
| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 |
| `BlockVolume` | `true` | Beta | 1.13 | - |
| `BoundServiceAccountTokenVolume` | `false` | Alpha | 1.13 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
| `CRIContainerLogRotation` | `false` | Alpha | 1.10 | 1.10 |
| `CRIContainerLogRotation` | `true` | Beta | 1.11 | |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | |
| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 |
| `CSIDriverRegistry` | `true` | Beta | 1.14 | |
| `CSIInlineVolume` | `false` | Alpha | 1.15 | 1.15 |
| `CSIInlineVolume` | `true` | Beta | 1.16 | - |
| `CSIMigration` | `false` | Alpha | 1.14 | 1.16 |

@@ -81,6 +76,7 @@ different Kubernetes components.

| `CSIMigrationGCEComplete` | `false` | Alpha | 1.17 | |
| `CSIMigrationOpenStack` | `false` | Alpha | 1.14 | |
| `CSIMigrationOpenStackComplete` | `false` | Alpha | 1.17 | |
| `ConfigurableFSGroupPolicy` | `false` | Alpha | 1.18 | |
| `CustomCPUCFSQuotaPeriod` | `false` | Alpha | 1.12 | |
| `CustomResourceDefaulting` | `false` | Alpha | 1.15 | 1.15 |
| `CustomResourceDefaulting` | `true` | Beta | 1.16 | |

@@ -93,6 +89,8 @@ different Kubernetes components.

| `DynamicKubeletConfig` | `true` | Beta | 1.11 | |
| `EndpointSlice` | `false` | Alpha | 1.16 | 1.16 |
| `EndpointSlice` | `false` | Beta | 1.17 | |
| `EndpointSlice` | `true` | Beta | 1.18 | |
| `EndpointSliceProxying` | `false` | Alpha | 1.18 | |
| `EphemeralContainers` | `false` | Alpha | 1.16 | |
| `ExpandCSIVolumes` | `false` | Alpha | 1.14 | 1.15 |
| `ExpandCSIVolumes` | `true` | Beta | 1.16 | |

@@ -101,9 +99,12 @@ different Kubernetes components.

| `ExpandPersistentVolumes` | `false` | Alpha | 1.8 | 1.10 |
| `ExpandPersistentVolumes` | `true` | Beta | 1.11 | |
| `ExperimentalHostUserNamespaceDefaulting` | `false` | Beta | 1.5 | |
| `EvenPodsSpread` | `false` | Alpha | 1.16 | 1.17 |
| `EvenPodsSpread` | `true` | Beta | 1.18 | |
| `HPAScaleToZero` | `false` | Alpha | 1.16 | |
| `HugePageStorageMediumSize` | `false` | Alpha | 1.18 | |
| `HyperVContainer` | `false` | Alpha | 1.10 | |
| `ImmutableEphemeralVolumes` | `false` | Alpha | 1.18 | |
| `KubeletPodResources` | `false` | Alpha | 1.13 | 1.14 |
| `KubeletPodResources` | `true` | Beta | 1.15 | |
| `LegacyNodeRoleBehavior` | `true` | Alpha | 1.16 | |

@@ -125,6 +126,7 @@ different Kubernetes components.

| `RuntimeClass` | `false` | Alpha | 1.12 | 1.13 |
| `RuntimeClass` | `true` | Beta | 1.14 | |
| `SCTPSupport` | `false` | Alpha | 1.12 | |
| `ServiceAppProtocol` | `false` | Alpha | 1.18 | |
| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
| `ServerSideApply` | `true` | Beta | 1.16 | |
| `ServiceNodeExclusion` | `false` | Alpha | 1.8 | |

@@ -138,8 +140,6 @@ different Kubernetes components.

| `SupportPodPidsLimit` | `false` | Alpha | 1.10 | 1.13 |
| `SupportPodPidsLimit` | `true` | Beta | 1.14 | |
| `Sysctls` | `true` | Beta | 1.11 | |
| `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 |
| `TaintBasedEvictions` | `true` | Beta | 1.13 | |
| `TokenRequest` | `false` | Alpha | 1.10 | 1.11 |
| `TokenRequest` | `true` | Beta | 1.12 | |
| `TokenRequestProjection` | `false` | Alpha | 1.11 | 1.11 |

@@ -148,8 +148,6 @@ different Kubernetes components.

| `TopologyManager` | `false` | Alpha | 1.16 | |
| `ValidateProxyRedirects` | `false` | Alpha | 1.12 | 1.13 |
| `ValidateProxyRedirects` | `true` | Beta | 1.14 | |
| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 |
| `VolumePVCDataSource` | `true` | Beta | 1.16 | |
| `VolumeSnapshotDataSource` | `false` | Alpha | 1.12 | 1.16 |
| `VolumeSnapshotDataSource` | `true` | Beta | 1.17 | - |
| `WindowsGMSA` | `false` | Alpha | 1.14 | |

@@ -173,6 +171,15 @@ different Kubernetes components.

| `AffinityInAnnotations` | - | Deprecated | 1.8 | - |
| `AllowExtTrafficLocalEndpoints` | `false` | Beta | 1.4 | 1.6 |
| `AllowExtTrafficLocalEndpoints` | `true` | GA | 1.7 | - |
| `BlockVolume` | `false` | Alpha | 1.9 | 1.12 |
| `BlockVolume` | `true` | Beta | 1.13 | 1.17 |
| `BlockVolume` | `true` | GA | 1.18 | - |
| `CSIBlockVolume` | `false` | Alpha | 1.11 | 1.13 |
| `CSIBlockVolume` | `true` | Beta | 1.14 | 1.17 |
| `CSIBlockVolume` | `true` | GA | 1.18 | - |
| `CSIDriverRegistry` | `false` | Alpha | 1.12 | 1.13 |
| `CSIDriverRegistry` | `true` | Beta | 1.14 | 1.17 |
| `CSIDriverRegistry` | `true` | GA | 1.18 | |
| `CSINodeInfo` | `false` | Alpha | 1.12 | 1.13 |
| `CSINodeInfo` | `true` | Beta | 1.14 | 1.16 |
| `CSINodeInfo` | `true` | GA | 1.17 | |

@@ -253,9 +260,15 @@ different Kubernetes components.

| `SupportIPVSProxyMode` | `false` | Beta | 1.9 | 1.9 |
| `SupportIPVSProxyMode` | `true` | Beta | 1.10 | 1.10 |
| `SupportIPVSProxyMode` | `true` | GA | 1.11 | - |
| `TaintBasedEvictions` | `false` | Alpha | 1.6 | 1.12 |
| `TaintBasedEvictions` | `true` | Beta | 1.13 | 1.17 |
| `TaintBasedEvictions` | `true` | GA | 1.18 | - |
| `TaintNodesByCondition` | `false` | Alpha | 1.8 | 1.11 |
| `TaintNodesByCondition` | `true` | Beta | 1.12 | 1.16 |
| `TaintNodesByCondition` | `true` | GA | 1.17 | - |
| `VolumePVCDataSource` | `false` | Alpha | 1.15 | 1.15 |
| `VolumePVCDataSource` | `true` | Beta | 1.16 | 1.17 |
| `VolumePVCDataSource` | `true` | GA | 1.18 | - |
| `VolumeScheduling` | `false` | Alpha | 1.9 | 1.9 |
| `VolumeScheduling` | `true` | Beta | 1.10 | 1.12 |
| `VolumeScheduling` | `true` | GA | 1.13 | - |

@@ -266,6 +279,12 @@ different Kubernetes components.

| `WatchBookmark` | `false` | Alpha | 1.15 | 1.15 |
| `WatchBookmark` | `true` | Beta | 1.16 | 1.16 |
| `WatchBookmark` | `true` | GA | 1.17 | - |
| `WindowsGMSA` | `false` | Alpha | 1.14 | 1.15 |
| `WindowsGMSA` | `true` | Beta | 1.16 | 1.17 |
| `WindowsGMSA` | `true` | GA | 1.18 | - |
| `WindowsRunAsUserName` | `false` | Alpha | 1.16 | 1.16 |
| `WindowsRunAsUserName` | `true` | Beta | 1.17 | 1.17 |
| `WindowsRunAsUserName` | `true` | GA | 1.18 | - |
{{< /table >}}

## Using a feature

@@ -315,6 +334,8 @@ Each feature gate is designed for enabling/disabling a specific feature:

- `AdvancedAuditing`: Enable [advanced auditing](/docs/tasks/debug-application-cluster/audit/#advanced-audit)
- `AffinityInAnnotations`(*deprecated*): Enable setting [Pod affinity or anti-affinity](/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity).
- `AllowExtTrafficLocalEndpoints`: Enable a service to route external requests to node local endpoints.
- `AnyVolumeDataSource`: Enable use of any custom resource as the `DataSource` of a
  {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}.
- `APIListChunking`: Enable the API clients to retrieve (`LIST` or `GET`) resources from API server in chunks.
- `APIPriorityAndFairness`: Enable managing request concurrency with prioritization and fairness at each server. (Renamed from `RequestManagement`)
- `APIResponseCompression`: Compress the API responses for `LIST` or `GET` requests.

@@ -333,6 +354,7 @@ Each feature gate is designed for enabling/disabling a specific feature:

  ServiceAccountTokenVolumeProjection.
  Check [Service Account Token Volumes](https://git.k8s.io/community/contributors/design-proposals/storage/svcacct-token-volume-source.md)
  for more details.
- `ConfigurableFSGroupPolicy`: Allows user to configure volume permission change policy for fsGroups when mounting a volume in a Pod. See [Configure volume permission and ownership change policy for Pods](/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods) for more details.
- `CPUManager`: Enable container level CPU affinity support, see [CPU Management Policies](/docs/tasks/administer-cluster/cpu-management-policies/).
- `CRIContainerLogRotation`: Enable container log rotation for the CRI container runtime.
- `CSIBlockVolume`: Enable external CSI volume drivers to support block storage. See the [`csi` raw block volume support](/docs/concepts/storage/volumes/#csi-raw-block-volume-support) documentation for more details.

@@ -391,12 +413,16 @@ Each feature gate is designed for enabling/disabling a specific feature:

  capabilities (e.g. `MKNODE`, `SYS_MODULE` etc.). This should only be enabled
  if user namespace remapping is enabled in the Docker daemon.
- `EndpointSlice`: Enables Endpoint Slices for more scalable and extensible
  network endpoints. See [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `EndpointSliceProxying`: When this feature gate is enabled, kube-proxy will
  use EndpointSlices as the primary data source instead of Endpoints, enabling
  scalability and performance improvements. See [Enabling Endpoint Slices](/docs/tasks/administer-cluster/enabling-endpointslices/).
- `GCERegionalPersistentDisk`: Enable the regional PD feature on GCE.
- `HugePages`: Enable the allocation and consumption of pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
- `HugePageStorageMediumSize`: Enable support for multiple sizes of pre-allocated [huge pages](/docs/tasks/manage-hugepages/scheduling-hugepages/).
- `HyperVContainer`: Enable [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) for Windows containers.
- `HPAScaleToZero`: Enables setting `minReplicas` to 0 for `HorizontalPodAutoscaler` resources when using custom or external metrics.
- `ImmutableEphemeralVolumes`: Allows for marking individual Secrets and ConfigMaps as immutable for better safety and performance.
- `KubeletConfigFile`: Enable loading kubelet configuration from a file specified using a config file.
  See [setting kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file/) for more details.
- `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet

@@ -441,6 +467,7 @@ Each feature gate is designed for enabling/disabling a specific feature:

- `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler instead of the DaemonSet controller.
- `SCTPSupport`: Enables the usage of SCTP as a `protocol` value in `Service`, `Endpoint`, `NetworkPolicy` and `Pod` definitions.
- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/api-concepts/#server-side-apply) path at the API Server.
- `ServiceAppProtocol`: Enables the `AppProtocol` field on Services and Endpoints.
- `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
- `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider.
  A node is eligible for exclusion if labelled with the "`alpha.service-controller.kubernetes.io/exclude-balancer`" key or `node.kubernetes.io/exclude-from-external-load-balancers`.

@@ -473,6 +500,8 @@ Each feature gate is designed for enabling/disabling a specific feature:

- `VolumeSubpathEnvExpansion`: Enable `subPathExpr` field for expanding environment variables into a `subPath`.
- `WatchBookmark`: Enable support for watch bookmark events.
- `WindowsGMSA`: Enables passing of GMSA credential specs from pods to container runtimes.
- `WindowsRunAsUserName`: Enable support for running applications in Windows containers as a non-default user.
  See [Configuring RunAsUserName](/docs/tasks/configure-pod-container/configure-runasusername) for more details.
- `WinDSR`: Allows kube-proxy to create DSR loadbalancers for Windows.
- `WinOverlay`: Allows kube-proxy to run in overlay mode for Windows.

@@ -62,7 +62,7 @@ In the bootstrap initialization process, the following occurs:

4. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token"
5. kubelet connects to the API server, authenticates using the token
6. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR)
7. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet`
8. CSR is approved in one of two ways:
   * If configured, kube-controller-manager automatically approves the CSR
   * If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl`

@@ -117,7 +117,7 @@ While any authentication strategy can be used for the kubelet's initial

bootstrap credentials, the following two authenticators are recommended for ease
of provisioning.

1. [Bootstrap Tokens](#bootstrap-tokens)
2. [Token authentication file](#token-authentication-file)

Bootstrap tokens are a simpler and more easily managed method to authenticate kubelets, and do not require any additional flags when starting kube-apiserver.

@@ -292,35 +292,6 @@ roleRef:

  apiGroup: rbac.authorization.k8s.io
```

**Note: Kubernetes Below 1.8**: If you are running an earlier version of Kubernetes, notably a version below 1.8, then the cluster roles referenced above do not ship by default. You will have to create them yourself _in addition to_ the `ClusterRoleBindings` listed.

To create the `ClusterRole`s:

```yml
# A ClusterRole which instructs the CSR approver to approve a user requesting
# node client credentials.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/nodeclient"]
  verbs: ["create"]
---
# A ClusterRole which instructs the CSR approver to approve a node renewing its
# own client credentials.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeclient"]
  verbs: ["create"]
```

The `csrapproving` controller ships as part of
[kube-controller-manager](/docs/admin/kube-controller-manager/) and is enabled
by default. The controller uses the [`SubjectAccessReview`

@@ -0,0 +1,45 @@

---
title: shuffle sharding
id: shuffle-sharding
date: 2020-03-04
full_link:
short_description: >
  A technique for assigning requests to queues that provides better isolation than hashing modulo the number of queues.

aka:
tags:
- fundamental
---
A technique for assigning requests to queues that provides better isolation than hashing modulo the number of queues.

<!--more-->

We are often concerned with insulating different flows of requests
from each other, so that a high-intensity flow does not crowd out low-intensity flows.
A simple way to put requests into queues is to hash some
characteristics of the request, modulo the number of queues, to get
the index of the queue to use. The hash function uses as input
characteristics of the request that align with flows. For example, in
the Internet this is often the 5-tuple of source and destination
address, protocol, and source and destination port.

That simple hash-based scheme has the property that any high-intensity flow
will crowd out all the low-intensity flows that hash to the same queue.
Providing good insulation for a large number of flows requires a large
number of queues, which is problematic. Shuffle sharding is a more
nimble technique that can do a better job of insulating the low-intensity
flows from the high-intensity flows. The terminology of shuffle sharding uses
the metaphor of dealing a hand from a deck of cards; each queue is a
metaphorical card. The shuffle sharding technique starts with hashing
the flow-identifying characteristics of the request, to produce a hash
value with dozens or more of bits. Then the hash value is used as a
source of entropy to shuffle the deck and deal a hand of cards
(queues). All the dealt queues are examined, and the request is put
into one of the examined queues with the shortest length. With a
modest hand size, it does not cost much to examine all the dealt cards
and a given low-intensity flow has a good chance to dodge the effects of a
given high-intensity flow. With a large hand size it is expensive to examine
the dealt queues and more difficult for the low-intensity flows to dodge the
collective effects of a set of high-intensity flows. Thus, the hand size
should be chosen judiciously.
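
To make the dealing step concrete, here is a small illustrative sketch in Go (not the kube-apiserver implementation; the queue count, hand size, and flow-key format are arbitrary choices):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// dealHand uses the flow's hash value as the source of entropy to "deal" a
// hand of handSize distinct queue indexes out of numQueues, like cards from a deck.
func dealHand(flowKey string, numQueues, handSize int) []int {
	h := fnv.New64a()
	h.Write([]byte(flowKey))
	rng := rand.New(rand.NewSource(int64(h.Sum64())))

	deck := make([]int, numQueues)
	for i := range deck {
		deck[i] = i
	}
	// Partial Fisher-Yates shuffle: only the first handSize cards are needed.
	for i := 0; i < handSize; i++ {
		j := i + rng.Intn(numQueues-i)
		deck[i], deck[j] = deck[j], deck[i]
	}
	return deck[:handSize]
}

// chooseQueue puts the request into the shortest of the dealt queues.
func chooseQueue(flowKey string, queueLengths []int, handSize int) int {
	hand := dealHand(flowKey, len(queueLengths), handSize)
	best := hand[0]
	for _, q := range hand[1:] {
		if queueLengths[q] < queueLengths[best] {
			best = q
		}
	}
	return best
}

func main() {
	queueLengths := make([]int, 64) // 64 queues, all currently empty
	q := chooseQueue("user=alice,verb=list", queueLengths, 8)
	fmt.Println("request goes to queue", q)
}
```

A partial Fisher-Yates shuffle is enough here because only the first `handSize` cards of the shuffled deck are ever examined.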

@@ -79,7 +79,7 @@ Operation | Syntax | Description

`create` | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin.
`delete` | <code>kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags]</code> | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
`describe` | <code>kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags]</code> | Display the detailed state of one or more resources.
`diff` | `kubectl diff -f FILENAME [flags]` | Diff file or stdin against live configuration.
`edit` | <code>kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags]</code> | Edit and update the definition of one or more resources on the server by using the default editor.
`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod.
`explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc.

@@ -92,7 +92,7 @@ Operation | Syntax | Description

`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server.
`replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin.
`rolling-update` | <code>kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags]</code> | Perform a rolling update by gradually replacing the specified replication controller and its pods.
`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=server|client|none] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
`scale` | <code>kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]</code> | Update the size of the specified replication controller.
`version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.

@@ -370,6 +370,16 @@ kubectl logs <pod-name>

kubectl logs -f <pod-name>
```

`kubectl diff` - View a diff of the proposed updates to a cluster.

```shell
# Diff resources included in "pod.json".
kubectl diff -f pod.json

# Diff file read from stdin.
cat service.yaml | kubectl diff -f -
```

## Examples: Creating and using plugins

Use the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins:

@@ -0,0 +1,5 @@

---
title: Scheduling
weight: 70
toc-hide: true
---

@@ -0,0 +1,125 @@

---
title: Scheduling Policies
content_template: templates/concept
weight: 10
---

{{% capture overview %}}

A scheduling Policy can be used to specify the *predicates* and *priorities*
that the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
runs to [filter and score nodes](/docs/concepts/scheduling/kube-scheduler/#kube-scheduler-implementation),
respectively.

You can set a scheduling policy by running
`kube-scheduler --policy-config-file <filename>` or
`kube-scheduler --policy-configmap <ConfigMap>`
and using the [Policy type](https://pkg.go.dev/k8s.io/kube-scheduler@v0.18.0/config/v1?tab=doc#Policy).
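
For illustration, a policy file passed via `--policy-config-file` might look like the following minimal sketch (shown as JSON; the particular predicates, priorities, and weights are arbitrary examples, not recommendations from this page):

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "PodFitsResources"},
    {"name": "PodToleratesNodeTaints"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "NodeAffinityPriority", "weight": 1}
  ]
}
```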

{{% /capture %}}

{{% capture body %}}

## Predicates

The following *predicates* implement filtering:

- `PodFitsHostPorts`: Checks if a Node has free ports (the network protocol kind)
  for the Pod ports the Pod is requesting.

- `PodFitsHost`: Checks if a Pod specifies a specific Node by its hostname.

- `PodFitsResources`: Checks if the Node has free resources (e.g., CPU and Memory)
  to meet the requirement of the Pod.

- `PodMatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
  matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}.

- `NoVolumeZoneConflict`: Evaluate if the {{< glossary_tooltip text="Volumes" term_id="volume" >}}
  that a Pod requests are available on the Node, given the failure zone restrictions for
  that storage.

- `NoDiskConflict`: Evaluates if a Pod can fit on a Node due to the volumes it requests,
  and those that are already mounted.

- `MaxCSIVolumeCount`: Decides how many {{< glossary_tooltip text="CSI" term_id="csi" >}}
  volumes should be attached, and whether that's over a configured limit.

- `CheckNodeMemoryPressure`: If a Node is reporting memory pressure, and there's no
  configured exception, the Pod won't be scheduled there.

- `CheckNodePIDPressure`: If a Node is reporting that process IDs are scarce, and
  there's no configured exception, the Pod won't be scheduled there.

- `CheckNodeDiskPressure`: If a Node is reporting storage pressure (a filesystem that
  is full or nearly full), and there's no configured exception, the Pod won't be
  scheduled there.

- `CheckNodeCondition`: Nodes can report that they have a completely full filesystem,
  that networking isn't available or that kubelet is otherwise not ready to run Pods.
  If such a condition is set for a Node, and there's no configured exception, the Pod
  won't be scheduled there.

- `PodToleratesNodeTaints`: Checks if a Pod's {{< glossary_tooltip text="tolerations" term_id="toleration" >}}
  can tolerate the Node's {{< glossary_tooltip text="taints" term_id="taint" >}}.

- `CheckVolumeBinding`: Evaluates if a Pod can fit due to the volumes it requests.
  This applies for both bound and unbound
  {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}.

## Priorities

The following *priorities* implement scoring:

- `SelectorSpreadPriority`: Spreads Pods across hosts, considering Pods that
  belong to the same {{< glossary_tooltip text="Service" term_id="service" >}},
  {{< glossary_tooltip term_id="statefulset" >}} or
  {{< glossary_tooltip term_id="replica-set" >}}.

- `InterPodAffinityPriority`: Implements preferred
  [inter-pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

- `LeastRequestedPriority`: Favors nodes with fewer requested resources. In other
  words, the more Pods that are placed on a Node, and the more resources those
  Pods use, the lower the ranking this policy will give.

- `MostRequestedPriority`: Favors nodes with most requested resources. This policy
  will fit the scheduled Pods onto the smallest number of Nodes needed to run your
  overall set of workloads.

- `RequestedToCapacityRatioPriority`: Creates a requestedToCapacity based ResourceAllocationPriority using default resource scoring function shape.

- `BalancedResourceAllocation`: Favors nodes with balanced resource usage.

- `NodePreferAvoidPodsPriority`: Prioritizes nodes according to the node annotation
  `scheduler.alpha.kubernetes.io/preferAvoidPods`. You can use this to hint that
  two different Pods shouldn't run on the same Node.

- `NodeAffinityPriority`: Prioritizes nodes according to node affinity scheduling
  preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution.
  You can read more about this in [Assigning Pods to Nodes](/docs/concepts/configuration/assign-pod-node/).

- `TaintTolerationPriority`: Prepares the priority list for all the nodes, based on
  the number of intolerable taints on the node. This policy adjusts a node's rank
  taking that list into account.

- `ImageLocalityPriority`: Favors nodes that already have the
  {{< glossary_tooltip text="container images" term_id="image" >}} for that
  Pod cached locally.

- `ServiceSpreadingPriority`: For a given Service, this policy aims to make sure that
  the Pods for the Service run on different nodes. It favours scheduling onto nodes
  that don't have Pods for the service already assigned there. The overall outcome is
  that the Service becomes more resilient to a single Node failure.

- `EqualPriority`: Gives an equal weight of one to all nodes.

- `EvenPodsSpreadPriority`: Implements preferred
  [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).

{{% /capture %}}

{{% capture whatsnext %}}
* Learn about [scheduling](/docs/concepts/scheduling/kube-scheduler/)
* Learn about [kube-scheduler profiles](/docs/reference/scheduling/profiles/)
{{% /capture %}}

|
|||
---
|
||||
title: Scheduling Profiles
|
||||
content_template: templates/concept
|
||||
weight: 20
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
|
||||
|
||||
A scheduling Profile allows you to configure the different stages of scheduling
|
||||
in the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}.
|
||||
Each stage is exposed through an extension point. Plugins provide scheduling behaviors
|
||||
by implementing one or more of these extension points.
|
||||
|
||||
You can specify scheduling profiles by running `kube-scheduler --config <filename>`,
|
||||
using the component config APIs
|
||||
([`v1alpha1`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" >}}/config/v1alpha1?tab=doc#KubeSchedulerConfiguration)
|
||||
or [`v1alpha2`](https://pkg.go.dev/k8s.io/kube-scheduler@{{< param "fullversion" >}}/config/v1alpha2?tab=doc#KubeSchedulerConfiguration)).
|
||||
The `v1alpha2` API allows you to configure kube-scheduler to run
|
||||
[multiple profiles](#multiple-profiles).
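As a minimal sketch (the file paths are placeholders, assuming the `v1alpha2` API), a configuration file with a single profile could look like:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
  - schedulerName: default-scheduler
```

You would then start the scheduler with `kube-scheduler --config /etc/kubernetes/scheduler-config.yaml`, substituting the path to your own file.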
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
## Extension points
|
||||
|
||||
Scheduling happens in a series of stages that are exposed through the following
|
||||
extension points:
|
||||
|
||||
1. `QueueSort`: These plugins provide an ordering function that is used to
|
||||
sort pending Pods in the scheduling queue. Exactly one queue sort plugin
|
||||
may be enabled at a time.
|
||||
1. `PreFilter`: These plugins are used to pre-process or check information
|
||||
about a Pod or the cluster before filtering.
|
||||
1. `Filter`: These plugins are the equivalent of Predicates in a scheduling
|
||||
Policy and are used to filter out nodes that cannot run the Pod. Filters
|
||||
are called in the configured order.
|
||||
1. `PreScore`: This is an informational extension point that can be used
|
||||
for doing pre-scoring work.
|
||||
1. `Score`: These plugins provide a score to each node that has passed the
|
||||
filtering phase. The scheduler will then select the node with the highest
sum of weighted scores.
|
||||
1. `Reserve`: This is an informational extension point that notifies plugins
|
||||
when resources have been reserved for a given Pod.
|
||||
1. `Permit`: These plugins can prevent or delay the binding of a Pod.
|
||||
1. `PreBind`: These plugins perform any work required before a Pod is bound.
|
||||
1. `Bind`: The plugins bind a Pod to a Node. Bind plugins are called in order
|
||||
and once one has done the binding, the remaining plugins are skipped. At
|
||||
least one bind plugin is required.
|
||||
1. `PostBind`: This is an informational extension point that is called after
|
||||
a Pod has been bound.
|
||||
1. `UnReserve`: This is an informational extension point that is called if
|
||||
a Pod is rejected after being reserved and put on hold by a `Permit` plugin.
|
||||
|
||||
## Scheduling plugins
|
||||
|
||||
The following plugins, enabled by default, implement one or more of these
|
||||
extension points:
|
||||
|
||||
- `DefaultTopologySpread`: Favors spreading across nodes for Pods that belong to
|
||||
{{< glossary_tooltip text="Services" term_id="service" >}},
|
||||
{{< glossary_tooltip text="ReplicaSets" term_id="replica-set" >}} and
|
||||
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}
|
||||
Extension points: `PreScore`, `Score`.
|
||||
- `ImageLocality`: Favors nodes that already have the container images that the
|
||||
Pod runs.
|
||||
Extension points: `Score`.
|
||||
- `TaintToleration`: Implements
|
||||
[taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
|
||||
Extension points: `Filter`, `PreScore`, `Score`.
|
||||
- `NodeName`: Checks if a Pod spec node name matches the current node.
|
||||
Extension points: `Filter`.
|
||||
- `NodePorts`: Checks if a node has free ports for the requested Pod ports.
|
||||
Extension points: `PreFilter`, `Filter`.
|
||||
- `NodePreferAvoidPods`: Scores nodes according to the node
|
||||
{{< glossary_tooltip text="annotation" term_id="annotation" >}}
|
||||
`scheduler.alpha.kubernetes.io/preferAvoidPods`.
|
||||
Extension points: `Score`.
|
||||
- `NodeAffinity`: Implements
|
||||
[node selectors](/docs/concepts/configuration/assign-pod-node/#nodeselector)
|
||||
and [node affinity](/docs/concepts/configuration/assign-pod-node/#node-affinity).
|
||||
Extension points: `Filter`, `Score`.
|
||||
- `PodTopologySpread`: Implements
|
||||
[Pod topology spread](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
|
||||
Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
|
||||
- `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to
|
||||
true.
|
||||
Extension points: `Filter`.
|
||||
- `NodeResourcesFit`: Checks if the node has all the resources that the Pod is
|
||||
requesting.
|
||||
Extension points: `PreFilter`, `Filter`.
|
||||
- `NodeResourcesBalancedAllocation`: Favors nodes that would obtain a more
|
||||
balanced resource usage if the Pod is scheduled there.
|
||||
Extension points: `Score`.
|
||||
- `NodeResourcesLeastAllocated`: Favors nodes that have a low allocation of
|
||||
resources.
|
||||
Extension points: `Score`.
|
||||
- `VolumeBinding`: Checks if the node has, or can bind, the requested
|
||||
{{< glossary_tooltip text="volumes" term_id="volume" >}}.
|
||||
Extension points: `Filter`.
|
||||
- `VolumeRestrictions`: Checks that volumes mounted in the node satisfy
|
||||
restrictions that are specific to the volume provider.
|
||||
Extension points: `Filter`.
|
||||
- `VolumeZone`: Checks that volumes requested satisfy any zone requirements they
|
||||
might have.
|
||||
Extension points: `Filter`.
|
||||
- `NodeVolumeLimits`: Checks that CSI volume limits can be satisfied for the
|
||||
node.
|
||||
Extension points: `Filter`.
|
||||
- `EBSLimits`: Checks that AWS EBS volume limits can be satisfied for the node.
|
||||
Extension points: `Filter`.
|
||||
- `GCEPDLimits`: Checks that GCP-PD volume limits can be satisfied for the node.
|
||||
Extension points: `Filter`.
|
||||
- `AzureDiskLimits`: Checks that Azure disk volume limits can be satisfied for
|
||||
the node.
|
||||
Extension points: `Filter`.
|
||||
- `InterPodAffinity`: Implements
|
||||
[inter-Pod affinity and anti-affinity](/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
|
||||
Extension points: `PreFilter`, `Filter`, `PreScore`, `Score`.
|
||||
- `PrioritySort`: Provides the default priority based sorting.
|
||||
Extension points: `QueueSort`.
|
||||
- `DefaultBinder`: Provides the default binding mechanism.
|
||||
Extension points: `Bind`.
|
||||
|
||||
You can also enable the following plugins, through the component config APIs,
|
||||
that are not enabled by default:
|
||||
|
||||
- `NodeResourcesMostAllocated`: Favors nodes that have a high allocation of
|
||||
resources.
|
||||
Extension points: `Score`.
|
||||
- `RequestedToCapacityRatio`: Favors nodes according to a configured function of
|
||||
the allocated resources.
|
||||
Extension points: `Score`.
|
||||
- `NodeResourceLimits`: Favors nodes that satisfy the Pod resource limits.
|
||||
Extension points: `PreScore`, `Score`.
|
||||
- `CinderVolume`: Checks that OpenStack Cinder volume limits can be satisfied
|
||||
for the node.
|
||||
Extension points: `Filter`.
|
||||
- `NodeLabel`: Filters and/or scores a node according to configured
|
||||
{{< glossary_tooltip text="label(s)" term_id="label" >}}.
|
||||
Extension points: `Filter`, `Score`.
|
||||
- `ServiceAffinity`: Checks that Pods that belong to a
|
||||
{{< glossary_tooltip term_id="service" >}} fit in a set of nodes defined by
|
||||
configured labels. This plugin also favors spreading the Pods belonging to a
|
||||
Service across nodes.
|
||||
Extension points: `PreFilter`, `Filter`, `Score`.
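For example, to turn on one of these optional plugins and turn off a default one at the `score` extension point, a profile's `plugins` section could look like the following sketch (the plugin choice is illustrative, assuming the `v1alpha2` API):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        enabled:
          - name: NodeResourcesMostAllocated
        disabled:
          - name: NodeResourcesLeastAllocated
```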
|
||||
|
||||
## Multiple profiles
|
||||
|
||||
When using the component config API v1alpha2, a scheduler can be configured to
|
||||
run more than one profile. Each profile has an associated scheduler name.
|
||||
Pods that want to be scheduled according to a specific profile can include
|
||||
the corresponding scheduler name in their `.spec.schedulerName`.
|
||||
|
||||
By default, one profile with the scheduler name `default-scheduler` is created.
|
||||
This profile includes the default plugins described above. When declaring more
|
||||
than one profile, a unique scheduler name for each of them is required.
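For instance, the following sketch declares two profiles, one of them with scoring disabled (the profile name `no-scoring-scheduler` is illustrative, assuming the `v1alpha2` API and support for the `'*'` wildcard when disabling plugins):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: no-scoring-scheduler
    plugins:
      preScore:
        disabled:
          - name: '*'
      score:
        disabled:
          - name: '*'
```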
|
||||
|
||||
If a Pod doesn't specify a scheduler name, kube-apiserver will set it to
|
||||
`default-scheduler`. Therefore, a profile with this scheduler name should exist
|
||||
to get those pods scheduled.
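A Pod opts into a specific profile through its `.spec.schedulerName`; for example (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  schedulerName: no-scoring-scheduler
  containers:
    - name: app
      image: nginx
```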
|
||||
|
||||
{{< note >}}
|
||||
A Pod's scheduling events have `.spec.schedulerName` as the ReportingController.
|
||||
Events for leader election use the scheduler name of the first profile in the
|
||||
list.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
All profiles must use the same plugin in the QueueSort extension point and have
|
||||
the same configuration parameters (if applicable). This is because the scheduler
|
||||
has only one pending Pods queue.
|
||||
{{< /note >}}
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
* Learn about [scheduling](/docs/concepts/scheduling/kube-scheduler/)
|
||||
{{% /capture %}}
|
|
@ -447,21 +447,11 @@ A ServiceAccount for `kube-proxy` is created in the `kube-system` namespace; the
|
|||
|
||||
#### DNS
|
||||
|
||||
Note that:
|
||||
|
||||
- In Kubernetes version 1.18 kube-dns usage with kubeadm is deprecated and will be removed in a future release
|
||||
- The CoreDNS service is named `kube-dns`. This is done to prevent any interruption
|
||||
in service when the user is switching the cluster DNS from kube-dns to CoreDNS or vice-versa
|
||||
- In Kubernetes version 1.10 and earlier, you must enable CoreDNS with `--feature-gates=CoreDNS=true`
|
||||
- In Kubernetes version 1.11 and 1.12, CoreDNS is the default DNS server and you must
|
||||
invoke kubeadm with `--feature-gates=CoreDNS=false` to install kube-dns instead
|
||||
- In Kubernetes version 1.13 and later, the `CoreDNS` feature gate is no longer available and kube-dns can be installed using the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon)
|
||||
|
||||
|
||||
A ServiceAccount for CoreDNS/kube-dns is created in the `kube-system` namespace.
|
||||
|
||||
Deploy the `kube-dns` Deployment and Service:
|
||||
|
||||
- It's the upstream CoreDNS deployment relatively unmodified
|
||||
the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon)
|
||||
- A ServiceAccount for CoreDNS/kube-dns is created in the `kube-system` namespace.
|
||||
- The `kube-dns` ServiceAccount is bound to the privileges in the `system:kube-dns` ClusterRole
|
||||
|
||||
## kubeadm join phases internal design
|
||||
|
|
|
@ -157,6 +157,8 @@ dns:
|
|||
type: "kube-dns"
|
||||
```
|
||||
|
||||
Please note that kube-dns usage with kubeadm is deprecated as of v1.18 and will be removed in a future release.
|
||||
|
||||
For more details on each field in the `v1beta2` configuration you can navigate to our
|
||||
[API reference pages](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2).
|
||||
|
||||
|
|
|
@ -67,10 +67,14 @@ following steps:
|
|||
|
||||
1. Installs a DNS server (CoreDNS) and the kube-proxy addon components via the API server.
|
||||
In Kubernetes version 1.11 and later CoreDNS is the default DNS server.
|
||||
To install kube-dns instead of CoreDNS, the DNS addon has to be configured in the kubeadm `ClusterConfiguration`. For more information about the configuration see the section
|
||||
`Using kubeadm init with a configuration file` below.
|
||||
To install kube-dns instead of CoreDNS, the DNS addon has to be configured in the kubeadm `ClusterConfiguration`.
|
||||
For more information about the configuration see the section `Using kubeadm init with a configuration file` below.
|
||||
Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.
|
||||
|
||||
{{< warning >}}
|
||||
kube-dns usage with kubeadm is deprecated as of v1.18 and will be removed in a future release.
|
||||
{{< /warning >}}
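As a minimal sketch of the DNS addon selection mentioned above (assuming the kubeadm `v1beta2` API), the relevant part of the `ClusterConfiguration` looks like:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
dns:
  type: "kube-dns"
```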
|
||||
|
||||
### Using init phases with kubeadm {#init-phases}
|
||||
|
||||
Kubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command.
|
||||
|
|
|
@ -336,12 +336,14 @@ Once the last finalizer is removed, the resource is actually removed from etcd.
|
|||
|
||||
## Dry-run
|
||||
|
||||
{{< feature-state for_k8s_version="v1.13" state="beta" >}} In version 1.13, the dry-run beta feature is enabled by default. The modifying verbs (`POST`, `PUT`, `PATCH`, and `DELETE`) can accept requests in a dry-run mode. DryRun mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non-dry-run response. The system guarantees that dry-run requests will not be persisted in storage or have any other side effects.
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
The modifying verbs (`POST`, `PUT`, `PATCH`, and `DELETE`) can accept requests in a _dry run_ mode. Dry run mode helps to evaluate a request through the typical request stages (admission chain, validation, merge conflicts) up until persisting objects to storage. The response body for the request is as close as possible to a non-dry-run response. The system guarantees that dry-run requests will not be persisted in storage or have any other side effects.
|
||||
|
||||
|
||||
### Make a dry-run request
|
||||
|
||||
Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a string, working as an enum, and in 1.13 the only accepted values are:
|
||||
Dry-run is triggered by setting the `dryRun` query parameter. This parameter is a string, working as an enum, and the only accepted values are:
|
||||
|
||||
* `All`: Every stage runs as normal, except for the final storage stage. Admission controllers are run to check that the request is valid, mutating controllers mutate the request, merge is performed on `PATCH`, fields are defaulted, and schema validation occurs. The changes are not persisted to the underlying storage, but the final object which would have been persisted is still returned to the user, along with the normal status code. If the request would trigger an admission controller which would have side effects, the request will be failed rather than risk an unwanted side effect. All built-in admission control plugins support dry-run. Additionally, admission webhooks can declare in their [configuration object](/docs/reference/generated/kubernetes-api/v1.13/#webhook-v1beta1-admissionregistration-k8s-io) that they do not have side effects by setting the `sideEffects` field to "None". If a webhook actually does have side effects, then the `sideEffects` field should be set to "NoneOnDryRun", and the webhook should also be modified to understand the `DryRun` field in AdmissionReview, and prevent side effects on dry-run requests.
|
||||
* Leave the value empty, which is also the default: Keep the default modifying behavior.
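For example, a dry-run request can be issued with `kubectl` or directly against the API. The following sketch assumes a v1.18 or later kubectl, a manifest named `deployment.yaml`/`deployment.json`, and placeholder server address and credentials:

```shell
# Dry run through kubectl: the API server evaluates the request but does not persist it
kubectl apply -f deployment.yaml --dry-run=server

# The same behavior with a raw API request, using the dryRun query parameter
curl -X POST "https://<api-server>/apis/apps/v1/namespaces/default/deployments?dryRun=All" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data @deployment.json
```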
|
||||
|
@ -386,6 +388,8 @@ Some values of an object are typically generated before the object is persisted.
|
|||
|
||||
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
|
||||
|
||||
{{< note >}}Starting from Kubernetes v1.18, if you have Server Side Apply enabled then the control plane tracks managed fields for all newly created objects.{{< /note >}}
|
||||
|
||||
### Introduction
|
||||
|
||||
Server Side Apply helps users and controllers manage their resources via
|
||||
|
@ -515,6 +519,13 @@ content type `application/apply-patch+yaml`) and `Update` (all other operations
|
|||
which modify the object). Both operations update the `managedFields`, but behave
|
||||
a little differently.
|
||||
|
||||
{{< note >}}
|
||||
Whether you are submitting JSON data or YAML data, use `application/apply-patch+yaml` as the
|
||||
Content-Type header value.
|
||||
|
||||
All JSON documents are valid YAML.
|
||||
{{< /note >}}
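As an illustration (the server address, token, manifest, and `fieldManager` name below are placeholders), an apply request could be sent like this:

```shell
# Apply a partially specified ConfigMap with Server Side Apply.
# The body may be YAML or JSON, but the Content-Type stays application/apply-patch+yaml.
curl -X PATCH \
  "https://<api-server>/api/v1/namespaces/default/configmaps/my-config?fieldManager=my-manager" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/apply-patch+yaml" \
  --data-binary @partial-configmap.yaml
```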
|
||||
|
||||
For instance, only the apply operation fails on conflicts while update does
|
||||
not. Also, apply operations are required to identify themselves by providing a
|
||||
`fieldManager` query parameter, while the query parameter is optional for update
|
||||
|
@ -626,8 +637,9 @@ case.
|
|||
|
||||
With the Server Side Apply feature enabled, the `PATCH` endpoint accepts the
|
||||
additional `application/apply-patch+yaml` content type. Users of Server Side
|
||||
Apply can send partially specified objects to this endpoint. An applied config
|
||||
should always include every field that the applier has an opinion about.
|
||||
Apply can send partially specified objects as YAML to this endpoint.
|
||||
When applying a configuration, one should always include all the fields
|
||||
that they have an opinion about.
|
||||
|
||||
### Clearing ManagedFields
|
||||
|
||||
|
@ -661,6 +673,11 @@ the managedFields, this will result in the managedFields being reset first and
|
|||
the other changes being processed afterwards. As a result the applier takes
|
||||
ownership of any fields updated in the same request.
|
||||
|
||||
{{< caution >}} Server Side Apply does not correctly track ownership on
|
||||
sub-resources that don't receive the resource object type. If you are
|
||||
using Server Side Apply with such a sub-resource, the changed fields
|
||||
won't be tracked. {{< /caution >}}
|
||||
|
||||
### Disabling the feature
|
||||
|
||||
Server Side Apply is a beta feature, so it is enabled by default. To turn this
|
||||
|
|
|
@ -64,7 +64,7 @@ is to drain the Node from its workloads, remove it from the cluster and re-join
|
|||
## Docker
|
||||
|
||||
On each of your machines, install Docker.
|
||||
Version 19.03.4 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well.
|
||||
Version 19.03.8 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well.
|
||||
Keep track of the latest verified Docker version in the Kubernetes release notes.
|
||||
|
||||
Use the following commands to install Docker on your system:
|
||||
|
@ -88,9 +88,9 @@ add-apt-repository \
|
|||
|
||||
## Install Docker CE.
|
||||
apt-get update && apt-get install -y \
|
||||
containerd.io=1.2.10-3 \
|
||||
docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) \
|
||||
docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs)
|
||||
containerd.io=1.2.13-1 \
|
||||
docker-ce=5:19.03.8~3-0~ubuntu-$(lsb_release -cs) \
|
||||
docker-ce-cli=5:19.03.8~3-0~ubuntu-$(lsb_release -cs)
|
||||
|
||||
# Setup daemon.
|
||||
cat > /etc/docker/daemon.json <<EOF
|
||||
|
@ -123,9 +123,9 @@ yum-config-manager --add-repo \
|
|||
|
||||
## Install Docker CE.
|
||||
yum update -y && yum install -y \
|
||||
containerd.io-1.2.10 \
|
||||
docker-ce-19.03.4 \
|
||||
docker-ce-cli-19.03.4
|
||||
containerd.io-1.2.13 \
|
||||
docker-ce-19.03.8 \
|
||||
docker-ce-cli-19.03.8
|
||||
|
||||
## Create /etc/docker directory.
|
||||
mkdir /etc/docker
|
||||
|
|
|
@ -100,7 +100,26 @@ Pods, Controllers and Services are critical elements to managing Windows workloa
|
|||
|
||||
#### Container Runtime
|
||||
|
||||
Docker EE-basic 18.09 is required on Windows Server 2019 / 1809 nodes for Kubernetes. This works with the dockershim code included in the kubelet. Additional runtimes such as CRI-ContainerD may be supported in later Kubernetes versions.
|
||||
##### Docker EE
|
||||
|
||||
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
|
||||
|
||||
Docker EE-basic 18.09+ is the recommended container runtime for Windows Server 2019 / 1809 nodes running Kubernetes. This works with the dockershim code included in the kubelet.
|
||||
|
||||
##### CRI-ContainerD
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
|
||||
|
||||
ContainerD is an OCI-compliant runtime that works with Kubernetes on Linux. Kubernetes v1.18 adds support for {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} on Windows. Progress for ContainerD on Windows can be tracked at [enhancements#1001](https://github.com/kubernetes/enhancements/issues/1001).
|
||||
|
||||
{{< caution >}}
|
||||
|
||||
ContainerD on Windows in Kubernetes v1.18 has the following known shortcomings:
|
||||
|
||||
* ContainerD does not have an official release with support for Windows; all development in Kubernetes has been performed against active ContainerD development branches. Production deployments should always use official releases that have been fully tested and are supported with security fixes.
|
||||
* Group-Managed Service Accounts are not implemented when using ContainerD - see [containerd/cri#1276](https://github.com/containerd/cri/issues/1276).
|
||||
|
||||
{{< /caution >}}
|
||||
|
||||
#### Persistent Storage
|
||||
|
||||
|
@ -408,7 +427,6 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
|
|||
|
||||
# Register kubelet.exe
|
||||
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/k8s/core/pause:1.2.0
|
||||
# For more info search for "pause" in the "Guide for adding Windows Nodes in Kubernetes"
|
||||
nssm install kubelet C:\k\kubelet.exe
|
||||
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/k8s/core/pause:1.2.0 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
|
||||
nssm set kubelet AppDirectory C:\k
|
||||
|
@ -520,7 +538,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
|
|||
|
||||
Check that your pause image is compatible with your OS version. The [instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources) assume that both the OS and the containers are version 1803. If you have a later version of Windows, such as an Insider build, you need to adjust the images accordingly. Please refer to the Microsoft's [Docker repository](https://hub.docker.com/u/microsoft/) for images. Regardless, both the pause image Dockerfile and the sample service expect the image to be tagged as :latest.
|
||||
|
||||
Starting with Kubernetes v1.14, Microsoft releases the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`. For more information search for "pause" in the [Guide for adding Windows Nodes in Kubernetes](../user-guide-windows-nodes).
|
||||
Starting with Kubernetes v1.14, Microsoft releases the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`.
|
||||
|
||||
1. DNS resolution is not properly working
|
||||
|
||||
|
@ -534,6 +552,7 @@ Your main source of help for troubleshooting your Kubernetes cluster should star
|
|||
1. My Kubernetes installation is failing because my Windows Server node is behind a proxy
|
||||
|
||||
If you are behind a proxy, the following PowerShell environment variables must be defined:
|
||||
|
||||
```PowerShell
|
||||
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine)
|
||||
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine)
|
||||
|
@ -571,19 +590,15 @@ If filing a bug, please include detailed information about how to reproduce the
|
|||
|
||||
We have a lot of features in our roadmap. An abbreviated high level list is included below, but we encourage you to view our [roadmap project](https://github.com/orgs/kubernetes/projects/8) and help us make Windows support better by [contributing](https://github.com/kubernetes/community/blob/master/sig-windows/).
|
||||
|
||||
### CRI-ContainerD
|
||||
### Hyper-V isolation
|
||||
|
||||
{{< glossary_tooltip term_id="containerd" >}} is another OCI-compliant runtime that recently graduated as a {{< glossary_tooltip text="CNCF" term_id="cncf" >}} project. It's currently tested on Linux, but 1.3 will bring support for Windows and Hyper-V. [[reference](https://blog.docker.com/2019/02/containerd-graduates-within-the-cncf/)]
|
||||
|
||||
The CRI-ContainerD interface will be able to manage sandboxes based on Hyper-V. This provides a foundation where RuntimeClass could be implemented for new use cases including:
|
||||
Hyper-V isolation is required to enable the following use cases for Windows containers in Kubernetes:
|
||||
|
||||
* Hypervisor-based isolation between pods for additional security
|
||||
* Backwards compatibility allowing a node to run a newer Windows Server version without requiring containers to be rebuilt
|
||||
* Specific CPU/NUMA settings for a pod
|
||||
* Memory isolation and reservations
|
||||
|
||||
### Hyper-V isolation
|
||||
|
||||
The existing Hyper-V isolation support, an experimental feature as of v1.10, will be deprecated in the future in favor of the CRI-ContainerD and RuntimeClass features mentioned above. To use the current features and create a Hyper-V isolated container, the kubelet should be started with the feature gate `HyperVContainer=true` and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`. In the experimental release, this feature is limited to one container per Pod.
|
||||
|
||||
```yaml
|
||||
|
@ -612,7 +627,11 @@ spec:
|
|||
|
||||
### Deployment with kubeadm and cluster API
|
||||
|
||||
Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm will come in a future release. We are also making investments in cluster API to ensure Windows nodes are properly provisioned.
|
||||
Kubeadm is becoming the de facto standard for users to deploy a Kubernetes
|
||||
cluster. Windows node support in kubeadm is currently a work-in-progress but a
|
||||
guide is available [here](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/).
|
||||
We are also making investments in cluster API to ensure Windows nodes are
|
||||
properly provisioned.
|
||||
|
||||
### A few other key features
|
||||
* Beta support for Group Managed Service Accounts
|
||||
|
|
|
@ -22,7 +22,7 @@ Windows applications constitute a large portion of the services and applications
|
|||
|
||||
## Before you begin
|
||||
|
||||
* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](../user-guide-windows-nodes)
|
||||
* Create a Kubernetes cluster that includes a [master and a worker node running Windows Server](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes)
|
||||
* It is important to note that creating and deploying services and workloads on Kubernetes behaves in much the same way for Linux and Windows containers. [Kubectl commands](/docs/reference/kubectl/overview/) to interface with the cluster are identical. The example in the section below is provided simply to jumpstart your experience with Windows containers.
|
||||
|
||||
## Getting Started: Deploying a Windows container
|
||||
|
|
|
@ -1,356 +0,0 @@
|
|||
---
|
||||
reviewers:
|
||||
- michmike
|
||||
- patricklang
|
||||
title: Guide for adding Windows Nodes in Kubernetes
|
||||
min-kubernetes-server-version: v1.14
|
||||
content_template: templates/tutorial
|
||||
weight: 70
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
The Kubernetes platform can now be used to run both Linux and Windows containers. This page shows how one or more Windows nodes can be registered to a cluster.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
* Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) (or higher) in order to configure the Windows node that hosts Windows containers. You can use your organization's licenses for the cluster, or acquire one from Microsoft, a reseller, or via the major cloud providers such as GCP, AWS, and Azure by provisioning a virtual machine running Windows Server through their marketplaces. A [time-limited trial](https://www.microsoft.com/en-us/cloud-platform/windows-server-trial) is also available.
|
||||
|
||||
* Build a Linux-based Kubernetes cluster in which you have access to the control-plane (some examples include [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/), [AKS Engine](/docs/setup/production-environment/turnkey/azure/), [GCE](/docs/setup/production-environment/turnkey/gce/), [AWS](/docs/setup/production-environment/turnkey/aws/)).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture objectives %}}
|
||||
|
||||
* Register a Windows node to the cluster
|
||||
* Configure networking so Pods and Services on Linux and Windows can communicate with each other
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
|
||||
## Getting Started: Adding a Windows Node to Your Cluster
|
||||
|
||||
### Plan IP Addressing
|
||||
|
||||
Kubernetes cluster management requires careful planning of your IP addresses so that you do not inadvertently cause network collision. This guide assumes that you are familiar with the [Kubernetes networking concepts](/docs/concepts/cluster-administration/networking/).
|
||||
|
||||
In order to deploy your cluster you need the following address spaces:
|
||||
|
||||
| Subnet / address range | Description | Default value |
|
||||
| --- | --- | --- |
|
||||
| Service Subnet | A non-routable, purely virtual subnet that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by `kube-proxy` running on the nodes. | 10.96.0.0/12 |
|
||||
| Cluster Subnet | This is a global subnet that is used by all pods in the cluster. Each node is assigned a smaller /24 subnet from this for their pods to use. It must be large enough to accommodate all pods used in your cluster. To calculate *minimum subnet* size: `(number of nodes) + (number of nodes * maximum pods per node that you configure)`. Example: for a 5-node cluster with 100 pods per node: `(5) + (5 * 100) = 505.` | 10.244.0.0/16 |
|
||||
| Kubernetes DNS Service IP | IP address of `kube-dns` service that is used for DNS resolution & cluster service discovery. | 10.96.0.10 |
|
||||
|
||||
Review the networking options supported in 'Intro to Windows containers in Kubernetes: Supported Functionality: Networking' to determine how you need to allocate IP addresses for your cluster.
|
||||
|
||||
### Components that run on Windows
|
||||
|
||||
While the Kubernetes control-plane runs on your Linux node(s), the following components are configured and run on your Windows node(s).
|
||||
|
||||
1. kubelet
|
||||
2. kube-proxy
|
||||
3. kubectl (optional)
|
||||
4. Container runtime
|
||||
|
||||
Get the latest binaries from [https://github.com/kubernetes/kubernetes/releases](https://github.com/kubernetes/kubernetes/releases), starting with v1.14 or later. The Windows-amd64 binaries for kubeadm, kubectl, kubelet, and kube-proxy can be found under the CHANGELOG link.
|
||||
|
||||
### Networking Configuration
|
||||
|
||||
Once you have a Linux-based Kubernetes control-plane ("Master") node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity.
|
||||
|
||||
#### Configuring Flannel in VXLAN mode on the Linux control-plane
|
||||
|
||||
1. Prepare Kubernetes master for Flannel
|
||||
|
||||
Some minor preparation is recommended on the Kubernetes master in your cluster. When using Flannel, enable bridged IPv4 traffic to iptables chains using the following command:
|
||||
|
||||
```bash
|
||||
sudo sysctl net.bridge.bridge-nf-call-iptables=1
|
||||
```
|
||||
|
||||
1. Download & configure Flannel
|
||||
|
||||
Download the most recent Flannel manifest:
|
||||
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
|
||||
```
|
||||
|
||||
There are two sections you should modify to enable the vxlan networking backend:
|
||||
|
||||
After applying the steps below, the `net-conf.json` section of `kube-flannel.yml` should look as follows:
|
||||
|
||||
```json
|
||||
net-conf.json: |
|
||||
{
|
||||
"Network": "10.244.0.0/16",
|
||||
"Backend": {
|
||||
"Type": "vxlan",
|
||||
"VNI" : 4096,
|
||||
"Port": 4789
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{< note >}}The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. Support for other VNIs is coming soon. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)
|
||||
for an explanation of these fields.{{< /note >}}
|
||||
|
||||
1. In the `net-conf.json` section of your `kube-flannel.yml`, double-check:
|
||||
1. The cluster subnet (e.g. "10.244.0.0/16") is set as per your IP plan.
|
||||
* VNI 4096 is set in the backend
|
||||
* Port 4789 is set in the backend
|
||||
1. In the `cni-conf.json` section of your `kube-flannel.yml`, change the network name to `vxlan0`.
|
||||
|
||||
Your `cni-conf.json` should look as follows:
|
||||
|
||||
```json
|
||||
cni-conf.json: |
|
||||
{
|
||||
"name": "vxlan0",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "flannel",
|
||||
"delegate": {
|
||||
"hairpinMode": true,
|
||||
"isDefaultGateway": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "portmap",
|
||||
"capabilities": {
|
||||
"portMappings": true
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
1. Apply the Flannel manifest and validate
|
||||
|
||||
Let's apply the Flannel configuration:
|
||||
|
||||
```bash
|
||||
kubectl apply -f kube-flannel.yml
|
||||
```
|
||||
|
||||
After a few minutes, you should see all the pods as running if the Flannel pod network was deployed.
|
||||
|
||||
```bash
|
||||
kubectl get pods --all-namespaces
|
||||
```
|
||||
|
||||
The output looks as follows:
|
||||
|
||||
```
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system etcd-flannel-master 1/1 Running 0 1m
|
||||
kube-system kube-apiserver-flannel-master 1/1 Running 0 1m
|
||||
kube-system kube-controller-manager-flannel-master 1/1 Running 0 1m
|
||||
kube-system kube-dns-86f4d74b45-hcx8x 3/3 Running 0 12m
|
||||
kube-system kube-flannel-ds-54954 1/1 Running 0 1m
|
||||
kube-system kube-proxy-Zjlxz 1/1 Running 0 1m
|
||||
kube-system kube-scheduler-flannel-master 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
Verify that the Flannel DaemonSet has the NodeSelector applied.
|
||||
|
||||
```bash
|
||||
kubectl get ds -n kube-system
|
||||
```
|
||||
|
||||
The output looks as follows. The NodeSelector `beta.kubernetes.io/os=linux` is applied.
|
||||
|
||||
```
|
||||
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
|
||||
kube-flannel-ds 2 2 2 2 2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux 21d
|
||||
kube-proxy 2 2 2 2 2 beta.kubernetes.io/os=linux 26d
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Join Windows Worker Node
|
||||
|
||||
In this section we'll cover configuring a Windows node from scratch to join a cluster on-prem. If your cluster is on a cloud you'll likely want to follow the cloud specific guides in the [public cloud providers section](#public-cloud-providers).
|
||||
|
||||
#### Preparing a Windows Node
|
||||
|
||||
{{< note >}}
|
||||
All code snippets in Windows sections are to be run in a PowerShell environment with elevated permissions (Administrator) on the Windows worker node.
|
||||
{{< /note >}}
|
||||
|
||||
1. Download the [SIG Windows tools](https://github.com/kubernetes-sigs/sig-windows-tools) repository containing install and join scripts
|
||||
```PowerShell
|
||||
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
|
||||
Start-BitsTransfer https://github.com/kubernetes-sigs/sig-windows-tools/archive/master.zip
|
||||
tar -xvf .\master.zip --strip-components 3 sig-windows-tools-master/kubeadm/v1.15.0/*
|
||||
Remove-Item .\master.zip
|
||||
```
|
||||
|
||||
1. Customize the Kubernetes [configuration file](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclustervxlan.json)
|
||||
|
||||
```
|
||||
{
|
||||
"Cri" : { // Contains values for container runtime and base container setup
|
||||
"Name" : "dockerd", // Container runtime name
|
||||
"Images" : {
|
||||
"Pause" : "mcr.microsoft.com/k8s/core/pause:1.2.0", // Infrastructure container image
|
||||
"Nanoserver" : "mcr.microsoft.com/windows/nanoserver:1809", // Base Nanoserver container image
|
||||
"ServerCore" : "mcr.microsoft.com/windows/servercore:ltsc2019" // Base ServerCore container image
|
||||
}
|
||||
},
|
||||
"Cni" : { // Contains values for networking executables
|
||||
"Name" : "flannel", // Name of network fabric
|
||||
"Source" : [{ // Contains array of objects containing values for network daemon(s)
|
||||
"Name" : "flanneld", // Name of network daemon
|
||||
"Url" : "https://github.com/coreos/flannel/releases/download/v0.11.0/flanneld.exe" // Direct URL pointing to network daemon executable
|
||||
}
|
||||
],
|
||||
"Plugin" : { // Contains values for CNI network plugin
|
||||
"Name": "vxlan" // Backend network mechanism to use: ["vxlan" | "bridge"]
|
||||
},
|
||||
"InterfaceName" : "Ethernet" // Designated network interface name on Windows node to use as container network
|
||||
},
|
||||
"Kubernetes" : { // Contains values for Kubernetes node binaries
|
||||
"Source" : { // Contains values for Kubernetes node binaries
|
||||
"Release" : "1.15.0", // Version of Kubernetes node binaries
|
||||
"Url" : "https://dl.k8s.io/v1.15.0/kubernetes-node-windows-amd64.tar.gz" // Direct URL pointing to Kubernetes node binaries tarball
|
||||
},
|
||||
"ControlPlane" : { // Contains values associated with Kubernetes control-plane ("Master") node
|
||||
"IpAddress" : "kubemasterIP", // IP address of control-plane ("Master") node
|
||||
"Username" : "localadmin", // Username on control-plane ("Master") node with remote SSH access
|
||||
"KubeadmToken" : "token", // Kubeadm bootstrap token
|
||||
"KubeadmCAHash" : "discovery-token-ca-cert-hash" // Kubeadm CA key hash
|
||||
},
|
||||
"KubeProxy" : { // Contains values for Kubernetes network proxy configuration
|
||||
"Gates" : "WinOverlay=true" // Comma-separated key-value pairs passed to kube-proxy feature gate flag
|
||||
},
|
||||
"Network" : { // Contains values for IP ranges in CIDR notation for Kubernetes networking
|
||||
"ServiceCidr" : "10.96.0.0/12", // Service IP subnet used by Services in CIDR notation
|
||||
"ClusterCidr" : "10.244.0.0/16" // Cluster IP subnet used by Pods in CIDR notation
|
||||
}
|
||||
},
|
||||
"Install" : { // Contains values and configurations for Windows node installation
|
||||
"Destination" : "C:\\ProgramData\\Kubernetes" // Absolute DOS path where Kubernetes will be installed on the Windows node
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
Users can generate values for the `ControlPlane.KubeadmToken` and `ControlPlane.KubeadmCAHash` fields by running `kubeadm token create --print-join-command` on the Kubernetes control-plane ("Master") node.
|
||||
{{< /note >}}
|
||||
|
||||
1. Install containers and Kubernetes (requires a system reboot)
|
||||
|
||||
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to install Kubernetes on the Windows Server container host:
|
||||
|
||||
```PowerShell
|
||||
.\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -install
|
||||
```
|
||||
where `-ConfigFile` points to the path of the Kubernetes configuration file.
|
||||
|
||||
{{< note >}}
|
||||
In the example below, we are using overlay networking mode. This requires Windows Server version 2019 with [KB4489899](https://support.microsoft.com/help/4489899) and at least Kubernetes v1.14 or above. Users that cannot meet this requirement must use `L2bridge` networking instead by selecting `bridge` as the [plugin](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/v1.15.0/Kubeclusterbridge.json#L18) in the configuration file.
|
||||
{{< /note >}}
|
||||
|
||||
![alt_text](../kubecluster.ps1-install.gif "KubeCluster.ps1 install output")
|
||||
|
||||
|
||||
On the Windows node you target, this step will:
|
||||
|
||||
1. Enable Windows Server containers role (and reboot)
|
||||
1. Download and install the chosen container runtime
|
||||
1. Download all needed container images
|
||||
1. Download Kubernetes binaries and add them to the `$PATH` environment variable
|
||||
1. Download CNI plugins based on the selection made in the Kubernetes Configuration file
|
||||
1. (Optionally) Generate a new SSH key which is required to connect to the control-plane ("Master") node during joining
|
||||
|
||||
{{< note >}}For the SSH key generation step, you also need to add the generated public SSH key to the `authorized_keys` file on your (Linux) control-plane node. You only need to do this once. The script prints out the steps you can follow to do this, at the end of its output.{{< /note >}}
|
||||
|
||||
Once installation is complete, any of the generated configuration files or binaries can be modified before joining the Windows node.
|
||||
|
||||
#### Join the Windows Node to the Kubernetes cluster
|
||||
This section covers how to join a [Windows node with Kubernetes installed](#preparing-a-windows-node) with an existing (Linux) control-plane, to form a cluster.
|
||||
|
||||
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to join the Windows node to the cluster:
|
||||
|
||||
```PowerShell
|
||||
.\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -join
|
||||
```
|
||||
where `-ConfigFile` points to the path of the Kubernetes configuration file.
|
||||
|
||||
![alt_text](../kubecluster.ps1-join.gif "KubeCluster.ps1 join output")
|
||||
|
||||
{{< note >}}
|
||||
Should the script fail during the bootstrap or joining procedure for whatever reason, start a new PowerShell session before starting each consecutive join attempt.
|
||||
{{< /note >}}
|
||||
|
||||
This step will perform the following actions:
|
||||
|
||||
1. Connect to the control-plane ("Master") node via SSH, to retrieve the [Kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file.
|
||||
1. Register kubelet as a Windows service
|
||||
1. Configure CNI network plugins
|
||||
1. Create an HNS network on top of the chosen network interface
|
||||
{{< note >}}
|
||||
This may cause a network blip for a few seconds while the vSwitch is being created.
|
||||
{{< /note >}}
|
||||
1. (If vxlan plugin is selected) Open up inbound firewall UDP port 4789 for overlay traffic
|
||||
1. Register flanneld as a Windows service
|
||||
1. Register kube-proxy as a Windows service
|
||||
|
||||
Now you can view the Windows nodes in your cluster by running the following:
|
||||
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
#### Remove the Windows Node from the Kubernetes cluster
|
||||
In this section we'll cover how to remove a Windows node from a Kubernetes cluster.
|
||||
|
||||
Use the previously downloaded [KubeCluster.ps1](https://github.com/kubernetes-sigs/sig-windows-tools/blob/master/kubeadm/KubeCluster.ps1) script to remove the Windows node from the cluster:
|
||||
|
||||
```PowerShell
|
||||
.\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -reset
|
||||
```
|
||||
where `-ConfigFile` points to the path of the Kubernetes configuration file.
|
||||
|
||||
![alt_text](../kubecluster.ps1-reset.gif "KubeCluster.ps1 reset output")
|
||||
|
||||
This step will perform the following actions on the targeted Windows node:
|
||||
|
||||
1. Delete the Windows node from the Kubernetes cluster
|
||||
1. Stop all running containers
|
||||
1. Remove all container networking (HNS) resources
|
||||
1. Unregister all Kubernetes services (flanneld, kubelet, kube-proxy)
|
||||
1. Delete all Kubernetes binaries (kube-proxy.exe, kubelet.exe, flanneld.exe, kubeadm.exe)
|
||||
1. Delete all CNI network plugins binaries
|
||||
1. Delete [Kubeconfig file](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) used to access the Kubernetes cluster
|
||||
|
||||
|
||||
### Public Cloud Providers
|
||||
|
||||
#### Azure
|
||||
|
||||
AKS-Engine can deploy a complete, customizable Kubernetes cluster with both Linux & Windows nodes. There is a step-by-step walkthrough available in the [docs on GitHub](https://github.com/Azure/aks-engine/blob/master/docs/topics/windows.md).
|
||||
|
||||
#### GCP
|
||||
|
||||
Users can easily deploy a complete Kubernetes cluster on GCE following this step-by-step walkthrough on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/README-GCE-Windows-kube-up.md)
|
||||
|
||||
#### Deployment with kubeadm and cluster API
|
||||
|
||||
Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm is an alpha feature since Kubernetes release v1.16. We are also making investments in cluster API to ensure Windows nodes are properly provisioned. For more details, please consult the [kubeadm for Windows KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/20190424-kubeadm-for-windows.md).
|
||||
|
||||
|
||||
### Next Steps
|
||||
|
||||
Now that you've configured a Windows worker in your cluster to run Windows containers you may want to add one or more Linux nodes as well to run Linux containers. You are now ready to schedule Windows containers on your cluster.
|
||||
|
||||
{{% /capture %}}
|
||||
|
|
@ -63,6 +63,10 @@ In Kubernetes 1.11, CoreDNS has graduated to General Availability (GA)
|
|||
and is installed by default.
|
||||
{{< /note >}}
|
||||
|
||||
{{< warning >}}
|
||||
In Kubernetes 1.18, kube-dns usage with kubeadm has been deprecated and will be removed in a future version.
|
||||
{{< /warning >}}
|
||||
|
||||
To install kube-dns on versions prior to 1.13, set the `CoreDNS` feature gate
|
||||
value to `false`:
|
||||
|
||||
|
@ -72,9 +76,9 @@ kubeadm init --feature-gates=CoreDNS=false
|
|||
|
||||
For versions 1.13 and later, follow the guide outlined [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase#cmd-phase-addon).
|
||||
|
||||
## Upgrading CoreDNS
|
||||
## Upgrading CoreDNS
|
||||
|
||||
CoreDNS is available in Kubernetes since v1.9.
|
||||
CoreDNS is available in Kubernetes since v1.9.
|
||||
You can check the version of CoreDNS shipped with Kubernetes and the changes made to CoreDNS [here](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md).
|
||||
|
||||
CoreDNS can be upgraded manually in case you want to only upgrade CoreDNS or use your own custom image.
|
||||
|
|
|
@ -35,28 +35,25 @@ components still rely on Endpoints. For now, enabling EndpointSlices should be
|
|||
seen as an addition to Endpoints in a cluster, not a replacement for them.
|
||||
{{< /note >}}
|
||||
|
||||
EndpointSlices are considered a beta feature, but only the API is enabled by
|
||||
default. Both the EndpointSlice controller and the usage of EndpointSlices by
|
||||
kube-proxy are not enabled by default.
|
||||
EndpointSlices are a beta feature. Both the API and the EndpointSlice
|
||||
{{< glossary_tooltip term_id="controller" >}} are enabled by default.
|
||||
{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}
|
||||
uses Endpoints by default, not EndpointSlices.
|
||||
|
||||
The EndpointSlice controller creates and manages EndpointSlices in a cluster.
|
||||
You can enable it with the `EndpointSlice` [feature
|
||||
gate](/docs/reference/command-line-tools-reference/feature-gates/) on the {{<
|
||||
glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} and {{<
|
||||
glossary_tooltip text="kube-controller-manager"
|
||||
term_id="kube-controller-manager" >}} (`--feature-gates=EndpointSlice=true`).
|
||||
|
||||
For better scalability, you can also enable this feature gate on {{<
|
||||
glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} so EndpointSlices
|
||||
will be used as the data source instead of Endpoints.
|
||||
For better scalability and performance, you can enable the
|
||||
`EndpointSliceProxying`
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
on kube-proxy. That change
|
||||
switches the data source to be EndpointSlices, which reduces the amount of
|
||||
Kubernetes API traffic to and from kube-proxy.
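For example, if you manage the kube-proxy command line directly, enabling the gate is a single flag. This is only a sketch; the config path is a placeholder and your deployment tooling may set flags differently:

```shell
kube-proxy --config=/var/lib/kube-proxy/config.conf --feature-gates=EndpointSliceProxying=true
```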
|
||||
|
||||
## Using EndpointSlices
|
||||
|
||||
With EndpointSlices fully enabled in your cluster, you should see corresponding
|
||||
EndpointSlice resources for each Endpoints resource. In addition to supporting
|
||||
existing Endpoints functionality, EndpointSlices should include new bits of
|
||||
information such as topology. They will allow for greater scalability and
|
||||
extensibility of network endpoints in your cluster.
|
||||
existing Endpoints functionality, EndpointSlices include new bits of information
|
||||
such as topology. They will allow for greater scalability and extensibility of
|
||||
network endpoints in your cluster.
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
|
|
|
@ -0,0 +1,169 @@
|
|||
---
|
||||
reviewers:
|
||||
- michmike
|
||||
- patricklang
|
||||
title: Adding Windows nodes
|
||||
min-kubernetes-server-version: 1.17
|
||||
content_template: templates/tutorial
|
||||
weight: 30
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
|
||||
|
||||
You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux with Pods that run on Windows. This page shows how to register Windows nodes to your cluster.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture prerequisites %}} {{< version-check >}}
|
||||
|
||||
* Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing)
|
||||
(or higher) in order to configure the Windows node that hosts Windows containers.
|
||||
If you are using VXLAN/Overlay networking, you must also have [KB4489899](https://support.microsoft.com/help/4489899) installed.
|
||||
|
||||
* A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture objectives %}}
|
||||
|
||||
* Register a Windows node to the cluster
|
||||
* Configure networking so Pods and Services on Linux and Windows can communicate with each other
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture lessoncontent %}}
|
||||
|
||||
## Getting Started: Adding a Windows Node to Your Cluster
|
||||
|
||||
### Networking Configuration
|
||||
|
||||
Once you have a Linux-based Kubernetes control-plane node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity.
|
||||
|
||||
#### Configuring Flannel
|
||||
|
||||
1. Prepare Kubernetes control plane for Flannel
|
||||
|
||||
Some minor preparation is recommended on the Kubernetes control plane in your cluster. When using Flannel, enable bridged IPv4 traffic to iptables chains using the following command:
|
||||
|
||||
```bash
|
||||
sudo sysctl net.bridge.bridge-nf-call-iptables=1
|
||||
```
|
||||
|
||||
1. Download & configure Flannel for Linux
|
||||
|
||||
Download the most recent Flannel manifest:
|
||||
|
||||
```bash
|
||||
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
|
||||
```
|
||||
|
||||
Modify the `net-conf.json` section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows:
|
||||
|
||||
```json
|
||||
net-conf.json: |
|
||||
{
|
||||
"Network": "10.244.0.0/16",
|
||||
"Backend": {
|
||||
"Type": "vxlan",
|
||||
"VNI" : 4096,
|
||||
"Port": 4789
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{< note >}}The VNI must be set to 4096 and port 4789 for Flannel on Linux to interoperate with Flannel on Windows. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)
for an explanation of these fields.{{< /note >}}
|
||||
|
||||
{{< note >}}To use L2Bridge/Host-gateway mode instead change the value of `Type` to `"host-gw"` and omit `VNI` and `Port`.{{< /note >}}
|
||||
|
||||
1. Apply the Flannel manifest and validate
|
||||
|
||||
Let's apply the Flannel configuration:
|
||||
|
||||
```bash
|
||||
kubectl apply -f kube-flannel.yml
|
||||
```
|
||||
|
||||
After a few minutes, you should see all the pods as running if the Flannel pod network was deployed.
|
||||
|
||||
```bash
|
||||
kubectl get pods -n kube-system
|
||||
```
|
||||
|
||||
The output should include the Linux flannel DaemonSet as running:
|
||||
|
||||
```
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
...
|
||||
kube-system kube-flannel-ds-54954 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
1. Add Windows Flannel and kube-proxy DaemonSets
|
||||
|
||||
Now you can add Windows-compatible versions of Flannel and kube-proxy. In order
|
||||
to ensure that you get a compatible version of kube-proxy, you'll need to substitute
|
||||
the tag of the image. The following example shows usage for Kubernetes {{< param "fullversion" >}},
|
||||
but you should adjust the version for your own deployment.
|
||||
|
||||
```bash
|
||||
curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param "fullversion" >}}/g' | kubectl apply -f -
|
||||
kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
If you're using host-gateway, use https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-host-gw.yml instead.
|
||||
{{< /note >}}
|
||||
|
||||
### Joining a Windows worker node
|
||||
{{< note >}}
|
||||
You must install the `Containers` feature and install Docker. Instructions
|
||||
to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://docs.docker.com/ee/docker-ee/windows/docker-ee/#install-docker-engine---enterprise).
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
All code snippets in Windows sections are to be run in a PowerShell environment
|
||||
with elevated permissions (Administrator) on the Windows worker node.
|
||||
{{< /note >}}
|
||||
|
||||
1. Install wins, kubelet, and kubeadm.
|
||||
|
||||
```PowerShell
|
||||
curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/PrepareNode.ps1
|
||||
.\PrepareNode.ps1 -KubernetesVersion {{< param "fullversion" >}}
|
||||
```
|
||||
|
||||
1. Run `kubeadm` to join the node
|
||||
|
||||
Use the command that was given to you when you ran `kubeadm init` on a control plane host.
|
||||
If you no longer have this command, or the token has expired, you can run `kubeadm token create --print-join-command`
|
||||
(on a control plane host) to generate a new token and join command.
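    The join command has the following general shape; the address, token, and hash below are placeholders, and the command is run on the Windows node from an elevated PowerShell session:

    ```PowerShell
    # Replace the address, token, and hash with the values printed by
    # `kubeadm init` or `kubeadm token create --print-join-command`.
    kubeadm join 192.168.0.10:6443 --token <token> `
        --discovery-token-ca-cert-hash sha256:<hash>
    ```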
|
||||
|
||||
|
||||
#### Verifying your installation
|
||||
You should now be able to view the Windows node in your cluster by running:
|
||||
|
||||
```bash
|
||||
kubectl get nodes -o wide
|
||||
```
|
||||
|
||||
If your new node is in the `NotReady` state, it is likely because the flannel image is still downloading.
|
||||
You can check the progress as before by checking on the flannel pods in the `kube-system` namespace:
|
||||
|
||||
```shell
|
||||
kubectl -n kube-system get pods -l app=flannel
|
||||
```
|
||||
|
||||
Once the flannel Pod is running, your node should enter the `Ready` state and then be available to handle workloads.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
||||
- [Upgrading Windows kubeadm nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes)
|
||||
|
||||
{{% /capture %}}

@ -3,6 +3,7 @@ reviewers:
- sig-cluster-lifecycle
title: Certificate Management with kubeadm
content_template: templates/task
weight: 10
---

{{% capture overview %}}

@ -3,16 +3,19 @@ reviewers:
- sig-cluster-lifecycle
title: Upgrading kubeadm clusters
content_template: templates/task
weight: 20
min-kubernetes-server-version: 1.18
---

{{% capture overview %}}

This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
1.16.x to version 1.17.x, and from version 1.17.x to 1.17.y (where `y > x`).
1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`).

To see information about upgrading clusters created using older versions of kubeadm,
please refer to the following pages instead:

- [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
- [Upgrading kubeadm cluster from 1.13 to 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)

@ -27,7 +30,7 @@ The upgrade workflow at high level is the following:

{{% capture prerequisites %}}

- You need to have a kubeadm Kubernetes cluster running version 1.16.0 or later.
- You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later.
- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
- The cluster should use a static control plane and etcd pods or external etcd.
- Make sure you read the [release notes]({{< latest-release-notes >}}) carefully.

@ -54,12 +57,12 @@ The upgrade workflow at high level is the following:
apt update
apt-cache madison kubeadm
# find the latest 1.17 version in the list
# it should look like 1.17.x-00, where x is the latest patch
# it should look like 1.18.x-00, where x is the latest patch
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest 1.17 version in the list
# it should look like 1.17.x-0, where x is the latest patch
# it should look like 1.18.x-0, where x is the latest patch
{{% /tab %}}
{{< /tabs >}}

@ -71,18 +74,18 @@ The upgrade workflow at high level is the following:

{{< tabs name="k8s_install_kubeadm_first_cp" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in 1.17.x-00 with the latest patch version
# replace x in 1.18.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.17.x-00 && \
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
apt-mark hold kubeadm

# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.17.x-00
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in 1.17.x-0 with the latest patch version
yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes
# replace x in 1.18.x-0 with the latest patch version
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
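Once the new kubeadm package is installed, it can be worth confirming that the expected version is now on the path before planning the upgrade; a quick, optional check:

```shell
kubeadm version
```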
|
||||
|
||||
|
@ -112,28 +115,30 @@ The upgrade workflow at high level is the following:
|
|||
[upgrade/config] Reading configuration from the cluster...
|
||||
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
|
||||
[preflight] Running pre-flight checks.
|
||||
[upgrade] Making sure the cluster is healthy:
|
||||
[upgrade] Running cluster health checks
|
||||
[upgrade] Fetching available versions to upgrade to
|
||||
[upgrade/versions] Cluster version: v1.16.0
|
||||
[upgrade/versions] kubeadm version: v1.17.0
|
||||
[upgrade/versions] Cluster version: v1.17.3
|
||||
[upgrade/versions] kubeadm version: v1.18.0
|
||||
[upgrade/versions] Latest stable version: v1.18.0
|
||||
[upgrade/versions] Latest version in the v1.17 series: v1.18.0
|
||||
|
||||
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
|
||||
COMPONENT CURRENT AVAILABLE
|
||||
Kubelet 1 x v1.16.0 v1.17.0
|
||||
COMPONENT CURRENT AVAILABLE
|
||||
Kubelet 1 x v1.17.3 v1.18.0
|
||||
|
||||
Upgrade to the latest version in the v1.16 series:
|
||||
Upgrade to the latest version in the v1.17 series:
|
||||
|
||||
COMPONENT CURRENT AVAILABLE
|
||||
API Server v1.16.0 v1.17.0
|
||||
Controller Manager v1.16.0 v1.17.0
|
||||
Scheduler v1.16.0 v1.17.0
|
||||
Kube Proxy v1.16.0 v1.17.0
|
||||
CoreDNS 1.6.2 1.6.5
|
||||
Etcd 3.3.15 3.4.3-0
|
||||
API Server v1.17.3 v1.18.0
|
||||
Controller Manager v1.17.3 v1.18.0
|
||||
Scheduler v1.17.3 v1.18.0
|
||||
Kube Proxy v1.17.3 v1.18.0
|
||||
CoreDNS 1.6.5 1.6.7
|
||||
Etcd 3.4.3 3.4.3-0
|
||||
|
||||
You can now apply the upgrade by executing the following command:
|
||||
|
||||
kubeadm upgrade apply v1.17.0
|
||||
kubeadm upgrade apply v1.18.0
|
||||
|
||||
_____________________________________________________________________
|
||||
```
|
||||
|
@ -150,78 +155,79 @@ The upgrade workflow at high level is the following:

```shell
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.17.x
sudo kubeadm upgrade apply v1.18.x
```
|
||||
|
||||
|
||||
You should see output similar to this:
|
||||
|
||||
```
|
||||
[preflight] Running pre-flight checks.
|
||||
[upgrade] Making sure the cluster is healthy:
|
||||
[upgrade/config] Making sure the configuration is correct:
|
||||
[upgrade/config] Reading configuration from the cluster...
|
||||
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
|
||||
[upgrade/version] You have chosen to change the cluster version to "v1.17.0"
|
||||
[upgrade/versions] Cluster version: v1.16.0
|
||||
[upgrade/versions] kubeadm version: v1.17.0
|
||||
[preflight] Running pre-flight checks.
|
||||
[upgrade] Running cluster health checks
|
||||
[upgrade/version] You have chosen to change the cluster version to "v1.18.0"
|
||||
[upgrade/versions] Cluster version: v1.17.3
|
||||
[upgrade/versions] kubeadm version: v1.18.0
|
||||
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
|
||||
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
|
||||
[upgrade/prepull] Prepulling image for component etcd.
|
||||
[upgrade/prepull] Prepulling image for component kube-apiserver.
|
||||
[upgrade/prepull] Prepulling image for component kube-controller-manager.
|
||||
[upgrade/prepull] Prepulling image for component kube-scheduler.
|
||||
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
|
||||
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
|
||||
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
|
||||
[upgrade/prepull] Prepulled image for component etcd.
|
||||
[upgrade/prepull] Prepulled image for component kube-controller-manager.
|
||||
[upgrade/prepull] Prepulled image for component kube-apiserver.
|
||||
[upgrade/prepull] Prepulled image for component kube-controller-manager.
|
||||
[upgrade/prepull] Prepulled image for component kube-scheduler.
|
||||
[upgrade/prepull] Successfully prepulled the images for all the control plane components
|
||||
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.0"...
|
||||
Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923
|
||||
Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787
|
||||
Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88
|
||||
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"...
|
||||
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
|
||||
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
|
||||
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
|
||||
[upgrade/etcd] Upgrading to TLS for etcd
|
||||
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests446257614"
|
||||
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
|
||||
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012"
|
||||
W0308 18:48:14.535122 3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
|
||||
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
|
||||
[upgrade/staticpods] Renewing "apiserver-etcd-client" certificate
|
||||
[upgrade/staticpods] Renewing "apiserver" certificate
|
||||
[upgrade/staticpods] Renewing "apiserver-kubelet-client" certificate
|
||||
[upgrade/staticpods] Renewing "front-proxy-client" certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-apiserver.yaml"
|
||||
[upgrade/staticpods] Renewing apiserver certificate
|
||||
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
|
||||
[upgrade/staticpods] Renewing front-proxy-client certificate
|
||||
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923
|
||||
Static pod: kube-apiserver-luboitvbox hash: 1b4e2b09a408c844f9d7b535e593ead9
|
||||
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
|
||||
Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4
|
||||
[apiclient] Found 1 Pods for label selector component=kube-apiserver
|
||||
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
|
||||
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
|
||||
[upgrade/staticpods] Renewing certificate embedded in "controller-manager.conf"
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-controller-manager.yaml"
|
||||
[upgrade/staticpods] Renewing controller-manager.conf certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787
|
||||
Static pod: kube-controller-manager-luboitvbox hash: 6617d53423348aa619f1d6e568bb894a
|
||||
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
|
||||
Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156
|
||||
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
|
||||
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
|
||||
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
|
||||
[upgrade/staticpods] Renewing certificate embedded in "scheduler.conf"
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-scheduler.yaml"
|
||||
[upgrade/staticpods] Renewing scheduler.conf certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88
|
||||
Static pod: kube-scheduler-luboitvbox hash: edf58ab819741a5d1eb9c33de756e3ca
|
||||
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
|
||||
Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d
|
||||
[apiclient] Found 1 Pods for label selector component=kube-scheduler
|
||||
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
|
||||
[upgrade/staticpods] Renewing certificate embedded in "admin.conf"
|
||||
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
|
||||
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
|
||||
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
|
||||
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
|
||||
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
|
||||
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
|
||||
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
|
||||
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
|
||||
|
@ -229,7 +235,7 @@ The upgrade workflow at high level is the following:
|
|||
[addons] Applied essential addon: CoreDNS
|
||||
[addons] Applied essential addon: kube-proxy
|
||||
|
||||
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!
|
||||
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy!
|
||||
|
||||
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
|
||||
```
|
||||
|
@ -271,18 +277,18 @@ Also `sudo kubeadm upgrade plan` is not needed.

{{< tabs name="k8s_install_kubelet" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
# replace x in 1.17.x-00 with the latest patch version
# replace x in 1.18.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
apt-mark hold kubelet kubectl

# since apt-get version 1.1 you can also use the following method
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.17.x-00 kubectl=1.17.x-00
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
# replace x in 1.17.x-0 with the latest patch version
yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes
# replace x in 1.18.x-0 with the latest patch version
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
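After upgrading the kubelet and kubectl packages, the kubelet is typically restarted so that it picks up the new binary; a minimal sketch, assuming a systemd-managed kubelet:

```shell
sudo systemctl restart kubelet
```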
|
||||
|
||||
|
@ -303,18 +309,18 @@ without compromising the minimum required capacity for running your workloads.
|
|||
|
||||
{{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
|
||||
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
|
||||
# replace x in 1.17.x-00 with the latest patch version
|
||||
# replace x in 1.18.x-00 with the latest patch version
|
||||
apt-mark unhold kubeadm && \
|
||||
apt-get update && apt-get install -y kubeadm=1.17.x-00 && \
|
||||
apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
|
||||
apt-mark hold kubeadm
|
||||
|
||||
# since apt-get version 1.1 you can also use the following method
|
||||
apt-get update && \
|
||||
apt-get install -y --allow-change-held-packages kubeadm=1.17.x-00
|
||||
apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL or Fedora" %}}
|
||||
# replace x in 1.17.x-0 with the latest patch version
|
||||
yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes
|
||||
# replace x in 1.18.x-0 with the latest patch version
|
||||
yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
|
@ -349,18 +355,18 @@ without compromising the minimum required capacity for running your workloads.
|
|||
|
||||
{{< tabs name="k8s_kubelet_and_kubectl" >}}
|
||||
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
|
||||
# replace x in 1.17.x-00 with the latest patch version
|
||||
# replace x in 1.18.x-00 with the latest patch version
|
||||
apt-mark unhold kubelet kubectl && \
|
||||
apt-get update && apt-get install -y kubelet=1.17.x-00 kubectl=1.17.x-00 && \
|
||||
apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
|
||||
apt-mark hold kubelet kubectl
|
||||
|
||||
# since apt-get version 1.1 you can also use the following method
|
||||
apt-get update && \
|
||||
apt-get install -y --allow-change-held-packages kubelet=1.17.x-00 kubectl=1.17.x-00
|
||||
apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
|
||||
{{% /tab %}}
|
||||
{{% tab name="CentOS, RHEL or Fedora" %}}
|
||||
# replace x in 1.17.x-0 with the latest patch version
|
||||
yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes
|
||||
# replace x in 1.18.x-0 with the latest patch version
|
||||
yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||

@ -375,7 +381,7 @@ without compromising the minimum required capacity for running your workloads.
1. Bring the node back online by marking it schedulable:

```shell
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
```

@ -0,0 +1,93 @@
---
title: Upgrading Windows nodes
min-kubernetes-server-version: 1.17
content_template: templates/task
weight: 40
---

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.18" state="beta" >}}

This page explains how to upgrade a Windows node [created with kubeadm](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes).

{{% /capture %}}

{{% capture prerequisites %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
* Familiarize yourself with [the process for upgrading the rest of your kubeadm
cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to
upgrade the control plane nodes before upgrading your Windows nodes.

{{% /capture %}}

{{% capture steps %}}

## Upgrading worker nodes

### Upgrade kubeadm

1. From the Windows node, upgrade kubeadm:

```powershell
# replace {{< param "fullversion" >}} with your desired version
curl.exe -Lo C:\k\kubeadm.exe https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubeadm.exe
```
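If you want to confirm the download before continuing, you can ask the new binary for its version; an optional check, assuming the file was saved to `C:\k\kubeadm.exe` as above:

```powershell
C:\k\kubeadm.exe version
```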

### Drain the node

1. From a machine with access to the Kubernetes API,
prepare the node for maintenance by marking it unschedulable and evicting the workloads:

```shell
# replace <node-to-drain> with the name of the node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
```

You should see output similar to this:

```
node/ip-172-31-85-18 cordoned
node/ip-172-31-85-18 drained
```

### Upgrade the kubelet configuration

1. From the Windows node, call the following command to sync the new kubelet configuration:

```powershell
kubeadm upgrade node
```

### Upgrade kubelet

1. From the Windows node, upgrade and restart the kubelet:

```powershell
stop-service kubelet
curl.exe -Lo C:\k\kubelet.exe https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubelet.exe
restart-service kubelet
```

### Uncordon the node

1. From a machine with access to the Kubernetes API,
bring the node back online by marking it schedulable:

```shell
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
```

### Upgrade kube-proxy

1. From a machine with access to the Kubernetes API, run the following,
again replacing {{< param "fullversion" >}} with your desired version:

```shell
curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param "fullversion" >}}/g' | kubectl apply -f -
```
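Once the kube-proxy DaemonSet has rolled out, you can optionally confirm that the Windows node now reports the upgraded kubelet version from a machine with access to the Kubernetes API:

```shell
kubectl get nodes -o wide
```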
{{% /capture %}}
|
|
@ -5,6 +5,7 @@ reviewers:
|
|||
- dashpole
|
||||
title: Reserve Compute Resources for System Daemons
|
||||
content_template: templates/task
|
||||
min-kubernetes-server-version: 1.8
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
@ -27,6 +28,9 @@ on each node.
|
|||
{{% capture prerequisites %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
Your Kubernetes server must be at or later than version 1.17 to use
|
||||
the kubelet command line option `--reserved-cpus` to set an
|
||||
[explicitly reserved CPU list](#explicitly-reserved-cpu-list).
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
@ -146,9 +150,9 @@ exist. Kubelet will fail if an invalid cgroup is specified.
- **Kubelet Flag**: `--reserved-cpus=0-3`

`reserved-cpus` is meant to define an explicit CPU set for OS system daemons and
kubernetes system daemons. This option is added in 1.17 release. `reserved-cpus`
is for systems that do not intent to define separate top level cgroups for
OS system daemons and kubernetes system daemons with regard to cpuset resource.
kubernetes system daemons. `reserved-cpus` is for systems that do not intend to
define separate top level cgroups for OS system daemons and kubernetes system daemons
with regard to cpuset resource.
If the Kubelet **does not** have `--system-reserved-cgroup` and `--kube-reserved-cgroup`,
the explicit cpuset provided by `reserved-cpus` will take precedence over the CPUs
defined by `--kube-reserved` and `--system-reserved` options.
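As a rough sketch of the equivalent setting in a kubelet configuration file (assuming you configure the kubelet via a `KubeletConfiguration` file and that the `reservedSystemCPUs` field is available in your version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# reserve CPUs 0-3 for OS and Kubernetes system daemons
reservedSystemCPUs: "0-3"
```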
@ -247,36 +251,4 @@ If `kube-reserved` and/or `system-reserved` is not enforced and system daemons
|
|||
exceed their reservation, `kubelet` evicts pods whenever the overall node memory
|
||||
usage is higher than `31.5Gi` or `storage` is greater than `90Gi`
|
||||
|
||||
## Feature Availability
|
||||
|
||||
As of Kubernetes version 1.2, it has been possible to **optionally** specify
|
||||
`kube-reserved` and `system-reserved` reservations. The scheduler switched to
|
||||
using `Allocatable` instead of `Capacity` when available in the same release.
|
||||
|
||||
As of Kubernetes version 1.6, `eviction-thresholds` are being considered by
|
||||
computing `Allocatable`. To revert to the old behavior set
|
||||
`--experimental-allocatable-ignore-eviction` kubelet flag to `true`.
|
||||
|
||||
As of Kubernetes version 1.6, `kubelet` enforces `Allocatable` on pods using
|
||||
control groups. To revert to the old behavior unset `--enforce-node-allocatable`
|
||||
kubelet flag. Note that unless `--kube-reserved`, or `--system-reserved` or
|
||||
`--eviction-hard` flags have non-default values, `Allocatable` enforcement does
|
||||
not affect existing deployments.
|
||||
|
||||
As of Kubernetes version 1.6, `kubelet` launches pods in their own cgroup
|
||||
sandbox in a dedicated part of the cgroup hierarchy it manages. Operators are
|
||||
required to drain their nodes prior to upgrade of the `kubelet` from prior
|
||||
versions in order to ensure pods and their associated containers are launched in
|
||||
the proper part of the cgroup hierarchy.
|
||||
|
||||
As of Kubernetes version 1.7, `kubelet` supports specifying `storage` as a resource
|
||||
for `kube-reserved` and `system-reserved`.
|
||||
|
||||
As of Kubernetes version 1.8, the `storage` key name was changed to `ephemeral-storage`
|
||||
for the alpha release.
|
||||
|
||||
As of Kubernetes version 1.17, you can optionally specify
|
||||
explicit cpuset by `reserved-cpus` as CPUs reserved for OS system
|
||||
daemons/interrupts/timers and Kubernetes daemons.
|
||||
|
||||
{{% /capture %}}
|
||||
|
|
|
@ -8,11 +8,12 @@ reviewers:
|
|||
- nolancon
|
||||
|
||||
content_template: templates/task
|
||||
min-kubernetes-server-version: v1.18
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state state="alpha" >}}
|
||||
{{< feature-state state="beta" >}}
|
||||
|
||||
An increasing number of systems leverage a combination of CPUs and hardware accelerators to support latency-critical execution and high-throughput parallel computation. These include workloads in fields such as telecommunications, scientific computing, machine learning, financial services and data analytics. Such hybrid systems comprise a high performance environment.
|
||||
|
||||
|
@ -44,6 +45,10 @@ The Topology manager receives Topology information from the *Hint Providers* as
|
|||
The selected hint is stored as part of the Topology Manager. Depending on the policy configured the pod can be accepted or rejected from the node based on the selected hint.
|
||||
The hint is then stored in the Topology Manager for use by the *Hint Providers* when making the resource allocation decisions.

### Enable the Topology Manager feature

Support for the Topology Manager requires the `TopologyManager` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled. It is enabled by default starting with Kubernetes 1.18.
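On clusters where you pass kubelet flags directly, selecting a policy might look like the following sketch; it assumes the policy is chosen through the kubelet's `--topology-manager-policy` flag, and the explicit feature-gate flag is only needed where the gate is not already enabled by default:

```shell
kubelet --feature-gates=TopologyManager=true \
  --topology-manager-policy=single-numa-node
```
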
### Topology Manager Policies

The Topology Manager currently:

@ -176,12 +181,10 @@ In the case of the `BestEffort` pod the CPU Manager would send back the default

Using this information the Topology Manager calculates the optimal hint for the pod and stores this information, which will be used by the Hint Providers when they are making their resource assignments.

### Known Limitations
1. As of K8s 1.16 the Topology Manager is currently only guaranteed to work if a *single* container in the pod spec requires aligned resources. This is due to the hint generation being based on current resource allocations, and all containers in a pod generate hints before any resource allocation has been made. This results in unreliable hints for all but the first container in a pod.
*Due to this limitation if multiple pods/containers are considered by Kubelet in quick succession they may not respect the Topology Manager policy.
1. The maximum number of NUMA nodes that Topology Manager allows is 8. With more than 8 NUMA nodes there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.

2. The maximum number of NUMA nodes that Topology Manager will allow is 8, past this there will be a state explosion when trying to enumerate the possible NUMA affinities and generating their hints.

3. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.
2. The scheduler is not topology-aware, so it is possible to be scheduled on a node and then fail on the node due to the Topology Manager.

3. The Device Manager and the CPU Manager are the only components to adopt the Topology Manager's HintProvider interface. This means that NUMA alignment can only be achieved for resources managed by the CPU Manager and the Device Manager. Memory or Hugepages are not considered by the Topology Manager for NUMA alignment.

{{% /capture %}}
|
||||
|
|
|
@ -6,7 +6,7 @@ weight: 20
|
|||
|
||||
{{% capture overview %}}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
This page shows how to configure [Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA) for Pods and containers that will run on Windows nodes. Group Managed Service Accounts are a specific type of Active Directory account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers.
|
||||
|
||||
|
@ -18,9 +18,6 @@ In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide
|
|||
|
||||
You need to have a Kubernetes cluster and the `kubectl` command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes. This section covers a set of initial steps required once for each cluster:
|
||||
|
||||
### WindowsGMSA feature gate
|
||||
The `WindowsGMSA` feature gate (required to pass down GMSA credential specs from the pod specs to the container runtime) is enabled by default on the API server and the kubelet. See [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/) for an explanation of enabling or disabling feature gates.
|
||||
|
||||
### Install the GMSACredentialSpec CRD
|
||||
A [CustomResourceDefinition](/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)(CRD) for GMSA credential spec resources needs to be configured on the cluster to define the custom resource type `GMSACredentialSpec`. Download the GMSA CRD [YAML](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-crd.yml) and save it as gmsa-crd.yaml.
|
||||
Next, install the CRD with `kubectl apply -f gmsa-crd.yaml`
|
||||
|
@ -42,7 +39,7 @@ Installing the above webhooks and associated objects require the steps below:
|
|||
|
||||
1. Create the validating and mutating webhook configurations referring to the deployment.
|
||||
|
||||
A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) can be used to deploy and configure the GMSA webhooks and associated objects mentioned above. The script can be run with a ```--dry-run``` option to allow you to review the changes that would be made to your cluster.
|
||||
A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) can be used to deploy and configure the GMSA webhooks and associated objects mentioned above. The script can be run with a ```--dry-run=server``` option to allow you to review the changes that would be made to your cluster.
|
||||
|
||||
The [YAML template](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl) used by the script may also be used to deploy the webhooks and associated objects manually (with appropriate substitutions for the parameters)
|
||||
|
||||
|
|
|
@ -6,13 +6,9 @@ weight: 20

{{% capture overview %}}

{{< feature-state for_k8s_version="v1.17" state="beta" >}}
{{< feature-state for_k8s_version="v1.18" state="stable" >}}

This page shows how to enable and use the `RunAsUserName` feature for pods and containers that will run on Windows nodes. This feature is meant to be the Windows equivalent of the Linux-specific `runAsUser` feature, allowing users to run the container entrypoints with a different username than their default ones.

{{< note >}}
This feature is in beta. The overall functionality for `RunAsUserName` will not change, but there may be some changes regarding the username validation.
{{< /note >}}
This page shows how to use the `runAsUserName` setting for Pods and containers that will run on Windows nodes. This is roughly the equivalent of the Linux-specific `runAsUser` setting, allowing you to run applications in a container as a different username than the default.

{{% /capture %}}

@ -60,7 +56,6 @@ The output should be:
ContainerUser
```

## Set the Username for a Container

To specify the username with which to execute a Container's processes, include the `securityContext` field ([SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)) in the Container manifest, and within it, the `windowsOptions` ([WindowsSecurityContextOptions](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#windowssecuritycontextoptions-v1-core)) field containing the `runAsUserName` field.
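For a rough idea of how that fits together, here is a sketch of a Container-level setting; the Pod name, image, and command are illustrative rather than taken from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: run-as-username-container-demo
spec:
  containers:
  - name: run-as-username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["ping", "-t", "localhost"]
    securityContext:
      windowsOptions:
        # run the container entrypoint as this Windows user
        runAsUserName: "ContainerUser"
  nodeSelector:
    kubernetes.io/os: windows
```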

@ -299,9 +299,67 @@ token available to the pod at a configurable file path, and refresh the token as

The application is responsible for reloading the token when it rotates. Periodic reloading (e.g. once every 5 minutes) is sufficient for most use cases.

## Service Account Issuer Discovery

{{< feature-state for_k8s_version="v1.18" state="alpha" >}}

The Service Account Issuer Discovery feature is enabled by enabling the
`ServiceAccountIssuerDiscovery` [feature gate](/docs/reference/command-line-tools-reference/feature)
and then enabling the Service Account Token Projection feature as described
[above](#service-account-token-volume-projection).

{{< note >}}
The issuer URL must comply with the
[OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html). In
practice, this means it must use the `https` scheme, and should serve an OpenID
provider configuration at `{service-account-issuer}/.well-known/openid-configuration`.

If the URL does not comply, the `ServiceAccountIssuerDiscovery` endpoints will
not be registered, even if the feature is enabled.
{{< /note >}}

The Service Account Issuer Discovery feature enables federation of Kubernetes
service account tokens issued by a cluster (the _identity provider_) with
external systems (_relying parties_).

When enabled, the Kubernetes API server provides an OpenID Provider
Configuration document at `/.well-known/openid-configuration` and the associated
JSON Web Key Set (JWKS) at `/openid/v1/jwks`. The OpenID Provider Configuration
is sometimes referred to as the _discovery document_.

When enabled, the cluster is also configured with a default RBAC ClusterRole
called `system:service-account-issuer-discovery`. No role bindings are provided
by default. Administrators may, for example, choose whether to bind the role to
`system:authenticated` or `system:unauthenticated` depending on their security
requirements and which external systems they intend to federate with.

{{< note >}}
The responses served at `/.well-known/openid-configuration` and
`/openid/v1/jwks` are designed to be OIDC compatible, but not strictly OIDC
compliant. Those documents contain only the parameters necessary to perform
validation of Kubernetes service account tokens.
{{< /note >}}

The JWKS response contains public keys that a relying party can use to validate
the Kubernetes service account tokens. Relying parties first query for the
OpenID Provider Configuration, and use the `jwks_uri` field in the response to
find the JWKS.

In many cases, Kubernetes API servers are not available on the public internet,
but public endpoints that serve cached responses from the API server can be made
available by users or service providers. In these cases, it is possible to
override the `jwks_uri` in the OpenID Provider Configuration so that it points
to the public endpoint, rather than the API server's address, by passing the
`--service-account-jwks-uri` flag to the API server. Like the issuer URL, the
JWKS URI is required to use the `https` scheme.
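To see what these endpoints return on a cluster where the feature is enabled, you can fetch them through the API server with `kubectl get --raw`; a quick sketch, assuming your credentials are bound to the `system:service-account-issuer-discovery` ClusterRole:

```shell
kubectl get --raw /.well-known/openid-configuration
kubectl get --raw /openid/v1/jwks
```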

{{% /capture %}}

{{% capture whatsnext %}}
See also the
[Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/).

See also:

- [Cluster Admin Guide to Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
- [Service Account Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md)
- [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)

{{% /capture %}}
|
||||
|
|
|
@ -140,6 +140,45 @@ Exit your shell:
exit
```

## Configure volume permission and ownership change policy for Pods

{{< feature-state for_k8s_version="v1.18" state="alpha" >}}

By default, Kubernetes recursively changes ownership and permissions for the contents of each
volume to match the `fsGroup` specified in a Pod's `securityContext` when that volume is
mounted.
For large volumes, checking and changing ownership and permissions can take a lot of time,
slowing Pod startup. You can use the `fsGroupChangePolicy` field inside a `securityContext`
to control the way that Kubernetes checks and manages ownership and permissions
for a volume.

**fsGroupChangePolicy** - `fsGroupChangePolicy` defines behavior for changing ownership and permission of the volume
before being exposed inside a Pod. This field only applies to volume types that support
`fsGroup` controlled ownership and permissions. This field has two possible values:

* _OnRootMismatch_: Only change permissions and ownership if permission and ownership of root directory does not match with expected permissions of the volume. This could help shorten the time it takes to change ownership and permission of a volume.
* _Always_: Always change permission and ownership of the volume when volume is mounted.

For example:

```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
```

This is an alpha feature. To use it, enable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `ConfigurableFSGroupPolicy` for the kube-api-server, the kube-controller-manager, and for the kubelet.
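What enabling that gate can look like on each component, shown as a sketch of command-line arguments (exact placement depends on how your control plane and kubelets are deployed, for example as static Pod manifest arguments):

```shell
kube-apiserver --feature-gates=ConfigurableFSGroupPolicy=true ...
kube-controller-manager --feature-gates=ConfigurableFSGroupPolicy=true ...
kubelet --feature-gates=ConfigurableFSGroupPolicy=true ...
```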

{{< note >}}
This field has no effect on ephemeral volume types such as
[`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret),
[`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap),
and [`emptydir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir).
{{< /note >}}


## Set the security context for a Container

To specify security settings for a Container, include the `securityContext` field

@ -64,38 +64,8 @@ Again, the information from `kubectl describe ...` should be informative. The m
|
|||
|
||||
#### My pod is crashing or otherwise unhealthy
|
||||
|
||||
First, take a look at the logs of
|
||||
the current container:
|
||||
|
||||
```shell
|
||||
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
If your container has previously crashed, you can access the previous container's crash log with:
|
||||
|
||||
```shell
|
||||
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
Alternately, you can run commands inside that container with `exec`:
|
||||
|
||||
```shell
|
||||
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
`-c ${CONTAINER_NAME}` is optional. You can omit it for Pods that only contain a single container.
|
||||
{{< /note >}}
|
||||
|
||||
As an example, to look at the logs from a running Cassandra pod, you might run
|
||||
|
||||
```shell
|
||||
kubectl exec cassandra -- cat /var/log/cassandra/system.log
|
||||
```
|
||||
|
||||
If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host,
|
||||
but this should generally not be necessary given tools in the Kubernetes API. Therefore, if you find yourself needing to ssh into a machine, please file a
|
||||
feature request on GitHub describing your use case and why these tools are insufficient.
|
||||
Once your pod has been scheduled, the methods described in [Debug Running Pods](
|
||||
/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging.
|
||||
|
||||
#### My pod is running but not doing what I told it to do
|
||||
|
||||
|
|
|
@ -93,40 +93,9 @@ worker node, but it can't run on that machine. Again, the information from
|
|||
|
||||
### My pod is crashing or otherwise unhealthy
|
||||
|
||||
First, take a look at the logs of the current container:
|
||||
Once your pod has been scheduled, the methods described in [Debug Running Pods](
|
||||
/docs/tasks/debug-application-cluster/debug-running-pods/) are available for debugging.
|
||||
|
||||
```shell
|
||||
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
If your container has previously crashed, you can access the previous
|
||||
container's crash log with:
|
||||
|
||||
```shell
|
||||
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
Alternately, you can run commands inside that container with `exec`:
|
||||
|
||||
```shell
|
||||
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
`-c ${CONTAINER_NAME}` is optional. You can omit it for pods that
|
||||
only contain a single container.
|
||||
{{< /note >}}
|
||||
|
||||
As an example, to look at the logs from a running Cassandra pod, you might run:
|
||||
|
||||
```shell
|
||||
kubectl exec cassandra -- cat /var/log/cassandra/system.log
|
||||
```
|
||||
|
||||
If your cluster enabled it, you can also try adding an [ephemeral container](/docs/concepts/workloads/pods/ephemeral-containers/) into the existing pod. You can use the new temporary container to run arbitrary commands, for example, to diagnose problems inside the Pod. See the page about [ephemeral container](/docs/concepts/workloads/pods/ephemeral-containers/) for more details, including feature availability.
|
||||
|
||||
If none of these approaches work, you can find the host machine that the pod is
|
||||
running on and SSH into that host.
|
||||
|
||||
## Debugging ReplicationControllers
|
||||
|
||||
|
|
|
@ -0,0 +1,190 @@
|
|||
---
|
||||
reviewers:
|
||||
- verb
|
||||
- soltysh
|
||||
title: Debug Running Pods
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
This page explains how to debug Pods running (or crashing) on a Node.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture prerequisites %}}
|
||||
|
||||
* Your {{< glossary_tooltip text="Pod" term_id="pod" >}} should already be
|
||||
scheduled and running. If your Pod is not yet running, start with [Troubleshoot
|
||||
Applications](/docs/tasks/debug-application-cluster/debug-application/).
|
||||
* For some of the advanced debugging steps you need to know on which Node the
|
||||
Pod is running and have shell access to run commands on that Node. You don't
|
||||
need that access to run the standard debug steps that use `kubectl`.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture steps %}}
|
||||
|
||||
## Examining pod logs {#examine-pod-logs}
|
||||
|
||||
First, look at the logs of the affected container:
|
||||
|
||||
```shell
|
||||
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
If your container has previously crashed, you can access the previous container's crash log with:
|
||||
|
||||
```shell
|
||||
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
|
||||
```
|
||||
|
||||
## Debugging with container exec {#container-exec}
|
||||
|
||||
If the {{< glossary_tooltip text="container image" term_id="image" >}} includes
|
||||
debugging utilities, as is the case with images built from Linux and Windows OS
|
||||
base images, you can run commands inside a specific container with
|
||||
`kubectl exec`:
|
||||
|
||||
```shell
|
||||
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
`-c ${CONTAINER_NAME}` is optional. You can omit it for Pods that only contain a single container.
|
||||
{{< /note >}}
|
||||
|
||||
As an example, to look at the logs from a running Cassandra pod, you might run
|
||||
|
||||
```shell
|
||||
kubectl exec cassandra -- cat /var/log/cassandra/system.log
|
||||
```
|
||||
|
||||
You can run a shell that's connected to your terminal using the `-i` and `-t`
|
||||
arguments to `kubectl exec`, for example:
|
||||
|
||||
```shell
|
||||
kubectl exec -it cassandra -- sh
|
||||
```
|
||||
|
||||
For more details, see [Get a Shell to a Running Container](
|
||||
/docs/tasks/debug-application-cluster/get-shell-running-container/).
|
||||
|
||||
## Debugging with an ephemeral debug container {#ephemeral-container}
|
||||
|
||||
{{< feature-state state="alpha" for_k8s_version="v1.18" >}}
|
||||
|
||||
{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
|
||||
are useful for interactive troubleshooting when `kubectl exec` is insufficient
|
||||
because a container has crashed or a container image doesn't include debugging
|
||||
utilities, such as with [distroless images](
|
||||
https://github.com/GoogleContainerTools/distroless). `kubectl` has an alpha
|
||||
command that can create ephemeral containers for debugging beginning with version
|
||||
`v1.18`.
|
||||
|
||||
### Example debugging using ephemeral containers {#ephemeral-container-example}
|
||||
|
||||
{{< note >}}
|
||||
The examples in this section require the `EphemeralContainers` [feature gate](
|
||||
/docs/reference/command-line-tools-reference/feature-gates/) enabled in your
|
||||
cluster and `kubectl` version v1.18 or later.
|
||||
{{< /note >}}
|
||||
|
||||
You can use the `kubectl alpha debug` command to add ephemeral containers to a
|
||||
running Pod. First, create a pod for the example:
|
||||
|
||||
```shell
|
||||
kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
|
||||
```
|
||||
|
||||
{{< note >}}
|
||||
This section use the `pause` container image in examples because it does not
|
||||
contain userland debugging utilities, but this method works with all container
|
||||
images.
|
||||
{{< /note >}}
|
||||
|
||||
If you attempt to use `kubectl exec` to create a shell you will see an error
|
||||
because there is no shell in this container image.
|
||||
|
||||
```shell
|
||||
kubectl exec -it pause -- sh
|
||||
```
|
||||
|
||||
```
|
||||
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
|
||||
```
|
||||
|
||||
You can instead add a debugging container using `kubectl alpha debug`. If you
|
||||
specify the `-i`/`--interactive` argument, `kubectl` will automatically attach
|
||||
to the console of the Ephemeral Container.
|
||||
|
||||
```shell
|
||||
kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
|
||||
```
|
||||
|
||||
```
|
||||
Defaulting debug container name to debugger-8xzrl.
|
||||
If you don't see a command prompt, try pressing enter.
|
||||
/ #
|
||||
```
|
||||
|
||||
This command adds a new busybox container and attaches to it. The `--target`
|
||||
parameter targets the process namespace of another container. It's necessary
|
||||
here because `kubectl run` does not enable [process namespace sharing](
|
||||
/docs/tasks/configure-pod-container/share-process-namespace/) in the pod it
|
||||
creates.
|
||||
|
||||
{{< note >}}
|
||||
The `--target` parameter must be supported by the {{< glossary_tooltip
|
||||
text="Container Runtime" term_id="container-runtime" >}}. When not supported,
|
||||
the Ephemeral Container may not be started, or it may be started with an
|
||||
isolated process namespace.
|
||||
{{< /note >}}
|
||||
|
||||
You can view the state of the newly created ephemeral container using `kubectl describe`:
|
||||
|
||||
```shell
|
||||
kubectl describe pod ephemeral-demo
|
||||
```
|
||||
|
||||
```
|
||||
...
|
||||
Ephemeral Containers:
|
||||
debugger-8xzrl:
|
||||
Container ID: docker://b888f9adfd15bd5739fefaa39e1df4dd3c617b9902082b1cfdc29c4028ffb2eb
|
||||
Image: busybox
|
||||
Image ID: docker-pullable://busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084
|
||||
Port: <none>
|
||||
Host Port: <none>
|
||||
State: Running
|
||||
Started: Wed, 12 Feb 2020 14:25:42 +0100
|
||||
Ready: False
|
||||
Restart Count: 0
|
||||
Environment: <none>
|
||||
Mounts: <none>
|
||||
...
|
||||
```
|
||||
|
||||
Use `kubectl delete` to remove the Pod when you're finished:
|
||||
|
||||
```shell
|
||||
kubectl delete pod ephemeral-demo
|
||||
```
|
||||
|
||||
<!--
|
||||
Planned future sections include:
|
||||
|
||||
* Debugging with a copy of the pod
|
||||
|
||||
See https://git.k8s.io/enhancements/keps/sig-cli/20190805-kubectl-debug.md
|
||||
-->
|
||||
|
||||
## Debugging via a shell on the node {#node-shell-session}
|
||||
|
||||
If none of these approaches work, you can find the host machine that the pod is
|
||||
running on and SSH into that host, but this should generally not be necessary
|
||||
given tools in the Kubernetes API. Therefore, if you find yourself needing to
|
||||
ssh into a machine, please file a feature request on GitHub describing your use
|
||||
case and why these tools are insufficient.
|
||||
|
||||
{{% /capture %}}
|
|
@ -57,7 +57,7 @@ If you haven't created the DaemonSet in the system, check your DaemonSet
manifest with the following command instead:

```shell
kubectl apply -f ds.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
kubectl apply -f ds.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
```

The output from both commands should be:

@ -17,11 +17,11 @@ can consume huge pages and the current limitations.
|
|||
{{% capture prerequisites %}}
|
||||
|
||||
1. Kubernetes nodes must pre-allocate huge pages in order for the node to report
|
||||
its huge page capacity. A node may only pre-allocate huge pages for a single
|
||||
size.
|
||||
its huge page capacity. A node can pre-allocate huge pages for multiple
|
||||
sizes.
|
||||
|
||||
The nodes will automatically discover and report all huge page resources as a
|
||||
schedulable resource.
|
||||
The nodes will automatically discover and report all huge page resources as
|
||||
schedulable resources.
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
|
@ -30,12 +30,51 @@ schedulable resource.
|
|||
## API
|
||||
|
||||
Huge pages can be consumed via container level resource requirements using the
|
||||
resource name `hugepages-<size>`, where size is the most compact binary notation
|
||||
using integer values supported on a particular node. For example, if a node
|
||||
supports 2048KiB page sizes, it will expose a schedulable resource
|
||||
`hugepages-2Mi`. Unlike CPU or memory, huge pages do not support overcommit. Note
|
||||
that when requesting hugepage resources, either memory or CPU resources must
|
||||
be requested as well.
|
||||
resource name `hugepages-<size>`, where `<size>` is the most compact binary
|
||||
notation using integer values supported on a particular node. For example, if a
|
||||
node supports 2048KiB and 1048576KiB page sizes, it will expose a schedulable
|
||||
resources `hugepages-2Mi` and `hugepages-1Gi`. Unlike CPU or memory, huge pages
|
||||
do not support overcommit. Note that when requesting hugepage resources, either
|
||||
memory or CPU resources must be requested as well.
|
||||
|
||||
A pod may consume multiple huge page sizes in a single pod spec. In this case it
|
||||
must use `medium: HugePages-<hugepagesize>` notation for all volume mounts.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: huge-pages-example
|
||||
spec:
|
||||
containers:
|
||||
- name: example
|
||||
image: fedora:latest
|
||||
command:
|
||||
- sleep
|
||||
- inf
|
||||
volumeMounts:
|
||||
- mountPath: /hugepages-2Mi
|
||||
name: hugepage-2mi
|
||||
- mountPath: /hugepages-1Gi
|
||||
name: hugepage-1gi
|
||||
resources:
|
||||
limits:
|
||||
hugepages-2Mi: 100Mi
|
||||
hugepages-1Gi: 2Gi
|
||||
memory: 100Mi
|
||||
requests:
|
||||
memory: 100Mi
|
||||
volumes:
|
||||
- name: hugepage-2mi
|
||||
emptyDir:
|
||||
medium: HugePages-2Mi
|
||||
- name: hugepage-1gi
|
||||
emptyDir:
|
||||
medium: HugePages-1Gi
|
||||
```
|
||||
|
||||
A pod may use `medium: HugePages` only if it requests huge pages of one size.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -66,8 +105,7 @@ spec:
|
|||
|
||||
- Huge page requests must equal the limits. This is the default if limits are
|
||||
specified, but requests are not.
|
||||
- Huge pages are isolated at a pod scope, container isolation is planned in a
|
||||
future iteration.
|
||||
- Huge pages are isolated at a container scope, so each container has own limit on their cgroup sandbox as requested in a container spec.
|
||||
- EmptyDir volumes backed by huge pages may not consume more huge page memory
|
||||
than the pod request.
|
||||
- Applications that consume huge pages via `shmget()` with `SHM_HUGETLB` must
|
||||
|
@ -75,10 +113,15 @@ spec:
|
|||
- Huge page usage in a namespace is controllable via ResourceQuota similar
|
||||
to other compute resources like `cpu` or `memory` using the `hugepages-<size>`
|
||||
token. A minimal ResourceQuota sketch follows this list.
|
||||
- Support for multiple huge page sizes is feature gated. It can be
|
||||
enabled with the `HugePageStorageMediumSize` [feature
|
||||
gate](/docs/reference/command-line-tools-reference/feature-gates/) on the {{<
|
||||
glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{<
|
||||
glossary_tooltip text="kube-apiserver"
|
||||
term_id="kube-apiserver" >}} (`--feature-gates=HugePageStorageMediumSize=true`).
|
||||
|
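As a hedged illustration of the ResourceQuota point above (not part of the original text), a namespace quota using the `hugepages-<size>` token might look like the following minimal sketch; the quota name and the `1Gi` cap are placeholders:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: hugepages-quota   # placeholder name
spec:
  hard:
    hugepages-2Mi: 1Gi    # cap the total 2Mi huge page limits requested in this namespace
```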
||||
## Future
|
||||
|
||||
- Support container isolation of huge pages in addition to pod isolation.
|
||||
- NUMA locality guarantees as a feature of quality of service.
|
||||
- LimitRange support.
|
||||
|
||||
|
|
|
@ -139,10 +139,10 @@ creation. This is done by piping the output of the `create` command to the
|
|||
`set` command, and then back to the `create` command. Here's an example:
|
||||
|
||||
```sh
|
||||
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -
|
||||
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -
|
||||
```
|
||||
|
||||
1. The `kubectl create service -o yaml --dry-run` command creates the configuration for the Service, but prints it to stdout as YAML instead of sending it to the Kubernetes API server.
|
||||
1. The `kubectl create service -o yaml --dry-run=client` command creates the configuration for the Service, but prints it to stdout as YAML instead of sending it to the Kubernetes API server.
|
||||
1. The `kubectl set selector --local -f - -o yaml` command reads the configuration from stdin, and writes the updated configuration to stdout as YAML.
|
||||
1. The `kubectl create -f -` command creates the object using the configuration provided via stdin.
|
||||
|
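As a quick sanity check (not part of the original steps), and assuming the Service was created in the current namespace, the selector applied by the pipeline above can be inspected with:

```shell
# Print the selector that `kubectl set selector` injected before creation
kubectl get service my-svc -o jsonpath='{.spec.selector.environment}{"\n"}'
```

If the pipeline succeeded, this should print `qa`.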
||||
|
@ -152,7 +152,7 @@ You can use `kubectl create --edit` to make arbitrary changes to an object
|
|||
before it is created. Here's an example:
|
||||
|
||||
```sh
|
||||
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run > /tmp/srv.yaml
|
||||
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client > /tmp/srv.yaml
|
||||
kubectl create --edit -f /tmp/srv.yaml
|
||||
```
|
||||
|
||||
|
|
|
@ -791,6 +791,12 @@ kubectl get -k ./
|
|||
kubectl describe -k ./
|
||||
```
|
||||
|
||||
Run the following command to compare the Deployment object `dev-my-nginx` against the state that the cluster would be in if the manifest was applied:
|
||||
|
||||
```shell
|
||||
kubectl diff -k ./
|
||||
```
|
||||
|
||||
Run the following command to delete the Deployment object `dev-my-nginx`:
|
||||
|
||||
```shell
|
||||
|
|
|
@ -284,6 +284,154 @@ and [external.metrics.k8s.io](https://github.com/kubernetes/community/blob/maste
|
|||
For examples of how to use them see [the walkthrough for using custom metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)
|
||||
and [the walkthrough for using external metrics](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects).
|
||||
|
||||
## Support for configurable scaling behavior
|
||||
|
||||
Starting from
|
||||
[v1.18](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md)
|
||||
the `v2beta2` API allows scaling behavior to be configured through the HPA
|
||||
`behavior` field. Behaviors are specified separately for scaling up and down in the
|
||||
`scaleUp` or `scaleDown` sections under the `behavior` field. A stabilization
|
||||
window can be specified for either direction, which prevents flapping of the
|
||||
number of replicas in the scaling target. Similarly, specifying scaling
|
||||
policies controls the rate of change of replicas while scaling.
|
||||
|
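For orientation, here is a minimal sketch of where the `behavior` field sits inside an `autoscaling/v2beta2` HorizontalPodAutoscaler manifest. The object name, target Deployment, and metric values are placeholders added for illustration, not part of the original example:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment   # placeholder scale target
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  behavior:                    # scaling behavior, configured per direction
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 4
        periodSeconds: 60
```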
||||
### Scaling Policies
|
||||
|
||||
One or more scaling policies can be specified in the `behavior` section of the spec.
|
||||
When multiple policies are specified, the policy that allows the highest amount of
|
||||
change is selected by default. The following example shows this behavior
|
||||
while scaling down:
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
policies:
|
||||
- type: Pods
|
||||
value: 4
|
||||
periodSeconds: 60
|
||||
- type: Percent
|
||||
value: 10
|
||||
periodSeconds: 60
|
||||
```
|
||||
|
||||
When the number of pods is more than 40, the second policy will be used for scaling down.
|
||||
For instance, if there are 80 replicas and the target has to be scaled down to 10 replicas,
|
||||
then during the first step 8 replicas will be removed. In the next iteration, when the number
|
||||
of replicas is 72, 10% of the pods is 7.2 but the number is rounded up to 8. On each loop of
|
||||
the autoscaler controller, the number of pods to be changed is re-calculated based on the number
|
||||
of current replicas. When the number of replicas falls below 40, the first policy _(Pods)_ is applied
|
||||
and 4 replicas will be removed at a time.
|
||||
|
||||
`periodSeconds` indicates the length of time in the past for which the policy must hold true.
|
||||
The first policy allows at most 4 replicas to be scaled down in one minute. The second policy
|
||||
allows at most 10% of the current replicas to be scaled down in one minute.
|
||||
|
||||
The policy selection can be changed by specifying the `selectPolicy` field for a scaling
|
||||
direction. Setting the value to `Min` selects the policy that allows the
|
||||
smallest change in the replica count. Setting the value to `Disabled` completely disables
|
||||
scaling in that direction.
|
||||
|
||||
### Stabilization Window
|
||||
|
||||
The stabilization window is used to restrict the flapping of replicas when the metrics
|
||||
used for scaling keep fluctuating. The stabilization window is used by the autoscaling
|
||||
algorithm to consider the computed desired state from the past to prevent scaling. In
|
||||
the following example the stabilization window is specified for `scaleDown`.
|
||||
|
||||
```yaml
|
||||
scaleDown:
|
||||
stabilizationWindowSeconds: 300
|
||||
```
|
||||
|
||||
When the metrics indicate that the target should be scaled down, the algorithm looks
|
||||
into previously computed desired states and uses the highest value from the specified
|
||||
interval. In the above example, all desired states from the past 5 minutes will be considered.
|
||||
|
||||
### Default Behavior
|
||||
|
||||
To use custom scaling, not all fields have to be specified. Only the values that need to be
|
||||
customized have to be specified; these custom values are merged with the default values. The default values
|
||||
match the existing behavior in the HPA algorithm.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
stabilizationWindowSeconds: 300
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 100
|
||||
periodSeconds: 15
|
||||
scaleUp:
|
||||
stabilizationWindowSeconds: 0
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 100
|
||||
periodSeconds: 15
|
||||
- type: Pods
|
||||
value: 4
|
||||
periodSeconds: 15
|
||||
selectPolicy: Max
|
||||
```
|
||||
For scaling down, the stabilization window is _300_ seconds (or the value of the
|
||||
`--horizontal-pod-autoscaler-downscale-stabilization` flag if provided). There is only a single policy
|
||||
for scaling down, which allows 100% of the currently running replicas to be removed; this
|
||||
means the scaling target can be scaled down to the minimum allowed replicas.
|
||||
For scaling up there is no stabilization window. When the metrics indicate that the target should be
|
||||
scaled up, the target is scaled up immediately. There are two policies: 4 pods or 100% of the currently
|
||||
running replicas will be added every 15 seconds until the HPA reaches its steady state.
|
||||
|
||||
### Example: change downscale stabilization window
|
||||
|
||||
To provide a custom downscale stabilization window of 1 minute, the following
|
||||
behavior would be added to the HPA:
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
stabilizationWindowSeconds: 60
|
||||
```
|
||||
|
||||
### Example: limit scale down rate
|
||||
|
||||
To limit the rate at which pods are removed by the HPA to 10% per minute, the
|
||||
following behavior would be added to the HPA:
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 10
|
||||
periodSeconds: 60
|
||||
```
|
||||
|
||||
To still allow a final drop of 5 pods when the percentage policy would permit fewer, another policy
|
||||
can be added together with a selection strategy of maximum:
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 10
|
||||
periodSeconds: 60
|
||||
- type: Pods
|
||||
value: 5
|
||||
periodSeconds: 60
|
||||
selectPolicy: Max
|
||||
```
|
||||
|
||||
### Example: disable scale down
|
||||
|
||||
The `selectPolicy` value of `Disabled` turns off scaling in the given direction.
|
||||
So, to prevent downscaling, the following policy would be used:
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
selectPolicy: Disabled
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
|
|
@ -6,7 +6,7 @@ weight: 10
|
|||
|
||||
{{% capture overview %}}
|
||||
|
||||
사용자 Docker 이미지를 생성하고 레지스트리에 푸시(push)하여 쿠버네티스 파드에서 참조되기 이전에 대비한다.
|
||||
사용자 Docker 이미지를 생성하고 레지스트리에 푸시(push)하여 쿠버네티스 파드에서 참조되기 이전에 대비한다.
|
||||
|
||||
컨테이너의 `image` 속성은 `docker` 커맨드에서 지원하는 문법과 같은 문법을 지원한다. 이는 프라이빗 레지스트리와 태그를 포함한다.
|
||||
|
||||
|
@ -17,8 +17,8 @@ weight: 10
|
|||
|
||||
## 이미지 업데이트
|
||||
|
||||
기본 풀(pull) 정책은 `IfNotPresent`이며, 이것은 Kubelet이 이미
|
||||
존재하는 이미지에 대한 풀을 생략하게 한다. 만약 항상 풀을 강제하고 싶다면,
|
||||
기본 풀(pull) 정책은 `IfNotPresent`이며, 이것은 Kubelet이 이미
|
||||
존재하는 이미지에 대한 풀을 생략하게 한다. 만약 항상 풀을 강제하고 싶다면,
|
||||
다음 중 하나를 수행하면 된다.
|
||||
|
||||
- 컨테이너의 `imagePullPolicy`를 `Always`로 설정.
|
||||
|
@ -35,7 +35,7 @@ Docker CLI는 현재 `docker manifest` 커맨드와 `create`, `annotate`, `push`
|
|||
다음에서 docker 문서를 확인하기 바란다.
|
||||
https://docs.docker.com/edge/engine/reference/commandline/manifest/
|
||||
|
||||
이것을 사용하는 방법에 대한 예제는 빌드 하니스(harness)에서 참조한다.
|
||||
이것을 사용하는 방법에 대한 예제는 빌드 하니스(harness)에서 참조한다.
|
||||
https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=
|
||||
|
||||
이 커맨드는 Docker CLI에 의존하며 그에 전적으로 구현된다. `$HOME/.docker/config.json` 편집 및 `experimental` 키를 `enabled`로 설정하거나, CLI 커맨드 호출 시 간단히 `DOCKER_CLI_EXPERIMENTAL` 환경 변수를 `enabled`로만 설정해도 된다.
|
||||
|
@ -79,9 +79,9 @@ Docker *18.06 또는 그 이상* 을 사용하길 바란다. 더 낮은 버전
|
|||
|
||||
### Google 컨테이너 레지스트리 사용
|
||||
|
||||
쿠버네티스는 Google 컴퓨트 엔진(GCE)에서 동작할 때, [Google 컨테이너
|
||||
레지스트리(GCR)](https://cloud.google.com/tools/container-registry/)를 자연스럽게
|
||||
지원한다. 사용자의 클러스터가 GCE 또는 Google 쿠버네티스 엔진에서 동작 중이라면, 간단히
|
||||
쿠버네티스는 Google 컴퓨트 엔진(GCE)에서 동작할 때, [Google 컨테이너
|
||||
레지스트리(GCR)](https://cloud.google.com/tools/container-registry/)를 자연스럽게
|
||||
지원한다. 사용자의 클러스터가 GCE 또는 Google 쿠버네티스 엔진에서 동작 중이라면, 간단히
|
||||
이미지의 전체 이름(예: gcr.io/my_project/image:tag)을 사용하면 된다.
|
||||
|
||||
클러스터 내에서 모든 파드는 해당 레지스트리에 있는 이미지에 읽기 접근 권한을 가질 것이다.
|
||||
|
@ -95,10 +95,10 @@ GCR을 인증할 것이다. 인스턴스의 서비스 계정은
|
|||
|
||||
쿠버네티스는 노드가 AWS EC2 인스턴스일 때, [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/)를 자연스럽게 지원한다.
|
||||
|
||||
간단히 이미지의 전체 이름(예: `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)을
|
||||
간단히 이미지의 전체 이름(예: `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)을
|
||||
파드 정의에 사용하면 된다.
|
||||
|
||||
파드를 생성할 수 있는 클러스터의 모든 사용자는 ECR 레지스트리에 있는 어떠한
|
||||
파드를 생성할 수 있는 클러스터의 모든 사용자는 ECR 레지스트리에 있는 어떠한
|
||||
이미지든지 파드를 실행하는데 사용할 수 있다.
|
||||
|
||||
kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다. 이것을 위해서는 다음에 대한 권한이 필요하다.
|
||||
|
@ -127,12 +127,12 @@ kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다
|
|||
- `aws_credentials.go:116] Got ECR credentials from ECR API for <AWS account ID for ECR>.dkr.ecr.<AWS region>.amazonaws.com`
|
||||
|
||||
### Azure 컨테이너 레지스트리(ACR) 사용
|
||||
[Azure 컨테이너 레지스트리](https://azure.microsoft.com/en-us/services/container-registry/)를 사용하는 경우
|
||||
[Azure 컨테이너 레지스트리](https://azure.microsoft.com/en-us/services/container-registry/)를 사용하는 경우
|
||||
관리자 역할의 사용자나 서비스 주체(principal) 중 하나를 사용하여 인증할 수 있다.
|
||||
어느 경우라도, 인증은 표준 Docker 인증을 통해서 수행된다. 이러한 지침은
|
||||
어느 경우라도, 인증은 표준 Docker 인증을 통해서 수행된다. 이러한 지침은
|
||||
[azure-cli](https://github.com/azure/azure-cli) 명령줄 도구 사용을 가정한다.
|
||||
|
||||
우선 레지스트리를 생성하고 자격 증명을 만들어야한다. 이에 대한 전체 문서는
|
||||
우선 레지스트리를 생성하고 자격 증명을 만들어야한다. 이에 대한 전체 문서는
|
||||
[Azure 컨테이너 레지스트리 문서](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli)에서 찾을 수 있다.
|
||||
|
||||
컨테이너 레지스트리를 생성하고 나면, 다음의 자격 증명을 사용하여 로그인한다.
|
||||
|
@ -142,7 +142,7 @@ kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다
|
|||
* `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
|
||||
* `DOCKER_EMAIL`: `${some-email-address}`
|
||||
|
||||
해당 변수에 대한 값을 채우고 나면
|
||||
해당 변수에 대한 값을 채우고 나면
|
||||
[쿠버네티스 시크릿을 구성하고 그것을 파드 디플로이를 위해서 사용](/ko/docs/concepts/containers/images/#파드에-imagepullsecrets-명시)할 수 있다.
|
||||
|
||||
### IBM 클라우드 컨테이너 레지스트리 사용
|
||||
|
@ -159,13 +159,13 @@ Google 쿠버네티스 엔진에서 동작 중이라면, 이미 각 노드에 Go
|
|||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
AWS EC2에서 동작 중이고 EC2 컨테이너 레지스트리(ECR)을 사용 중이라면, 각 노드의 kubelet은
|
||||
AWS EC2에서 동작 중이고 EC2 컨테이너 레지스트리(ECR)을 사용 중이라면, 각 노드의 kubelet은
|
||||
ECR 로그인 자격 증명을 관리하고 업데이트할 것이다. 그렇다면 이 방법은 쓸 수 없다.
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은
|
||||
GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지
|
||||
이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은
|
||||
GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지
|
||||
않을 것이다.
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -174,7 +174,7 @@ GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에
|
|||
{{< /note >}}
|
||||
|
||||
|
||||
Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또는 `$HOME/.docker/config.json` 파일에 저장한다. 만약 동일한 파일을
|
||||
Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또는 `$HOME/.docker/config.json` 파일에 저장한다. 만약 동일한 파일을
|
||||
아래의 검색 경로 리스트에 넣으면, kubelet은 이미지를 풀 할 때 해당 파일을 자격 증명 공급자로 사용한다.
|
||||
|
||||
* `{--root-dir:-/var/lib/kubelet}/config.json`
|
||||
|
@ -190,11 +190,11 @@ Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또
|
|||
아마도 kubelet을 위한 사용자의 환경 파일에 `HOME=/root`을 명시적으로 설정해야 할 것이다.
|
||||
{{< /note >}}
|
||||
|
||||
프라이빗 레지스트리를 사용도록 사용자의 노드를 구성하기 위해서 권장되는 단계는 다음과 같다. 이
|
||||
프라이빗 레지스트리를 사용도록 사용자의 노드를 구성하기 위해서 권장되는 단계는 다음과 같다. 이
|
||||
예제의 경우, 사용자의 데스크탑/랩탑에서 아래 내용을 실행한다.
|
||||
|
||||
1. 사용하고 싶은 각 자격 증명 세트에 대해서 `docker login [서버]`를 실행한다. 이것은 `$HOME/.docker/config.json`를 업데이트한다.
|
||||
1. 편집기에서 `$HOME/.docker/config.json`를 보고 사용하고 싶은 자격 증명만 포함하고 있는지 확인한다.
|
||||
1. 편집기에서 `$HOME/.docker/config.json`를 보고 사용하고 싶은 자격 증명만 포함하고 있는지 확인한다.
|
||||
1. 노드의 리스트를 구한다. 예를 들면 다음과 같다.
|
||||
- 이름을 원하는 경우: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
|
||||
- IP를 원하는 경우: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
|
||||
|
@ -241,11 +241,11 @@ kubectl describe pods/private-image-test-1 | grep 'Failed'
|
|||
```
|
||||
|
||||
|
||||
클러스터의 모든 노드가 반드시 동일한 `.docker/config.json`를 가져야 한다. 그렇지 않으면, 파드가
|
||||
일부 노드에서만 실행되고 다른 노드에서는 실패할 것이다. 예를 들어, 노드 오토스케일링을 사용한다면, 각 인스턴스
|
||||
클러스터의 모든 노드가 반드시 동일한 `.docker/config.json`를 가져야 한다. 그렇지 않으면, 파드가
|
||||
일부 노드에서만 실행되고 다른 노드에서는 실패할 것이다. 예를 들어, 노드 오토스케일링을 사용한다면, 각 인스턴스
|
||||
템플릿은 `.docker/config.json`을 포함하거나 그것을 포함한 드라이브를 마운트해야 한다.
|
||||
|
||||
프라이빗 레지스트리 키가 `.docker/config.json`에 추가되고 나면 모든 파드는
|
||||
프라이빗 레지스트리 키가 `.docker/config.json`에 추가되고 나면 모든 파드는
|
||||
프라이빗 레지스트리의 이미지에 읽기 접근 권한을 가지게 될 것이다.
|
||||
|
||||
### 미리 내려받은 이미지
|
||||
|
@ -255,16 +255,16 @@ Google 쿠버네티스 엔진에서 동작 중이라면, 이미 각 노드에 Go
|
|||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은
|
||||
GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지
|
||||
이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은
|
||||
GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지
|
||||
않을 것이다.
|
||||
{{< /note >}}
|
||||
|
||||
기본적으로, kubelet은 지정된 레지스트리에서 각 이미지를 풀 하려고 할 것이다.
|
||||
그러나, 컨테이너의 `imagePullPolicy` 속성이 `IfNotPresent` 또는 `Never`으로 설정되어 있다면,
|
||||
그러나, 컨테이너의 `imagePullPolicy` 속성이 `IfNotPresent` 또는 `Never`으로 설정되어 있다면,
|
||||
로컬 이미지가 사용된다(우선적으로 또는 배타적으로).
|
||||
|
||||
레지스트리 인증의 대안으로 미리 풀 된 이미지에 의존하고 싶다면,
|
||||
레지스트리 인증의 대안으로 미리 풀 된 이미지에 의존하고 싶다면,
|
||||
클러스터의 모든 노드가 동일한 미리 내려받은 이미지를 가지고 있는지 확인해야 한다.
|
||||
|
||||
이것은 특정 이미지를 속도를 위해 미리 로드하거나 프라이빗 레지스트리에 대한 인증의 대안으로 사용될 수 있다.
|
||||
|
@ -274,7 +274,7 @@ GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에
|
|||
### 파드에 ImagePullSecrets 명시
|
||||
|
||||
{{< note >}}
|
||||
이 방법은 현재 Google 쿠버네티스 엔진, GCE 및 노드 생성이 자동화된 모든 클라우드 제공자에게
|
||||
이 방법은 현재 Google 쿠버네티스 엔진, GCE 및 노드 생성이 자동화된 모든 클라우드 제공자에게
|
||||
권장된다.
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -288,10 +288,10 @@ GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에
|
|||
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
|
||||
```
|
||||
|
||||
만약 Docker 자격 증명 파일이 이미 존재한다면, 위의 명령을 사용하지 않고,
|
||||
만약 Docker 자격 증명 파일이 이미 존재한다면, 위의 명령을 사용하지 않고,
|
||||
자격 증명 파일을 쿠버네티스 시크릿으로 가져올 수 있다.
|
||||
[기존 Docker 자격 증명으로 시크릿 생성](/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials)에서 관련 방법을 설명하고 있다.
|
||||
`kubectl create secret docker-registry`는
|
||||
`kubectl create secret docker-registry`는
|
||||
하나의 개인 레지스트리에서만 작동하는 시크릿을 생성하기 때문에,
|
||||
여러 개인 컨테이너 레지스트리를 사용하는 경우 특히 유용하다.
|
||||
|
||||
|
@ -302,7 +302,7 @@ kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SER
|
|||
|
||||
#### 파드의 imagePullSecrets 참조
|
||||
|
||||
이제, `imagePullSecrets` 섹션을 파드의 정의에 추가함으로써 해당 시크릿을
|
||||
이제, `imagePullSecrets` 섹션을 파드의 정의에 추가함으로써 해당 시크릿을
|
||||
참조하는 파드를 생성할 수 있다.
|
||||
|
||||
```shell
|
||||
|
@ -328,38 +328,38 @@ EOF
|
|||
|
||||
이것은 프라이빗 레지스트리를 사용하는 각 파드에 대해서 수행될 필요가 있다.
|
||||
|
||||
그러나, 이 필드의 셋팅은 [서비스 어카운트](/docs/user-guide/service-accounts) 리소스에
|
||||
그러나, 이 필드의 셋팅은 [서비스 어카운트](/docs/user-guide/service-accounts) 리소스에
|
||||
imagePullSecrets을 셋팅하여 자동화할 수 있다.
|
||||
자세한 지침을 위해서는 [서비스 어카운트에 ImagePullSecrets 추가](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)를 확인한다.
|
||||
|
||||
이것은 노드 당 `.docker/config.json`와 함께 사용할 수 있다. 자격 증명은
|
||||
이것은 노드 당 `.docker/config.json`와 함께 사용할 수 있다. 자격 증명은
|
||||
병합될 것이다. 이 방법은 Google 쿠버네티스 엔진에서 작동될 것이다.
|
||||
|
||||
### 유스케이스
|
||||
|
||||
프라이빗 레지스트리를 구성하기 위한 많은 솔루션이 있다. 다음은 여러 가지
|
||||
일반적인 유스케이스와 제안된 솔루션이다.
|
||||
프라이빗 레지스트리를 구성하기 위한 많은 솔루션이 있다. 다음은 여러 가지
|
||||
일반적인 유스케이스와 제안된 솔루션이다.
|
||||
|
||||
1. 비소유 이미지(예를 들어, 오픈소스)만 실행하는 클러스터의 경우. 이미지를 숨길 필요가 없다.
|
||||
- Docker hub의 퍼블릭 이미지를 사용한다.
|
||||
- 설정이 필요 없다.
|
||||
- GCE 및 Google 쿠버네티스 엔진에서는, 속도와 가용성 향상을 위해서 로컬 미러가 자동적으로 사용된다.
|
||||
1. 모든 클러스터 사용자에게는 보이지만, 회사 외부에는 숨겨야하는 일부 독점 이미지를
|
||||
- GCE 및 Google 쿠버네티스 엔진에서는, 속도와 가용성 향상을 위해서 로컬 미러가 자동적으로 사용된다.
|
||||
1. 모든 클러스터 사용자에게는 보이지만, 회사 외부에는 숨겨야하는 일부 독점 이미지를
|
||||
실행하는 클러스터의 경우.
|
||||
- 호스트 된 프라이빗 [Docker 레지스트리](https://docs.docker.com/registry/)를 사용한다.
|
||||
- 그것은 [Docker Hub](https://hub.docker.com/signup)에 호스트 되어 있거나, 다른 곳에 되어 있을 것이다.
|
||||
- 위에 설명된 바와 같이 수동으로 .docker/config.json을 구성한다.
|
||||
- 또는, 방화벽 뒤에서 읽기 접근 권한을 가진 내부 프라이빗 레지스트리를 실행한다.
|
||||
- 쿠버네티스 구성은 필요 없다.
|
||||
- 쿠버네티스 구성은 필요 없다.
|
||||
- 또는, GCE 및 Google 쿠버네티스 엔진에서는, 프로젝트의 Google 컨테이너 레지스트리를 사용한다.
|
||||
- 그것은 수동 노드 구성에 비해서 클러스터 오토스케일링과 더 잘 동작할 것이다.
|
||||
- 또는, 노드의 구성 변경이 불편한 클러스터에서는, `imagePullSecrets`를 사용한다.
|
||||
1. 독점 이미지를 가진 클러스터로, 그 중 일부가 더 엄격한 접근 제어를 필요로 하는 경우.
|
||||
- [AlwaysPullImages 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)가 활성화되어 있는지 확인한다. 그렇지 않으면, 모든 파드가 잠재적으로 모든 이미지에 접근 권한을 가진다.
|
||||
- 민감한 데이터는 이미지 안에 포장하는 대신, "시크릿" 리소스로 이동한다.
|
||||
- 민감한 데이터는 이미지 안에 포장하는 대신, "시크릿" 리소스로 이동한다.
|
||||
1. 멀티-테넌트 클러스터에서 각 테넌트가 자신의 프라이빗 레지스트리를 필요로 하는 경우.
|
||||
- [AlwaysPullImages 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)가 활성화되어 있는지 확인한다. 그렇지 않으면, 모든 파드가 잠재적으로 모든 이미지에 접근 권한을 가진다.
|
||||
- 인가가 요구되도록 프라이빗 레지스트리를 실행한다.
|
||||
- 인가가 요구되도록 프라이빗 레지스트리를 실행한다.
|
||||
- 각 테넌트에 대한 레지스트리 자격 증명을 생성하고, 시크릿에 넣고, 각 테넌트 네임스페이스에 시크릿을 채운다.
|
||||
- 테넌트는 해당 시크릿을 각 네임스페이스의 imagePullSecrets에 추가한다.
|
||||
|
||||
|
|
|
@ -16,8 +16,8 @@ weight: 10
|
|||
|
||||
## 레플리카셋의 작동 방식
|
||||
|
||||
레플리카셋을 정의하는 필드는 획득 가능한 파드를 식별하는 방법이 명시된 셀렉터, 유지해야 하는 파드 개수를 명시하는 레플리카의 개수,
|
||||
그리고 레플리카 수 유지를 위해 생성하는 신규 파드에 대한 데이터를 명시하는 파드 템플릿을 포함한다.
|
||||
레플리카셋을 정의하는 필드는 획득 가능한 파드를 식별하는 방법이 명시된 셀렉터, 유지해야 하는 파드 개수를 명시하는 레플리카의 개수,
|
||||
그리고 레플리카 수 유지를 위해 생성하는 신규 파드에 대한 데이터를 명시하는 파드 템플릿을 포함한다.
|
||||
그러면 레플리카셋은 필드에 지정된 설정을 충족하기 위해 필요한 만큼 파드를 만들고 삭제한다.
|
||||
레플리카셋이 새로운 파드를 생성해야 할 경우, 명시된 파드 템플릿을
|
||||
사용한다.
|
||||
|
@ -27,16 +27,16 @@ weight: 10
|
|||
레플리카셋이 가지고 있는 모든 파드의 ownerReferences 필드는 해당 파드를 소유한 레플리카셋을 식별하기 위한 소유자 정보를 가진다.
|
||||
이 링크를 통해 레플리카셋은 자신이 유지하는 파드의 상태를 확인하고 이에 따라 관리 한다.
|
||||
|
||||
레플리카셋은 셀렉터를 이용해서 필요한 새 파드를 식별한다. 만약 파드에 OwnerReference이 없거나
|
||||
OwnerReference가 {{< glossary_tooltip term_id="controller" >}} 가 아니고 레플리카셋의 셀렉터와 일치한다면 레플리카셋이 즉각 파드를
|
||||
레플리카셋은 셀렉터를 이용해서 필요한 새 파드를 식별한다. 만약 파드에 OwnerReference이 없거나
|
||||
OwnerReference가 {{< glossary_tooltip term_id="controller" >}} 가 아니고 레플리카셋의 셀렉터와 일치한다면 레플리카셋이 즉각 파드를
|
||||
가지게 될 것이다.
|
||||
|
||||
## 레플리카셋을 사용하는 시기
|
||||
|
||||
레플리카셋은 지정된 수의 파드 레플리카가 항상 실행되도록 보장한다.
|
||||
그러나 디플로이먼트는 레플리카셋을 관리하고 다른 유용한 기능과 함께
|
||||
그러나 디플로이먼트는 레플리카셋을 관리하고 다른 유용한 기능과 함께
|
||||
파드에 대한 선언적 업데이트를 제공하는 상위 개념이다.
|
||||
따라서 우리는 사용자 지정 오케스트레이션이 필요하거나 업데이트가 전혀 필요하지 않은 경우라면
|
||||
따라서 우리는 사용자 지정 오케스트레이션이 필요하거나 업데이트가 전혀 필요하지 않은 경우라면
|
||||
레플리카셋을 직접적으로 사용하기 보다는 디플로이먼트를 사용하는 것을 권장한다.
|
||||
|
||||
이는 레플리카셋 오브젝트를 직접 조작할 필요가 없다는 것을 의미한다.
|
||||
|
@ -46,7 +46,7 @@ OwnerReference가 {{< glossary_tooltip term_id="controller" >}} 가 아니고
|
|||
|
||||
{{< codenew file="controllers/frontend.yaml" >}}
|
||||
|
||||
이 매니페스트를 `frontend.yaml`에 저장하고 쿠버네티스 클러스터에 적용하면 정의되어있는 레플리카셋이
|
||||
이 매니페스트를 `frontend.yaml`에 저장하고 쿠버네티스 클러스터에 적용하면 정의되어있는 레플리카셋이
|
||||
생성되고 레플리카셋이 관리하는 파드가 생성된다.
|
||||
|
||||
```shell
|
||||
|
@ -141,24 +141,24 @@ metadata:
|
|||
## 템플릿을 사용하지 않는 파드의 획득
|
||||
|
||||
단독(bare) 파드를 생성하는 것에는 문제가 없지만, 단독 파드가 레플리카셋의 셀렉터와 일치하는 레이블을 가지지
|
||||
않도록 하는 것을 강력하게 권장한다. 그 이유는 레플리카셋이 소유하는 파드가 템플릿에 명시된 파드에만 국한되지 않고,
|
||||
않도록 하는 것을 강력하게 권장한다. 그 이유는 레플리카셋이 소유하는 파드가 템플릿에 명시된 파드에만 국한되지 않고,
|
||||
이전 섹션에서 명시된 방식에 의해서도 다른 파드의 획득이 가능하기 때문이다.
|
||||
|
||||
이전 프런트엔드 레플리카셋 예제와 다음의 매니페스트에 명시된 파드를 가져와 참조한다.
|
||||
|
||||
{{< codenew file="pods/pod-rs.yaml" >}}
|
||||
|
||||
기본 파드는 소유자 관련 정보에 컨트롤러(또는 오브젝트)를 가지지 않기 때문에 프런트엔드
|
||||
기본 파드는 소유자 관련 정보에 컨트롤러(또는 오브젝트)를 가지지 않기 때문에 프런트엔드
|
||||
레플리카셋의 셀렉터와 일치하면 즉시 레플리카셋에 소유된다.
|
||||
|
||||
프런트엔드 레플리카셋이 배치되고 초기 파드 레플리카가 셋업된 이후에, 레플리카 수 요구 사항을 충족시키기 위해서
|
||||
프런트엔드 레플리카셋이 배치되고 초기 파드 레플리카가 셋업된 이후에, 레플리카 수 요구 사항을 충족시키기 위해서
|
||||
신규 파드를 생성한다고 가정해보자.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
|
||||
```
|
||||
|
||||
새로운 파드는 레플리카셋에 의해 인식되며 레플리카셋이 필요한 수량을 초과하면
|
||||
새로운 파드는 레플리카셋에 의해 인식되며 레플리카셋이 필요한 수량을 초과하면
|
||||
즉시 종료된다.
|
||||
|
||||
파드를 가져온다.
|
||||
|
@ -186,7 +186,7 @@ kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
|
|||
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
|
||||
```
|
||||
|
||||
레플리카셋이 해당 파드를 소유한 것을 볼 수 있으며 새 파드 및 기존 파드의 수가
|
||||
레플리카셋이 해당 파드를 소유한 것을 볼 수 있으며 새 파드 및 기존 파드의 수가
|
||||
레플리카셋이 필요로 하는 수와 일치할 때까지 사양에 따라 신규 파드만 생성한다. 파드를 가져온다.
|
||||
```shell
|
||||
kubectl get pods
|
||||
|
@ -230,7 +230,7 @@ matchLabels:
|
|||
tier: frontend
|
||||
```
|
||||
|
||||
레플리카셋에서 `.spec.template.metadata.labels`는 `spec.selector`과 일치해야 하며
|
||||
레플리카셋에서 `.spec.template.metadata.labels`는 `spec.selector`과 일치해야 하며
|
||||
그렇지 않으면 API에 의해 거부된다.
|
||||
|
||||
{{< note >}}
|
||||
|
@ -250,7 +250,7 @@ matchLabels:
|
|||
|
||||
레플리카셋 및 모든 파드를 삭제하려면 [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete)를 사용한다. [가비지 수집기](/ko/docs/concepts/workloads/controllers/garbage-collection/)는 기본적으로 종속되어있는 모든 파드를 자동으로 삭제한다.
|
||||
|
||||
REST API또는 `client-go` 라이브러리를 이용할 때는 -d 옵션으로 `propagationPolicy`를 `Background`또는 `Foreground`로
|
||||
REST API또는 `client-go` 라이브러리를 이용할 때는 -d 옵션으로 `propagationPolicy`를 `Background`또는 `Foreground`로
|
||||
설정해야 한다.
|
||||
예시:
|
||||
```shell
|
||||
|
@ -275,12 +275,12 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
|
|||
원본이 삭제되면 새 레플리카셋을 생성해서 대체할 수 있다.
|
||||
기존 `.spec.selector`와 신규 `.spec.selector`가 같으면 새 레플리카셋은 기존 파드를 선택한다.
|
||||
하지만 신규 레플리카셋은 기존 파드를 신규 레플리카셋의 새롭고 다른 파드 템플릿에 일치시키는 작업을 수행하지는 않는다.
|
||||
컨트롤 방식으로 파드를 새로운 사양으로 업데이트 하기 위해서는 [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/#디플로이먼트-생성)를 이용하면 된다.
|
||||
컨트롤 방식으로 파드를 새로운 사양으로 업데이트 하기 위해서는 [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/#디플로이먼트-생성)를 이용하면 된다.
|
||||
이는 레플리카셋이 롤링 업데이트를 직접적으로 지원하지 않기 때문이다.
|
||||
|
||||
### 레플리카셋에서 파드 격리
|
||||
|
||||
레이블을 변경하면 레플리카셋에서 파드를 제거할 수 있다. 이 방식은 디버깅과 데이터 복구 등을
|
||||
레이블을 변경하면 레플리카셋에서 파드를 제거할 수 있다. 이 방식은 디버깅과 데이터 복구 등을
|
||||
위해 서비스에서 파드를 제거하는 데 사용할 수 있다. 이 방식으로 제거된 파드는 자동으로 교체된다(
|
||||
레플리카의 수가 변경되지 않는다고 가정한다).
|
||||
|
||||
|
@ -291,15 +291,15 @@ curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/fron
|
|||
|
||||
### 레플리카셋을 Horizontal Pod Autoscaler 대상으로 설정
|
||||
|
||||
레플리카 셋은
|
||||
레플리카 셋은
|
||||
[Horizontal Pod Autoscalers (HPA)](/ko/docs/tasks/run-application/horizontal-pod-autoscale/)의 대상이 될 수 있다.
|
||||
즉, 레플리카셋은 HPA에 의해 오토스케일될 수 있다.
|
||||
다음은 이전에 만든 예시에서 만든 레플리카셋을 대상으로 하는 HPA 예시이다.
|
||||
|
||||
{{< codenew file="controllers/hpa-rs.yaml" >}}
|
||||
|
||||
이 매니페스트를 `hpa-rs.yaml`로 저장한 다음 쿠버네티스
|
||||
클러스터에 적용하면 CPU 사용량에 따라 파드가 복제되는
|
||||
이 매니페스트를 `hpa-rs.yaml`로 저장한 다음 쿠버네티스
|
||||
클러스터에 적용하면 CPU 사용량에 따라 파드가 복제되는
|
||||
오토스케일 레플리카 셋 HPA가 생성된다.
|
||||
|
||||
```shell
|
||||
|
@ -317,7 +317,7 @@ kubectl autoscale rs frontend --max=10
|
|||
|
||||
### 디플로이먼트(권장)
|
||||
|
||||
[`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/)는 레플리카셋을 소유하거나 업데이트를 하고,
|
||||
[`디플로이먼트`](/ko/docs/concepts/workloads/controllers/deployment/)는 레플리카셋을 소유하거나 업데이트를 하고,
|
||||
파드의 선언적인 업데이트와 서버측 롤링 업데이트를 할 수 있는 오브젝트이다.
|
||||
레플리카셋은 단독으로 사용할 수 있지만, 오늘날에는 주로 디플로이먼트로 파드의 생성과 삭제 그리고 업데이트를 오케스트레이션하는 메커니즘으로 사용한다.
|
||||
디플로이먼트를 이용해서 배포할 때 생성되는 레플리카셋을 관리하는 것에 대해 걱정하지 않아도 된다.
|
||||
|
@ -335,14 +335,14 @@ kubectl autoscale rs frontend --max=10
|
|||
|
||||
### 데몬셋
|
||||
|
||||
머신 모니터링 또는 머신 로깅과 같은 머신-레벨의 기능을 제공하는 파드를 위해서는 레플리카셋 대신
|
||||
머신 모니터링 또는 머신 로깅과 같은 머신-레벨의 기능을 제공하는 파드를 위해서는 레플리카셋 대신
|
||||
[`데몬셋`](/ko/docs/concepts/workloads/controllers/daemonset/)을 사용한다.
|
||||
이러한 파드의 수명은 머신의 수명과 연관되어 있고, 머신에서 다른 파드가 시작하기 전에 실행되어야 하며,
|
||||
이러한 파드의 수명은 머신의 수명과 연관되어 있고, 머신에서 다른 파드가 시작하기 전에 실행되어야 하며,
|
||||
머신의 재부팅/종료가 준비되었을 때, 해당 파드를 종료하는 것이 안전하다.
|
||||
|
||||
### 레플리케이션 컨트롤러
|
||||
레플리카셋은 [_레플리케이션 컨트롤러_](/ko/docs/concepts/workloads/controllers/replicationcontroller/)를 계승하였다.
|
||||
이 두 개의 용도는 동일하고, 유사하게 동작하며, 레플리케이션 컨트롤러가 [레이블 사용자 가이드](/ko/docs/concepts/overview/working-with-objects/labels/#레이블-셀렉터)에
|
||||
이 두 개의 용도는 동일하고, 유사하게 동작하며, 레플리케이션 컨트롤러가 [레이블 사용자 가이드](/ko/docs/concepts/overview/working-with-objects/labels/#레이블-셀렉터)에
|
||||
설명된 설정-기반의 셀렉터의 요건을 지원하지 않는다는 점을 제외하면 유사하다.
|
||||
따라서 레플리카셋이 레플리케이션 컨트롤러보다 선호된다.
|
||||
|
||||
|
|
|
@ -15,7 +15,7 @@ weight: 10
|
|||
|
||||
{{< caution >}}
|
||||
컨테이너를 실행할 때 runc가 시스템 파일 디스크립터를 처리하는 방식에서 결함이 발견되었다.
|
||||
악성 컨테이너는 이 결함을 사용하여 runc 바이너리의 내용을 덮어쓸 수 있으며
|
||||
악성 컨테이너는 이 결함을 사용하여 runc 바이너리의 내용을 덮어쓸 수 있으며
|
||||
따라서 컨테이너 호스트 시스템에서 임의의 명령을 실행할 수 있다.
|
||||
|
||||
이 문제에 대한 자세한 내용은
|
||||
|
@ -34,18 +34,18 @@ weight: 10
|
|||
|
||||
### Cgroup 드라이버
|
||||
|
||||
Linux 배포판의 init 시스템이 systemd인 경우, init 프로세스는
|
||||
root control group(`cgroup`)을 생성 및 사용하는 cgroup 관리자로 작동한다.
|
||||
Systemd는 cgroup과의 긴밀한 통합을 통해 프로세스당 cgroup을 할당한다.
|
||||
컨테이너 런타임과 kubelet이 `cgroupfs`를 사용하도록 설정할 수 있다.
|
||||
Linux 배포판의 init 시스템이 systemd인 경우, init 프로세스는
|
||||
root control group(`cgroup`)을 생성 및 사용하는 cgroup 관리자로 작동한다.
|
||||
Systemd는 cgroup과의 긴밀한 통합을 통해 프로세스당 cgroup을 할당한다.
|
||||
컨테이너 런타임과 kubelet이 `cgroupfs`를 사용하도록 설정할 수 있다.
|
||||
systemd와 함께`cgroupfs`를 사용하면 두 개의 서로 다른 cgroup 관리자가 존재하게 된다는 뜻이다.
|
||||
|
||||
Control group은 프로세스에 할당된 리소스를 제한하는데 사용된다.
|
||||
단일 cgroup 관리자는 할당된 리소스가 무엇인지를 단순화하고,
|
||||
기본적으로 사용가능한 리소스와 사용중인 리소스를 일관성있게 볼 수 있다.
|
||||
관리자가 두 개인 경우, 이런 리소스도 두 개의 관점에서 보게 된다. kubelet과 Docker는
|
||||
`cgroupfs`를 사용하고 나머지 프로세스는
|
||||
`systemd`를 사용하도록 노드가 설정된 경우,
|
||||
Control group은 프로세스에 할당된 리소스를 제한하는데 사용된다.
|
||||
단일 cgroup 관리자는 할당된 리소스가 무엇인지를 단순화하고,
|
||||
기본적으로 사용가능한 리소스와 사용중인 리소스를 일관성있게 볼 수 있다.
|
||||
관리자가 두 개인 경우, 이런 리소스도 두 개의 관점에서 보게 된다. kubelet과 Docker는
|
||||
`cgroupfs`를 사용하고 나머지 프로세스는
|
||||
`systemd`를 사용하도록 노드가 설정된 경우,
|
||||
리소스가 부족할 때 불안정해지는 사례를 본 적이 있다.
|
||||
|
||||
컨테이너 런타임과 kubelet이 `systemd`를 cgroup 드라이버로 사용하도록 설정을 변경하면
|
||||
|
@ -53,7 +53,7 @@ Control group은 프로세스에 할당된 리소스를 제한하는데 사용
|
|||
|
||||
{{< caution >}}
|
||||
클러스터에 결합되어 있는 노드의 cgroup 관리자를 변경하는 것은 권장하지 않는다.
|
||||
하나의 cgroup 드라이버의 의미를 사용하여 kubelet이 파드를 생성해왔다면,
|
||||
하나의 cgroup 드라이버의 의미를 사용하여 kubelet이 파드를 생성해왔다면,
|
||||
컨테이너 런타임을 다른 cgroup 드라이버로 변경하는 것은 존재하는 기존 파드에 대해 PodSandBox를 재생성을 시도할 때, 에러가 발생할 수 있다.
|
||||
kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다.
|
||||
추천하는 방법은 워크로드에서 노드를 제거하고, 클러스터에서 제거한 다음 다시 결합시키는 것이다.
|
||||
|
@ -62,7 +62,7 @@ kubelet을 재시작 하는 것은 에러를 해결할 수 없을 것이다.
|
|||
## Docker
|
||||
|
||||
각 머신들에 대해서, Docker를 설치한다.
|
||||
버전 19.03.4가 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다.
|
||||
버전 19.03.4가 추천된다. 그러나 1.13.1, 17.03, 17.06, 17.09, 18.06 그리고 18.09도 동작하는 것으로 알려져 있다.
|
||||
쿠버네티스 릴리스 노트를 통해서, 최신에 검증된 Docker 버전의 지속적인 파악이 필요하다.
|
||||
|
||||
시스템에 Docker를 설치하기 위해서 아래의 커맨드들을 사용한다.
|
||||
|
@ -218,7 +218,7 @@ systemctl start crio
|
|||
|
||||
## Containerd
|
||||
|
||||
이 섹션은 `containerd`를 CRI 런타임으로써 사용하는데 필요한 단계를 담고 있다.
|
||||
이 섹션은 `containerd`를 CRI 런타임으로써 사용하는데 필요한 단계를 담고 있다.
|
||||
|
||||
Containerd를 시스템에 설치하기 위해서 다음의 커맨드들을 사용한다.
|
||||
|
||||
|
@ -304,4 +304,4 @@ kubeadm을 사용하는 경우에도 마찬가지로, 수동으로
|
|||
|
||||
자세한 정보는 [Frakti 빠른 시작 가이드](https://github.com/kubernetes/frakti#quickstart)를 참고한다.
|
||||
|
||||
{{% /capture %}}
|
||||
{{% /capture %}}
|
|
@ -158,11 +158,15 @@ HorizontalPodAutoscaler에 여러 메트릭이 지정된 경우, 이 계산은
|
|||
현재 값보다 높은 `desiredReplicas` 을 제공하는 경우
|
||||
HPA가 여전히 확장할 수 있음을 의미한다.
|
||||
|
||||
마지막으로, HPA가 목표를 스케일하기 직전에 스케일 권장 사항이 기록된다.
|
||||
컨트롤러는 구성 가능한 창(window) 내에서 가장 높은 권장 사항을 선택하도록 해당 창 내의
|
||||
모든 권장 사항을 고려한다. 이 값은 `--horizontal-pod-autoscaler-downscale-stabilization` 플래그를 사용하여 설정할 수 있고, 기본 값은 5분이다.
|
||||
즉, 스케일 다운이 점진적으로 발생하여 급격히 변동하는
|
||||
메트릭 값의 영향을 완만하게 한다.
|
||||
마지막으로, HPA가 목표를 스케일하기 직전에 스케일 권장 사항이
|
||||
기록된다. 컨트롤러는 구성 가능한 창(window) 내에서 가장 높은 권장
|
||||
사항을 선택하도록 해당 창 내의 모든 권장 사항을 고려한다. 이 값은
|
||||
`--horizontal-pod-autoscaler-downscale-stabilization` 플래그 또는 HPA 오브젝트
|
||||
동작 `behavior.scaleDown.stabilizationWindowSeconds` ([구성가능한
|
||||
스케일링 동작 지원](#구성가능한-스케일링-동작-지원)을 본다)을
|
||||
사용하여 설정할 수 있고, 기본 값은 5분이다.
|
||||
즉, 스케일 다운이 점진적으로 발생하여 급격히 변동하는 메트릭 값의
|
||||
영향을 완만하게 한다.
|
||||
|
||||
## API 오브젝트
|
||||
|
||||
|
@ -228,6 +232,11 @@ v1.12부터는 새로운 알고리즘 업데이트가 업스케일 지연에 대
|
|||
있다.
|
||||
{{< /note >}}
|
||||
|
||||
v1.17 부터 v2beta2 API 필드에서 `behavior.scaleDown.stabilizationWindowSeconds`
|
||||
를 설정하여 다운스케일 안정화 창을 HPA별로 설정할 수 있다.
|
||||
[구성가능한 스케일링
|
||||
동작 지원](#구성가능한-스케일링-동작-지원)을 본다.
|
||||
|
||||
## 멀티 메트릭을 위한 지원
|
||||
|
||||
Kubernetes 1.6은 멀티 메트릭을 기반으로 스케일링을 지원한다. `autoscaling/v2beta2` API
|
||||
|
@ -278,6 +287,154 @@ API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다.
|
|||
어떻게 사용하는지에 대한 예시는 [커스텀 메트릭 사용하는 작업 과정](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics)과
|
||||
[외부 메트릭스 사용하는 작업 과정](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects)을 참조한다.
|
||||
|
||||
## 구성가능한 스케일링 동작 지원
|
||||
|
||||
[v1.17](https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/20190307-configurable-scale-velocity-for-hpa.md)
|
||||
부터 `v2beta2` API는 HPA `behavior` 필드를 통해
|
||||
스케일링 동작을 구성할 수 있다.
|
||||
동작은 `behavior` 필드 아래의 `scaleUp` 또는 `scaleDown`
|
||||
섹션에서 스케일링 업과 다운을 위해 별도로 지정된다. 안정화 윈도우는
|
||||
스케일링 대상에서 레플리카 수의 플래핑(flapping)을 방지하는
|
||||
양방향에 대해 지정할 수 있다. 마찬가지로 스케일링 정책을 지정하면
|
||||
스케일링 중 레플리카 변경 속도를 제어할 수 있다.
|
||||
|
||||
### 스케일링 정책
|
||||
|
||||
스펙의 `behavior` 섹션에 하나 이상의 스케일링 폴리시를 지정할 수 있다.
|
||||
폴리시가 여러 개 지정된 경우 가장 많은 양의 변경을
|
||||
허용하는 정책이 기본적으로 선택된 폴리시이다. 다음 예시는 스케일 다운 중 이
|
||||
동작을 보여준다.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
policies:
|
||||
- type: Pods
|
||||
value: 4
|
||||
periodSeconds: 60
|
||||
- type: Percent
|
||||
value: 10
|
||||
periodSeconds: 60
|
||||
```
|
||||
|
||||
파드 수가 40개를 초과하면 두 번째 폴리시가 스케일링 다운에 사용된다.
|
||||
예를 들어 80개의 레플리카가 있고 대상을 10개의 레플리카로 축소해야 하는
|
||||
경우 첫 번째 단계에서 8개의 레플리카가 스케일 다운 된다. 레플리카의 수가 72개일 때
|
||||
다음 반복에서 파드의 10%는 7.2 이지만, 숫자는 8로 올림된다. 오토스케일러 컨트롤러의
|
||||
각 루프에서 변경될 파드의 수는 현재 레플리카의 수에 따라 재계산된다. 레플리카의 수가 40
|
||||
미만으로 떨어지면 첫 번째 폴리시 _(파드들)_ 가 적용되고 한번에
|
||||
4개의 레플리카가 줄어든다.
|
||||
|
||||
`periodSeconds` 는 폴리시가 참(true)으로 유지되어야 하는 기간을 나타낸다.
|
||||
첫 번째 정책은 1분 내에 최대 4개의 레플리카를 스케일 다운할 수 있도록 허용한다.
|
||||
두 번째 정책은 현재 레플리카의 최대 10%를 1분 내에 스케일 다운할 수 있도록 허용한다.
|
||||
|
||||
확장 방향에 대해 `selectPolicy` 필드를 확인하여 폴리시 선택을 변경할 수 있다.
|
||||
레플리카의 수를 최소로 변경할 수 있는 폴리시를 선택하는 `최소(Min)`로 값을 설정한다.
|
||||
값을 `Disabled` 로 설정하면 해당 방향으로 스케일링이 완전히
|
||||
비활성화 된다.
|
||||
|
||||
### 안정화 윈도우
|
||||
|
||||
안정화 윈도우는 스케일링에 사용되는 메트릭이 계속 변동할 때 레플리카의 플래핑을
|
||||
다시 제한하기 위해 사용된다. 안정화 윈도우는 스케일링을 방지하기 위해 과거부터
|
||||
계산된 의도한 상태를 고려하는 오토스케일링 알고리즘에 의해 사용된다.
|
||||
다음의 예시에서 `scaleDown` 에 대해 안정화 윈도우가 지정되어있다.
|
||||
|
||||
```yaml
|
||||
scaleDown:
|
||||
stabilizationWindowSeconds: 300
|
||||
```
|
||||
|
||||
메트릭이 대상을 축소해야하는 것을 나타내는 경우 알고리즘은
|
||||
이전에 계산된 의도한 상태를 살펴보고 지정된 간격의 최고 값을 사용한다.
|
||||
위의 예시에서 지난 5분 동안 모든 의도한 상태가 고려된다.
|
||||
|
||||
### 기본 동작
|
||||
|
||||
사용자 지정 스케일링을 사용하려면 일부 필드를 지정해야 한다. 사용자 정의해야
|
||||
하는 값만 지정할 수 있다. 이러한 사용자 지정 값은 기본값과 병합된다. 기본값은 HPA
|
||||
알고리즘의 기존 동작과 일치한다.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
stabilizationWindowSeconds: 300
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 100
|
||||
periodSeconds: 15
|
||||
scaleUp:
|
||||
stabilizationWindowSeconds: 0
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 100
|
||||
periodSeconds: 15
|
||||
- type: Pods
|
||||
value: 4
|
||||
periodSeconds: 15
|
||||
selectPolicy: Max
|
||||
```
|
||||
안정화 윈도우의 스케일링 다운의 경우 _300_ 초(또는 제공된
|
||||
경우`--horizontal-pod-autoscaler-downscale-stabilization` 플래그의 값)이다. 스케일링 다운에서는 현재
|
||||
실행 중인 레플리카의 100%를 제거할 수 있는 단일 정책만 있으며, 이는 스케일링
|
||||
대상을 최소 허용 레플리카로 축소할 수 있음을 의미한다.
|
||||
스케일링 업에는 안정화 윈도우가 없다. 메트릭이 대상을 스케일 업해야 한다고 표시된다면 대상이 즉시 스케일 업된다.
|
||||
두 가지 폴리시가 있다. HPA가 정상 상태에 도달 할 때까지 15초 마다
|
||||
4개의 파드 또는 현재 실행 중인 레플리카의 100% 가 추가된다.
|
||||
|
||||
### 예시: 다운스케일 안정화 윈도우 변경
|
||||
|
||||
사용자 지정 다운스케일 안정화 윈도우를 1분 동안 제공하기 위해
|
||||
다음 동작이 HPA에 추가된다.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
stabilizationWindowSeconds: 60
|
||||
```
|
||||
|
||||
### 예시: 스케일 다운 비율 제한
|
||||
|
||||
HPA에 의해 파드가 제거되는 속도를 분당 10%로 제한하기 위해
|
||||
다음 동작이 HPA에 추가된다.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 10
|
||||
periodSeconds: 60
|
||||
```
|
||||
|
||||
마지막으로 5개의 파드를 드롭하기 위해 다른 폴리시를 추가하고, 최소 선택
|
||||
전략을 추가할 수 있다.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
policies:
|
||||
- type: Percent
|
||||
value: 10
|
||||
periodSeconds: 60
|
||||
- type: Pods
|
||||
value: 5
|
||||
periodSeconds: 60
|
||||
selectPolicy: Max
|
||||
```
|
||||
|
||||
### 예시: 스케일 다운 비활성화
|
||||
|
||||
`selectPolicy` 의 `Disabled` 값은 주어진 방향으로의 스케일링을 끈다.
|
||||
따라서 다운 스케일링을 방지하기 위해 다음 폴리시가 사용된다.
|
||||
|
||||
```yaml
|
||||
behavior:
|
||||
scaleDown:
|
||||
selectPolicy: Disabled
|
||||
```
|
||||
|
||||
{{% /capture %}}
|
||||
|
||||
{{% capture whatsnext %}}
|
||||
|
@ -286,4 +443,4 @@ API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다.
|
|||
* kubectl 오토스케일 커맨드: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
|
||||
* [Horizontal Pod Autoscaler](/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)의 사용 예제.
|
||||
|
||||
{{% /capture %}}
|
||||
{{% /capture %}}
|
|
@ -10,12 +10,12 @@ menu:
|
|||
<p>Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.</p>
|
||||
card:
|
||||
name: tutorials
|
||||
weight: 10
|
||||
weight: 10
|
||||
---
|
||||
|
||||
{{% capture overview %}}
|
||||
|
||||
이 튜토리얼에서는 [Minikube](/ko/docs/setup/learning-environment/minikube)와 Katacoda를 이용하여
|
||||
이 튜토리얼에서는 [Minikube](/ko/docs/setup/learning-environment/minikube)와 Katacoda를 이용하여
|
||||
쿠버네티스에서 Node.js 로 작성된 간단한 Hello World 애플리케이션을 어떻게 실행하는지 살펴본다.
|
||||
Katacoda는 무료로 브라우저에서 쿠버네티스 환경을 제공한다.
|
||||
|
||||
|
@ -69,7 +69,7 @@ Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다.
|
|||
|
||||
쿠버네티스 [*파드*](/ko/docs/concepts/workloads/pods/pod/)는 관리와
|
||||
네트워킹 목적으로 함께 묶여 있는 하나 이상의 컨테이너 그룹이다.
|
||||
이 튜토리얼의 파드에는 단 하나의 컨테이너만 있다. 쿠버네티스
|
||||
이 튜토리얼의 파드에는 단 하나의 컨테이너만 있다. 쿠버네티스
|
||||
[*디플로이먼트*](/ko/docs/concepts/workloads/controllers/deployment/)는 파드의
|
||||
헬스를 검사해서 파드의 컨테이너가 종료되었다면 재시작해준다.
|
||||
파드의 생성 및 스케일링을 관리하는 방법으로 디플로이먼트를 권장한다.
|
||||
|
@ -281,4 +281,4 @@ minikube delete
|
|||
* [애플리케이션 배포](/docs/tasks/run-application/run-stateless-application-deployment/)에 대해서 더 배워 본다.
|
||||
* [서비스 오브젝트](/ko/docs/concepts/services-networking/service/)에 대해서 더 배워 본다.
|
||||
|
||||
{{% /capture %}}
|
||||
{{% /capture %}}
|
|
@ -25,12 +25,21 @@ card:
|
|||
|
||||
Вы можете создавать новые задачи, редактировать содержимое и проверять изменения от других участников, — всё это доступно с сайта GitHub. Вы также можете использовать встроенный в GitHub поиск и историю коммитов.
|
||||
|
||||
Не все задачи могут быть выполнены с помощью интерфейса GitHub, но некоторые из них обсуждаются в руководствах для [продвинутых](/ru/docs/contribute/intermediate/) и
|
||||
[опытных](/ru/docs/contribute/advanced/) участников.
|
||||
|
||||
### Участие в документации SIG
|
||||
|
||||
Документация Kubernetes поддерживается {{< glossary_tooltip text="специальной группой" term_id="sig" >}} (Special Interest Group, SIG) под названием SIG Docs. Мы [общаемся](#participate-in-sig-docs-discussions) с помощью канала Slack, списка рассылки и еженедельных видеозвонков. Будем рады новым участникам. Для получения дополнительной информации обратитесь к странице [Участие в SIG Docs](/ru/docs/contribute/participating/).
|
||||
|
||||
### Руководящие принципы по содержанию
|
||||
|
||||
|
@ -40,7 +49,11 @@ card:
|
|||
|
||||
Мы поддерживаем [руководство по оформлению](/docs/contribute/style/style-guide/) с информацией о выборе, сделанном сообществом SIG Docs в отношении грамматики, синтаксиса, исходного форматирования и типографских соглашений. Прежде чем сделать свой первый вклад, просмотрите руководство по стилю и используйте его, когда у вас есть вопросы.
|
||||
|
||||
SIG Docs совместными усилиями вносит изменения в руководство по оформлению. Чтобы предложить изменение или дополнение, добавьте его в повестку дня предстоящей встречи SIG Docs и посетите её, чтобы принять участие в обсуждении. Смотрите руководство для [продвинутых участников](/docs/contribute/advanced/), чтобы получить дополнительную информацию.
|
||||
|
||||
### Шаблоны страниц
|
||||
|
||||
|
@ -56,7 +69,11 @@ SIG Docs совместными усилиями вносит изменения
|
|||
|
||||
Более подробную информацию про участие в работе над документацией на нескольких языках смотрите в разделе ["Localize content"](/docs/contribute/intermediate#localize-content) промежуточного руководства для участников.
|
||||
|
||||
Если вы заинтересованы в переводе документации на новый язык, посмотрите раздел ["Локализация"](/ru/docs/contribute/localization/).
|
||||
|
||||
## Создание хороших заявок
|
||||
|
||||
|
@ -66,7 +83,11 @@ SIG Docs совместными усилиями вносит изменения
|
|||
|
||||
- **Для существующей страницы**
|
||||
|
||||
Если заметили проблему на существующей странице в [документации Kubernetes](/ru/docs/), перейдите в конец страницы и нажмите кнопку **Create an Issue**. Если вы ещё не авторизованы в GitHub, сделайте это. После этого откроется страница с формой для создания нового запроса в GitHub с уже предварительно заполненными полями.
|
||||
|
||||
При помощи разметки Markdown опишите как можно подробнее, что хотите. Там, где вы видите пустые квадратные скобки (`[ ]`), проставьте `x` между скобками. Если у вас есть предлагаемое решение проблемы, напишите его.
|
||||
|
||||
|
@ -106,7 +127,11 @@ SIG Docs совместными усилиями вносит изменения
|
|||
Примечание. Разработчики кода Kubernetes. Если вы документируете новую функцию для предстоящего выпуска Kubernetes, ваш процесс будет немного другим. См. Документирование функции для руководства по процессу и информации о сроках.
|
||||
|
||||
{{< note >}}
|
||||
**Для разработчиков кода Kubernetes**: если вы документируете новую функциональность для новой версии Kubernetes, то процесс рассмотрения будет немного другим. Посетите страницу [Документирование функциональности](/ru/docs/contribute/intermediate/#добавление-документации-для-новой-функциональности), чтобы узнать про процесс и информацию о крайних сроках.
|
||||
{{< /note >}}
|
||||
|
||||
### Подписание CLA-соглашения CNCF {#sign-the-cla}
|
||||
|
@ -116,7 +141,11 @@ SIG Docs совместными усилиями вносит изменения
|
|||
|
||||
### Поиск задач для работы
|
||||
|
||||
Если вы уже нашли, что исправить, просто следуйте инструкциям ниже. Для этого вам не обязательно [создавать ишью](#создание-хороших-заявок) (хотя вы, безусловно, можете пойти этим путём).
|
||||
|
||||
Если вы ещё не определились с тем, над чем хотите поработать, перейдите по адресу [https://github.com/kubernetes/website/issues](https://github.com/kubernetes/website/issues) и найдите ишью с меткой `good first issue` (вы можете использовать [эту](https://github.com/kubernetes/website/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) ссылку для быстрого поиска). Прочитайте комментарии, чтобы убедиться, что нет открытого пулреквеста для решения текущей ишью, а также, что никто другой не оставил комментарий, что он работает над этой задачей в последнее время (как правило, 3 дня). Добавьте комментарий, что вы хотели бы заняться решением этой задачи.
|
||||
|
||||
|
@ -174,7 +203,11 @@ SIG Docs совместными усилиями вносит изменения
|
|||
|
||||
7. Если ваши изменения одобрены, то рецензент объединяет соответствующий пулреквест. Через несколько минут вы сможете сможете увидеть его в действии на сайте Kubernetes.
|
||||
|
||||
Это только один из способов отправить пулреквест. Если вы уже опытный пользователь Git и GitHub, вы можете вносить изменения, используя локальный GUI-клиент или Git из терминала вместо того, чтобы использовать интерфейс GitHub для этого. Некоторые основы использования Git-клиента из командной строки обсуждаются в руководстве для [продвинутого участника](/ru/docs/contribute/intermediate/).
|
||||
|
||||
## Просмотр пулреквестов в документацию
|
||||
|
||||
|
|
|
@ -42,9 +42,13 @@ Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in th
|
|||
|
||||
{{% /capture %}}
|
||||
|
||||
|
||||
{{% capture body %}}
|
||||
|
||||
You describe a _desired state_ in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
|
||||
|
||||
-->
|
||||
描述 Deployment 中的 _desired state_,并且 Deployment 控制器以受控速率更改实际状态,以达到期望状态。可以定义 Deployments 以创建新的 ReplicaSets ,或删除现有 Deployments ,并通过新的 Deployments 使用其所有资源。
|
||||
|
||||
<!--
|
||||
|
||||
## Use Case
|
||||
|
@ -212,7 +216,7 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
|
|||
2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following:
|
||||
-->
|
||||
2. 运行 `kubectl get deployments` 以检查 Deployment 是否已创建。如果仍在创建 Deployment ,则输出以下内容:
|
||||
|
||||
|
||||
```shell
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3 0 0 0 1s
|
||||
|
@ -247,7 +251,7 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
|
|||
3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. The output is similar to this:
|
||||
-->
|
||||
3. 要查看 Deployment 展开状态,运行 `kubectl rollout status deployment.v1.apps/nginx-deployment`。输出:
|
||||
|
||||
|
||||
```shell
|
||||
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
|
||||
deployment.apps/nginx-deployment successfully rolled out
|
||||
|
@ -257,7 +261,7 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
|
|||
4. Run the `kubectl get deployments` again a few seconds later. The output is similar to this:
|
||||
-->
|
||||
4. 几秒钟后再次运行 `kubectl get deployments`。输出:
|
||||
|
||||
|
||||
```shell
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx-deployment 3 3 3 3 18s
|
||||
|
@ -271,7 +275,7 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
|
|||
5. To see the ReplicaSet (`rs`) created by the Deployment, run `kubectl get rs`. The output is similar to this:
|
||||
-->
|
||||
5. 要查看 Deployment 创建的 ReplicaSet (`rs`),运行 `kubectl get rs`。输出:
|
||||
|
||||
|
||||
```shell
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-deployment-75675f5897 3 3 3 18s
|
||||
|
@ -286,7 +290,7 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up
|
|||
6. To see the labels automatically generated for each Pod, run `kubectl get pods --show-labels`. The following output is returned:
|
||||
-->
|
||||
6. 要查看每个 Pod 自动生成的标签,运行 `kubectl get pods --show-labels`。返回以下输出:
|
||||
|
||||
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
|
@ -361,7 +365,7 @@ is changed, for example if the labels or container images of the template are up
|
|||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
||||
```shell
|
||||
deployment.apps/nginx-deployment image updated
|
||||
```
|
||||
|
@ -378,7 +382,7 @@ is changed, for example if the labels or container images of the template are up
|
|||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
|
||||
|
||||
输出:
|
||||
|
||||
```shell
|
||||
|
@ -499,12 +503,11 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
|
|||
* Get details of your Deployment:
|
||||
-->
|
||||
* 获取 Deployment 的更多信息
|
||||
|
||||
```shell
|
||||
kubectl describe deployments
|
||||
```
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -648,7 +651,6 @@ rolled back.
|
|||
* Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
|
||||
-->
|
||||
* 假设在更新 Deployment 时犯了一个拼写错误,将镜像名称命名为 `nginx:1.91` 而不是 `nginx:1.9.1`:
|
||||
|
||||
```shell
|
||||
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.91 --record=true
|
||||
```
|
||||
|
@ -693,7 +695,6 @@ rolled back.
|
|||
* You see that the number of old replicas
|
||||
-->
|
||||
* 查看旧 ReplicaSets :
|
||||
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
|
@ -742,16 +743,15 @@ rolled back.
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
* Get the description of the Deployment:
|
||||
* Get the description of the Deployment:
|
||||
-->
|
||||
* 获取 Deployment 描述信息:
|
||||
|
||||
```shell
|
||||
kubectl describe deployment
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -811,7 +811,7 @@ rolled back.
|
|||
按照如下步骤检查回滚历史:
|
||||
|
||||
<!--
|
||||
1. First, check the revisions of this Deployment:
|
||||
1. First, check the revisions of this Deployment:
|
||||
-->
|
||||
1. 首先,检查 Deployment 修改历史:
|
||||
|
||||
|
@ -819,7 +819,7 @@ rolled back.
|
|||
kubectl rollout history deployment.v1.apps/nginx-deployment
|
||||
```
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -855,7 +855,7 @@ rolled back.
|
|||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -893,7 +893,7 @@ Follow the steps given below to rollback the Deployment from the current version
|
|||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -910,7 +910,7 @@ Follow the steps given below to rollback the Deployment from the current version
|
|||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -939,7 +939,7 @@ Follow the steps given below to rollback the Deployment from the current version
|
|||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -958,7 +958,7 @@ Follow the steps given below to rollback the Deployment from the current version
|
|||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
|
||||
|
@ -1073,7 +1073,6 @@ ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *p
|
|||
* Ensure that the 10 replicas in your Deployment are running.
|
||||
-->
|
||||
* 确保这10个副本都在运行。
|
||||
|
||||
```shell
|
||||
kubectl get deploy
|
||||
```
|
||||
|
@ -1092,7 +1091,6 @@ ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *p
|
|||
* You update to a new image which happens to be unresolvable from inside the cluster.
|
||||
-->
|
||||
* 更新到新镜像,该镜像恰好无法从集群内部解析。
|
||||
|
||||
```shell
|
||||
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:sometag
|
||||
```
|
||||
|
@ -1111,7 +1109,6 @@ ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called *p
|
|||
`maxUnavailable` requirement that you mentioned above. Check out the rollout status:
|
||||
-->
|
||||
* 镜像更新使用 ReplicaSet nginx-deployment-1989198191 启动新的展开,但由于上面提到的最大不可用要求。检查展开状态:
|
||||
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
|
@ -1139,7 +1136,7 @@ ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled
|
|||
<!--
|
||||
In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the
|
||||
new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming
|
||||
the new replicas become healthy. To confirm this, run:
|
||||
the new replicas become healthy. To confirm this, run:
|
||||
-->
|
||||
在上面的示例中,3 个副本添加到旧 ReplicaSet 中,2 个副本添加到新 ReplicaSet 。展开过程最终应将所有副本移动到新的 ReplicaSet ,假定新的副本变得正常。要确认这一点,请运行:
|
||||
|
||||
|
@ -1395,6 +1392,198 @@ apply multiple fixes in between pausing and resuming without triggering unnecess
|
|||
nginx-3926361531 3 3 3 28s
|
||||
```
|
||||
|
||||
<!--
|
||||
You can pause a Deployment before triggering one or more updates and then resume it. This allows you to
|
||||
apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.
|
||||
-->
|
||||
可以在触发一个或多个更新之前暂停 Deployment ,然后继续它。这允许在暂停和恢复之间应用多个修补程序,而不会触发不必要的 Deployment 。
|
||||
|
||||
<!--
|
||||
* For example, with a Deployment that was just created:
|
||||
Get the Deployment details:
|
||||
-->
|
||||
* 例如,对于一个刚刚创建的 Deployment :
|
||||
获取 Deployment 信息:
|
||||
```shell
|
||||
kubectl get deploy
|
||||
```
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
```
|
||||
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
|
||||
nginx 3 3 3 3 1m
|
||||
```
|
||||
|
||||
<!--
|
||||
Get the rollout status:
|
||||
-->
|
||||
获取 Deployment 状态:
|
||||
```shell
|
||||
kubectl get rs
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
```
|
||||
NAME DESIRED CURRENT READY AGE
|
||||
nginx-2142116321 3 3 3 1m
|
||||
```
|
||||
|
||||
<!--
|
||||
* Pause by running the following command:
|
||||
-->
|
||||
使用如下指令中断运行:
|
||||
```shell
|
||||
kubectl rollout pause deployment.v1.apps/nginx-deployment
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出:
|
||||
```
|
||||
deployment.apps/nginx-deployment paused
|
||||
```
|
||||
|
||||
<!--
* Then update the image of the Deployment:
-->
* Then update the image of the Deployment:
```shell
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
```

<!--
The output is similar to this:
-->
Output:
```
deployment.apps/nginx-deployment image updated
```

<!--
* Notice that no new rollout started:
-->
* Notice that no new rollout started:
```shell
kubectl rollout history deployment.v1.apps/nginx-deployment
```

<!--
The output is similar to this:
-->
Output:
```
deployments "nginx"
REVISION  CHANGE-CAUSE
1         <none>
```

<!--
* Get the rollout status to ensure that the Deployment is updated successfully:
-->
* Get the rollout status to ensure that the Deployment is updated successfully:
```shell
kubectl get rs
```

<!--
The output is similar to this:
-->
Output:
```
NAME               DESIRED   CURRENT   READY     AGE
nginx-2142116321   3         3         3         2m
```

<!--
* You can make as many updates as you wish, for example, update the resources that will be used:
-->
* You can make as many updates as you wish; for example, update the resources that will be used:
```shell
kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
```

<!--
The output is similar to this:
-->
Output:
```
deployment.apps/nginx-deployment resource requirements updated
```
<!--
The initial state of the Deployment prior to pausing it will continue its function, but new updates to
the Deployment will not have any effect as long as the Deployment is paused.
-->
The initial state of the Deployment prior to pausing it continues to function, but new updates to the Deployment have no effect as long as the Deployment is paused.

<!--
* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
-->
* Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:
```shell
kubectl rollout resume deployment.v1.apps/nginx-deployment
```

<!--
The output is similar to this:
-->
Output:
```
deployment.apps/nginx-deployment resumed
```

<!--
* Watch the status of the rollout until it's done.
-->
* Watch the status of the rollout until it's done.
```shell
kubectl get rs -w
```

<!--
The output is similar to this:
-->
Output:
```
NAME               DESIRED   CURRENT   READY     AGE
nginx-2142116321   2         2         2         2m
nginx-3926361531   2         2         0         6s
nginx-3926361531   2         2         1         18s
nginx-2142116321   1         2         2         2m
nginx-2142116321   1         2         2         2m
nginx-3926361531   3         2         1         18s
nginx-3926361531   3         2         1         18s
nginx-2142116321   1         1         1         2m
nginx-3926361531   3         3         1         18s
nginx-3926361531   3         3         2         19s
nginx-2142116321   0         1         1         2m
nginx-2142116321   0         1         1         2m
nginx-2142116321   0         0         0         2m
nginx-3926361531   3         3         3         20s
```

<!--
* Get the status of the latest rollout:
-->
* Get the status of the latest rollout:
```shell
kubectl get rs
```

<!--
The output is similar to this:
-->
Output:
```
NAME               DESIRED   CURRENT   READY     AGE
nginx-2142116321   0         0         0         2m
nginx-3926361531   3         3         3         28s
```
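A further way to confirm that the resumed rollout completed, offered here as a hedged alternative to watching the ReplicaSets directly, is the standard `kubectl rollout status` command:

```shell
kubectl rollout status deployment.v1.apps/nginx-deployment
```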
{{< note >}}
<!--
You cannot rollback a paused Deployment until you resume it.
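The diff display cuts off mid-note here. As a hedged illustration of the constraint the note describes, reusing the example Deployment name from above, a rollback has to wait until the Deployment is resumed:

```shell
# Rolling back a paused Deployment has no effect; resume it first, then undo
kubectl rollout resume deployment.v1.apps/nginx-deployment
kubectl rollout undo deployment.v1.apps/nginx-deployment
```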
@ -17,6 +17,7 @@ toc:
- docs/admin/accessing-the-api.md
- docs/admin/authentication.md
- docs/admin/bootstrap-tokens.md
- docs/admin/certificate-signing-requests.md
- docs/admin/admission-controllers.md
- docs/admin/extensible-admission-controllers.md
- docs/admin/service-accounts-admin.md
@ -93,6 +93,7 @@
/docs/concepts/configuration/container-command-arg/ /docs/tasks/inject-data-application/define-command-argument-container/ 301
/docs/concepts/configuration/container-command-args/ /docs/tasks/inject-data-application/define-command-argument-container/ 301
/docs/concepts/configuration/scheduler-perf-tuning/ /docs/concepts/scheduling/scheduler-perf-tuning/ 301
/docs/concepts/configuration/scheduling-framework/ /docs/concepts/scheduling/scheduling-framework/ 301
/docs/concepts/ecosystem/thirdpartyresource/ /docs/tasks/access-kubernetes-api/extend-api-third-party-resource/ 301
/docs/concepts/jobs/cron-jobs/ /docs/concepts/workloads/controllers/cron-jobs/ 301
/docs/concepts/jobs/run-to-completion-finite-workloads/ /docs/concepts/workloads/controllers/jobs-run-to-completion/ 301
@ -502,7 +503,8 @@
/docs/setup/on-premises-vm/dcos/ /docs/setup/production-environment/on-premises-vm/dcos/ 301
/docs/setup/on-premises-vm/ovirt/ /docs/setup/production-environment/on-premises-vm/ovirt/ 301
/docs/setup/windows/intro-windows-in-kubernetes/ /docs/setup/production-environment/windows/intro-windows-in-kubernetes/ 301
/docs/setup/windows/user-guide-windows-nodes/ /docs/setup/production-environment/windows/user-guide-windows-nodes/ 301
/docs/setup/windows/user-guide-windows-nodes/ /docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/ 301
/docs/setup/production-environment/windows/user-guide-windows-nodes/ /docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/ 301
/docs/setup/windows/user-guide-windows-containers/ /docs/setup/production-environment/windows/user-guide-windows-containers/ 301
/docs/setup/multiple-zones/ /docs/setup/best-practices/multiple-zones/ 301
/docs/setup/cluster-large/ /docs/setup/best-practices/cluster-large/ 301
Binary file not shown (image changed: 73 KiB before, 55 KiB after).