Merge remote-tracking branch 'upstream/main' into dev-1.24
commit
e8b19637df
|
@ -771,6 +771,10 @@ figure {
|
|||
max-width: clamp(0vw, 95vw, 100%);
|
||||
max-height: calc(80vh - 8rem);
|
||||
}
|
||||
|
||||
figure + noscript > *{
|
||||
max-width: calc(max(100%, 100vw));
|
||||
}
|
||||
}
|
||||
|
||||
@media only screen and (min-width: 768px) {
|
||||
|
@ -793,6 +797,9 @@ figure {
|
|||
max-height: calc(100vh - 10rem);
|
||||
}
|
||||
}
|
||||
figure + noscript > * {
|
||||
max-width: 80%;
|
||||
}
|
||||
}
|
||||
|
||||
// Indent definition lists
|
||||
|
@ -825,3 +832,13 @@ dl {
|
|||
margin-bottom: 1em;
|
||||
}
|
||||
}
|
||||
|
||||
.no-js .mermaid {
|
||||
display: none;
|
||||
}
|
||||
|
||||
div.alert > em.javascript-required {
|
||||
display: inline-block;
|
||||
min-height: 1.5em;
|
||||
margin: calc(max(4em, ( 8vh + 4em ) / 2)) 0 0.25em 0;
|
||||
}
|
||||
|
|
|
@ -3,9 +3,11 @@ layout: blog
|
|||
title: "Dockershim Deprecation FAQ"
|
||||
date: 2020-12-02
|
||||
slug: dockershim-faq
|
||||
aliases: [ '/dockershim' ]
|
||||
---
|
||||
|
||||
|
||||
_**Update**: There is a [newer version](/blog/2022/02/17/dockershim-faq/) of this article available._
|
||||
|
||||
This document goes over some frequently asked questions regarding the Dockershim
|
||||
deprecation announced as a part of the Kubernetes v1.20 release. For more detail
|
||||
on the deprecation of Docker as a container runtime for Kubernetes kubelets, and
|
||||
|
|
|
@ -101,4 +101,4 @@ questions regardless of experience level or complexity! Our goal is to make sure
|
|||
everyone is educated as much as possible on the upcoming changes. We hope
|
||||
this has answered most of your questions and soothed some anxieties! ❤️
|
||||
|
||||
Looking for more answers? Check out our accompanying [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
|
||||
Looking for more answers? Check out our accompanying [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) _(updated February 2022)_.
|
||||
|
|
|
@ -0,0 +1,192 @@
|
|||
---
|
||||
layout: blog
|
||||
title: 'SIG Node CI Subproject Celebrates Two Years of Test Improvements'
|
||||
date: 2022-02-16
|
||||
slug: sig-node-ci-subproject-celebrates
|
||||
canonicalUrl: https://www.kubernetes.dev/blog/2022/02/16/sig-node-ci-subproject-celebrates-two-years-of-test-improvements/
|
||||
---
|
||||
|
||||
**Authors:** Sergey Kanzhelev (Google), Elana Hashman (Red Hat)
|
||||
|
||||
Ensuring the reliability of SIG Node upstream code is a continuous effort
|
||||
that takes a lot of behind-the-scenes effort from many contributors.
|
||||
There are frequent releases of Kubernetes, base operating systems,
|
||||
container runtimes, and test infrastructure that result in a complex matrix that
|
||||
requires attention and steady investment to "keep the lights on."
|
||||
In May 2020, the Kubernetes node special interest group ("SIG Node") organized a new
|
||||
subproject for continuous integration (CI) for node-related code and tests. Since its
|
||||
inauguration, the SIG Node CI subproject has run a weekly meeting, and even the full hour
|
||||
is often not enough to complete triage of all bugs, test-related PRs and issues, and discuss all
|
||||
related ongoing work within the subgroup.
|
||||
|
||||
Over the past two years, we've fixed merge-blocking and release-blocking tests, reducing time to merge Kubernetes contributors' pull requests thanks to reduced test flakes. When we started, Node test jobs only passed 42% of the time, and through our efforts, we now ensure a consistent >90% job pass rate. We've closed 144 test failure issues and merged 176 pull requests just in kubernetes/kubernetes. And we've helped subproject participants ascend the Kubernetes contributor ladder, with 3 new org members, 6 new reviewers, and 2 new approvers.
|
||||
|
||||
The Node CI subproject is an approachable first stop to help new contributors
|
||||
get started with SIG Node. There is a low barrier to entry for new contributors
|
||||
to address high-impact bugs and test fixes, although there is a long
|
||||
road before contributors can climb the entire contributor ladder:
|
||||
it took over a year to establish two new approvers for the group.
|
||||
The complexity of all the different components that power Kubernetes nodes
|
||||
and its test infrastructure requires a sustained investment over a long period
|
||||
for developers to deeply understand the entire system,
|
||||
both at high and low levels of detail.
|
||||
|
||||
We have several regular contributors at our meetings; however, our reviewers
|
||||
and approvers pool is still small. It is our goal to continue to grow
|
||||
contributors to ensure a sustainable distribution of work
|
||||
that does not just fall to a few key approvers.
|
||||
|
||||
It's not always obvious how subprojects within SIGs are formed, operate,
|
||||
and work. Each is unique to its sponsoring SIG and tailored to the projects
|
||||
that the group is intended to support. As a group that has welcomed many
|
||||
first-time SIG Node contributors, we'd like to share some of the details and
|
||||
accomplishments over the past two years,
|
||||
helping to demystify our inner workings and celebrate the hard work
|
||||
of all our dedicated contributors!
|
||||
|
||||
## Timeline
|
||||
|
||||
***May 2020.*** SIG Node CI group was formed on May 11, 2020, with more than
|
||||
[30 volunteers](https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#bookmark=id.vsb8pqnf4gib)
|
||||
signed up to improve SIG Node CI signal and overall observability.
|
||||
Victor Pickard focused on getting
|
||||
[testgrid jobs](https://testgrid.k8s.io/sig-node) passing
|
||||
when Ning Liao suggested forming a group around this effort and came up with
|
||||
the [original group charter document](https://docs.google.com/document/d/1yS-XoUl6GjZdjrwxInEZVHhxxLXlTIX2CeWOARmD8tY/edit#heading=h.te6sgum6s8uf).
|
||||
The SIG Node chairs sponsored group creation with Victor as a subproject lead.
|
||||
Sergey Kanzhelev joined Victor shortly after as a co-lead.
|
||||
|
||||
At the kick-off meeting, we discussed which tests to concentrate on fixing first
|
||||
and discussed merge-blocking and release-blocking tests, many of which were failing due
|
||||
to infrastructure issues or buggy test code.
|
||||
|
||||
The subproject launched weekly hour-long meetings for ongoing work
|
||||
discussion and triage.
|
||||
|
||||
***June 2020.*** Morgan Bauer, Karan Goel, and Jorge Alarcon Ochoa were
|
||||
recognized as reviewers for the SIG Node CI group for their contributions,
|
||||
helping significantly with the early stages of the subproject.
|
||||
David Porter and Roy Yang also joined the SIG test failures GitHub team.
|
||||
|
||||
***August 2020.*** All merge-blocking and release-blocking tests were passing,
|
||||
with some flakes. However, only 42% of all SIG Node test jobs were green, as there
|
||||
were many flakes and failing tests.
|
||||
|
||||
***October 2020.*** Amim Knabben becomes a Kubernetes org member for his
|
||||
contributions to the subproject.
|
||||
|
||||
***January 2021.*** With healthy presubmit and critical periodic jobs passing,
|
||||
the subproject discussed its goal for cleaning up the rest of periodic tests
|
||||
and ensuring they passed without flakes.
|
||||
|
||||
Elana Hashman joined the subproject, stepping up to help lead it after
|
||||
Victor's departure.
|
||||
|
||||
***February 2021.*** Artyom Lukianov becomes a Kubernetes org member for his
|
||||
contributions to the subproject.
|
||||
|
||||
***August 2021.*** After SIG Node successfully ran a [bug scrub](https://groups.google.com/g/kubernetes-dev/c/w2ghO4ihje0/m/VeEql1LJBAAJ)
|
||||
to clean up its bug backlog, the scope of the meeting was extended to
|
||||
include bug triage to increase overall reliability, anticipating issues
|
||||
before they affect the CI signal.
|
||||
|
||||
Subproject leads Elana Hashman and Sergey Kanzhelev are both recognized as
|
||||
approvers on all node test code, supported by SIG Node and SIG Testing.
|
||||
|
||||
***September 2021.*** After significant deflaking progress with serial tests in
|
||||
the 1.22 release spearheaded by Francesco Romani, the subproject set a goal
|
||||
for getting the serial job fully passing by the 1.23 release date.
|
||||
|
||||
Mike Miranda becomes a Kubernetes org member for his contributions
|
||||
to the subproject.
|
||||
|
||||
***November 2021.*** Throughout 2021, SIG Node had no merge or
|
||||
release-blocking test failures. Many flaky tests from past releases were removed
|
||||
from release-blocking dashboards as they had been fully cleaned up.
|
||||
|
||||
Danielle Lancashire was recognized as a reviewer for SIG Node's test code.
|
||||
|
||||
The final node serial tests were completely fixed. The serial tests consist of
|
||||
many disruptive and slow tests, which tend to be flaky and are hard
|
||||
to troubleshoot. By the 1.23 release freeze, the last serial tests were
|
||||
fixed and the job was passing without flakes.
|
||||
|
||||
[![Slack announcement that Serial tests are green](serial-tests-green.png)](https://kubernetes.slack.com/archives/C0BP8PW9G/p1638211041322900)
|
||||
|
||||
The 1.23 release got a special shout-out for its test quality and CI signal.
|
||||
The SIG Node CI subproject was proud to have helped contribute to such
|
||||
a high-quality release, in part due to our efforts in identifying
|
||||
and fixing flakes in Node and beyond.
|
||||
|
||||
[![Slack shoutout that release was mostly green](release-mostly-green.png)](https://kubernetes.slack.com/archives/C92G08FGD/p1637175755023200)
|
||||
|
||||
***December 2021.*** An estimated 90% of test jobs were passing at the time of
|
||||
the 1.23 release (up from 42% in August 2020).
|
||||
|
||||
Dockershim code was removed from Kubernetes. This affected nearly half of SIG Node's
|
||||
test jobs and the SIG Node CI subproject reacted quickly and retargeted all the
|
||||
tests. SIG Node was the first SIG to complete test migrations off dockershim,
|
||||
providing examples for other affected SIGs. The vast majority of new jobs passed
|
||||
at the time of introduction without further fixes required. The [effort of
|
||||
removing dockershim](https://k8s.io/dockershim) from Kubernetes is ongoing.
|
||||
There are still some wrinkles from the dockershim removal as we uncover more
|
||||
dependencies on dockershim, but we plan to stabilize all test jobs
|
||||
by the 1.24 release.
|
||||
|
||||
## Statistics
|
||||
|
||||
Our regular meeting attendees and subproject participants for the past few months:
|
||||
|
||||
- Aditi Sharma
|
||||
- Artyom Lukianov
|
||||
- Arnaud Meukam
|
||||
- Danielle Lancashire
|
||||
- David Porter
|
||||
- Davanum Srinivas
|
||||
- Elana Hashman
|
||||
- Francesco Romani
|
||||
- Matthias Bertschy
|
||||
- Mike Miranda
|
||||
- Paco Xu
|
||||
- Peter Hunt
|
||||
- Ruiwen Zhao
|
||||
- Ryan Phillips
|
||||
- Sergey Kanzhelev
|
||||
- Skyler Clark
|
||||
- Swati Sehgal
|
||||
- Wenjun Wu
|
||||
|
||||
The [kubernetes/test-infra](https://github.com/kubernetes/test-infra/) source code repository contains test definitions. The number of
|
||||
Node PRs just in that repository:
|
||||
- 2020 PRs (since May): [183](https://github.com/kubernetes/test-infra/pulls?q=is%3Apr+is%3Aclosed+label%3Asig%2Fnode+created%3A2020-05-01..2020-12-31+-author%3Ak8s-infra-ci-robot+)
|
||||
- 2021 PRs: [264](https://github.com/kubernetes/test-infra/pulls?q=is%3Apr+is%3Aclosed+label%3Asig%2Fnode+created%3A2021-01-01..2021-12-31+-author%3Ak8s-infra-ci-robot+)
|
||||
|
||||
Triaged issues and PRs on CI board (including triaging away from the subgroup scope):
|
||||
|
||||
- 2020 (since May): [132](https://github.com/issues?q=project%3Akubernetes%2F43+created%3A2020-05-01..2020-12-31)
|
||||
- 2021: [532](https://github.com/issues?q=project%3Akubernetes%2F43+created%3A2021-01-01..2021-12-31+)
|
||||
|
||||
## Future
|
||||
|
||||
Just "keeping the lights on" is a bold task and we are committed to improving this experience.
|
||||
We are working to simplify the triage and review processes for SIG Node.
|
||||
|
||||
Specifically, we are working on better test organization, naming,
|
||||
and tracking:
|
||||
|
||||
- https://github.com/kubernetes/enhancements/pull/3042
|
||||
- https://github.com/kubernetes/test-infra/issues/24641
|
||||
- [Kubernetes SIG-Node CI Testgrid Tracker](https://docs.google.com/spreadsheets/d/1IwONkeXSc2SG_EQMYGRSkfiSWNk8yWLpVhPm-LOTbGM/edit#gid=0)
|
||||
|
||||
We are also constantly making progress on improving test debuggability and de-flaking.
|
||||
|
||||
If any of this interests you, we'd love for you to join us!
|
||||
There's plenty to learn in debugging test failures, and it will help you gain
|
||||
familiarity with the code that SIG Node maintains.
|
||||
|
||||
You can always find information about the group on the
|
||||
[SIG Node](https://github.com/kubernetes/community/tree/master/sig-node) page.
|
||||
We give group updates at our maintainer track sessions, such as
|
||||
[KubeCon + CloudNativeCon Europe 2021](https://kccnceu2021.sched.com/event/iE8E/kubernetes-sig-node-intro-and-deep-dive-elana-hashman-red-hat-sergey-kanzhelev-google) and
|
||||
[KubeCon + CloudNativeCon North America 2021](https://kccncna2021.sched.com/event/lV9D/kubenetes-sig-node-intro-and-deep-dive-elana-hashman-derek-carr-red-hat-sergey-kanzhelev-dawn-chen-google?iframe=no&w=100%&sidebar=yes&bg=no).
|
||||
Join us in our mission to keep the kubelet and other SIG Node components reliable and ensure smooth and uneventful releases!
|
Binary file not shown.
After Width: | Height: | Size: 99 KiB |
Binary file not shown.
After Width: | Height: | Size: 58 KiB |
|
@ -0,0 +1,205 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "Updated: Dockershim Removal FAQ"
|
||||
date: 2022-02-17
|
||||
slug: dockershim-faq
|
||||
aliases: [ '/dockershim' ]
|
||||
---
|
||||
|
||||
**This is an update to the original [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/) article,
|
||||
published in late 2020.**
|
||||
|
||||
This document goes over some frequently asked questions regarding the
|
||||
deprecation and removal of _dockershim_, which was
|
||||
[announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/)
|
||||
as a part of the Kubernetes v1.20 release. For more detail
|
||||
on what that means, check out the blog post
|
||||
[Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/).
|
||||
|
||||
Also, you can read [check whether dockershim removal affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)
|
||||
to determine how much impact the removal of dockershim would have on you
|
||||
or for your organization.
|
||||
|
||||
As the Kubernetes 1.24 release approaches, we've been working hard to make this a smooth transition.
|
||||
|
||||
- We've written a blog post detailing our [commitment and next steps](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/).
|
||||
- We believe there are no major blockers to migration to [other container runtimes](/docs/setup/production-environment/container-runtimes/#container-runtimes).
|
||||
- There is also a [Migrating from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/) guide available.
|
||||
- We've also created a page to list
|
||||
[articles on dockershim removal and on using CRI-compatible runtimes](/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/).
|
||||
That list includes some of the already mentioned docs, and also covers selected external sources
|
||||
(including vendor guides).
|
||||
|
||||
### Why is the dockershim being removed from Kubernetes?
|
||||
|
||||
Early versions of Kubernetes only worked with a specific container runtime:
|
||||
Docker Engine. Later, Kubernetes added support for working with other container runtimes.
|
||||
The CRI standard was [created](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) to
|
||||
enable interoperability between orchestrators (like Kubernetes) and many different container
|
||||
runtimes.
|
||||
Docker Engine doesn't implement that interface (CRI), so the Kubernetes project created
|
||||
special code to help with the transition, and made that _dockershim_ code part of Kubernetes
|
||||
itself.
|
||||
|
||||
The dockershim code was always intended to be a temporary solution (hence the name: shim).
|
||||
You can read more about the community discussion and planning in the
|
||||
[Dockershim Removal Kubernetes Enhancement Proposal][drkep].
|
||||
In fact, maintaining dockershim had become a heavy burden on the Kubernetes maintainers.
|
||||
|
||||
Additionally, features that were largely incompatible with the dockershim, such
|
||||
as cgroups v2 and user namespaces are being implemented in these newer CRI
|
||||
runtimes. Removing support for the dockershim will allow further development in
|
||||
those areas.
|
||||
|
||||
[drkep]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim
|
||||
|
||||
### Can I still use Docker Engine in Kubernetes 1.23?
|
||||
|
||||
Yes, the only thing that changed in 1.20 is a single warning log printed at [kubelet]
|
||||
startup if using Docker Engine as the runtime. You'll see this warning in all versions up to 1.23. The dockershim removal occurs in Kubernetes 1.24.
|
||||
|
||||
[kubelet]: /docs/reference/command-line-tools-reference/kubelet/
|
||||
|
||||
### When will dockershim be removed?
|
||||
|
||||
Given the impact of this change, we are using an extended deprecation timeline.
|
||||
Removal of dockershim is scheduled for Kubernetes v1.24, see [Dockershim Removal Kubernetes Enhancement Proposal][drkep].
|
||||
The Kubernetes project will be working closely with vendors and other ecosystem groups to ensure
|
||||
a smooth transition and will evaluate things as the situation evolves.
|
||||
|
||||
### Can I still use Docker Engine as my container runtime?
|
||||
|
||||
First off, if you use Docker on your own PC to develop or test containers: nothing changes.
|
||||
You can still use Docker locally no matter what container runtime(s) you use for your
|
||||
Kubernetes clusters. Containers make this kind of interoperability possible.
|
||||
|
||||
Mirantis and Docker have [committed][mirantis] to maintaining a replacement adapter for
|
||||
Docker Engine, and to maintain that adapter even after the in-tree dockershim is removed
|
||||
from Kubernetes. The replacement adapter is named [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
|
||||
|
||||
[mirantis]: https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/
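
As an illustration only, here is a minimal sketch of pointing a kubelet at `cri-dockerd` instead of the built-in dockershim; the socket path is an assumption based on `cri-dockerd`'s defaults, so check the project's documentation for your setup:

```shell
# Sketch only: kubelet flags for using Docker Engine through cri-dockerd.
# Assumes cri-dockerd is installed and listening on its default Unix socket.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
```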
|
||||
|
||||
### Will my existing container images still work?
|
||||
|
||||
Yes, the images produced from `docker build` will work with all CRI implementations.
|
||||
All your existing images will still work exactly the same.
|
||||
|
||||
#### What about private images?
|
||||
|
||||
Yes. All CRI runtimes support the same pull secrets configuration used in
|
||||
Kubernetes, either via the PodSpec or ServiceAccount.
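
For example, a Pod that pulls from a private registry keeps using the familiar `imagePullSecrets` field; this minimal sketch uses hypothetical image and Secret names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod                       # hypothetical Pod name
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.0    # hypothetical private image
  imagePullSecrets:
  - name: my-registry-credentials               # hypothetical Secret holding registry credentials
```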
|
||||
|
||||
### Are Docker and containers the same thing?
|
||||
|
||||
Docker popularized the Linux containers pattern and has been instrumental in
|
||||
developing the underlying technology; however, containers in Linux have existed
|
||||
for a long time. The container ecosystem has grown to be much broader than just
|
||||
Docker. Standards like OCI and CRI have helped many tools grow and thrive in our
|
||||
ecosystem, some replacing aspects of Docker while others enhance existing
|
||||
functionality.
|
||||
|
||||
### Are there examples of folks using other runtimes in production today?
|
||||
|
||||
All artifacts produced by the Kubernetes project (Kubernetes binaries) are validated
|
||||
with each release.
|
||||
|
||||
Additionally, the [kind] project has been using containerd for some time and has
|
||||
seen an improvement in stability for its use case. Kind and containerd are leveraged
|
||||
multiple times every day to validate any changes to the Kubernetes codebase. Other
|
||||
related projects follow a similar pattern as well, demonstrating the stability and
|
||||
usability of other container runtimes. As an example, OpenShift 4.x has been
|
||||
using the [CRI-O] runtime in production since June 2019.
|
||||
|
||||
For other examples and references you can look at the adopters of containerd and
|
||||
CRI-O, two container runtimes under the Cloud Native Computing Foundation ([CNCF]).
|
||||
|
||||
- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md)
|
||||
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md)
|
||||
|
||||
[CRI-O]: https://cri-o.io/
|
||||
[kind]: https://kind.sigs.k8s.io/
|
||||
[CNCF]: https://cncf.io
|
||||
|
||||
### People keep referencing OCI, what is that?
|
||||
|
||||
OCI stands for the [Open Container Initiative], which standardized many of the
|
||||
interfaces between container tools and technologies. They maintain a standard
|
||||
specification for packaging container images (OCI image-spec) and running containers
|
||||
(OCI runtime-spec). They also maintain an actual implementation of the runtime-spec
|
||||
in the form of [runc], which is the underlying default runtime for both
|
||||
[containerd] and [CRI-O]. The CRI builds on these low-level specifications to
|
||||
provide an end-to-end standard for managing containers.
|
||||
|
||||
[Open Container Initiative]: https://opencontainers.org/about/overview/
|
||||
[runc]: https://github.com/opencontainers/runc
|
||||
[containerd]: https://containerd.io/
|
||||
|
||||
### Which CRI implementation should I use?
|
||||
|
||||
That’s a complex question and it depends on a lot of factors. If Docker is
|
||||
working for you, moving to containerd should be a relatively easy swap and
|
||||
will have strictly better performance and less overhead. However, we encourage you
|
||||
to explore all the options from the [CNCF landscape] in case another would be an
|
||||
even better fit for your environment.
|
||||
|
||||
[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category
|
||||
|
||||
### What should I look out for when changing CRI implementations?
|
||||
|
||||
While the underlying containerization code is the same between Docker and most
|
||||
CRIs (including containerd), there are a few differences around the edges. Some
|
||||
common things to consider when migrating are:
|
||||
|
||||
- Logging configuration
|
||||
- Runtime resource limitations
|
||||
- Node provisioning scripts that call docker or use docker via its control socket
|
||||
- Kubectl plugins that require the docker CLI or the control socket
|
||||
- Tools from the Kubernetes project that require direct access to Docker Engine
|
||||
(for example: the deprecated `kube-imagepuller` tool)
|
||||
- Configuration of functionality like `registry-mirrors` and insecure registries
|
||||
- Other support scripts or daemons that expect Docker Engine to be available and are run
|
||||
outside of Kubernetes (for example, monitoring or security agents)
|
||||
- GPUs or special hardware and how they integrate with your runtime and Kubernetes
|
||||
|
||||
If you use Kubernetes resource requests/limits or file-based log collection
|
||||
DaemonSets, then they will continue to work the same, but if you’ve customized
|
||||
your `dockerd` configuration, you’ll need to adapt that for your new container
|
||||
runtime where possible.
|
||||
|
||||
Another thing to look out for is that anything which expects to run Docker for system maintenance, or nested inside a container when building images, will no longer work. For the
|
||||
former, you can use the [`crictl`][cr] tool as a drop-in replacement (see [mapping from docker cli to crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) and for the
|
||||
latter you can use newer container build options like [img], [buildah],
|
||||
[kaniko], or [buildkit-cli-for-kubectl] that don’t require Docker.
|
||||
|
||||
[cr]: https://github.com/kubernetes-sigs/cri-tools
|
||||
[img]: https://github.com/genuinetools/img
|
||||
[buildah]: https://github.com/containers/buildah
|
||||
[kaniko]: https://github.com/GoogleContainerTools/kaniko
|
||||
[buildkit-cli-for-kubectl]: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl
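
As a rough illustration of that drop-in mapping (the full table is in the linked document, and output formats differ slightly):

```shell
# Common docker commands and their crictl equivalents.
crictl ps                           # docker ps
crictl images                       # docker images
crictl logs <container-id>          # docker logs <container-id>
crictl exec -it <container-id> sh   # docker exec -it <container-id> sh
```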
|
||||
|
||||
For containerd, you can start with their [documentation] to see what configuration
|
||||
options are available as you migrate things over.
|
||||
|
||||
[documentation]: https://github.com/containerd/cri/blob/master/docs/registry.md
|
||||
|
||||
For instructions on how to use containerd and CRI-O with Kubernetes, see the
|
||||
Kubernetes documentation on [Container Runtimes].
|
||||
|
||||
[Container Runtimes]: /docs/setup/production-environment/container-runtimes/
|
||||
|
||||
### What if I have more questions?
|
||||
|
||||
If you use a vendor-supported Kubernetes distribution, you can ask them about
|
||||
upgrade plans for their products. For end-user questions, please post them
|
||||
to our end user community forum: https://discuss.kubernetes.io/.
|
||||
|
||||
You can also check out the excellent blog post
|
||||
[Wait, Docker is deprecated in Kubernetes now?][dep] for a more in-depth technical
|
||||
discussion of the changes.
|
||||
|
||||
[dep]: https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m
|
||||
|
||||
### Can I have a hug?
|
||||
|
||||
Yes, we're still giving hugs as requested. 🤗🤗🤗
|
|
@ -21,7 +21,6 @@ This page lists some of the available add-ons and links to their respective inst
|
|||
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
|
||||
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
|
||||
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
|
||||
* [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
|
||||
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
|
||||
* [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes.
|
||||
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.
|
||||
|
|
|
@ -66,7 +66,7 @@ with `--tracing-config-file=<path-to-config>`. This is an example config that re
|
|||
spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
|
||||
|
||||
```yaml
|
||||
apiVersion: apiserver.config.k8s.io/v1beta1
|
||||
apiVersion: apiserver.config.k8s.io/v1alpha1
|
||||
kind: TracingConfiguration
|
||||
# default value
|
||||
#endpoint: localhost:4317
|
||||
|
@ -74,7 +74,7 @@ samplingRatePerMillion: 100
|
|||
```
|
||||
|
||||
For more information about the `TracingConfiguration` struct, see
|
||||
[API server config API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration).
|
||||
[API server config API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration).
|
||||
|
||||
## Stability
|
||||
|
||||
|
|
|
@ -231,7 +231,7 @@ The kubelet reports the resource usage of a Pod as part of the Pod
|
|||
|
||||
If optional [tools for monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
|
||||
are available in your cluster, then Pod resource usage can be retrieved either
|
||||
from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#the-metrics-api)
|
||||
from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-api)
|
||||
directly or from your monitoring tools.
|
||||
|
||||
## Local ephemeral storage
|
||||
|
|
|
@ -111,6 +111,7 @@ Operator.
|
|||
{{% thirdparty-content %}}
|
||||
|
||||
* [Charmed Operator Framework](https://juju.is/)
|
||||
* [Kopf](https://github.com/nolar/kopf) (Kubernetes Operator Pythonic Framework)
|
||||
* [kubebuilder](https://book.kubebuilder.io/)
|
||||
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
|
||||
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
|
||||
|
|
|
@ -63,7 +63,7 @@ One way to create a Deployment using a `.yaml` file like the one above is to use
|
|||
in the `kubectl` command-line interface, passing the `.yaml` file as an argument. Here's an example:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment.yaml --record
|
||||
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
|
|
@ -339,7 +339,7 @@ If that has happened, or you suspect that it might have, you can retry expansion
|
|||
size that is within the capacity limits of underlying storage provider. You can monitor status of resize operation by watching `.status.resizeStatus` and events on the PVC.
|
||||
|
||||
Note that,
|
||||
although you can a specify a lower amount of storage than what was requested previously,
|
||||
although you can specify a lower amount of storage than what was requested previously,
|
||||
the new value must still be higher than `.status.capacity`.
|
||||
Kubernetes does not support shrinking a PVC to less than its current size.
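
As a sketch of what retrying an expansion can look like (hypothetical PVC name and size; the new size must still be larger than `.status.capacity`):

```shell
# Retry the expansion with a smaller, but still valid, size.
kubectl patch pvc example-pvc -p '{"spec":{"resources":{"requests":{"storage":"15Gi"}}}}'

# Monitor the resize status and events on the PVC.
kubectl get pvc example-pvc -o jsonpath='{.status.resizeStatus}'
kubectl describe pvc example-pvc
```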
|
||||
{{% /tab %}}
|
||||
|
|
|
@ -49,7 +49,7 @@ metadata:
|
|||
name: standard
|
||||
provisioner: kubernetes.io/aws-ebs
|
||||
parameters:
|
||||
type: gp2
|
||||
type: gp3
|
||||
reclaimPolicy: Retain
|
||||
allowVolumeExpansion: true
|
||||
mountOptions:
|
||||
|
@ -271,9 +271,9 @@ parameters:
|
|||
fsType: ext4
|
||||
```
|
||||
|
||||
* `type`: `io1`, `gp2`, `sc1`, `st1`. See
|
||||
* `type`: `io1`, `gp2`, `gp3`, `sc1`, `st1`. See
|
||||
[AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
|
||||
for details. Default: `gp2`.
|
||||
for details. Default: `gp3`.
|
||||
* `zone` (Deprecated): AWS zone. If neither `zone` nor `zones` is specified, volumes are
|
||||
generally round-robin-ed across all active zones where Kubernetes cluster
|
||||
has a node. `zone` and `zones` parameters must not be used at the same time.
|
||||
|
|
|
@ -86,6 +86,7 @@ Responsibilities for New Contributor Ambassadors include:
|
|||
- Mentoring new contributors through their first few PRs to the docs repo.
|
||||
- Helping new contributors create the more complex PRs they need to become Kubernetes members.
|
||||
- [Sponsoring contributors](/docs/contribute/advanced/#sponsor-a-new-contributor) on their path to becoming Kubernetes members.
|
||||
- Hosting a monthly meeting to help and mentor new contributors.
|
||||
|
||||
Current New Contributor Ambassadors are announced at each SIG-Docs meeting and in the [Kubernetes #sig-docs channel](https://kubernetes.slack.com).
|
||||
|
||||
|
|
|
@ -60,7 +60,7 @@ cards:
|
|||
title: K8s Release Notes
|
||||
description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes.
|
||||
button: "Download Kubernetes"
|
||||
button_path: "/docs/setup/release/notes"
|
||||
button_path: "/releases/download"
|
||||
- name: about
|
||||
title: About the documentation
|
||||
description: This website contains documentation for the current and previous 4 versions of Kubernetes.
|
||||
|
|
|
@ -74,6 +74,7 @@ by the API server in a RESTful way though they are essential for a user or an
|
|||
operator to use or manage a cluster.
|
||||
|
||||
|
||||
* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/)
|
||||
* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
|
||||
* [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/)
|
||||
* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) and
|
||||
|
|
|
@ -401,7 +401,7 @@ different Kubernetes components.
|
|||
| `PodShareProcessNamespace` | `true` | Beta | 1.12 | 1.16 |
|
||||
| `PodShareProcessNamespace` | `true` | GA | 1.17 | - |
|
||||
| `RequestManagement` | `false` | Alpha | 1.15 | 1.16 |
|
||||
| `RequestManagement` | - | Derecated | 1.17 | - |
|
||||
| `RequestManagement` | - | Deprecated | 1.17 | - |
|
||||
| `ResourceLimitsPriorityFunction` | `false` | Alpha | 1.9 | 1.18 |
|
||||
| `ResourceLimitsPriorityFunction` | - | Deprecated | 1.19 | - |
|
||||
| `ResourceQuotaScopeSelectors` | `false` | Alpha | 1.11 | 1.11 |
|
||||
|
@ -816,7 +816,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
|
|||
and gracefully terminate pods running on the node. See
|
||||
[Graceful Node Shutdown](/docs/concepts/architecture/nodes/#graceful-node-shutdown)
|
||||
for more details.
|
||||
= `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
|
||||
- `GracefulNodeShutdownBasedOnPodPriority`: Enables the kubelet to check Pod priorities
|
||||
when shutting down a node gracefully.
|
||||
- `GRPCContainerProbe`: Enables the gRPC probe method for {Liveness,Readiness,Startup}Probe. See [Configure Liveness, Readiness and Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe).
|
||||
- `HonorPVReclaimPolicy`: Honor persistent volume reclaim policy when it is `Delete` irrespective of PV-PVC deletion ordering.
|
||||
|
|
|
@ -946,7 +946,7 @@ kube-apiserver [flags]
|
|||
<td colspan="2">--service-account-key-file strings</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key is provided</p></td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. Must be specified when --service-account-signing-key-file is provided</p></td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
|
|
|
@ -0,0 +1,375 @@
|
|||
---
|
||||
title: kube-apiserver Configuration (v1alpha1)
|
||||
content_type: tool-reference
|
||||
package: apiserver.k8s.io/v1alpha1
|
||||
auto_generated: true
|
||||
---
|
||||
<p>Package v1alpha1 is the v1alpha1 version of the API.</p>
|
||||
|
||||
|
||||
## Resource Types
|
||||
|
||||
|
||||
- [AdmissionConfiguration](#apiserver-k8s-io-v1alpha1-AdmissionConfiguration)
|
||||
- [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration)
|
||||
- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)
|
||||
|
||||
|
||||
|
||||
## `AdmissionConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration}
|
||||
|
||||
|
||||
|
||||
<p>AdmissionConfiguration provides versioned configuration for admission controllers.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.k8s.io/v1alpha1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>AdmissionConfiguration</code></td></tr>
|
||||
|
||||
|
||||
<tr><td><code>plugins</code><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-AdmissionPluginConfiguration"><code>[]AdmissionPluginConfiguration</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Plugins allows specifying a configuration per admission control plugin.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration}
|
||||
|
||||
|
||||
|
||||
<p>EgressSelectorConfiguration provides versioned configuration for egress selector clients.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.k8s.io/v1alpha1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>EgressSelectorConfiguration</code></td></tr>
|
||||
|
||||
|
||||
<tr><td><code>egressSelections</code> <B>[Required]</B><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-EgressSelection"><code>[]EgressSelection</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>connectionServices contains a list of egress selection client configurations</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `TracingConfiguration` {#apiserver-k8s-io-v1alpha1-TracingConfiguration}
|
||||
|
||||
|
||||
|
||||
<p>TracingConfiguration provides versioned configuration for tracing clients.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.k8s.io/v1alpha1</code></td></tr>
|
||||
<tr><td><code>kind</code><br/>string</td><td><code>TracingConfiguration</code></td></tr>
|
||||
|
||||
|
||||
<tr><td><code>endpoint</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Endpoint of the collector that's running on the control-plane node.
|
||||
The APIServer uses the egressType ControlPlane when sending data to the collector.
|
||||
The syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md.
|
||||
Defaults to the otlpgrpc default, localhost:4317
|
||||
The connection is insecure, and does not support TLS.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>samplingRatePerMillion</code><br/>
|
||||
<code>int32</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>SamplingRatePerMillion is the number of samples to collect per million spans.
|
||||
Defaults to 0.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `AdmissionPluginConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionPluginConfiguration}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [AdmissionConfiguration](#apiserver-k8s-io-v1alpha1-AdmissionConfiguration)
|
||||
|
||||
|
||||
<p>AdmissionPluginConfiguration provides the configuration for a single plug-in.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>name</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Name is the name of the admission controller.
|
||||
It must match the registered admission plugin name.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>path</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>Path is the path to a configuration file that contains the plugin's
|
||||
configuration</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>configuration</code><br/>
|
||||
<a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Configuration is an embedded configuration object to be used as the plugin's
|
||||
configuration. If present, it will be used instead of the path to the configuration file.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `Connection` {#apiserver-k8s-io-v1alpha1-Connection}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [EgressSelection](#apiserver-k8s-io-v1alpha1-EgressSelection)
|
||||
|
||||
|
||||
<p>Connection provides the configuration for a single egress selection client.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>proxyProtocol</code> <B>[Required]</B><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-ProtocolType"><code>ProtocolType</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Protocol is the protocol used to connect from client to the konnectivity server.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>transport</code><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-Transport"><code>Transport</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>Transport defines the transport configurations we use to dial to the konnectivity server.
|
||||
This is required if ProxyProtocol is HTTPConnect or GRPC.</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `EgressSelection` {#apiserver-k8s-io-v1alpha1-EgressSelection}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration)
|
||||
|
||||
|
||||
<p>EgressSelection provides the configuration for a single egress selection client.</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>name</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>name is the name of the egress selection.
|
||||
Currently supported values are "controlplane", "master", "etcd" and "cluster"
|
||||
The "master" egress selector is deprecated in favor of "controlplane"</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>connection</code> <B>[Required]</B><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-Connection"><code>Connection</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>connection is the exact information used to configure the egress selection</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `ProtocolType` {#apiserver-k8s-io-v1alpha1-ProtocolType}
|
||||
|
||||
(Alias of `string`)
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [Connection](#apiserver-k8s-io-v1alpha1-Connection)
|
||||
|
||||
|
||||
<p>ProtocolType is a set of valid values for Connection.ProtocolType</p>
|
||||
|
||||
|
||||
|
||||
|
||||
## `TCPTransport` {#apiserver-k8s-io-v1alpha1-TCPTransport}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [Transport](#apiserver-k8s-io-v1alpha1-Transport)
|
||||
|
||||
|
||||
<p>TCPTransport provides the information to connect to konnectivity server via TCP</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>url</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>URL is the location of the konnectivity server to connect to.
|
||||
As an example it might be "https://127.0.0.1:8131"</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>tlsConfig</code><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-TLSConfig"><code>TLSConfig</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>TLSConfig is the config needed to use TLS when connecting to konnectivity server</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `TLSConfig` {#apiserver-k8s-io-v1alpha1-TLSConfig}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [TCPTransport](#apiserver-k8s-io-v1alpha1-TCPTransport)
|
||||
|
||||
|
||||
<p>TLSConfig provides the authentication information to connect to konnectivity server
|
||||
Only used with TCPTransport</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>caBundle</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>caBundle is the file location of the CA to be used to determine trust with the konnectivity server.
|
||||
Must be absent/empty if TCPTransport.URL is prefixed with http://
|
||||
If absent while TCPTransport.URL is prefixed with https://, default to system trust roots.</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>clientKey</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>clientKey is the file location of the client key to be used in mtls handshakes with the konnectivity server.
|
||||
Must be absent/empty if TCPTransport.URL is prefixed with http://
|
||||
Must be configured if TCPTransport.URL is prefixed with https://</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>clientCert</code><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>clientCert is the file location of the client certificate to be used in mtls handshakes with the konnectivity server.
|
||||
Must be absent/empty if TCPTransport.URL is prefixed with http://
|
||||
Must be configured if TCPTransport.URL is prefixed with https://</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `Transport` {#apiserver-k8s-io-v1alpha1-Transport}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [Connection](#apiserver-k8s-io-v1alpha1-Connection)
|
||||
|
||||
|
||||
<p>Transport defines the transport configurations we use to dial to the konnectivity server</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>tcp</code><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-TCPTransport"><code>TCPTransport</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>TCP is the TCP configuration for communicating with the konnectivity server via TCP
|
||||
ProxyProtocol of GRPC is not supported with TCP transport at the moment
|
||||
Requires at least one of TCP or UDS to be set</p>
|
||||
</td>
|
||||
</tr>
|
||||
<tr><td><code>uds</code><br/>
|
||||
<a href="#apiserver-k8s-io-v1alpha1-UDSTransport"><code>UDSTransport</code></a>
|
||||
</td>
|
||||
<td>
|
||||
<p>UDS is the UDS configuration for communicating with the konnectivity server via UDS
|
||||
Requires at least one of TCP or UDS to be set</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
## `UDSTransport` {#apiserver-k8s-io-v1alpha1-UDSTransport}
|
||||
|
||||
|
||||
**Appears in:**
|
||||
|
||||
- [Transport](#apiserver-k8s-io-v1alpha1-Transport)
|
||||
|
||||
|
||||
<p>UDSTransport provides the information to connect to konnectivity server via UDS</p>
|
||||
|
||||
|
||||
<table class="table">
|
||||
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
|
||||
<tbody>
|
||||
|
||||
|
||||
<tr><td><code>udsName</code> <B>[Required]</B><br/>
|
||||
<code>string</code>
|
||||
</td>
|
||||
<td>
|
||||
<p>UDSName is the name of the unix domain socket to connect to konnectivity server
|
||||
This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket)</p>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
|
@ -72,6 +72,14 @@ This annotation is a best guess at why something was changed.
|
|||
|
||||
It is populated when adding `--record` to a `kubectl` command that may change an object.
|
||||
|
||||
## kubernetes.io/description {#description}
|
||||
|
||||
Example: `kubernetes.io/description: "Description of K8s object."`
|
||||
|
||||
Used on: All Objects
|
||||
|
||||
This annotation is used to describe the specific behaviour of a given object.
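
For example, the annotation might be attached to an object like this (hypothetical object name):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config                 # hypothetical object name
  annotations:
    kubernetes.io/description: "Description of K8s object."
```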
|
||||
|
||||
## controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost}
|
||||
|
||||
Example: `controller.kubernetes.io/pod-deletion-cost=10`
|
||||
|
@ -464,4 +472,4 @@ This annotation has been deprecated since Kubernetes v1.19 and will become non-f
|
|||
The tutorial [Restrict a Container's Syscalls with seccomp](/docs/tutorials/clusters/seccomp/) takes
|
||||
you through the steps you follow to apply a seccomp profile to a Pod or to one of
|
||||
its containers. That tutorial covers the supported mechanism for configuring seccomp in Kubernetes,
|
||||
based on setting `securityContext` within the Pod's `.spec`.
|
||||
based on setting `securityContext` within the Pod's `.spec`.
|
||||
|
|
|
@ -12,7 +12,7 @@ with that removal.
|
|||
|
||||
## Kubernetes project
|
||||
|
||||
* Kubernetes blog: [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/) (originally published 2020/12/02)
|
||||
* Kubernetes blog: [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) (originally published 2022/02/17)
|
||||
|
||||
* Kubernetes blog: [Kubernetes is Moving on From Dockershim: Commitments and Next Steps](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) (published 2022/01/07)
|
||||
|
||||
|
|
|
@ -327,6 +327,17 @@ In a cluster that includes Windows nodes, you can use the following types of Ser
|
|||
* `LoadBalancer`
|
||||
* `ExternalName`
|
||||
|
||||
{{< warning >}}
|
||||
There are known issues with NodePort services on overlay networking if the destination node is running Windows Server 2022.
|
||||
To avoid the issue entirely, you can configure the service with `externalTrafficPolicy: Local`.
|
||||
|
||||
There are known issues with pod-to-pod connectivity on the l2bridge network on Windows Server 2022 with KB5005619 or higher installed.
|
||||
To work around the issue and restore pod-to-pod connectivity, you can disable the WinDSR feature in kube-proxy.
|
||||
|
||||
These issues require OS fixes.
|
||||
Please follow https://github.com/microsoft/Windows-Containers/issues/204 for updates.
|
||||
{{< /warning >}}
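
As a sketch of the workaround mentioned in the warning above (hypothetical Service name and selector):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: win-webserver                # hypothetical Service name
spec:
  type: NodePort
  externalTrafficPolicy: Local       # keep traffic on the node that received it
  selector:
    app: win-webserver               # hypothetical label selector
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```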
|
||||
|
||||
Windows container networking differs in some important ways from Linux networking.
|
||||
The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) provides
|
||||
additional details and background.
|
||||
|
|
|
@ -16,7 +16,7 @@ A Kubernetes cluster can be divided into namespaces. Once you have a namespace t
|
|||
has a default memory
|
||||
[limit](/docs/concepts/configuration/manage-resources-containers/#requests-and-limits),
|
||||
and you then try to create a Pod with a container that does not specify its own memory
|
||||
limit its own memory limit, then the
|
||||
limit, then the
|
||||
{{< glossary_tooltip text="control plane" term_id="control-plane" >}} assigns the default
|
||||
memory limit to that container.
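
For example, a namespace-level default could be set with a LimitRange like this minimal sketch (hypothetical names and values):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-memory-limit          # hypothetical name
  namespace: example-namespace        # hypothetical namespace
spec:
  limits:
  - type: Container
    default:                          # default limit applied to containers without one
      memory: 512Mi
    defaultRequest:                   # default request applied to containers without one
      memory: 256Mi
```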
|
||||
|
||||
|
|
|
@ -11,7 +11,8 @@ dockershim to other container runtimes.
|
|||
|
||||
Since the announcement of [dockershim deprecation](/blog/2020/12/08/kubernetes-1-20-release-announcement/#dockershim-deprecation)
|
||||
in Kubernetes 1.20, there were questions on how this will affect various workloads and Kubernetes
|
||||
installations. You can find this blog post useful to understand the problem better: [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/)
|
||||
installations. Our [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/) is there to help you
|
||||
to understand the problem better.
|
||||
|
||||
It is recommended to migrate from dockershim to alternative container runtimes.
|
||||
Check out [container runtimes](/docs/setup/production-environment/container-runtimes/)
|
||||
|
|
|
@ -2,75 +2,208 @@
|
|||
reviewers:
|
||||
- fgrzadkowski
|
||||
- piosz
|
||||
title: Resource metrics pipeline
|
||||
title: Resource metrics pipeline
|
||||
content_type: concept
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Resource usage metrics, such as container CPU and memory usage,
|
||||
are available in Kubernetes through the Metrics API. These metrics can be accessed either directly
|
||||
by the user with the `kubectl top` command, or by a controller in the cluster, for example
|
||||
Horizontal Pod Autoscaler, to make decisions.
|
||||
For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and similar use cases.
|
||||
This API makes information available about resource usage for nodes and pods, including metrics for CPU and memory.
|
||||
If you deploy the Metrics API into your cluster, clients of the Kubernetes API can then query for this information, and
|
||||
you can use Kubernetes' access control mechanisms to manage permissions to do so.
|
||||
|
||||
<!-- body -->
|
||||
The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and [VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA) use data from the metrics API to adjust workload replicas and resources to meet customer demand.
|
||||
|
||||
## The Metrics API
|
||||
|
||||
Through the Metrics API, you can get the amount of resource currently used
|
||||
by a given node or a given pod. This API doesn't store the metric values,
|
||||
so it's not possible, for example, to get the amount of resources used by a
|
||||
given node 10 minutes ago.
|
||||
|
||||
The API is no different from any other API:
|
||||
|
||||
- it is discoverable through the same endpoint as the other Kubernetes APIs under the path: `/apis/metrics.k8s.io/`
|
||||
- it offers the same security, scalability, and reliability guarantees
|
||||
|
||||
The API is defined in [k8s.io/metrics](https://github.com/kubernetes/metrics/blob/master/pkg/apis/metrics/v1beta1/types.go)
|
||||
repository. You can find more information about the API there.
|
||||
You can also view the resource metrics using the [`kubectl top`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top) command.
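
For example (assuming metrics-server is installed in the cluster):

```shell
# Show current CPU and memory usage for each node.
kubectl top node

# Show current usage for Pods in a namespace.
kubectl top pod --namespace kube-system
```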
|
||||
|
||||
{{< note >}}
|
||||
The API requires the metrics server to be deployed in the cluster. Otherwise it will be not available.
|
||||
The Metrics API, and the metrics pipeline that it enables, only offers the minimum
|
||||
CPU and memory metrics to enable automatic scaling using HPA and / or VPA.
|
||||
If you would like to provide a more complete set of metrics, you can complement
|
||||
the simpler Metrics API by deploying a second
|
||||
[metrics pipeline](/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline)
|
||||
that uses the _Custom Metrics API_.
|
||||
{{< /note >}}
|
||||
|
||||
## Measuring Resource Usage
|
||||
|
||||
Figure 1 illustrates the architecture of the resource metrics pipeline.
|
||||
|
||||
{{< mermaid >}}
|
||||
flowchart RL
|
||||
subgraph cluster[Cluster]
|
||||
direction RL
|
||||
S[ <br><br> ]
|
||||
A[Metrics-<br>Server]
|
||||
subgraph B[Nodes]
|
||||
direction TB
|
||||
D[cAdvisor] --> C[kubelet]
|
||||
E[Container<br>runtime] --> D
|
||||
E1[Container<br>runtime] --> D
|
||||
P[pod data] -.- C
|
||||
end
|
||||
L[API<br>server]
|
||||
W[HPA]
|
||||
C ---->|Summary<br>API| A -->|metrics<br>API| L --> W
|
||||
end
|
||||
L ---> K[kubectl<br>top]
|
||||
classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
|
||||
class W,B,P,K,cluster,D,E,E1 box
|
||||
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
|
||||
class S spacewhite
|
||||
classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
|
||||
class A,L,C k8s
|
||||
{{< /mermaid >}}
|
||||
|
||||
Figure 1. Resource Metrics Pipeline
|
||||
|
||||
The architecture components, from right to left in the figure, consist of the following:
|
||||
|
||||
* [cAdvisor](https://github.com/google/cadvisor): Daemon for collecting, aggregating and exposing container metrics included in Kubelet.
|
||||
* [kubelet](/docs/concepts/overview/components/#kubelet): Node agent for managing container resources. Resource metrics are accessible using the `/metrics/resource` and `/stats` kubelet API endpoints.
|
||||
* [Summary API](#summary-api-source): API provided by the kubelet for discovering and retrieving per-node summarized stats available through the `/stats` endpoint.
|
||||
* [metrics-server](#metrics-server): Cluster addon component that collects and aggregates resource metrics pulled from each kubelet. The API server serves the Metrics API for use by HPA, VPA, and by the `kubectl top` command. Metrics Server is a reference implementation of the Metrics API.
|
||||
* [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for workload autoscaling. To make this work in your cluster, you need an API extension server that provides the Metrics API.
|
||||
|
||||
{{< note >}}
|
||||
cAdvisor supports reading metrics from cgroups, which works with typical container runtimes on Linux.
|
||||
If you use a container runtime that uses another resource isolation mechanism, for example virtualization, then that container runtime must support [CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md) in order for metrics to be available to the kubelet.
|
||||
{{< /note >}}
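As an illustration, you can read the kubelet's `/metrics/resource` endpoint through the API server's node proxy. This is a sketch that assumes a node named `minikube`; the endpoint returns metrics in Prometheus text format.

```shell
# Fetch the kubelet resource metrics for the node "minikube"
# via the API server's node proxy subresource
kubectl get --raw "/api/v1/nodes/minikube/proxy/metrics/resource"
```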
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Metrics API
|
||||
|
||||
The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to Kubernetes autoscaler components.
|
||||
|
||||
Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier reading:
|
||||
```shell
|
||||
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '.'
|
||||
```
|
||||
|
||||
Here is the same API call using `curl`:
|
||||
```shell
|
||||
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube
|
||||
```
|
||||
Sample reply:
|
||||
```json
|
||||
{
|
||||
"kind": "NodeMetrics",
|
||||
"apiVersion": "metrics.k8s.io/v1beta1",
|
||||
"metadata": {
|
||||
"name": "minikube",
|
||||
"selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/minikube",
|
||||
"creationTimestamp": "2022-01-27T18:48:43Z"
|
||||
},
|
||||
"timestamp": "2022-01-27T18:48:33Z",
|
||||
"window": "30s",
|
||||
"usage": {
|
||||
"cpu": "487558164n",
|
||||
"memory": "732212Ki"
|
||||
}
|
||||
}
|
||||
```
|
||||
Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the `kube-system` namespace and piped through `jq` for easier reading:
|
||||
|
||||
```shell
|
||||
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube" | jq '.'
|
||||
```
|
||||
Here is the same API call using `curl`:
|
||||
```shell
|
||||
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube
|
||||
```
|
||||
Sample reply:
|
||||
```json
|
||||
{
|
||||
"kind": "PodMetrics",
|
||||
"apiVersion": "metrics.k8s.io/v1beta1",
|
||||
"metadata": {
|
||||
"name": "kube-scheduler-minikube",
|
||||
"namespace": "kube-system",
|
||||
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube",
|
||||
"creationTimestamp": "2022-01-27T19:25:00Z"
|
||||
},
|
||||
"timestamp": "2022-01-27T19:24:31Z",
|
||||
"window": "30s",
|
||||
"containers": [
|
||||
{
|
||||
"name": "kube-scheduler",
|
||||
"usage": {
|
||||
"cpu": "9559630n",
|
||||
"memory": "22244Ki"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The Metrics API is defined in the [k8s.io/metrics](https://github.com/kubernetes/metrics) repository. You must enable the [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) and register an [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) for the `metrics.k8s.io` API.
|
||||
|
||||
To learn more about the Metrics API, see [resource metrics API design](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md), the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) and the [resource metrics API](https://github.com/kubernetes/metrics#resource-metrics-api).
|
||||
|
||||
|
||||
{{< note >}} You must deploy the metrics-server or an alternative adapter that serves the Metrics API to be able to access it. {{< /note >}}
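One way to check whether the Metrics API is being served in your cluster is to inspect the registered APIService; the metrics-server normally registers itself as `v1beta1.metrics.k8s.io`.

```shell
# List the APIService that backs the Metrics API; the AVAILABLE column
# shows whether the extension API server is reachable
kubectl get apiservice v1beta1.metrics.k8s.io
```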
|
||||
|
||||
## Measuring resource usage
|
||||
|
||||
### CPU
|
||||
|
||||
CPU is reported as the average usage, in
|
||||
[CPU cores](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu),
|
||||
over a period of time. This value is derived by taking a rate over a cumulative CPU counter
|
||||
provided by the kernel (in both Linux and Windows kernels).
|
||||
The kubelet chooses the window for the rate calculation.
|
||||
CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers, and 1 hyper-thread on bare-metal Intel processors.
|
||||
|
||||
This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The time window used to calculate CPU is shown under the `window` field in the Metrics API.
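As a rough worked example based on the node sample above, the following sketch converts the reported value from nanocores to cores. It assumes that `jq` is available and that the quantity carries the `n` (nano) suffix, as in the sample reply.

```shell
# Read the node's CPU usage from the Metrics API and convert nanocores to cores
# ("minikube" is the node name from the earlier sample)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" \
  | jq -r '.usage.cpu' \
  | sed 's/n$//' \
  | awk '{ printf "%.2f cores\n", $1 / 1000000000 }'
```

For the sample value `487558164n`, this works out to roughly 0.49 cores averaged over the 30 second window.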
|
||||
|
||||
To learn more about how Kubernetes allocates and measures CPU resources, see [meaning of CPU](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu).
|
||||
|
||||
### Memory
|
||||
|
||||
Memory is reported as the working set, in bytes, at the instant the metric was collected.
|
||||
In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure.
|
||||
However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
|
||||
It includes all anonymous (non-file-backed) memory since Kubernetes does not support swap.
|
||||
The metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim such pages.
|
||||
Memory is reported as the working set, measured in bytes, at the instant the metric was collected.
|
||||
|
||||
In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
|
||||
|
||||
The Kubernetes model for a container's working set expects that the container runtime counts anonymous memory associated with the container in question. The working set metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim pages.
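Similarly, here is a minimal sketch for reading the working set of the sample node and converting it from the `Ki` suffix shown above into mebibytes (again assuming `jq` is available):

```shell
# Read the node's working-set memory from the Metrics API and convert KiB to MiB
# ("minikube" is the node name from the earlier sample)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" \
  | jq -r '.usage.memory' \
  | sed 's/Ki$//' \
  | awk '{ printf "%.0f MiB\n", $1 / 1024 }'
```

For the sample value `732212Ki`, that is roughly 715 MiB.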
|
||||
|
||||
To learn more about how Kubernetes allocates and measures memory resources, see [meaning of memory](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory).
|
||||
|
||||
## Metrics Server
|
||||
|
||||
[Metrics Server](https://github.com/kubernetes-sigs/metrics-server) is a cluster-wide aggregator of resource usage data.
|
||||
By default, it is deployed in clusters created by `kube-up.sh` script
|
||||
as a Deployment object. If you use a different Kubernetes setup mechanism, you can deploy it using the provided
|
||||
[deployment components.yaml](https://github.com/kubernetes-sigs/metrics-server/releases) file.
|
||||
The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the HPA and VPA. You can also view these metrics using the `kubectl top` command.
|
||||
|
||||
Metrics Server collects metrics from the Summary API, exposed by
|
||||
[Kubelet](/docs/reference/command-line-tools-reference/kubelet/) on each node, and is registered with the main API server via
|
||||
[Kubernetes aggregator](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
|
||||
The metrics-server uses the Kubernetes API to track nodes and pods in your cluster. The metrics-server queries each node over HTTP to fetch metrics. The metrics-server also builds an internal view of pod metadata, and keeps a cache of pod health. That cached pod health information is available via the extension API that the metrics-server makes available.
|
||||
|
||||
Learn more about the metrics server in
|
||||
[the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).
|
||||
For example, with an HPA query, the metrics-server needs to identify which pods fulfill the label selectors in the Deployment.
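To see the per-pod metrics that an HPA targeting a hypothetical Deployment labelled `app=nginx` would consume, you could run:

```shell
# Show resource usage only for pods that match a label selector
# ("app=nginx" is a hypothetical label used for illustration)
kubectl top pod -l app=nginx
```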
|
||||
|
||||
### Summary API Source
|
||||
The [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at node, volume, pod and container level, and emits their statistics in
|
||||
The metrics-server calls the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API to collect metrics from each node. Depending on the metrics-server version it uses:
|
||||
* Metrics resource endpoint `/metrics/resource` in version v0.6.0+ or
|
||||
* Summary API endpoint `/stats/summary` in older versions
|
||||
|
||||
|
||||
To learn more about the metrics-server, see the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server).
|
||||
|
||||
You can also check out the following:
|
||||
|
||||
* [metrics-server design](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md)
|
||||
* [metrics-server FAQ](https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md)
|
||||
* [metrics-server known issues](https://github.com/kubernetes-sigs/metrics-server/blob/master/KNOWN_ISSUES.md)
|
||||
* [metrics-server releases](https://github.com/kubernetes-sigs/metrics-server/releases)
|
||||
* [Horizontal Pod Autoscaling](/docs/tasks/run-application/horizontal-pod-autoscale/)
|
||||
|
||||
### Summary API source
|
||||
|
||||
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at the node, volume, pod and container level, and emits this information in
|
||||
the [Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go)
|
||||
for consumers to read.
|
||||
|
||||
Before Kubernetes 1.23, these resources were primarily gathered from [cAdvisor](https://github.com/google/cadvisor). However, in 1.23 with the
|
||||
introduction of the `PodAndContainerStatsFromCRI` feature gate, container and pod level stats can be gathered by the CRI implementation.
|
||||
Note: this also requires support from the CRI implementations (containerd >= 1.6.0, CRI-O >= 1.23.0).
|
||||
Here is an example of a Summary API request for a `minikube` node:
|
||||
|
||||
|
||||
```shell
|
||||
kubectl get --raw "/api/v1/nodes/minikube/proxy/stats/summary"
|
||||
```
|
||||
Here is the same API call using `curl`:
|
||||
```shell
|
||||
curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
|
||||
```
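If you only want the node-level aggregates from that response, one option is to filter it with `jq`. This is a sketch that assumes `jq` is installed and that the reply follows the Summary API schema linked above.

```shell
# Extract just the node-level CPU and memory stats from the Summary API response
kubectl get --raw "/api/v1/nodes/minikube/proxy/stats/summary" \
  | jq '{cpu: .node.cpu, memory: .node.memory}'
```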
|
||||
{{< note >}}
|
||||
The Summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint beginning with metrics-server 0.6.x.
|
||||
{{< /note >}}
|
|
@ -65,7 +65,7 @@ Kubernetes APIサーバーは、`/openapi/v2`エンドポイントを介してOp
|
|||
</table>
|
||||
|
||||
|
||||
Kubernetesは、他の手段として主にクラスター間の連携用途向けのAPIに、Protocol buffersをベースにしたシリアライズフォーマットを実装しています。このフォーマットに関しては、[Kubernetes Protobuf serialization](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md)デザイン提案を参照してください。また、各スキーマのInterface Definition Language(IDL)ファイルは、APIオブジェクトを定義しているGoパッケージ内に配置されています。
|
||||
Kubernetesは、他の手段として主にクラスター間の連携用途向けのAPIに、Protocol buffersをベースにしたシリアライズフォーマットを実装しています。このフォーマットに関しては、[Kubernetes Protobuf serialization](https://github.com/kubernetes/design-proposals-archive/blob/main/api-machinery/protobuf.md)デザイン提案を参照してください。また、各スキーマのInterface Definition Language(IDL)ファイルは、APIオブジェクトを定義しているGoパッケージ内に配置されています。
|
||||
|
||||
## 永続性
|
||||
|
||||
|
|
|
@ -307,9 +307,9 @@ kubectl top pod POD_NAME --containers # 特定のPodとそのコ
|
|||
## ノードおよびクラスターとの対話処理
|
||||
|
||||
```bash
|
||||
kubectl cordon my-node # my-nodeをスケーリングされないように設定します
|
||||
kubectl cordon my-node # my-nodeをスケジューリング不能に設定します
|
||||
kubectl drain my-node # メンテナンスの準備としてmy-nodeで動作中のPodを空にします
|
||||
kubectl uncordon my-node # my-nodeをスケーリングされるように設定します
|
||||
kubectl uncordon my-node # my-nodeをスケジューリング可能に設定します
|
||||
kubectl top node my-node # 特定のノードのメトリクスを表示します
|
||||
kubectl cluster-info # Kubernetesクラスターのマスターとサービスのアドレスを表示します
|
||||
kubectl cluster-info dump # 現在のクラスター状態を標準出力にダンプします
|
||||
|
|
|
@ -0,0 +1,66 @@
|
|||
---
|
||||
title: Proxies no Kubernetes
|
||||
content_type: concept
|
||||
weight: 90
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
Esta página descreve o uso de proxies com Kubernetes.
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Proxies
|
||||
Existem vários tipos diferentes de proxies que você pode encontrar usando Kubernetes:
|
||||
|
||||
|
||||
1. O [kubectl proxy](/docs/tasks/access-application-cluster/access-cluster/#directly-accessing-the-rest-api):
|
||||
|
||||
Quando o kubectl proxy é utilizado ocorre o seguinte:
|
||||
- executa na máquina do usuário ou em um pod
|
||||
- redireciona/encapsula conexões direcionadas ao localhost para o servidor de API
|
||||
- a comunicação entre o cliente e o proxy usa HTTP
|
||||
- a comunicação entre o proxy e o servidor de API usa HTTPS
|
||||
- o proxy localiza o servidor de API do cluster
|
||||
- o proxy adiciona os cabeçalhos de comunicação.
|
||||
|
||||
1. O [apiserver proxy](/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services):
|
||||
|
||||
- é um bastion server, construído no servidor de API
|
||||
- conecta um usuário fora do cluster com os IPs do cluster que não podem ser acessados de outra forma
|
||||
- executa dentro do processo do servidor de API
|
||||
- cliente para proxy usa HTTPS (ou HTTP se o servidor de API for configurado)
|
||||
- proxy para o destino pode usar HTTP ou HTTPS conforme escolhido pelo proxy usando as informações disponíveis
|
||||
- pode ser usado para acessar um Nó, Pod ou serviço
|
||||
- faz balanceamento de carga quando usado para acessar um Service.
|
||||
|
||||
1. O [kube proxy](/docs/concepts/services-networking/service/#ips-and-vips):
|
||||
|
||||
- executa em todos os Nós
|
||||
- atua como proxy para UDP, TCP e SCTP
|
||||
- não aceita HTTP
|
||||
- provém balanceamento de carga
|
||||
- apenas é usado para acessar serviços.
|
||||
|
||||
1. Um Proxy/Balanceador de carga na frente de servidores de API(s):
|
||||
|
||||
- a existência e a implementação de tal elemento varia de cluster para cluster, por exemplo nginx
|
||||
- fica entre todos os clientes e um ou mais serviços
|
||||
- atua como balanceador de carga se existe mais de um servidor de API.
|
||||
|
||||
|
||||
1. Balanceadores de carga da nuvem em serviços externos:
|
||||
- são fornecidos por algum provedor de nuvem (ex.: AWS ELB, Google Cloud Load Balancer)
|
||||
- são criados automaticamente quando o serviço de Kubernetes tem o tipo `LoadBalancer`
|
||||
- geralmente suportam apenas UDP/TCP
|
||||
- O suporte ao SCTP fica por conta da implementação do balanceador de carga da provedora de nuvem
|
||||
- a implementação varia de acordo com o provedor de cloud.
|
||||
|
||||
Os usuários de Kubernetes geralmente não precisam se preocupar com outras coisas além dos dois primeiros tipos. O
|
||||
administrador do cluster tipicamente garante que os últimos tipos serão configurados corretamente.
|
||||
|
||||
|
||||
|
||||
## Redirecionamento de requisições
|
||||
|
||||
Os proxies substituíram as capacidades de redirecionamento. O redirecionamento foi depreciado.
|
|
@ -0,0 +1,106 @@
|
|||
---
|
||||
title: Namespaces
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
No Kubernetes, _namespaces_ disponibilizam um mecanismo para isolar grupos de recursos dentro de um único cluster. Nomes de recursos precisam ser únicos dentro de um namespace, porém podem se repetir em diferentes namespaces. Escopos baseados em namespaces são aplicáveis apenas para objetos com namespace _(como: Deployments, Services, etc)_ e não em objetos que abrangem todo o cluster _(como: StorageClass, Nodes, PersistentVolumes, etc)_.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Quando Utilizar Múltiplos Namespaces
|
||||
|
||||
Namespaces devem ser utilizados em ambientes com múltiplos usuários espalhados por diversos times ou projetos. Para clusters com poucos ou até algumas dezenas de usuários, você não deveria precisar criar ou pensar a respeito de namespaces. Comece a utilizar namespaces quando você precisar das funcionalidades que eles oferecem.
|
||||
|
||||
Namespaces oferecem escopo para nomes. Nomes de recursos precisam ser únicos dentro de um namespace, porém não em diferentes namespaces. Namespaces não podem ser aninhados dentro de outros namespaces e cada recurso Kubernetes pode pertencer à apenas um namespace.
|
||||
|
||||
Namespaces nos permitem dividir os recursos do cluster entre diferentes usuários (via [resource quota](/docs/concepts/policy/resource-quotas/)).
|
||||
|
||||
Não é necessário utilizar múltiplos namespaces para separar recursos levemente diferentes, como diferentes versões de um mesmo software: use {{< glossary_tooltip text="labels" term_id="label" >}} para distinguir recursos dentro de um mesmo namespace.
|
||||
|
||||
## Trabalhando com Namespaces
|
||||
|
||||
Criação e eliminação de namespaces estão descritas na
|
||||
[documentação de namespaces do guia de administradores](/docs/tasks/administer-cluster/namespaces).
|
||||
|
||||
{{< note >}}
|
||||
Evite criar namespaces com o prefixo `kube-`, já que este prefixo é reservado para namespaces do sistema Kubernetes.
|
||||
{{< /note >}}
|
||||
|
||||
### Visualizando namespaces
|
||||
|
||||
Você pode obter uma lista dos namespaces atuais dentro de um cluster com:
|
||||
|
||||
```shell
|
||||
kubectl get namespace
|
||||
```
|
||||
```
|
||||
NAME STATUS AGE
|
||||
default Active 1d
|
||||
kube-node-lease Active 1d
|
||||
kube-public Active 1d
|
||||
kube-system Active 1d
|
||||
```
|
||||
|
||||
O Kubernetes é inicializado com quatro namespaces:
|
||||
|
||||
* `default` O namespace padrão para objetos sem namespace
|
||||
* `kube-system` O namespace para objetos criados pelo sistema Kubernetes
|
||||
* `kube-public` Este namespace é criado automaticamente e é legível por todos os usuários (incluindo usuários não autenticados). Este namespace é reservado principalmente para uso do cluster, no caso de alguns recursos que precisem ser visíveis e legíveis publicamente por todo o cluster. O aspecto público deste namespace é apenas uma convenção, não um requisito.
|
||||
* `kube-node-lease` Este namespace contém os objetos de [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) associados com cada node. Node leases permitem que o kubelet envie [heartbeats](/docs/concepts/architecture/nodes/#heartbeats) para que a camada de gerenciamento detecte falhas nos nodes.
|
||||
|
||||
### Preparando o namespace para uma requisição
|
||||
|
||||
Para preparar o namespace para a requisição atual, utilize o parâmetro `--namespace`. Por exemplo:
|
||||
|
||||
```shell
|
||||
kubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>
|
||||
kubectl get pods --namespace=<insert-namespace-name-here>
|
||||
```
|
||||
|
||||
### Configurando a preferência de namespaces
|
||||
|
||||
Você pode salvar permanentemente o namespace para todos os comandos `kubectl` subsequentes no mesmo contexto:
|
||||
|
||||
```shell
|
||||
kubectl config set-context --current --namespace=<insert-namespace-name-here>
|
||||
# Validando
|
||||
kubectl config view --minify | grep namespace:
|
||||
```
|
||||
|
||||
## Namespaces e DNS
|
||||
|
||||
Quando você cria um [Serviço](/docs/concepts/services-networking/service/), ele cria uma
|
||||
[entrada DNS](/docs/concepts/services-networking/dns-pod-service/) correspondente.
|
||||
Esta entrada possui o formato: `<service-name>.<namespace-name>.svc.cluster.local`, de forma que se um contêiner utilizar apenas `<service-name>` ele será resolvido para um serviço que é local ao namespace.
|
||||
Isso é útil para utilizar a mesma configuração em vários namespaces, por exemplo em Desenvolvimento, `Staging` e Produção. Se você quiser acessar múltiplos namespaces, precisará utilizar um _Fully Qualified Domain Name_ (FQDN).
|
||||
|
||||
## Nem todos os objetos pertencem a algum Namespace
|
||||
|
||||
A maior parte dos recursos Kubernetes (como Pods, Services, controladores de replicação e outros) pertencem a algum namespace. Entretanto, recursos de namespaces não pertencem a nenhum namespace. Além deles, recursos de baixo nível, como [nodes](/docs/concepts/architecture/nodes/) e persistentVolumes, também não pertencem a nenhum namespace.
|
||||
|
||||
Para visualizar quais recursos Kubernetes pertencem ou não a algum namespace, utilize:
|
||||
|
||||
```shell
|
||||
# Em um namespace
|
||||
kubectl api-resources --namespaced=true
|
||||
|
||||
# Sem namespace
|
||||
kubectl api-resources --namespaced=false
|
||||
```
|
||||
|
||||
## Rotulamento Automático
|
||||
|
||||
{{< feature-state state="beta" for_k8s_version="1.21" >}}
|
||||
|
||||
A camada de gerenciamento Kubernetes configura um {{< glossary_tooltip text="label" term_id="label" >}} imutável `kubernetes.io/metadata.name` em todos os namespaces se a
|
||||
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
|
||||
`NamespaceDefaultLabelName` estiver habilitada. O valor do label é o nome do namespace.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Leia sobre [a criação de um novo namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace).
|
||||
* Leia sobre [a eliminação de um namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace).
|
||||
|
|
@ -142,7 +142,7 @@ cd <web-base>/update-imported-docs
|
|||
## Исправление ссылок
|
||||
|
||||
Конфигурационный файл `release.yml` содержит инструкции по исправлению относительных ссылок
|
||||
Для исправления относительных ссылок в импортированных файлах, установите для свойство `gen-absolute-links` в значение `true`. В качестве примера можете посмотреть файл [`release.yml`](https://github.com/kubernetes/website/blob/master/update-imported-docs/release.yml).
|
||||
Для исправления относительных ссылок в импортированных файлах установите свойство `gen-absolute-links` в значение `true`. В качестве примера можете посмотреть файл [`release.yml`](https://github.com/kubernetes/website/blob/main/update-imported-docs/release.yml).
|
||||
|
||||
## Внесение изменений в kubernetes/website
|
||||
|
||||
|
@ -218,5 +218,3 @@ static/docs/reference/generated/kubernetes-api/v1.17/fonts/fontawesome-webfont.w
|
|||
* [Генерация справочной документации для компонентов и инструментов Kubernetes](/ru/docs/contribute/generate-ref-docs/kubernetes-components/)
|
||||
* [Генерация справочной документации для команд kubectl](/ru/docs/contribute/generate-ref-docs/kubectl/)
|
||||
* [Генерация справочной документации для API Kubernetes](/ru/docs/contribute/generate-ref-docs/kubernetes-api/)
|
||||
|
||||
|
||||
|
|
|
@ -82,7 +82,7 @@ PR можно проверять только, если он соответст
|
|||
|
||||
{{< note >}}Не добавляйте `/lgtm`, если вы не уверены в технической точности документации, измененной или добавленной в PR.{{< /note >}}
|
||||
|
||||
- Утверждающий проверяет содержание запроса на предмет качества и соответствия рекомендациям SIG Docs, приведенным в руководствах по содержанию и оформлению. Только люди, указанные в качестве утверждающих в файле [`OWNERS`](https://github.com/kubernetes/website/blob/master/OWNERS), могут одобрить PR. Чтобы одобрить PR, оставьте комментарий `/approve` к PR.
|
||||
- Утверждающий проверяет содержание запроса на предмет качества и соответствия рекомендациям SIG Docs, приведенным в руководствах по содержанию и оформлению. Только люди, указанные в качестве утверждающих в файле [`OWNERS`](https://github.com/kubernetes/website/blob/main/OWNERS), могут одобрить PR. Чтобы одобрить PR, оставьте комментарий `/approve` к PR.
|
||||
|
||||
PR объединяется, когда у него есть комментарий `/lgtm` от кого-либо из организации Kubernetes и комментарий `/approve` от утверждающего в группе `sig-docs-maintainers`, если он не удерживается, а автор PR подписал CLA.
|
||||
|
||||
|
@ -603,5 +603,3 @@ If this is a documentation issue, please re-open this issue.
|
|||
|
||||
|
||||
Если вы хорошо осознали все задачи, затронутые в этом разделе, и хотите более тесно работать с командой документации Kubernetes, переходите к изучению [руководства для опытного участника](/ru/docs/contribute/advanced/).
|
||||
|
||||
|
||||
|
|
|
@ -209,7 +209,7 @@ To ensure accuracy in grammar and meaning, members of your localization team sho
|
|||
|
||||
### Сообщения на сайте в i18n/
|
||||
|
||||
Локализации должны включать содержимое файла [`i18n/en.toml`](https://github.com/kubernetes/website/blob/master/i18n/en.toml) в новый языковой файл. В качестве примера рассмотрим немецкую локализацию: `i18n/de.toml`.
|
||||
Локализации должны включать содержимое файла [`i18n/en.toml`](https://github.com/kubernetes/website/blob/main/i18n/en.toml) в новый языковой файл. В качестве примера рассмотрим немецкую локализацию: `i18n/de.toml`.
|
||||
|
||||
Добавьте новый файл локализации в `i18n/`. Например, для немецкой локализации (`de`):
|
||||
|
||||
|
@ -283,5 +283,3 @@ SIG Docs приветствует [участие и дополнения](/ru/d
|
|||
|
||||
- Добавит язык на сайт
|
||||
- Сообщит о новой локализации на каналах [Cloud Native Computing Foundation](https://www.cncf.io/about/) (CNCF), включая [блог Kubernetes](https://kubernetes.io/blog/).
|
||||
|
||||
|
||||
|
|
|
@ -100,7 +100,7 @@ SIG Docs активно принимает правки и дополнения
|
|||
|
||||
Если вы соответствуете [требованиям](https://github.com/kubernetes/community/blob/master/community-membership.md#reviewer), то можете стать рецензентом SIG Docs. Рецензенты в других SIG-группах должны подать новую заявку для получения статуса рецензента в SIG Docs.
|
||||
|
||||
Для отправки заявки откройте пулреквест с добавлением самого себя в секцию `reviewers` [корневого файла OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) в репозитории `kubernetes/website`. Запросите проверку вашего пулреквеста одному или нескольким текущим утверждающим в группе SIG Docs.
|
||||
Для отправки заявки откройте пулреквест с добавлением самого себя в секцию `reviewers` [корневого файла OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) в репозитории `kubernetes/website`. Запросите проверку вашего пулреквеста одному или нескольким текущим утверждающим в группе SIG Docs.
|
||||
|
||||
Если ваш пулреквест одобрен, вы становитесь рецензентом SIG Docs. Теперь бот [K8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) будет назначать и предлагать вас в качестве рецензента для проверки новых пулреквестов.
|
||||
|
||||
|
@ -126,7 +126,7 @@ SIG Docs активно принимает правки и дополнения
|
|||
|
||||
Если вы соответствуете [требованиям](https://github.com/kubernetes/community/blob/master/community-membership.md#approver), вы можете стать утверждающим SIG Docs. Утверждающие в других SIG-группах должны подать новую заявку для получения статуса утверждающего в SIG Docs.
|
||||
|
||||
Для отправки заявки откройте пулреквест с добавлением самого себя в секцию `approvers` [корневого файла OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) в репозитории `kubernetes/website`. Запросите проверку вашего пулреквеста одному или нескольким текущим утверждающим в группе SIG Docs.
|
||||
Для отправки заявки откройте пулреквест с добавлением самого себя в секцию `approvers` [корневого файла OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) в репозитории `kubernetes/website`. Запросите проверку вашего пулреквеста одному или нескольким текущим утверждающим в группе SIG Docs.
|
||||
|
||||
Если ваш пулреквест одобрен, вы становитесь утверждающим SIG Docs. Теперь бот [K8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) будет назначать и предлагать вас в качестве рецензента для проверки новых пулреквестов.
|
||||
|
||||
|
@ -179,7 +179,7 @@ SIG Docs активно принимает правки и дополнения
|
|||
- blunderbuss
|
||||
- approve
|
||||
|
||||
Все эти плагины используют файлы [OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) и [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES) в корневой директории GitHub-репозитория `kubernetes/website`, чтобы контролировать работу prow по всему репозиторию.
|
||||
Все эти плагины используют файлы [OWNERS](https://github.com/kubernetes/website/blob/main/OWNERS) и [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) в корневой директории GitHub-репозитория `kubernetes/website`, чтобы контролировать работу prow по всему репозиторию.
|
||||
|
||||
Файл OWNERS содержит список людей, которые являются рецензентами и утверждающими в SIG Docs. Файлы OWNERS также может быть в поддиректориях и могут переопределять тех, кто может выступать в качестве рецензента или утверждающего в изменениях файлов этой директории и её поддиректорий. Для получения дополнительной информации о файлах OWNERS в целом, перейдите в [OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md).
|
||||
|
||||
|
@ -205,5 +205,3 @@ SIG Docs активно принимает правки и дополнения
|
|||
|
||||
- [Участие для начинающих](/ru/docs/contribute/start/)
|
||||
- [Правила оформления документации](/ru/docs/contribute/style/)
|
||||
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ card:
|
|||
|
||||
## Язык
|
||||
|
||||
Документация Kubernetes была переведена на несколько языков (см. [README-файлы локализаций](https://github.com/kubernetes/website/blob/master/README.md#localization-readmemds)).
|
||||
Документация Kubernetes была переведена на несколько языков (см. [README-файлы локализаций](https://github.com/kubernetes/website/blob/main/README.md#localization-readmemds)).
|
||||
|
||||
Процесс локализации документации на другие языки описан в [соответствующей странице по локализации](/ru/docs/contribute/localization/).
|
||||
|
||||
|
@ -567,5 +567,3 @@ Create a new cluster. | Turn up a new cluster.
|
|||
* Подробнее про [написание новой темы](/ru/docs/contribute/style/write-new-topic/).
|
||||
* Подробнее про [использование шаблонов страниц](/ru/docs/contribute/style/page-templates/).
|
||||
* Подробнее про [создание пулреквеста](/ru/docs/contribute/start/#отправка-пулреквеста)).
|
||||
|
||||
|
||||
|
|
|
@ -81,11 +81,12 @@ on every resource object.
|
|||
| `app.kubernetes.io/managed-by` | 用于管理应用程序的工具 | `helm` | 字符串 |
|
||||
| `app.kubernetes.io/created-by` | 创建该资源的控制器或者用户 | `controller-manager` | 字符串 |
|
||||
<!--
|
||||
To illustrate these labels in action, consider the following StatefulSet object:
|
||||
To illustrate these labels in action, consider the following {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} object:
|
||||
-->
|
||||
为说明这些标签的实际使用情况,请看下面的 StatefulSet 对象:
|
||||
为说明这些标签的实际使用情况,请看下面的 {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} 对象:
|
||||
|
||||
```yaml
|
||||
# 这是一段节选
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
|
|
|
@ -188,10 +188,9 @@ and the domain name for your cluster is `cluster.local`, then the Pod has a DNS
|
|||
|
||||
`172-17-0-3.default.pod.cluster.local`.
|
||||
|
||||
Any pods created by a Deployment or DaemonSet exposed by a Service have the
|
||||
following DNS resolution available:
|
||||
Any pods exposed by a Service have the following DNS resolution available:
|
||||
|
||||
`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`.
|
||||
`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`.
|
||||
-->
|
||||
### A/AAAA 记录
|
||||
|
||||
|
@ -204,10 +203,9 @@ following DNS resolution available:
|
|||
|
||||
`172-17-0-3.default.pod.cluster.local`.
|
||||
|
||||
Deployment 或通过 Service 暴露出来的 DaemonSet 所创建的 Pod 会有如下 DNS
|
||||
解析名称可用:
|
||||
通过 Service 暴露出来的所有 Pod 都会有如下 DNS 解析名称可用:
|
||||
|
||||
`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`.
|
||||
`pod-ip-address.service-name.my-namespace.svc.cluster-domain.example`.
|
||||
|
||||
<!--
|
||||
### Pod's hostname and subdomain fields
|
||||
|
|
|
@ -109,6 +109,7 @@ their authors, not the Kubernetes team.
|
|||
| PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) |
|
||||
| PHP | [github.com/renoki-co/php-k8s](https://github.com/renoki-co/php-k8s) |
|
||||
| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) |
|
||||
| Python | [github.com/gtsystem/lightkube](https://github.com/gtsystem/lightkube) |
|
||||
| Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) |
|
||||
| Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) |
|
||||
| Python | [github.com/Frankkkkk/pykorm](https://github.com/Frankkkkk/pykorm) |
|
||||
|
|
|
@ -7,7 +7,7 @@ weight: 60
|
|||
<!--
|
||||
reviewers:
|
||||
- sig-cluster-lifecycle
|
||||
title: Creating Highly Available clusters with kubeadm
|
||||
title: Creating Highly Available Clusters with kubeadm
|
||||
content_type: task
|
||||
weight: 60
|
||||
-->
|
||||
|
@ -19,9 +19,9 @@ This page explains two different approaches to setting up a highly available Kub
|
|||
cluster using kubeadm:
|
||||
|
||||
- With stacked control plane nodes. This approach requires less infrastructure. The etcd members
|
||||
and control plane nodes are co-located.
|
||||
and control plane nodes are co-located.
|
||||
- With an external etcd cluster. This approach requires more infrastructure. The
|
||||
control plane nodes and etcd members are separated.
|
||||
control plane nodes and etcd members are separated.
|
||||
|
||||
-->
|
||||
本文讲述了使用 kubeadm 设置一个高可用的 Kubernetes 集群的两种不同方式:
|
||||
|
@ -31,19 +31,19 @@ control plane nodes and etcd members are separated.
|
|||
|
||||
<!--
|
||||
Before proceeding, you should carefully consider which approach best meets the needs of your applications
|
||||
and environment. [This comparison topic](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
|
||||
and environment. [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) outlines the advantages and disadvantages of each.
|
||||
|
||||
If you encounter issues with setting up the HA cluster, please provide us with feedback
|
||||
If you encounter issues with setting up the HA cluster, please report these
|
||||
in the kubeadm [issue tracker](https://github.com/kubernetes/kubeadm/issues/new).
|
||||
|
||||
See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15).
|
||||
See also the [upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15).
|
||||
-->
|
||||
在下一步之前,你应该仔细考虑哪种方法更好的满足你的应用程序和环境的需求。
|
||||
[这是对比文档](/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/) 讲述了每种方法的优缺点。
|
||||
[高可用拓扑选项](/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/) 讲述了每种方法的优缺点。
|
||||
|
||||
如果你在安装 HA 集群时遇到问题,请在 kubeadm [问题跟踪](https://github.com/kubernetes/kubeadm/issues/new)里向我们提供反馈。
|
||||
|
||||
你也可以阅读 [升级文件](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
|
||||
你也可以阅读[升级文档](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
|
||||
<!--
|
||||
This page does not address running your cluster on a cloud provider. In a cloud
|
||||
environment, neither approach documented here works with Service objects of type
|
||||
|
@ -59,37 +59,127 @@ LoadBalancer, or with dynamic PersistentVolumes.
|
|||
|
||||
|
||||
<!--
|
||||
For both methods you need this infrastructure:
|
||||
The prerequisites depend on which topology you have selected for your cluster's
|
||||
control plane:
|
||||
-->
|
||||
根据集群控制平面所选择的拓扑结构不同,准备工作也有所差异:
|
||||
|
||||
- Three machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
|
||||
the control-plane nodes
|
||||
- Three machines that meet [kubeadm's minimum
|
||||
{{< tabs name="prerequisite_tabs" >}}
|
||||
{{% tab name="堆叠(Stacked) etcd 拓扑" %}}
|
||||
<!--
|
||||
note to reviewers: these prerequisites should match the start of the
|
||||
external etc tab
|
||||
-->
|
||||
<!--
|
||||
You need:
|
||||
|
||||
- Three or more machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
|
||||
the control-plane nodes. Having an odd number of control plane nodes can help
|
||||
with leader selection in the case of machine or zone failure.
|
||||
- including a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, already set up and working
|
||||
- Three or more machines that meet [kubeadm's minimum
|
||||
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
|
||||
- including a container runtime, already set up and working
|
||||
- Full network connectivity between all machines in the cluster (public or
|
||||
private network)
|
||||
- sudo privileges on all machines
|
||||
- Superuser privileges on all machines using `sudo`
|
||||
- You can use a different tool; this guide uses `sudo` in the examples.
|
||||
- SSH access from one device to all nodes in the system
|
||||
- `kubeadm` and `kubelet` installed on all machines. `kubectl` is optional.
|
||||
-->
|
||||
对于这两种方法,你都需要以下基础设施:
|
||||
- `kubeadm` and `kubelet` already installed on all machines.
|
||||
|
||||
- 配置满足 [kubeadm 的最低要求](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin)
|
||||
的三台机器作为控制面节点
|
||||
- 配置满足 [kubeadm 的最低要求](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin)
|
||||
_See [Stacked etcd topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology) for context._
|
||||
-->
|
||||
需要准备:
|
||||
|
||||
- 配置满足 [kubeadm 的最低要求](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#准备开始)
|
||||
的三台机器作为控制面节点。奇数台控制平面节点有利于机器故障或者网络分区时进行重新选主。
|
||||
- 机器已经安装好{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}},并正常运行
|
||||
- 配置满足 [kubeadm 的最低要求](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#准备开始)
|
||||
的三台机器作为工作节点
|
||||
- 机器已经安装好{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}},并正常运行
|
||||
- 在集群中,确保所有计算机之间存在全网络连接(公网或私网)
|
||||
- 在所有机器上具有 sudo 权限
|
||||
- 可以使用其他工具;本教程以 `sudo` 举例
|
||||
- 从某台设备通过 SSH 访问系统中所有节点的能力
|
||||
- 所有机器上已经安装 `kubeadm` 和 `kubelet`,`kubectl` 是可选的。
|
||||
- 所有机器上已经安装 `kubeadm` 和 `kubelet`
|
||||
|
||||
_拓扑详情请参考[堆叠(Stacked)etcd 拓扑](/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/#堆叠-stacked-etcd-拓扑)。_
|
||||
{{% /tab %}}
|
||||
{{% tab name="外部 etcd 拓扑" %}}
|
||||
<!--
|
||||
note to reviewers: these prerequisites should match the start of the
|
||||
stacked etc tab
|
||||
-->
|
||||
<!--
|
||||
You need:
|
||||
|
||||
- Three or more machines that meet [kubeadm's minimum requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for
|
||||
the control-plane nodes. Having an odd number of control plane nodes can help
|
||||
with leader selection in the case of machine or zone failure.
|
||||
- including a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, already set up and working
|
||||
- Three or more machines that meet [kubeadm's minimum
|
||||
requirements](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) for the workers
|
||||
- including a container runtime, already set up and working
|
||||
- Full network connectivity between all machines in the cluster (public or
|
||||
private network)
|
||||
- Superuser privileges on all machines using `sudo`
|
||||
- You can use a different tool; this guide uses `sudo` in the examples.
|
||||
- SSH access from one device to all nodes in the system
|
||||
- `kubeadm` and `kubelet` already installed on all machines.
|
||||
-->
|
||||
需要准备:
|
||||
|
||||
- 配置满足 [kubeadm 的最低要求](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#准备开始)
|
||||
的三台机器作为控制面节点。奇数台控制平面节点有利于机器故障或者网络分区时进行重新选主。
|
||||
- 机器已经安装好{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}},并正常运行
|
||||
- 配置满足 [kubeadm 的最低要求](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#准备开始)
|
||||
的三台机器作为工作节点
|
||||
- 机器已经安装好{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}},并正常运行
|
||||
- 在集群中,确保所有计算机之间存在全网络连接(公网或私网)
|
||||
- 在所有机器上具有 sudo 权限
|
||||
- 可以使用其他工具;本教程以 `sudo` 举例
|
||||
- 从某台设备通过 SSH 访问系统中所有节点的能力
|
||||
- 所有机器上已经安装 `kubeadm` 和 `kubelet`
|
||||
<!-- end of shared prerequisites -->
|
||||
<!--
|
||||
And you also need:
|
||||
- Three or more additional machines, that will become etcd cluster members.
|
||||
Having an odd number of members in the etcd cluster is a requirement for achieving
|
||||
optimal voting quorum.
|
||||
- These machines again need to have `kubeadm` and `kubelet` installed.
|
||||
- These machines also require a container runtime, that is already set up and working.
|
||||
-->
|
||||
还需要准备:
|
||||
- 给 etcd 集群使用的另外三台及以上机器。为了分布式一致性算法达到更好的投票效果,集群必须由奇数个节点组成。
|
||||
- 机器上已经安装 `kubeadm` 和 `kubelet`。
|
||||
- 机器上同样需要安装好容器运行时,并能正常运行。
|
||||
<!--
|
||||
_See [External etcd topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/#external-etcd-topology) for context._
|
||||
-->
|
||||
_拓扑详情请参考[外部 etcd 拓扑](/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/#外部-etcd-拓扑)。_
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<!-- ### Container images -->
|
||||
### 容器镜像
|
||||
|
||||
<!--
|
||||
For the external etcd cluster only, you also need:
|
||||
|
||||
- Three additional machines for etcd members
|
||||
Each host should have access to read and fetch images from the Kubernetes container image registry, `k8s.gcr.io`.
|
||||
If you want to deploy a highly-available cluster where the hosts do not have access to pull images, this is possible. You must ensure by some other means that the correct container images are already available on the relevant hosts.
|
||||
-->
|
||||
仅对于外部 etcd 集群来说,你还需要:
|
||||
每台主机需要能够从 Kubernetes 容器镜像仓库( `k8s.gcr.io` )读取和拉取镜像。
|
||||
想要在无法拉取 Kubernetes 仓库镜像的机器上部署高可用集群也是可行的。通过其他的手段保证主机上已经有对应的容器镜像即可。
|
||||
|
||||
- 给 etcd 成员使用的另外三台机器
|
||||
<!-- ### Command line interface {#kubectl} -->
|
||||
### 命令行 {#kubectl}
|
||||
|
||||
<!--
|
||||
To manage Kubernetes once your cluster is set up, you should
|
||||
[install kubectl](/docs/tasks/tools/#kubectl) on your PC. It is also useful
|
||||
to install the `kubectl` tool on each control plane node, as this can be
|
||||
helpful for troubleshooting.
|
||||
-->
|
||||
一旦集群创建成功,需要在 PC 上[安装 kubectl](/zh/docs/tasks/tools/#kubectl) 用于管理 Kubernetes。为了方便故障排查,也可以在每个控制平面节点上安装 `kubectl`。
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
|
@ -102,78 +192,79 @@ For the external etcd cluster only, you also need:
|
|||
|
||||
### 为 kube-apiserver 创建负载均衡器
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
There are many configurations for load balancers. The following example is only one
|
||||
option. Your cluster requirements may need a different configuration.
|
||||
-->
|
||||
{{< note >}}
|
||||
使用负载均衡器需要许多配置。你的集群搭建可能需要不同的配置。
|
||||
下面的例子只是其中的一方面配置。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
1. Create a kube-apiserver load balancer with a name that resolves to DNS.
|
||||
1. Create a kube-apiserver load balancer with a name that resolves to DNS.
|
||||
|
||||
- In a cloud environment you should place your control plane nodes behind a TCP
|
||||
forwarding load balancer. This load balancer distributes traffic to all
|
||||
healthy control plane nodes in its target list. The health check for
|
||||
an apiserver is a TCP check on the port the kube-apiserver listens on
|
||||
(default value `:6443`).
|
||||
- In a cloud environment you should place your control plane nodes behind a TCP
|
||||
forwarding load balancer. This load balancer distributes traffic to all
|
||||
healthy control plane nodes in its target list. The health check for
|
||||
an apiserver is a TCP check on the port the kube-apiserver listens on
|
||||
(default value `:6443`).
|
||||
|
||||
- It is not recommended to use an IP address directly in a cloud environment.
|
||||
- It is not recommended to use an IP address directly in a cloud environment.
|
||||
|
||||
- The load balancer must be able to communicate with all control plane nodes
|
||||
on the apiserver port. It must also allow incoming traffic on its
|
||||
listening port.
|
||||
- The load balancer must be able to communicate with all control plane nodes
|
||||
on the apiserver port. It must also allow incoming traffic on its
|
||||
listening port.
|
||||
|
||||
- Make sure the address of the load balancer always matches
|
||||
the address of kubeadm's `ControlPlaneEndpoint`.
|
||||
- Make sure the address of the load balancer always matches
|
||||
the address of kubeadm's `ControlPlaneEndpoint`.
|
||||
|
||||
- Read the [Options for Software Load Balancing](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing)
|
||||
guide for more details.
|
||||
- Read the [Options for Software Load Balancing](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing)
|
||||
guide for more details.
|
||||
-->
|
||||
1. 创建一个名为 kube-apiserver 的负载均衡器解析 DNS。
|
||||
1. 创建一个名为 kube-apiserver 的负载均衡器解析 DNS。
|
||||
|
||||
- 在云环境中,应该将控制平面节点放置在 TCP 后面转发负载平衡。
|
||||
该负载均衡器将流量分配给目标列表中所有运行状况良好的控制平面节点。
|
||||
API 服务器的健康检查是在 kube-apiserver 的监听端口(默认值 `:6443`)
|
||||
上进行的一个 TCP 检查。
|
||||
- 在云环境中,应该将控制平面节点放置在 TCP 转发负载平衡后面。
|
||||
该负载均衡器将流量分配给目标列表中所有运行状况良好的控制平面节点。
|
||||
API 服务器的健康检查是在 kube-apiserver 的监听端口(默认值 `:6443`)
|
||||
上进行的一个 TCP 检查。
|
||||
|
||||
- 不建议在云环境中直接使用 IP 地址。
|
||||
- 不建议在云环境中直接使用 IP 地址。
|
||||
|
||||
- 负载均衡器必须能够在 API 服务器端口上与所有控制平面节点通信。
|
||||
它还必须允许其监听端口的入站流量。
|
||||
- 负载均衡器必须能够在 API 服务器端口上与所有控制平面节点通信。
|
||||
它还必须允许其监听端口的入站流量。
|
||||
|
||||
- 确保负载均衡器的地址始终匹配 kubeadm 的 `ControlPlaneEndpoint` 地址。
|
||||
- 确保负载均衡器的地址始终匹配 kubeadm 的 `ControlPlaneEndpoint` 地址。
|
||||
|
||||
- 阅读[软件负载平衡选项指南](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing)以获取更多详细信息。
|
||||
- 阅读[软件负载平衡选项指南](https://git.k8s.io/kubeadm/docs/ha-considerations.md#options-for-software-load-balancing)
|
||||
以获取更多详细信息。
|
||||
|
||||
<!--
|
||||
1. Add the first control plane nodes to the load balancer and test the
|
||||
connection:
|
||||
1. Add the first control plane nodes to the load balancer and test the
|
||||
connection:
|
||||
|
||||
```sh
|
||||
nc -v LOAD_BALANCER_IP PORT
|
||||
```
|
||||
```sh
|
||||
nc -v LOAD_BALANCER_IP PORT
|
||||
```
|
||||
|
||||
- A connection refused error is expected because the apiserver is not yet
|
||||
running. A timeout, however, means the load balancer cannot communicate
|
||||
with the control plane node. If a timeout occurs, reconfigure the load
|
||||
balancer to communicate with the control plane node.
|
||||
- A connection refused error is expected because the apiserver is not yet
|
||||
running. A timeout, however, means the load balancer cannot communicate
|
||||
with the control plane node. If a timeout occurs, reconfigure the load
|
||||
balancer to communicate with the control plane node.
|
||||
|
||||
1. Add the remaining control plane nodes to the load balancer target group.
|
||||
1. Add the remaining control plane nodes to the load balancer target group.
|
||||
-->
|
||||
2. 添加第一个控制平面节点到负载均衡器并测试连接:
|
||||
2. 添加第一个控制平面节点到负载均衡器并测试连接:
|
||||
|
||||
```shell
|
||||
nc -v LOAD_BALANCER_IP PORT
|
||||
```
|
||||
```shell
|
||||
nc -v LOAD_BALANCER_IP PORT
|
||||
```
|
||||
|
||||
- 由于 apiserver 尚未运行,预期会出现一个连接拒绝错误。
|
||||
然而超时意味着负载均衡器不能和控制平面节点通信。
|
||||
如果发生超时,请重新配置负载均衡器与控制平面节点进行通信。
|
||||
由于 apiserver 尚未运行,预期会出现一个连接拒绝错误。
|
||||
然而超时意味着负载均衡器不能和控制平面节点通信。
|
||||
如果发生超时,请重新配置负载均衡器与控制平面节点进行通信。
|
||||
|
||||
3. 将其余控制平面节点添加到负载均衡器目标组。
|
||||
3. 将其余控制平面节点添加到负载均衡器目标组。
|
||||
|
||||
<!--
|
||||
## Stacked control plane and etcd nodes
|
||||
|
@ -185,151 +276,150 @@ option. Your cluster requirements may need a different configuration.
|
|||
### 控制平面节点的第一步
|
||||
|
||||
<!--
|
||||
1. Initialize the control plane:
|
||||
1. Initialize the control plane:
|
||||
|
||||
```sh
|
||||
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
|
||||
```
|
||||
- You can use the `--kubernetes-version` flag to set the Kubernetes version to use.
|
||||
It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
|
||||
- The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer.
|
||||
```sh
|
||||
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
|
||||
```
|
||||
- You can use the `--kubernetes-version` flag to set the Kubernetes version to use.
|
||||
It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
|
||||
- The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer.
|
||||
|
||||
- The `--upload-certs` flag is used to upload the certificates that should be shared
|
||||
across all the control-plane instances to the cluster. If instead, you prefer to copy certs across
|
||||
control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual
|
||||
certificate distribution](#manual-certs) section below.
|
||||
- The `--upload-certs` flag is used to upload the certificates that should be shared
|
||||
across all the control-plane instances to the cluster. If instead, you prefer to copy certs across
|
||||
control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual
|
||||
certificate distribution](#manual-certs) section below.
|
||||
-->
|
||||
1. 初始化控制平面:
|
||||
1. 初始化控制平面:
|
||||
|
||||
```shell
|
||||
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
|
||||
```
|
||||
```shell
|
||||
sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
|
||||
```
|
||||
|
||||
- 你可以使用 `--kubernetes-version` 标志来设置要使用的 Kubernetes 版本。
|
||||
建议将 kubeadm、kubelet、kubectl 和 Kubernetes 的版本匹配。
|
||||
- 这个 `--control-plane-endpoint` 标志应该被设置成负载均衡器的地址或 DNS 和端口。
|
||||
- 这个 `--upload-certs` 标志用来将在所有控制平面实例之间的共享证书上传到集群。
|
||||
如果正好相反,你更喜欢手动地通过控制平面节点或者使用自动化
|
||||
工具复制证书,请删除此标志并参考如下部分[证书分配手册](#manual-certs)。
|
||||
- 你可以使用 `--kubernetes-version` 标志来设置要使用的 Kubernetes 版本。
|
||||
建议将 kubeadm、kubelet、kubectl 和 Kubernetes 的版本匹配。
|
||||
- 这个 `--control-plane-endpoint` 标志应该被设置成负载均衡器的地址或 DNS 和端口。
|
||||
- 这个 `--upload-certs` 标志用来将在所有控制平面实例之间的共享证书上传到集群。
|
||||
如果正好相反,你更喜欢手动地通过控制平面节点或者使用自动化工具复制证书,
|
||||
请删除此标志并参考如下部分[证书分配手册](#manual-certs)。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed, therefore if you want
|
||||
to use the [kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/) you must add the `certificateKey` field in the appropriate config locations (under `InitConfiguration` and `JoinConfiguration: controlPlane`).
|
||||
-->
|
||||
标志 `kubeadm init`、`--config` 和 `--certificate-key` 不能混合使用,
|
||||
因此如果你要使用
|
||||
[kubeadm 配置](/docs/reference/config-api/kubeadm-config.v1beta3/),你必须在相应的配置文件
|
||||
(位于 `InitConfiguration` 和 `JoinConfiguration: controlPlane`)添加 `certificateKey` 字段。
|
||||
{{< /note >}}
|
||||
<!--
|
||||
The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed, therefore if you want
|
||||
to use the [kubeadm configuration](/docs/reference/config-api/kubeadm-config.v1beta3/) you must add the `certificateKey` field in the appropriate config locations (under `InitConfiguration` and `JoinConfiguration: controlPlane`).
|
||||
-->
|
||||
{{< note >}}
|
||||
标志 `kubeadm init`、`--config` 和 `--certificate-key` 不能混合使用,
|
||||
因此如果你要使用
|
||||
[kubeadm 配置](/docs/reference/config-api/kubeadm-config.v1beta3/),你必须在相应的配置结构
|
||||
(位于 `InitConfiguration` 和 `JoinConfiguration: controlPlane`)添加 `certificateKey` 字段。
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
|
||||
some like Weave do not. See the [CNI network documentation](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network).
|
||||
To add a pod CIDR pass the flag `--pod-network-cidr`, or if you are using a kubeadm configuration file
|
||||
set the `podSubnet` field under the `networking` object of `ClusterConfiguration`.
|
||||
-->
|
||||
一些 CNI 网络插件如 Calico 需要 CIDR 例如 `192.168.0.0/16` 和一些像 Weave 没有。参考
|
||||
[CNI 网络文档](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)。
|
||||
通过传递 `--pod-network-cidr` 标志添加 pod CIDR,或者你可以使用 kubeadm
|
||||
配置文件,在 `ClusterConfiguration` 的 `networking` 对象下设置 `podSubnet` 字段。
|
||||
{{< /note >}}
|
||||
<!--
|
||||
Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
|
||||
some like Weave do not. See the [CNI network documentation](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network).
|
||||
To add a pod CIDR pass the flag `--pod-network-cidr`, or if you are using a kubeadm configuration file
|
||||
set the `podSubnet` field under the `networking` object of `ClusterConfiguration`.
|
||||
-->
|
||||
{{< note >}}
|
||||
一些 CNI 网络插件(如 Calico)需要设置 CIDR,例如 `192.168.0.0/16`,而另一些(如 Weave)则不需要。参考
|
||||
[CNI 网络文档](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)。
|
||||
通过传递 `--pod-network-cidr` 标志添加 pod CIDR,或者你可以使用 kubeadm
|
||||
配置文件,在 `ClusterConfiguration` 的 `networking` 对象下设置 `podSubnet` 字段。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
- The output looks similar to:
|
||||
-->
|
||||
- 输出类似于:
|
||||
<!--
|
||||
- The output looks similar to:
|
||||
-->
|
||||
- 输出类似于:
|
||||
|
||||
```sh
|
||||
...
|
||||
You can now join any number of control-plane node by running the following command on each as a root:
|
||||
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
|
||||
```sh
|
||||
...
|
||||
You can now join any number of control-plane node by running the following command on each as a root:
|
||||
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
|
||||
|
||||
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
|
||||
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
|
||||
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
|
||||
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.
|
||||
|
||||
Then you can join any number of worker nodes by running the following on each as root:
|
||||
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
|
||||
```
|
||||
Then you can join any number of worker nodes by running the following on each as root:
|
||||
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
|
||||
```
|
||||
|
||||
<!--
|
||||
- Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
|
||||
- When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
|
||||
are encrypted and uploaded in the `kubeadm-certs` Secret.
|
||||
- To re-upload the certificates and generate a new decryption key, use the following command on a control plane
|
||||
node that is already joined to the cluster:
|
||||
-->
|
||||
- 将此输出复制到文本文件。 稍后你将需要它来将控制平面节点和工作节点加入集群。
|
||||
- 当 `--upload-certs` 与 `kubeadm init` 一起使用时,主控制平面的证书
|
||||
被加密并上传到 `kubeadm-certs` Secret 中。
|
||||
- 要重新上传证书并生成新的解密密钥,请在已加入集群节点的控制平面上使用以下命令:
|
||||
<!--
|
||||
- Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
|
||||
- When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
|
||||
are encrypted and uploaded in the `kubeadm-certs` Secret.
|
||||
- To re-upload the certificates and generate a new decryption key, use the following command on a control plane
|
||||
node that is already joined to the cluster:
|
||||
-->
|
||||
- 将此输出复制到文本文件。 稍后你将需要它来将控制平面节点和工作节点加入集群。
|
||||
- 当使用 `--upload-certs` 调用 `kubeadm init` 时,主控制平面的证书被加密并上传到 `kubeadm-certs` Secret 中。
|
||||
- 要重新上传证书并生成新的解密密钥,请在已加入集群节点的控制平面上使用以下命令:
|
||||
|
||||
```shell
|
||||
sudo kubeadm init phase upload-certs --upload-certs
|
||||
```
|
||||
<!--
|
||||
- You can also specify a custom `--certificate-key` during `init` that can later be used by `join`.
|
||||
To generate such a key you can use the following command:
|
||||
-->
|
||||
- 你还可以在 `init` 期间指定自定义的 `--certificate-key`,以后可以由 `join` 使用。
|
||||
要生成这样的密钥,可以使用以下命令:
|
||||
```shell
|
||||
sudo kubeadm init phase upload-certs --upload-certs
|
||||
```
|
||||
<!--
|
||||
- You can also specify a custom `--certificate-key` during `init` that can later be used by `join`.
|
||||
To generate such a key you can use the following command:
|
||||
-->
|
||||
- 你还可以在 `init` 期间指定自定义的 `--certificate-key`,以后可以由 `join` 使用。
|
||||
要生成这样的密钥,可以使用以下命令:
|
||||
|
||||
```shell
|
||||
kubeadm certs certificate-key
|
||||
```
|
||||
```shell
|
||||
kubeadm certs certificate-key
|
||||
```
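As a rough sketch of how these pieces fit together (not part of the original page; the endpoint, token, and hash below simply reuse the illustrative values shown earlier on this page), you could generate the key first and then pass the same value to both `kubeadm init` and a later `kubeadm join --control-plane`:

```shell
# Hypothetical workflow sketch: generate a certificate key up front.
CERT_KEY=$(kubeadm certs certificate-key)

# Use it when initializing the first control plane node so that the uploaded
# certificates are encrypted with this key.
sudo kubeadm init --control-plane-endpoint "192.168.0.200:6443" \
  --upload-certs --certificate-key "${CERT_KEY}"

# Reuse the same key later when joining an additional control plane node.
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv \
  --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 \
  --control-plane --certificate-key "${CERT_KEY}"
```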
|
||||
<!--
|
||||
The `kubeadm-certs` Secret and decryption key expire after two hours.
|
||||
-->
|
||||
{{< note >}}
|
||||
`kubeadm-certs` Secret 和解密密钥会在两个小时后失效。
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `kubeadm-certs` Secret and decryption key expire after two hours.
|
||||
-->
|
||||
`kubeadm-certs` 密钥和解密密钥会在两个小时后失效。
|
||||
{{< /note >}}
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
As stated in the command output, the certificate key gives access to cluster sensitive data, keep it secret!
|
||||
-->
|
||||
正如命令输出中所述,证书密钥可访问集群敏感数据。请妥善保管!
|
||||
{{< /caution >}}
|
||||
<!--
|
||||
As stated in the command output, the certificate key gives access to cluster sensitive data, keep it secret!
|
||||
-->
|
||||
{{< caution >}}
|
||||
正如命令输出中所述,证书密钥可访问集群敏感数据。请妥善保管!
|
||||
{{< /caution >}}
|
||||
|
||||
<!--
|
||||
1. Apply the CNI plugin of your choice:
|
||||
[Follow these instructions](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network) to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm configuration file if applicable.
|
||||
|
||||
In this example we are using Weave Net:
|
||||
1. Apply the CNI plugin of your choice:
|
||||
[Follow these instructions](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network) to install the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm configuration file (if applicable).
|
||||
-->
|
||||
2. 应用你所选择的 CNI 插件:
|
||||
[请遵循以下指示](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)
|
||||
安装 CNI 提供程序。如果适用,请确保配置与 kubeadm 配置文件中指定的 Pod
|
||||
CIDR 相对应。
|
||||
2. 应用你所选择的 CNI 插件:
|
||||
[请遵循以下指示](/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network)
|
||||
安装 CNI 驱动。如果适用,请确保配置与 kubeadm 配置文件中指定的 Pod
|
||||
CIDR 相对应。
|
||||
<!--
|
||||
You must pick a network plugin that suits your use case and deploy it before you move on to next step.
|
||||
If you don't do this, you will not be able to launch your cluster properly.
|
||||
-->
|
||||
{{< note >}}
|
||||
在进行下一步之前,必须选择并部署合适的网络插件。
|
||||
否则集群不会正常运行。
|
||||
{{< /note >}}
|
||||
|
||||
在此示例中,我们使用 Weave Net:
|
||||
|
||||
```shell
|
||||
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Type the following and watch the pods of the control plane components get started:
|
||||
1. Type the following and watch the pods of the control plane components get started:
|
||||
-->
|
||||
3. 输入以下内容,并查看控制平面组件的 Pods 启动:
|
||||
3. 输入以下内容,并查看控制平面组件的 Pods 启动:
|
||||
|
||||
```shell
|
||||
kubectl get pod -n kube-system -w
|
||||
```
|
||||
```shell
|
||||
kubectl get pod -n kube-system -w
|
||||
```
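If you would rather block until the control plane Pods are ready instead of watching interactively (a sketch, not part of the original page), `kubectl wait` can be used:

```shell
# Wait up to 5 minutes for all kube-system Pods to become Ready.
kubectl wait --namespace kube-system --for=condition=Ready pods --all --timeout=300s
```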
|
||||
|
||||
<!--
|
||||
### Steps for the rest of the control plane nodes
|
||||
-->
|
||||
### 其余控制平面节点的步骤
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
Since kubeadm version 1.15 you can join multiple control-plane nodes in parallel.
|
||||
Prior to this version, you must join new control plane nodes sequentially, only after
|
||||
the first node has finished initializing.
|
||||
-->
|
||||
{{< note >}}
|
||||
从 kubeadm 1.15 版本开始,你可以并行加入多个控制平面节点。
|
||||
在此版本之前,你必须在第一个节点完成初始化之后,才能依序地添加新的控制平面节点。
|
||||
{{< /note >}}
|
||||
|
@ -337,30 +427,30 @@ the first node has finished initializing.
|
|||
<!--
|
||||
For each additional control plane node you should:
|
||||
|
||||
1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node.
|
||||
It should look something like this:
|
||||
1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node.
|
||||
It should look something like this:
|
||||
|
||||
```sh
|
||||
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
|
||||
```
|
||||
```sh
|
||||
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
|
||||
```
|
||||
|
||||
- The `--control-plane` flag tells `kubeadm join` to create a new control plane.
|
||||
- The `--certificate-key ...` will cause the control plane certificates to be downloaded
|
||||
from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.
|
||||
- The `--control-plane` flag tells `kubeadm join` to create a new control plane.
|
||||
- The `--certificate-key ...` will cause the control plane certificates to be downloaded
|
||||
from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.
|
||||
|
||||
-->
|
||||
对于每个其他控制平面节点,你应该:
|
||||
|
||||
1. 执行先前由第一个节点上的 `kubeadm init` 输出提供给你的 join 命令。
|
||||
它看起来应该像这样:
|
||||
1. 执行先前由第一个节点上的 `kubeadm init` 输出提供给你的 join 命令。
|
||||
它看起来应该像这样:
|
||||
|
||||
```sh
|
||||
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
|
||||
```
|
||||
```sh
|
||||
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
|
||||
```
|
||||
|
||||
- 这个 `--control-plane` 命令通知 `kubeadm join` 创建一个新的控制平面。
|
||||
- `--certificate-key ...` 将导致从集群中的 `kubeadm-certs` Secret 下载
|
||||
控制平面证书并使用给定的密钥进行解密。
|
||||
- 这个 `--control-plane` 标志通知 `kubeadm join` 创建一个新的控制平面。
|
||||
- `--certificate-key ...` 将导致从集群中的 `kubeadm-certs` Secret
|
||||
下载控制平面证书并使用给定的密钥进行解密。
|
||||
|
||||
<!--
|
||||
## External etcd nodes
|
||||
|
@ -377,102 +467,103 @@ in the kubeadm config file.
|
|||
<!--
|
||||
### Set up the etcd cluster
|
||||
|
||||
1. Follow [these instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.
|
||||
1. Follow [these instructions](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) to set up the etcd cluster.
|
||||
|
||||
1. Setup SSH as described [here](#manual-certs).
|
||||
1. Setup SSH as described [here](#manual-certs).
|
||||
|
||||
1. Copy the following files from any etcd node in the cluster to the first control plane node:
|
||||
1. Copy the following files from any etcd node in the cluster to the first control plane node:
|
||||
|
||||
```sh
|
||||
export CONTROL_PLANE="ubuntu@10.0.0.7"
|
||||
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
|
||||
```
|
||||
```sh
|
||||
export CONTROL_PLANE="ubuntu@10.0.0.7"
|
||||
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
|
||||
```
|
||||
|
||||
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane machine.
|
||||
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane machine.
|
||||
-->
|
||||
### 设置 ectd 集群
|
||||
|
||||
1. 按照 [这些指示](/zh/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
|
||||
去设置 etcd 集群。
|
||||
1. 按照[这些指示](/zh/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)
|
||||
去设置 etcd 集群。
|
||||
|
||||
1. 根据[这里](#manual-certs)的描述配置 SSH。
|
||||
1. 根据[这里](#manual-certs) 的描述配置 SSH。
|
||||
|
||||
1. 将以下文件从集群中的任何 etcd 节点复制到第一个控制平面节点:
|
||||
1. 将以下文件从集群中的任何 etcd 节点复制到第一个控制平面节点:
|
||||
|
||||
```shell
|
||||
export CONTROL_PLANE="ubuntu@10.0.0.7"
|
||||
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
|
||||
```
|
||||
```shell
|
||||
export CONTROL_PLANE="ubuntu@10.0.0.7"
|
||||
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
|
||||
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
|
||||
```
|
||||
|
||||
- 用第一台控制平面机的 `user@host` 替换 `CONTROL_PLANE` 的值。
|
||||
- 用第一台控制平面机的 `user@host` 替换 `CONTROL_PLANE` 的值。
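On the first control plane node, the copied files then need to end up at the paths referenced by the `kubeadm-config.yaml` shown below. A minimal sketch (assuming the files were copied into the home directory as above and that you keep the default `/etc/kubernetes/pki` layout) could be:

```shell
# Move the etcd CA and the apiserver->etcd client certificate/key into the
# locations referenced by the external etcd configuration.
sudo mkdir -p /etc/kubernetes/pki/etcd
sudo mv ~/ca.crt /etc/kubernetes/pki/etcd/ca.crt
sudo mv ~/apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.crt
sudo mv ~/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.key
```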
|
||||
|
||||
<!--
|
||||
### Set up the first control plane node
|
||||
|
||||
1. Create a file called `kubeadm-config.yaml` with the following contents:
|
||||
1. Create a file called `kubeadm-config.yaml` with the following contents:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
|
||||
etcd:
|
||||
external:
|
||||
endpoints:
|
||||
- https://ETCD_0_IP:2379
|
||||
- https://ETCD_1_IP:2379
|
||||
- https://ETCD_2_IP:2379
|
||||
caFile: /etc/kubernetes/pki/etcd/ca.crt
|
||||
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
|
||||
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
|
||||
```
|
||||
```yaml
|
||||
---
|
||||
apiVersion: kubeadm.k8s.io/v1beta3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # change this (see below)
|
||||
etcd:
|
||||
external:
|
||||
endpoints:
|
||||
- https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately
|
||||
- https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately
|
||||
- https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately
|
||||
caFile: /etc/kubernetes/pki/etcd/ca.crt
|
||||
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
|
||||
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
|
||||
```
|
||||
-->
|
||||
|
||||
### 设置第一个控制平面节点
|
||||
|
||||
1. 用以下内容创建一个名为 `kubeadm-config.yaml` 的文件:
|
||||
1. 用以下内容创建一个名为 `kubeadm-config.yaml` 的文件:
|
||||
|
||||
```yaml
|
||||
apiVersion: kubeadm.k8s.io/v1beta2
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
|
||||
etcd:
|
||||
external:
|
||||
endpoints:
|
||||
- https://ETCD_0_IP:2379
|
||||
- https://ETCD_1_IP:2379
|
||||
- https://ETCD_2_IP:2379
|
||||
caFile: /etc/kubernetes/pki/etcd/ca.crt
|
||||
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
|
||||
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
|
||||
```
|
||||
```yaml
|
||||
---
|
||||
apiVersion: kubeadm.k8s.io/v1beta3
|
||||
kind: ClusterConfiguration
|
||||
kubernetesVersion: stable
|
||||
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # change this (see below)
|
||||
etcd:
|
||||
external:
|
||||
endpoints:
|
||||
- https://ETCD_0_IP:2379 # change ETCD_0_IP appropriately
|
||||
- https://ETCD_1_IP:2379 # change ETCD_1_IP appropriately
|
||||
- https://ETCD_2_IP:2379 # change ETCD_2_IP appropriately
|
||||
caFile: /etc/kubernetes/pki/etcd/ca.crt
|
||||
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
|
||||
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
|
||||
```
|
||||
<!--
|
||||
The difference between stacked etcd and external etcd here is that the external etcd setup requires
|
||||
a configuration file with the etcd endpoints under the `external` object for `etcd`.
|
||||
In the case of the stacked etcd topology this is managed automatically.
|
||||
-->
|
||||
{{< note >}}
|
||||
这里的堆叠(stacked)etcd 和外部 etcd 之间的区别在于设置外部 etcd
|
||||
需要一个 `etcd` 的 `external` 对象下带有 etcd 端点的配置文件。
|
||||
如果是内部 etcd,是自动管理的。
|
||||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The difference between stacked etcd and external etcd here is that the external etcd setup requires
|
||||
a configuration file with the etcd endpoints under the `external` object for `etcd`.
|
||||
In the case of the stacked etcd topology this is managed automatically.
|
||||
-->
|
||||
这里的内部(stacked) etcd 和外部 etcd 之间的区别在于设置外部 etcd
|
||||
需要一个 `etcd` 的 `external` 对象下带有 etcd 端点的配置文件。
|
||||
如果是内部 etcd,是自动管理的。
|
||||
{{< /note >}}
|
||||
<!--
|
||||
- Replace the following variables in the config template with the appropriate values for your cluster:
|
||||
-->
|
||||
- 在你的集群中,将配置模板中的以下变量替换为适当值:
|
||||
|
||||
<!--
|
||||
- Replace the following variables in the config template with the appropriate values for your cluster:
|
||||
-->
|
||||
- 在你的集群中,将配置模板中的以下变量替换为适当值:
|
||||
|
||||
- `LOAD_BALANCER_DNS`
|
||||
- `LOAD_BALANCER_PORT`
|
||||
- `ETCD_0_IP`
|
||||
- `ETCD_1_IP`
|
||||
- `ETCD_2_IP`
|
||||
- `LOAD_BALANCER_DNS`
|
||||
- `LOAD_BALANCER_PORT`
|
||||
- `ETCD_0_IP`
|
||||
- `ETCD_1_IP`
|
||||
- `ETCD_2_IP`
|
||||
|
||||
<!--
|
||||
The following steps are similar to the stacked etcd setup:
|
||||
|
@ -480,21 +571,25 @@ The following steps are similar to the stacked etcd setup:
|
|||
以下的步骤与设置内部(stacked)etcd 的集群是相似的:
|
||||
|
||||
<!--
|
||||
1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node.
|
||||
1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node.
|
||||
|
||||
1. Write the output join commands that are returned to a text file for later use.
|
||||
1. Write the output join commands that are returned to a text file for later use.
|
||||
|
||||
1. Apply the CNI plugin of your choice. The given example is for Weave Net:
|
||||
1. Apply the CNI plugin of your choice.
|
||||
-->
|
||||
1. 在节点上运行 `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` 命令。
|
||||
|
||||
1. 将输出中返回的 join 命令记录到一个文本文件中,以后将会用到。
|
||||
|
||||
1. 应用你选择的 CNI 插件。以下示例适用于 Weave Net:
|
||||
|
||||
```shell
|
||||
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
|
||||
```
|
||||
1. 应用你选择的 CNI 插件。
|
||||
<!--
|
||||
You must pick a network plugin that suits your use case and deploy it before you move on to next step.
|
||||
If you don't do this, you will not be able to launch your cluster properly.
|
||||
-->
|
||||
{{< note >}}
|
||||
在进行下一步之前,必须选择并部署合适的网络插件。
|
||||
否则集群不会正常运行。
|
||||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
### Steps for the rest of the control plane nodes
|
||||
|
@ -627,13 +722,12 @@ SSH is required if you want to control all nodes from a single machine.
|
|||
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
|
||||
done
|
||||
```
|
||||
|
||||
{{< caution >}}
|
||||
<!--
|
||||
Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates
|
||||
with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,
|
||||
the creation of additional nodes could fail due to a lack of required SANs.
|
||||
-->
|
||||
{{< caution >}}
|
||||
只需要复制上面列表中的证书。kubeadm 将负责生成其余证书以及加入控制平面实例所需的 SAN。
|
||||
如果你错误地复制了所有证书,由于缺少所需的 SAN,创建其他节点可能会失败。
|
||||
{{< /caution >}}
|
||||
|
|
|
@ -170,7 +170,7 @@ manually through `easyrsa`, `openssl` or `cfssl`.
|
|||
<!--
|
||||
1. Generate the server certificate using the ca.key, ca.crt and server.csr:
|
||||
-->
|
||||
1. 基于 ca.key、ca.key 和 server.csr 等三个文件生成服务端证书:
|
||||
1. 基于 ca.key、ca.crt 和 server.csr 等三个文件生成服务端证书:
|
||||
|
||||
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
|
||||
-CAcreateserial -out server.crt -days 10000 \
|
||||
|
|
|
@ -0,0 +1,532 @@
|
|||
---
|
||||
title: 在集群中使用级联删除
|
||||
content_type: task
|
||||
---
|
||||
|
||||
<!--
|
||||
title: Use Cascading Deletion in a Cluster
|
||||
content_type: task
|
||||
-->
|
||||
|
||||
<!--overview-->
|
||||
|
||||
<!--
|
||||
This page shows you how to specify the type of [cascading deletion](/docs/concepts/workloads/controllers/garbage-collection/#cascading-deletion)
|
||||
to use in your cluster during {{<glossary_tooltip text="garbage collection" term_id="garbage-collection">}}.
|
||||
-->
|
||||
本页面向你展示如何设置在你的集群执行{{<glossary_tooltip text="垃圾收集" term_id="garbage-collection">}}
|
||||
时要使用的[级联删除](/zh/docs/concepts/workloads/controllers/garbage-collection/#cascading-deletion)
|
||||
类型。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}}
|
||||
|
||||
<!--
|
||||
You also need to [create a sample Deployment](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment)
|
||||
to experiment with the different types of cascading deletion. You will need to
|
||||
recreate the Deployment for each type.
|
||||
-->
|
||||
你还需要[创建一个 Deployment 示例](/zh/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment)
|
||||
以试验不同类型的级联删除。你需要为每种级联删除类型来重建 Deployment。
|
||||
|
||||
<!--
|
||||
## Check owner references on your pods
|
||||
|
||||
Check that the `ownerReferences` field is present on your pods:
|
||||
-->
|
||||
## 检查 Pod 上的属主引用 {#check-owner-references-on-your-pods}
|
||||
|
||||
检查确认你的 Pods 上存在 `ownerReferences` 字段:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=nginx --output=yaml
|
||||
```
|
||||
|
||||
<!--
|
||||
The output has an `ownerReferences` field similar to this:
|
||||
-->
|
||||
输出中包含 `ownerReferences` 字段,类似这样:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
...
|
||||
ownerReferences:
|
||||
- apiVersion: apps/v1
|
||||
blockOwnerDeletion: true
|
||||
controller: true
|
||||
kind: ReplicaSet
|
||||
name: nginx-deployment-6b474476c4
|
||||
uid: 4fdcd81c-bd5d-41f7-97af-3a3b759af9a7
|
||||
...
|
||||
```
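If you prefer a more compact view than the full YAML above, a small JSONPath query (a sketch, assuming the same `app=nginx` label used in this task) prints each Pod together with the kind and name of its owner:

```shell
# List each Pod and its first owner reference (typically the ReplicaSet).
kubectl get pods -l app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}{end}'
```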
|
||||
|
||||
<!--
|
||||
## Use foreground cascading deletion {#use-foreground-cascading-deletion}
|
||||
|
||||
By default, Kubernetes uses [background cascading deletion](/docs/concepts/workloads/controllers/garbage-collection/#background-deletion)
|
||||
to delete dependents of an object. You can switch to foreground cascading deletion
|
||||
using either `kubectl` or the Kubernetes API, depending on the Kubernetes
|
||||
version your cluster runs. {{<version-check>}}
|
||||
-->
|
||||
## 使用前台级联删除 {#use-foreground-cascading-deletion}
|
||||
|
||||
默认情况下,Kubernetes 使用[后台级联删除](/zh/docs/concepts/workloads/controllers/garbage-collection/#background-deletion)
|
||||
以删除依赖某对象的其他对象。取决于你的集群所运行的 Kubernetes 版本,
|
||||
你可以使用 `kubectl` 或者 Kubernetes API 来切换到前台级联删除。
|
||||
{{<version-check>}}
|
||||
|
||||
{{<tabs name="foreground_deletion">}}
|
||||
{{% tab name="Kubernetes 1.20.x 及更新版本" %}}
|
||||
|
||||
<!--
|
||||
You can delete objects using foreground cascading deletion using `kubectl` or the
|
||||
Kubernetes API.
|
||||
-->
|
||||
你可以使用 `kubectl` 或者 Kubernetes API 来基于前台级联删除来删除对象。
|
||||
|
||||
<!--
|
||||
**Using kubectl**
|
||||
|
||||
Run the following command:
|
||||
-->
|
||||
**使用 kubectl**
|
||||
|
||||
运行下面的命令:
|
||||
|
||||
<!--TODO: verify release after which the --cascade flag is switched to a string in https://github.com/kubernetes/kubectl/commit/fd930e3995957b0093ecc4b9fd8b0525d94d3b4e-->
|
||||
|
||||
```shell
|
||||
kubectl delete deployment nginx-deployment --cascade=foreground
|
||||
```
|
||||
|
||||
<!--
|
||||
**Using the Kubernetes API**
|
||||
-->
|
||||
**使用 Kubernetes API**
|
||||
|
||||
<!--
|
||||
1. Start a local proxy session:
|
||||
-->
|
||||
1. 启动一个本地代理会话:
|
||||
|
||||
```shell
|
||||
kubectl proxy --port=8080
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Use `curl` to trigger deletion:
|
||||
-->
|
||||
2. 使用 `curl` 来触发删除操作:
|
||||
|
||||
```shell
|
||||
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
|
||||
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
|
||||
-H "Content-Type: application/json"
|
||||
```
|
||||
|
||||
<!--
|
||||
The output contains a `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}}
|
||||
like this:
|
||||
-->
|
||||
输出中包含 `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}},
|
||||
类似这样:
|
||||
|
||||
```
|
||||
"kind": "Deployment",
|
||||
"apiVersion": "apps/v1",
|
||||
"metadata": {
|
||||
"name": "nginx-deployment",
|
||||
"namespace": "default",
|
||||
"uid": "d1ce1b02-cae8-4288-8a53-30e84d8fa505",
|
||||
"resourceVersion": "1363097",
|
||||
"creationTimestamp": "2021-07-08T20:24:37Z",
|
||||
"deletionTimestamp": "2021-07-08T20:27:39Z",
|
||||
"finalizers": [
|
||||
"foregroundDeletion"
|
||||
]
|
||||
...
|
||||
```
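One way to observe the foreground behavior (a sketch, not part of the original task) is to keep listing the objects while the finalizer is present: the Deployment remains visible until its dependents are gone.

```shell
# The Deployment stays listed (with the foregroundDeletion finalizer set)
# until all of its ReplicaSets and Pods have been removed.
kubectl get deployment nginx-deployment
kubectl get replicasets,pods -l app=nginx
```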
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Kubernetes 1.20.x 之前的版本" %}}
|
||||
|
||||
<!--
|
||||
You can delete objects using foreground cascading deletion by calling the
|
||||
Kubernetes API.
|
||||
|
||||
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
|
||||
-->
|
||||
你可以通过调用 Kubernetes API 来基于前台级联删除模式删除对象。
|
||||
|
||||
进一步的细节,可阅读[特定于你的 Kubernetes 版本的文档](/zh/docs/home/supported-doc-versions)。
|
||||
|
||||
<!--
|
||||
1. Start a local proxy session:
|
||||
-->
|
||||
1. 启动一个本地代理会话:
|
||||
|
||||
```shell
|
||||
kubectl proxy --port=8080
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Use `curl` to trigger deletion:
|
||||
-->
|
||||
2. 使用 `curl` 来触发删除操作:
|
||||
|
||||
```shell
|
||||
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
|
||||
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
|
||||
-H "Content-Type: application/json"
|
||||
```
|
||||
|
||||
<!--
|
||||
The output contains a `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}}
|
||||
like this:
|
||||
-->
|
||||
输出中包含 `foregroundDeletion` {{<glossary_tooltip text="finalizer" term_id="finalizer">}},
|
||||
类似这样:
|
||||
|
||||
```none
|
||||
"kind": "Deployment",
|
||||
"apiVersion": "apps/v1",
|
||||
"metadata": {
|
||||
"name": "nginx-deployment",
|
||||
"namespace": "default",
|
||||
"uid": "d1ce1b02-cae8-4288-8a53-30e84d8fa505",
|
||||
"resourceVersion": "1363097",
|
||||
"creationTimestamp": "2021-07-08T20:24:37Z",
|
||||
"deletionTimestamp": "2021-07-08T20:27:39Z",
|
||||
"finalizers": [
|
||||
"foregroundDeletion"
|
||||
]
|
||||
...
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{</tabs>}}
|
||||
|
||||
<!--
|
||||
## Use background cascading deletion {#use-background-cascading-deletion}
|
||||
-->
|
||||
## 使用后台级联删除 {#use-background-cascading-deletion}
|
||||
|
||||
<!--
|
||||
1. [Create a sample Deployment](/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment).
|
||||
1. Use either `kubectl` or the Kubernetes API to delete the Deployment,
|
||||
depending on the Kubernetes version your cluster runs. {{<version-check>}}
|
||||
-->
|
||||
1. [创建一个 Deployment 示例](/zh/docs/tasks/run-application/run-stateless-application-deployment/#creating-and-exploring-an-nginx-deployment)。
|
||||
1. 基于你的集群所运行的 Kubernetes 版本,使用 `kubectl` 或者 Kubernetes API 来删除 Deployment。
|
||||
{{<version-check>}}
|
||||
|
||||
{{<tabs name="background_deletion">}}
|
||||
{{% tab name="Kubernetes 1.20.x 及更新版本" %}}
|
||||
|
||||
<!--
|
||||
You can delete objects using background cascading deletion using `kubectl`
|
||||
or the Kubernetes API.
|
||||
|
||||
Kubernetes uses background cascading deletion by default, and does so
|
||||
even if you run the following commands without the `--cascade` flag or the
|
||||
`propagationPolicy` argument.
|
||||
-->
|
||||
你可以使用 `kubectl` 或者 Kubernetes API 来执行后台级联删除方式的对象删除操作。
|
||||
|
||||
Kubernetes 默认采用后台级联删除方式;即使你在运行下面的命令时没有指定
`--cascade` 标志或者 `propagationPolicy` 参数,也会采用这种方式来删除对象。
|
||||
|
||||
<!--
|
||||
**Using kubectl**
|
||||
|
||||
Run the following command:
|
||||
-->
|
||||
**使用 kubectl**
|
||||
|
||||
运行下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl delete deployment nginx-deployment --cascade=background
|
||||
```
|
||||
|
||||
<!--
|
||||
**Using the Kubernetes API**
|
||||
-->
|
||||
**使用 Kubernetes API**
|
||||
|
||||
<!--
|
||||
1. Start a local proxy session:
|
||||
-->
|
||||
1. 启动一个本地代理会话:
|
||||
|
||||
```shell
|
||||
kubectl proxy --port=8080
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Use `curl` to trigger deletion:
|
||||
-->
|
||||
2. 使用 `curl` 来触发删除操作:
|
||||
|
||||
```shell
|
||||
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
|
||||
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
|
||||
-H "Content-Type: application/json"
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
"kind": "Status",
|
||||
"apiVersion": "v1",
|
||||
...
|
||||
"status": "Success",
|
||||
"details": {
|
||||
"name": "nginx-deployment",
|
||||
"group": "apps",
|
||||
"kind": "deployments",
|
||||
"uid": "cc9eefb9-2d49-4445-b1c1-d261c9396456"
|
||||
}
|
||||
```
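With background deletion the API call returns immediately and the garbage collector removes the dependents afterwards; a quick follow-up check (a sketch, assuming the same sample Deployment) is to list them and watch them disappear shortly after:

```shell
# The owner is deleted right away; ReplicaSets and Pods are cleaned up
# asynchronously by the garbage collector.
kubectl get replicasets,pods -l app=nginx
```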
|
||||
{{% /tab %}}
|
||||
{{% tab name="Kubernetes 1.20.x 之前的版本" %}}
|
||||
|
||||
<!--
|
||||
Kubernetes uses background cascading deletion by default, and does so
|
||||
even if you run the following commands without the `--cascade` flag or the
|
||||
`propagationPolicy: Background` argument.
|
||||
-->
|
||||
Kubernetes 默认采用后台级联删除方式;即使你在运行下面的命令时没有指定
`--cascade` 标志或者 `propagationPolicy: Background` 参数,也会采用这种方式来删除对象。
|
||||
|
||||
<!--
|
||||
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
|
||||
-->
|
||||
进一步的细节,可阅读[特定于你的 Kubernetes 版本的文档](/zh/docs/home/supported-doc-versions)。
|
||||
|
||||
<!--
|
||||
**Using kubectl**
|
||||
|
||||
Run the following command:
|
||||
-->
|
||||
**使用 kubectl**
|
||||
|
||||
运行下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl delete deployment nginx-deployment --cascade=true
|
||||
```
|
||||
|
||||
<!--
|
||||
**Using the Kubernetes API**
|
||||
-->
|
||||
**使用 Kubernetes API**
|
||||
|
||||
<!--
|
||||
1. Start a local proxy session:
|
||||
-->
|
||||
1. 启动一个本地代理会话:
|
||||
|
||||
```shell
|
||||
kubectl proxy --port=8080
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Use `curl` to trigger deletion:
|
||||
-->
|
||||
2. 使用 `curl` 来触发删除操作:
|
||||
|
||||
```shell
|
||||
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
|
||||
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
|
||||
-H "Content-Type: application/json"
|
||||
```
|
||||
|
||||
<!--
|
||||
The output is similar to this:
|
||||
-->
|
||||
输出类似于:
|
||||
|
||||
```
|
||||
"kind": "Status",
|
||||
"apiVersion": "v1",
|
||||
...
|
||||
"status": "Success",
|
||||
"details": {
|
||||
"name": "nginx-deployment",
|
||||
"group": "apps",
|
||||
"kind": "deployments",
|
||||
"uid": "cc9eefb9-2d49-4445-b1c1-d261c9396456"
|
||||
}
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{</tabs>}}
|
||||
|
||||
<!--
|
||||
## Delete owner objects and orphan dependents {#set-orphan-deletion-policy}
|
||||
|
||||
By default, when you tell Kubernetes to delete an object, the
|
||||
{{<glossary_tooltip text="controller" term_id="controller">}} also deletes
|
||||
dependent objects. You can make Kubernetes *orphan* these dependents using
|
||||
`kubectl` or the Kubernetes API, depending on the Kubernetes version your
|
||||
cluster runs. {{<version-check>}}
|
||||
-->
|
||||
## 删除属主对象和孤立的依赖对象 {#set-orphan-deletion-policy}
|
||||
|
||||
默认情况下,当你告诉 Kubernetes 删除某个对象时,
|
||||
{{<glossary_tooltip text="控制器" term_id="controller">}} 也会删除依赖该对象
|
||||
的其他对象。
|
||||
取决于你的集群所运行的 Kubernetes 版本,你也可以使用 `kubectl` 或者 Kubernetes
|
||||
API 来让 Kubernetes *孤立* 这些依赖对象。{{<version-check>}}
|
||||
|
||||
{{<tabs name="orphan_objects">}}
|
||||
{{% tab name="Kubernetes 1.20.x 及更新版本" %}}
|
||||
|
||||
<!--
|
||||
**Using kubectl**
|
||||
|
||||
Run the following command:
|
||||
-->
|
||||
**使用 kubectl**
|
||||
|
||||
运行下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl delete deployment nginx-deployment --cascade=orphan
|
||||
```
|
||||
|
||||
<!--
|
||||
**Using the Kubernetes API**
|
||||
-->
|
||||
**使用 Kubernetes API**
|
||||
|
||||
<!--
|
||||
1. Start a local proxy session:
|
||||
-->
|
||||
1. 启动一个本地代理会话:
|
||||
|
||||
```shell
|
||||
kubectl proxy --port=8080
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Use `curl` to trigger deletion:
|
||||
-->
|
||||
2. 使用 `curl` 来触发删除操作:
|
||||
|
||||
```shell
|
||||
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
|
||||
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
|
||||
-H "Content-Type: application/json"
|
||||
```
|
||||
|
||||
<!--
|
||||
The output contains `orphan` in the `finalizers` field, similar to this:
|
||||
-->
|
||||
输出中在 `finalizers` 字段中包含 `orphan`,如下所示:
|
||||
|
||||
```
|
||||
"kind": "Deployment",
|
||||
"apiVersion": "apps/v1",
|
||||
"namespace": "default",
|
||||
"uid": "6f577034-42a0-479d-be21-78018c466f1f",
|
||||
"creationTimestamp": "2021-07-09T16:46:37Z",
|
||||
"deletionTimestamp": "2021-07-09T16:47:08Z",
|
||||
"deletionGracePeriodSeconds": 0,
|
||||
"finalizers": [
|
||||
"orphan"
|
||||
],
|
||||
...
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Kubernetes 1.20.x 之前的版本" %}}
|
||||
|
||||
<!--
|
||||
For details, read the [documentation for your Kubernetes version](/docs/home/supported-doc-versions/).
|
||||
-->
|
||||
进一步的细节,可阅读[特定于你的 Kubernetes 版本的文档](/zh/docs/home/supported-doc-versions)。
|
||||
|
||||
<!--
|
||||
**Using kubectl**
|
||||
|
||||
Run the following command:
|
||||
-->
|
||||
**使用 kubectl**
|
||||
|
||||
运行下面的命令:
|
||||
|
||||
```shell
|
||||
kubectl delete deployment nginx-deployment --cascade=orphan
|
||||
```
|
||||
|
||||
<!--
|
||||
**Using the Kubernetes API**
|
||||
-->
|
||||
**使用 Kubernetes API**
|
||||
|
||||
<!--
|
||||
1. Start a local proxy session:
|
||||
-->
|
||||
1. 启动一个本地代理会话:
|
||||
|
||||
```shell
|
||||
kubectl proxy --port=8080
|
||||
```
|
||||
|
||||
<!--
|
||||
1. Use `curl` to trigger deletion:
|
||||
-->
|
||||
2. 使用 `curl` 来触发删除操作:
|
||||
|
||||
|
||||
```shell
|
||||
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/deployments/nginx-deployment \
|
||||
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
|
||||
-H "Content-Type: application/json"
|
||||
```
|
||||
|
||||
<!--
|
||||
The output contains `orphan` in the `finalizers` field, similar to this:
|
||||
-->
|
||||
输出的 `finalizers` 字段中包含 `orphan`,如下所示:
|
||||
|
||||
```
|
||||
"kind": "Deployment",
|
||||
"apiVersion": "apps/v1",
|
||||
"namespace": "default",
|
||||
"uid": "6f577034-42a0-479d-be21-78018c466f1f",
|
||||
"creationTimestamp": "2021-07-09T16:46:37Z",
|
||||
"deletionTimestamp": "2021-07-09T16:47:08Z",
|
||||
"deletionGracePeriodSeconds": 0,
|
||||
"finalizers": [
|
||||
"orphan"
|
||||
],
|
||||
...
|
||||
```
|
||||
{{% /tab %}}
|
||||
{{</tabs>}}
|
||||
|
||||
<!--
|
||||
You can check that the Pods managed by the Deployment are still running:
|
||||
-->
|
||||
你可以检查 Deployment 所管理的 Pods 仍然处于运行状态:
|
||||
|
||||
```shell
|
||||
kubectl get pods -l app=nginx
|
||||
```
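To confirm that the dependents were really orphaned (a sketch, not in the original page), you can also inspect the surviving ReplicaSet: after orphaning, the garbage collector strips its `ownerReferences`, so the query below is expected to print nothing:

```shell
# An orphaned ReplicaSet no longer carries an owner reference to the Deployment.
kubectl get replicasets -l app=nginx -o jsonpath='{.items[*].metadata.ownerReferences}'
```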
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
<!--
|
||||
* Learn about [owners and dependents](/docs/concepts/overview/working-with-objects/owners-dependents/) in Kubernetes.
|
||||
* Learn about Kubernetes [finalizers](/docs/concepts/overview/working-with-objects/finalizers/).
|
||||
* Learn about [garbage collection](/docs/concepts/workloads/controllers/garbage-collection/).
|
||||
-->
|
||||
* 了解 Kubernetes 中的[属主与依赖](/zh/docs/concepts/overview/working-with-objects/owners-dependents/)
|
||||
* 了解 Kubernetes [finalizers](/zh/docs/concepts/overview/working-with-objects/finalizers/)
|
||||
* 了解[垃圾收集](/zh/docs/concepts/workloads/controllers/garbage-collection/)
|
||||
|
|
@ -0,0 +1,345 @@
|
|||
---
|
||||
title: 创建 Windows HostProcess Pod
|
||||
content_type: task
|
||||
weight: 20
|
||||
min-kubernetes-server-version: 1.23
|
||||
---
|
||||
|
||||
<!--
|
||||
title: Create a Windows HostProcess Pod
|
||||
content_type: task
|
||||
weight: 20
|
||||
min-kubernetes-server-version: 1.23
|
||||
-->
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
|
||||
|
||||
<!--
|
||||
Windows HostProcess containers enable you to run containerized
|
||||
workloads on a Windows host. These containers operate as
|
||||
normal processes but have access to the host network namespace,
|
||||
storage, and devices when given the appropriate user privileges.
|
||||
HostProcess containers can be used to deploy network plugins,
|
||||
storage configurations, device plugins, kube-proxy, and other
|
||||
components to Windows nodes without the need for dedicated proxies or
|
||||
the direct installation of host services.
|
||||
-->
|
||||
Windows HostProcess 容器让你能够在 Windows 主机上运行容器化负载。
|
||||
这类容器以普通的进程形式运行,但能够在具有合适用户特权的情况下,
|
||||
访问主机网络名字空间、存储和设备。HostProcess 容器可用来在 Windows
|
||||
节点上部署网络插件、存储配置、设备插件、kube-proxy 以及其他组件,
|
||||
同时不需要配置专用的代理或者直接安装主机服务。
|
||||
|
||||
<!--
|
||||
Administrative tasks such as installation of security patches, event
|
||||
log collection, and more can be performed without requiring cluster operators to
|
||||
log onto each Window node. HostProcess containers can run as any user that is
|
||||
available on the host or is in the domain of the host machine, allowing administrators
|
||||
to restrict resource access through user permissions. While neither filesystem or process
|
||||
isolation are supported, a new volume is created on the host upon starting the container
|
||||
to give it a clean and consolidated workspace. HostProcess containers can also be built on
|
||||
top of existing Windows base images and do not inherit the same
|
||||
[compatibility requirements](https://docs.microsoft.com/virtualization/windowscontainers/deploy-containers/version-compatibility)
|
||||
as Windows server containers, meaning that the version of the base images does not need
|
||||
to match that of the host. It is, however, recommended that you use the same base image
|
||||
version as your Windows Server container workloads to ensure you do not have any unused
|
||||
images taking up space on the node. HostProcess containers also support
|
||||
[volume mounts](./create-hostprocess-pod#volume-mounts) within the container volume.
|
||||
-->
|
||||
类似于安装安全补丁、事件日志收集等这类管理性质的任务可以在不需要集群操作员登录到每个
|
||||
Windows 节点的前提下执行。HostProcess 容器可以以主机上存在的任何用户账户来运行,
|
||||
也可以以主机所在域中的用户账户运行,这样管理员可以通过用户许可权限来限制资源访问。
|
||||
尽管文件系统和进程隔离都不支持,在启动容器时会在主机上创建一个新的卷,
|
||||
为其提供一个干净的、整合的工作空间。HostProcess 容器也可以基于现有的 Windows
|
||||
基础镜像来制作,并且不再有 Windows 服务器容器所带有的那些
|
||||
[兼容性需求](https://docs.microsoft.com/virtualization/windowscontainers/deploy-containers/version-compatibility),
|
||||
这意味着基础镜像的版本不必与主机操作系统的版本匹配。
|
||||
不过,仍然建议你像使用 Windows 服务器容器负载那样,使用相同的基础镜像版本,
|
||||
这样你就不会有一些未使用的镜像占用节点上的存储空间。HostProcess 容器也支持
|
||||
在容器卷内执行[卷挂载](./create-hostprocess-pod#volume-mounts)。
|
||||
|
||||
<!--
|
||||
### When should I use a Windows HostProcess container?
|
||||
|
||||
- When you need to perform tasks which require the networking namespace of the host.
|
||||
HostProcess containers have access to the host's network interfaces and IP addresses.
|
||||
- You need access to resources on the host such as the filesystem, event logs, etc.
|
||||
- Installation of specific device drivers or Windows services.
|
||||
- Consolidation of administrative tasks and security policies. This reduces the degree of
|
||||
privileges needed by Windows nodes.
|
||||
-->
|
||||
### 我何时该使用 Windows HostProcess 容器?
|
||||
|
||||
- 当你准备执行需要访问主机上网络名字空间的任务时,HostProcess
|
||||
容器能够访问主机上的网络接口和 IP 地址。
|
||||
- 当你需要访问主机上的资源,如文件系统、事件日志等等。
|
||||
- 需要安装特定的设备驱动或者 Windows 服务时。
|
||||
- 需要对管理任务和安全策略进行整合时。使用 HostProcess 容器能够缩小 Windows
|
||||
节点上所需要的特权范围。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
<!-- change this when graduating to stable -->
|
||||
|
||||
<!--
|
||||
This task guide is specific to Kubernetes v{{< skew currentVersion >}}.
|
||||
If you are not running Kubernetes v{{< skew currentVersion >}}, check the documentation for
|
||||
that version of Kubernetes.
|
||||
|
||||
In Kubernetes {{< skew currentVersion >}}, the HostProcess container feature is enabled by default. The kubelet will
|
||||
communicate with containerd directly by passing the hostprocess flag via CRI. You can use the
|
||||
latest version of containerd (v1.6+) to run HostProcess containers.
|
||||
[How to install containerd.](/docs/setup/production-environment/container-runtimes/#containerd)
|
||||
-->
|
||||
本任务指南是特定于 Kubernetes v{{< skew currentVersion >}} 的。
|
||||
如果你运行的不是 Kubernetes v{{< skew currentVersion >}},请移步访问正确
|
||||
版本的 Kubernetes 文档。
|
||||
|
||||
在 Kubernetes v{{< skew currentVersion >}} 中,HostProcess 容器功能特性默认是启用的。
|
||||
kubelet 会直接与 containerd 通信,通过 CRI 将主机进程标志传递过去。
|
||||
你可以使用 containerd 的最新版本(v1.6+)来运行 HostProcess 容器。
|
||||
参阅[如何安装 containerd](/zh/docs/setup/production-environment/container-runtimes/#containerd)。
|
||||
|
||||
<!--
|
||||
To *disable* HostProcess containers you need to pass the following feature gate flag to the
|
||||
**kubelet** and **kube-apiserver**:
|
||||
-->
|
||||
要 *禁用* HostProcess 容器特性,你需要为 **kubelet** 和 **kube-apiserver**
|
||||
设置下面的特性门控标志:
|
||||
|
||||
```powershell
|
||||
--feature-gates=WindowsHostProcessContainers=false
|
||||
```
|
||||
|
||||
<!--
|
||||
See [Features Gates](/docs/reference/command-line-tools-reference/feature-gates/#overview)
|
||||
documentation for more details.
|
||||
-->
|
||||
进一步的细节可参阅[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/#overview)文档。
|
||||
|
||||
<!--
|
||||
## Limitations
|
||||
|
||||
These limitations are relevant for Kubernetes v{{< skew currentVersion >}}:
|
||||
-->
|
||||
## 限制 {#limitations}
|
||||
|
||||
以下限制是与 Kubernetes v{{< skew currentVersion >}} 相关的:
|
||||
|
||||
<!--
|
||||
- HostProcess containers require containerd 1.6 or higher
|
||||
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
|
||||
- HostProcess pods can only contain HostProcess containers. This is a current limitation
|
||||
of the Windows OS; non-privileged Windows containers cannot share a vNIC with the host IP namespace.
|
||||
- HostProcess containers run as a process on the host and do not have any degree of
|
||||
isolation other than resource constraints imposed on the HostProcess user account. Neither
|
||||
filesystem or Hyper-V isolation are supported for HostProcess containers.
|
||||
-->
|
||||
- HostProcess 容器需要 containerd 1.6 或更高版本的
|
||||
{{< glossary_tooltip text="容器运行时" term_id="container-runtime" >}}。
|
||||
- HostProcess Pods 只能包含 HostProcess 容器。这是在 Windows 操作系统上的约束;
|
||||
非特权的 Windows 容器不能与主机 IP 名字空间共享虚拟网卡(vNIC)。
|
||||
- HostProcess 在主机上以一个进程的形式运行,除了通过 HostProcess
|
||||
用户账号所实施的资源约束外,不提供任何形式的隔离。HostProcess 容器不支持文件系统或
|
||||
Hyper-V 隔离。
|
||||
<!--
|
||||
- Volume mounts are supported and are mounted under the container volume. See [Volume Mounts](#volume-mounts)
|
||||
- A limited set of host user accounts are available for HostProcess containers by default.
|
||||
See [Choosing a User Account](#choosing-a-user-account).
|
||||
- Resource limits (disk, memory, cpu count) are supported in the same fashion as processes
|
||||
on the host.
|
||||
- Both Named pipe mounts and Unix domain sockets are **not** supported and should instead
|
||||
be accessed via their path on the host (e.g. \\\\.\\pipe\\\*)
|
||||
-->
|
||||
- 卷挂载是被支持的,并且会被挂载到容器卷下。参见[卷挂载](#volume-mounts)。
|
||||
- 默认情况下有一组主机用户账户可供 HostProcess 容器使用。
|
||||
参见[选择用户账号](#choosing-a-user-account)。
|
||||
- 对资源约束(磁盘、内存、CPU 个数)的支持与主机上进程相同。
|
||||
- **不支持**命名管道或者 UNIX 域套接字形式的挂载,需要使用主机上的路径名来访问
|
||||
(例如,\\\\.\\pipe\\\*)。
|
||||
|
||||
<!--
|
||||
## HostProcess Pod configuration requirements
|
||||
-->
|
||||
## HostProcess Pod 配置需求 {#hostprocess-pod-configuration-requirements}
|
||||
|
||||
<!--
|
||||
Enabling a Windows HostProcess pod requires setting the right configurations in the pod security
|
||||
configuration. Of the policies defined in the [Pod Security Standards](/docs/concepts/security/pod-security-standards)
|
||||
HostProcess pods are disallowed by the baseline and restricted policies. It is therefore recommended
|
||||
that HostProcess pods run in alignment with the privileged profile.
|
||||
|
||||
When running under the privileged policy, here are
|
||||
the configurations which need to be set to enable the creation of a HostProcess pod:
|
||||
-->
|
||||
启用 Windows HostProcess Pod 需要在 Pod 安全配置中设置合适的选项。
|
||||
在 [Pod
|
||||
安全标准](/zh/docs/concepts/security/pod-security-standards)中所定义的策略中,
|
||||
HostProcess Pod 是被 baseline 和 restricted 策略所禁止的。因此建议
HostProcess Pod 运行在与 privileged 模式相看齐的策略下。
|
||||
|
||||
当运行在 privileged 策略下时,下面是要启用 HostProcess Pod 创建所需要设置的选项:
|
||||
|
||||
<table>
|
||||
<caption style="display: none"><!--Privileged policy specification-->privileged 策略规约</caption>
|
||||
<thead>
|
||||
<tr>
|
||||
<th><!--Control-->控制</th>
|
||||
<th><!--Policy-->策略</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr>
|
||||
<td style="white-space: nowrap"><a href="/zh/docs/concepts/security/pod-security-standards"><tt>securityContext.windowsOptions.hostProcess</tt></a></td>
|
||||
<td>
|
||||
<p><!--Windows pods offer the ability to run <a href="/docs/tasks/configure-pod-container/create-hostprocess-pod">
|
||||
HostProcess containers</a> which enables privileged access to the Windows node.-->
|
||||
Windows Pods 提供运行<a href="/zh/docs/tasks/configure-pod-container/create-hostprocess-pod">
|
||||
HostProcess 容器</a>的能力,这类容器能够具有对 Windows 节点的特权访问权限。</p>
|
||||
<p><strong><!--Allowed Values-->可选值</strong></p>
|
||||
<ul>
|
||||
<li><code>true</code></li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="white-space: nowrap"><a href="/zh/docs/concepts/security/pod-security-standards"><tt>hostNetwork</tt></a></td>
|
||||
<td>
|
||||
<p><!--Will be in host network by default initially. Support
|
||||
to set network to a different compartment may be desirable in
|
||||
the future.-->
|
||||
初始时将默认位于主机网络中。在未来可能会希望将网络设置到不同的隔离环境中。
|
||||
</p>
|
||||
<p><strong><!--Allowed Values-->可选值</strong></p>
|
||||
<ul>
|
||||
<li><code>true</code></li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="white-space: nowrap"><a href="/zh/docs/tasks/configure-pod-container/configure-runasusername/"><tt>securityContext.windowsOptions.runAsUsername</tt></a></td>
|
||||
<td>
|
||||
<p><!--Specification of which user the HostProcess container should run as is required for the pod spec.-->
|
||||
关于 HostProcess 容器所要使用的用户的规约,需要设置在 Pod 的规约中。
|
||||
</p>
|
||||
<p><strong><!--Allowed Values-->可选值</strong></p>
|
||||
<ul>
|
||||
<li><code>NT AUTHORITY\SYSTEM</code></li>
|
||||
<li><code>NT AUTHORITY\Local service</code></li>
|
||||
<li><code>NT AUTHORITY\NetworkService</code></li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td style="white-space: nowrap"><a href="/zh/docs/concepts/security/pod-security-standards"><tt>runAsNonRoot</tt></a></td>
|
||||
<td>
|
||||
<p><!--Because HostProcess containers have privileged access to the host, the <tt>runAsNonRoot</tt> field cannot be set to true.-->
|
||||
因为 HostProcess 容器有访问主机的特权,<tt>runAsNonRoot</tt> 字段不可以设置为 true。
|
||||
</p>
|
||||
<p><strong><!--Allowed Values-->可选值</strong></p>
|
||||
<ul>
|
||||
<li><!--Undefined/Nil-->未定义/Nil</li>
|
||||
<li><code>false</code></li>
|
||||
</ul>
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
|
||||
<!--
|
||||
### Example manifest (excerpt) {#manifest-example}
|
||||
-->
|
||||
### 配置清单示例(片段) {#manifest-example}
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
securityContext:
|
||||
windowsOptions:
|
||||
hostProcess: true
|
||||
runAsUserName: "NT AUTHORITY\\Local service"
|
||||
hostNetwork: true
|
||||
containers:
|
||||
- name: test
|
||||
image: image1:latest
|
||||
command:
|
||||
- ping
|
||||
- -t
|
||||
- 127.0.0.1
|
||||
nodeSelector:
|
||||
"kubernetes.io/os": windows
|
||||
```
|
||||
|
||||
<!--
|
||||
## Volume mounts
|
||||
|
||||
HostProcess containers support the ability to mount volumes within the container volume space.
|
||||
Applications running inside the container can access volume mounts directly via relative or
|
||||
absolute paths. An environment variable `$CONTAINER_SANDBOX_MOUNT_POINT` is set upon container
|
||||
creation and provides the absolute host path to the container volume. Relative paths are based
|
||||
upon the `.spec.containers.volumeMounts.mountPath` configuration.
|
||||
-->
|
||||
## 卷挂载 {#volume-mounts}
|
||||
|
||||
HostProcess 容器支持在容器卷空间中挂载卷的能力。
|
||||
在容器内运行的应用能够通过相对或者绝对路径直接访问卷挂载。
|
||||
环境变量 `$CONTAINER_SANDBOX_MOUNT_POINT` 在容器创建时被设置为指向容器卷的绝对主机路径。
|
||||
相对路径是基于 `.spec.containers.volumeMounts.mountPath` 配置来推导的。
|
||||
|
||||
<!--
|
||||
### Example {#volume-mount-example}
|
||||
|
||||
To access service account tokens the following path structures are supported within the container:
|
||||
-->
|
||||
### 示例 {#volume-mount-example}
|
||||
|
||||
容器内支持通过下面的路径结构来访问服务账号令牌:
|
||||
|
||||
`.\var\run\secrets\kubernetes.io\serviceaccount\`
|
||||
|
||||
`$CONTAINER_SANDBOX_MOUNT_POINT\var\run\secrets\kubernetes.io\serviceaccount\`
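A quick way to convince yourself of this path mapping (purely illustrative; the pod name and the PowerShell invocation are assumptions, not taken from this page) is to read the same token file through the environment variable from inside a running HostProcess container:

```shell
# Both path forms resolve to the same file inside the container volume.
kubectl exec -it <hostprocess-pod> -- powershell -Command \
  "Get-Content \$env:CONTAINER_SANDBOX_MOUNT_POINT\var\run\secrets\kubernetes.io\serviceaccount\token"
```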
|
||||
|
||||
<!--
|
||||
## Resource limits
|
||||
|
||||
Resource limits (disk, memory, cpu count) are applied to the job and are job wide.
|
||||
For example, with a limit of 10MB set, the memory allocated for any HostProcess job object
|
||||
will be capped at 10MB. This is the same behavior as other Windows container types.
|
||||
These limits would be specified the same way they are currently for whatever orchestrator
|
||||
or runtime is being used. The only difference is in the disk resource usage calculation
|
||||
used for resource tracking due to the difference in how HostProcess containers are bootstrapped.
|
||||
-->
|
||||
## 资源约束 {#resource-limits}
|
||||
|
||||
资源约束(磁盘、内存、CPU 个数)作用到任务(job)之上,并在整个任务范围内生效。
例如,如果内存限制设置为 10MB,为任何 HostProcess 任务对象分配的内存都不会超过 10MB。
这一行为与其他 Windows 容器类型相同。
这些限制的指定方式与你当前为所用编排系统或运行时指定限制的方式相同。
唯一的区别在于用于资源跟踪的磁盘资源用量计算方式,这一差异源于 HostProcess 容器引导方式的不同。
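As a small illustration (the DaemonSet name here is hypothetical, not from this page), such limits are declared for a HostProcess workload exactly as for any other Windows workload, for example via `kubectl set resources`:

```shell
# Apply CPU/memory requests and limits to a hypothetical HostProcess DaemonSet.
kubectl set resources daemonset/windows-host-agent \
  --requests=cpu=100m,memory=128Mi \
  --limits=cpu=500m,memory=256Mi
```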
|
||||
|
||||
<!--
|
||||
## Choosing a user account
|
||||
|
||||
HostProcess containers support the ability to run as one of three supported Windows service accounts:
|
||||
-->
|
||||
## 选择用户账号 {#choosing-a-user-account}
|
||||
|
||||
HostProcess 容器支持以三种被支持的 Windows 服务账号之一来运行:
|
||||
|
||||
- **[LocalSystem](https://docs.microsoft.com/windows/win32/services/localsystem-account)**
|
||||
- **[LocalService](https://docs.microsoft.com/windows/win32/services/localservice-account)**
|
||||
- **[NetworkService](https://docs.microsoft.com/windows/win32/services/networkservice-account)**
|
||||
|
||||
<!--
|
||||
You should select an appropriate Windows service account for each HostProcess
|
||||
container, aiming to limit the degree of privileges so as to avoid accidental (or even
|
||||
malicious) damage to the host. The LocalSystem service account has the highest level
|
||||
of privilege of the three and should be used only if absolutely necessary. Where possible,
|
||||
use the LocalService service account as it is the least privileged of the three options.
|
||||
-->
|
||||
你应该为每个 HostProcess 容器选择一个合适的 Windows 服务账号,以限制特权范围,
避免给主机带来意外的(甚至是恶意的)伤害。LocalSystem 服务账号的特权级
|
||||
在三者之中最高,只有在绝对需要的时候才应该使用。只要可能,应该使用
|
||||
LocalService 服务账号,因为该账号在三者中特权最低。
|
||||
|
|
@ -29,7 +29,7 @@ min-kubernetes-server-version: v1.14
|
|||
This tutorial shows you how to build and deploy a simple _(not production ready)_, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/). This example consists of the following components:
|
||||
-->
|
||||
本教程向您展示如何使用 Kubernetes 和 [Docker](https://www.docker.com/) 构建和部署
|
||||
一个简单的_(非面向生产)的_多层 web 应用程序。本例由以下组件组成:
|
||||
一个简单的 _(非面向生产的)_ 多层 web 应用程序。本例由以下组件组成:
|
||||
|
||||
<!--
|
||||
* A single-instance [Redis](https://www.redis.io/) to store guestbook entries
|
||||
|
|
|
@ -87,6 +87,9 @@ other = ","
|
|||
[input_placeholder_email_address]
|
||||
other = "email address"
|
||||
|
||||
[javascript_required]
|
||||
other = "JavaScript must be [enabled](https://www.enable-javascript.com/) to view this content"
|
||||
|
||||
[latest_release]
|
||||
other = "Latest Release:"
|
||||
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
<script src="/js/bootstrap-4.3.1.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
|
||||
|
||||
{{ if .Site.Params.mermaid.enable }}
|
||||
<script src="/js//mermaid.min.js" crossorigin="anonymous"></script>
|
||||
<script src="/js/mermaid.min.js" crossorigin="anonymous"></script>
|
||||
|
||||
{{ end }}
|
||||
|
||||
|
|
|
@ -3,9 +3,9 @@
|
|||
{{ $for_k8s_version := .Get "for_k8s_version" | default (.Page.Param "version")}}
|
||||
{{ $is_valid := strings.Contains $valid_states $state }}
|
||||
{{ if not $is_valid }}
|
||||
{{ errorf "%q is not a valid feature-state, use one of %q" $valid_states }}
|
||||
{{ errorf "%q is not a valid feature-state, use one of %q" $state $valid_states }}
|
||||
{{ else }}
|
||||
<div style="margin-top: 10px; margin-bottom: 10px;">
|
||||
<b>FEATURE STATE:</b> <code>Kubernetes {{ $for_k8s_version }} [{{ $state }}]</code>
|
||||
</div>
|
||||
{{ end }}
|
||||
{{ end }}
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
<figure>
|
||||
<div class="mermaid">
|
||||
{{.Inner}}
|
||||
</div>
|
||||
</figure>
|
||||
<!-- Hide content and error if JS is disabled. -->
|
||||
<noscript>
|
||||
<style type="text/css">
|
||||
.mermaid { display:none; }
|
||||
</style>
|
||||
<h4>[JavaScript must be <a href="https://www.enable-javascript.com/">enabled</a> to view content]</h4>
|
||||
<div class="alert alert-secondary callout" role="alert">
|
||||
<em class="javascript-required">{{ T "javascript_required" | markdownify }}</em>
|
||||
</div>
|
||||
</noscript>
|