Merge pull request #31368 from nate-double-u/merged-main-dev-1.24

Merged main dev 1.24
pull/31370/head
Kubernetes Prow Robot 2022-01-17 12:50:49 -08:00 committed by GitHub
commit 8e1e06f3d6
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
65 changed files with 5670 additions and 411 deletions

View File

@ -7,11 +7,16 @@ slug: are-you-ready-for-dockershim-removal
**Author:** Sergey Kanzhelev, Google. With reviews from Davanum Srinivas, Elana Hashman, Noah Kantrowitz, Rey Lejano.
{{% alert color="info" title="Poll closed" %}}
This poll closed on January 7, 2022.
{{% /alert %}}
Last year we announced that Dockershim is being deprecated: [Dockershim Deprecation FAQ](/blog/2020/12/02/dockershim-faq/).
Our current plan is to remove dockershim from the Kubernetes codebase soon.
We are looking for feedback from you on whether you are ready for dockershim
removal, and we want to ensure that you are ready when the time comes.
**Please fill out this survey: https://forms.gle/svCJmhvTv78jGdSx8**.
<del>Please fill out this survey: https://forms.gle/svCJmhvTv78jGdSx8</del>
The dockershim component that enables Docker as a Kubernetes container runtime is
being deprecated in favor of runtimes that directly use the [Container Runtime Interface](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/)
@ -25,7 +30,7 @@ are still not ready: [migrating telemetry and security agents](/docs/tasks/admin
At this point, we believe that there is feature parity between Docker and the
other runtimes. Many end-users have used our [migration guide](/docs/tasks/administer-cluster/migrating-from-dockershim/)
and are running production workload using these different runtimes. The plan of
record today is that dockershim will be removed in version 1.24, slated for
record today is that dockershim will be removed in version 1.24, slated for
release around April of next year. For those developing or running alpha and
beta versions, dockershim will be removed in December at the beginning of the
1.24 release development cycle.
@ -33,7 +38,7 @@ beta versions, dockershim will be removed in December at the beginning of the
There is only one month left to give us feedback. We want you to tell us how
ready you are.
**We are collecting opinions through this survey: [https://forms.gle/svCJmhvTv78jGdSx8](https://forms.gle/svCJmhvTv78jGdSx8)**
<del>We are collecting opinions through this survey: https://forms.gle/svCJmhvTv78jGdSx8</del>
To better understand preparedness for the dockershim removal, our survey is
asking the version of Kubernetes you are currently using, and an estimate of
when you think you will adopt Kubernetes 1.24. All the aggregated information

View File

@ -0,0 +1,103 @@
---
layout: blog
title: "Kubernetes is Moving on From Dockershim: Commitments and Next Steps"
date: 2022-01-07
slug: kubernetes-is-moving-on-from-dockershim
---
**Authors:** Sergey Kanzhelev (Google), Jim Angel (Google), Davanum Srinivas (VMware), Shannon Kularathna (Google), Chris Short (AWS), Dawn Chen (Google)
Kubernetes is removing dockershim in the upcoming v1.24 release. We're excited
to reaffirm our community values by supporting open source container runtimes,
enabling a smaller kubelet, and increasing engineering velocity for teams using
Kubernetes. If you [use Docker Engine as a container runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/)
for your Kubernetes cluster, get ready to migrate in 1.24! To check if you're
affected, refer to [Check whether dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/).
## Why we're moving away from dockershim
Docker was the first container runtime used by Kubernetes. This is one of the
reasons why Docker is so familiar to many Kubernetes users and enthusiasts.
Docker support was hardcoded into Kubernetes, in a component the project refers to
as dockershim.
As containerization became an industry standard, the Kubernetes project added support
for additional runtimes. This culminated in the implementation of the
container runtime interface (CRI), letting system components (like the kubelet)
talk to container runtimes in a standardized way. As a result, dockershim became
an anomaly in the Kubernetes project.
Dependencies on Docker and dockershim have crept into various tools
and projects in the CNCF ecosystem, resulting in fragile code.
By removing the
dockershim CRI, we're embracing the first value of CNCF: "[Fast is better than
slow](https://github.com/cncf/foundation/blob/master/charter.md#3-values)".
Stay tuned for future communications on the topic!
## Deprecation timeline
We [formally announced](/blog/2020/12/08/kubernetes-1-20-release-announcement/) the dockershim deprecation in December 2020. Full removal is targeted
in Kubernetes 1.24, in April 2022. This timeline
aligns with our [deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior),
which states that deprecated behaviors must function for at least 1 year
after their announced deprecation.
We'll support Kubernetes version 1.23, which includes
dockershim, for another year in the Kubernetes project. For managed
Kubernetes providers, vendor support is likely to last even longer, but this is
dependent on the companies themselves. Regardless, we're confident all cluster operators will have
time to migrate. If you have more questions about the dockershim removal, refer
to the [Dockershim Deprecation FAQ](/dockershim).
We asked you whether you feel prepared for the migration from dockershim in this
survey: [Are you ready for Dockershim removal](/blog/2021/11/12/are-you-ready-for-dockershim-removal/).
We had over 600 responses. To everybody who took time filling out the survey,
thank you.
The results show that we still have a lot of ground to cover to help you to
migrate smoothly. Other container runtimes exist, and have been promoted
extensively. However, many users told us they still rely on dockershim,
and sometimes have dependencies that need to be re-worked. Some of these
dependencies are outside of your control. Based on your feedback, here are some
of the steps we are taking to help.
## Our next steps
Based on the feedback you provided:
- CNCF and the 1.24 release team are committed to delivering documentation in
time for the 1.24 release. This includes more informative blog posts like this
one, updating existing code samples, tutorials, and tasks, and producing a
migration guide for cluster operators.
- We are reaching out to the rest of the CNCF community to help prepare them for
this change.
If you're part of a project with dependencies on dockershim, or if you're
interested in helping with the migration effort, please join us! There's always
room for more contributors, whether to our transition tools or to our
documentation. To get started, say hello in the
[#sig-node](https://kubernetes.slack.com/archives/C0BP8PW9G)
channel on [Kubernetes Slack](https://slack.kubernetes.io/)!
## Final thoughts
As a project, we've already seen cluster operators increasingly adopt other
container runtimes through 2021.
We believe there are no major blockers to migration. The steps we're taking to
improve the migration experience will light the path more clearly for you.
We understand that migration from dockershim is yet another action you may need to
do to keep your Kubernetes infrastructure up to date. For most of you, this step
will be straightforward and transparent. In some cases, you will encounter
hiccups or issues. The community has discussed at length whether postponing the
dockershim removal would be helpful. For example, we recently talked about it in
the [SIG Node discussion on November 11th](https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#bookmark=id.r77y11bgzid)
and in the [Kubernetes Steering committee meeting held on December 6th](https://docs.google.com/document/d/1qazwMIHGeF3iUh5xMJIJ6PDr-S3bNkT8tNLRkSiOkOU/edit#bookmark=id.m0ir406av7jx).
We already [postponed](https://github.com/kubernetes/enhancements/pull/2481/) it
once in 2021 because the adoption rate of other
runtimes was lower than we wanted, which also gave us more time to identify
potential blocking issues.
At this point, we believe that the value that you (and Kubernetes) gain from
dockershim removal makes up for the migration effort you'll have. Start planning
now to avoid surprises. We'll have more updates and guides before Kubernetes
1.24 is released.

View File

@ -0,0 +1,106 @@
---
layout: blog
title: "Meet Our Contributors - APAC (India region)"
date: 2022-01-10T12:00:00+0000
slug: meet-our-contributors-india-ep-01
canonicalUrl: https://kubernetes.dev/blog/2022/01/10/meet-our-contributors-india-ep-01/
---
**Authors & Interviewers:** [Anubhav Vardhan](https://github.com/anubha-v-ardhan), [Atharva Shinde](https://github.com/Atharva-Shinde), [Avinesh Tripathi](https://github.com/AvineshTripathi), [Debabrata Panigrahi](https://github.com/Debanitrkl), [Kunal Verma](https://github.com/verma-kunal), [Pranshu Srivastava](https://github.com/PranshuSrivastava), [Pritish Samal](https://github.com/CIPHERTron), [Purneswar Prasad](https://github.com/PurneswarPrasad), [Vedant Kakde](https://github.com/vedant-kakde)
**Editor:** [Priyanka Saggu](https://psaggu.com)
---
Good day, everyone 👋
Welcome to the first episode of the APAC edition of the "Meet Our Contributors" blog post series.
In this post, we'll introduce you to five amazing folks from the India region who have been actively contributing to the upstream Kubernetes projects in a variety of ways, as well as being the leaders or maintainers of numerous community initiatives.
💫 *So, without further ado, let's get started…*
## [Arsh Sharma](https://github.com/RinkiyaKeDad)
Arsh is currently employed with Okteto as a Developer Experience engineer. As a new contributor, he realised that 1:1 mentorship opportunities were quite beneficial in getting him started with the upstream project.
He is presently a CI Signal shadow on the Kubernetes 1.23 release team. He is also contributing to the SIG Testing and SIG Docs projects, as well as to the [cert-manager](https://github.com/cert-manager/infrastructure) tools development work that is being done under the aegis of SIG Architecture.
Arsh advises newcomers to plan their early contributions in a sustainable way.
> _I would encourage folks to contribute in a way that's sustainable. What I mean by that
> is that it's easy to be very enthusiastic early on and take up more stuff than one can
> actually handle. This can often lead to burnout in later stages. It's much more sustainable
> to work on things iteratively._
## [Kunal Kushwaha](https://github.com/kunal-kushwaha)
Kunal Kushwaha is a core member of the Kubernetes marketing council. He is also a CNCF ambassador and one of the founders of the [CNCF Students Program](https://community.cncf.io/cloud-native-students/). He also served as a Communications role shadow during the 1.22 release cycle.
At the end of his first year, Kunal began contributing to the [fabric8io kubernetes-client](https://github.com/fabric8io/kubernetes-client) project. He was then selected to work on the same project as part of Google Summer of Code. Kunal mentored people on the same project, first through Google Summer of Code then through Google Code-in.
As an open-source enthusiast, he believes that diverse participation in the community is beneficial since it introduces new perspectives and opinions and respect for one's peers. He has worked on various open-source projects, and his participation in communities has considerably assisted his development as a developer.
> _I believe if you find yourself in a place where you do not know much about the
> project, that's a good thing because now you can learn while contributing and the
> community is there to help you. It has helped me a lot in gaining skills, meeting
> people from around the world and also helping them. You can learn on the go,
> you don't have to be an expert. Make sure to also check out no code contributions
> because being a beginner is a skill and you can bring new perspectives to the
> organisation._
## [Madhav Jivarajani](https://github.com/MadhavJivrajani)
Madhav Jivarajani works on the VMware Upstream Kubernetes stability team. He began contributing to the Kubernetes project in January 2021 and has since made significant contributions to several areas of work under SIG Architecture, SIG API Machinery, and SIG ContribEx (contributor experience).
Among several significant contributions are his recent efforts toward the Archival of [design proposals](https://github.com/kubernetes/community/issues/6055), refactoring the ["groups" codebase](https://github.com/kubernetes/k8s.io/pull/2713) under k8s-infra repository to make it mockable and testable, and improving the functionality of the [GitHub k8s bot](https://github.com/kubernetes/test-infra/issues/23129).
In addition to his technical efforts, Madhav oversees many projects aimed at assisting new contributors. He organises bi-weekly "KEP reading club" sessions to help newcomers understand the process of adding new features, deprecating old ones, and making other key changes to the upstream project. He has also worked on developing [Katacoda scenarios](https://github.com/kubernetes-sigs/contributor-katacoda) to assist new contributors to become acquainted with the process of contributing to k/k. In addition to his current efforts to meet with community members every week, he has organised several [new contributors workshops (NCW)](https://www.youtube.com/watch?v=FgsXbHBRYIc).
> _I initially did not know much about Kubernetes. I joined because the community was
> super friendly. But what made me stay was not just the people, but the project itself.
> My solution to not feeling overwhelmed in the community was to gain as much context
> and knowledge into the topics that I was interested in and were being discussed. And
> as a result I continued to dig deeper into Kubernetes and the design of it.
> I am a systems nut & thus Kubernetes was an absolute goldmine for me._
## [Rajas Kakodkar](https://github.com/rajaskakodkar)
Rajas Kakodkar currently works at VMware as a Member of Technical Staff. He has been engaged in many aspects of the upstream Kubernetes project since 2019.
He is now a key contributor to the Testing special interest group. He is also active in the SIG Network community. Lately, Rajas has contributed significantly to the [NetworkPolicy++](https://docs.google.com/document/d/1AtWQy2fNa4qXRag9cCp5_HsefD7bxKe3ea2RPn8jnSs/) and [`kpng`](https://github.com/kubernetes-sigs/kpng) sub-projects.
One of the first challenges he ran across was that he was in a different time zone than the upstream project's regular meeting hours. However, async interactions on community forums progressively corrected that problem.
> _I enjoy contributing to Kubernetes not just because I get to work on
> cutting edge tech but more importantly because I get to work with
> awesome people and help in solving real world problems._
## [Rajula Vineet Reddy](https://github.com/rajula96reddy)
Rajula Vineet Reddy, a Junior Engineer at CERN, is a member of the Marketing Council team under SIG ContribEx. He also served as a release shadow for SIG Release during the 1.22 and 1.23 Kubernetes release cycles.
He started looking at the Kubernetes project as part of a university project with the help of one of his professors. Over time, he spent a significant amount of time reading the project's documentation, Slack discussions, GitHub issues, and blogs, which helped him better grasp the Kubernetes project and piqued his interest in contributing upstream. One of his key contributions was his assistance with automation in the SIG ContribEx Upstream Marketing subproject.
According to Rajula, attending project meetings and shadowing various project roles are vital for learning about the community.
> _I find the community very helpful and it's always_
> “you get back as much as you contribute”.
> _The more involved you are, the more you will understand, get to learn and
> contribute new things._
>
> _The first step to_ “come forward and start” _is hard. But it's all gonna be
> smooth after that. Just take that jump._
---
If you have any recommendations/suggestions for who we should interview next, please let us know in #sig-contribex. We're thrilled to have other folks assisting us in reaching out to even more wonderful individuals of the community. Your suggestions would be much appreciated.
We'll see you all in the next one. Till then, happy contributing, everyone! 👋

View File

@ -0,0 +1,44 @@
---
layout: blog
title: "Securing Admission Controllers"
date: 2022-01-19
slug: secure-your-admission-controllers-and-webhooks
---
**Author:** Rory McCune (Aqua Security)
[Admission control](/docs/reference/access-authn-authz/admission-controllers/) is a key part of Kubernetes security, alongside authentication and authorization. Webhook admission controllers are extensively used to help improve the security of Kubernetes clusters in a variety of ways, including restricting the privileges of workloads and ensuring that images deployed to the cluster meet your organization's security requirements.
However, as with any additional component added to a cluster, security risks can present themselves; one example is when the deployment and management of the admission controller are not handled correctly. To help admission controller users and designers manage these risks appropriately, the [security documentation](https://github.com/kubernetes/community/tree/master/sig-security#security-docs) subgroup of SIG Security has spent some time developing a [threat model for admission controllers](https://github.com/kubernetes/sig-security/tree/main/sig-security-docs/papers/admission-control). This threat model looks at likely risks which may arise from the incorrect use of admission controllers, which could allow security policies to be bypassed, or even allow an attacker to get unauthorised access to the cluster.
From the threat model, we developed a set of security best practices that should be adopted to ensure that cluster operators can get the security benefits of admission controllers whilst avoiding any risks from using them.
## Admission controllers and good practices for security
From the threat model, a couple of themes emerged around how to ensure the security of admission controllers.
### Secure webhook configuration
It's important to ensure that any security component in a cluster is well configured, and admission controllers are no different. There are a couple of security best practices to consider when using admission controllers:
* **Correctly configured TLS for all webhook traffic**. Communications between the API server and the admission controller webhook should be authenticated and encrypted to ensure that attackers who may be in a network position to view or modify this traffic cannot do so. To achieve this, the API server and webhook must use certificates from a trusted certificate authority so that they can validate each other's identities.
* **Only authenticated access allowed**. If an attacker can send an admission controller large numbers of requests, they may be able to overwhelm the service causing it to fail. Ensuring all access requires strong authentication should mitigate that risk.
* **Admission controller fails closed**. This is a security practice that has a tradeoff, so whether a cluster operator wants to configure it will depend on the cluster's threat model. If an admission controller fails closed, when the API server can't get a response from it, all deployments will fail. This stops attackers bypassing the admission controller by disabling it, but it can disrupt the cluster's operation. As clusters can have multiple webhooks, one approach to hit a middle ground might be to have critical controls fail closed and less critical controls allowed to fail open (see the configuration sketch after this list).
* **Regular reviews of webhook configuration**. Configuration mistakes can lead to security issues, so it's important that the admission controller webhook configuration is checked to make sure the settings are correct. This kind of review could be done automatically by an Infrastructure as Code scanner or manually by an administrator.
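To make the TLS and fail-closed points concrete, here is a minimal sketch of a validating webhook configuration; all names, namespaces, paths, and the rule scope are illustrative assumptions rather than anything described in this post:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook        # hypothetical name
webhooks:
  - name: policy.example.com          # hypothetical webhook name
    failurePolicy: Fail               # fail closed: reject requests when the webhook cannot be reached
    sideEffects: None
    admissionReviewVersions: ["v1"]
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        namespace: example-system     # hypothetical namespace
        name: policy-webhook          # hypothetical Service fronting the webhook
        path: /validate
        port: 443
      caBundle: "<base64-encoded CA certificate>"  # trusted CA so the API server can verify the webhook's TLS certificate
```
A less critical control could use the same structure with `failurePolicy: Ignore` to fail open instead.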
### Secure cluster configuration for admission control
In most cases, the admission controller webhook used by a cluster will be installed as a workload in the cluster. As a result, it's important to ensure that the Kubernetes security features that could impact its operation are well configured.
* **Restrict [RBAC](/docs/reference/access-authn-authz/rbac/) rights**. Any user who has rights which would allow them to modify the configuration of the webhook objects or the workload that the admission controller uses could disrupt its operation. So it's important to make sure that only cluster administrators have those rights.
* **Prevent privileged workloads**. One of the realities of container systems is that if a workload is given certain privileges, it will be possible to break out to the underlying cluster node and impact other containers on that node. Where admission controller services run in the cluster they're protecting, it's important to ensure that any requirement for privileged workloads is carefully reviewed and restricted as much as possible.
* **Strictly control external system access**. As a security service in a cluster, admission controller systems will have access to sensitive information like credentials. To reduce the risk of this information being sent outside the cluster, [network policies](/docs/concepts/services-networking/network-policies/) should be used to restrict the admission controller service's access to external networks (see the policy sketch after this list).
* **Each cluster has a dedicated webhook**. Whilst it may be possible to have admission controller webhooks that serve multiple clusters, there is a risk when using that model that an attack on the webhook service would have a larger impact where it's shared. Also, where multiple clusters use an admission controller, there will be increased complexity and access requirements, making it harder to secure.
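As an illustration of the external-access point above, here is a minimal NetworkPolicy sketch that limits the webhook workload's egress to in-cluster destinations; the namespace and labels are assumptions for the example:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-webhook-egress       # hypothetical name
  namespace: example-system           # hypothetical namespace hosting the webhook
spec:
  podSelector:
    matchLabels:
      app: policy-webhook             # hypothetical label on the admission webhook pods
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}       # allow egress only to pods inside the cluster
      # add a rule for cluster DNS (port 53) if the webhook needs name resolution
```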
### Admission controller rules
A key element of any admission controller used for Kubernetes security is the rulebase it uses. The rules need to be able to accurately meet their goals, avoiding false positive and false negative results.
* **Regularly test and review rules**. Admission controller rules need to be tested to ensure their accuracy. They also need to be regularly reviewed as the Kubernetes API will change with each new version, and rules need to be assessed with each Kubernetes release to understand any changes that may be required to keep them up to date.

View File

@ -15,7 +15,7 @@ each Node in your cluster, so that the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} can launch
{{< glossary_tooltip text="Pods" term_id="pod" >}} and their containers.
{{< glossary_definition term_id="container-runtime-interface" length="all" >}}
{{< glossary_definition prepend="The Container Runtime Interface (CRI) is" term_id="container-runtime-interface" length="all" >}}
<!-- body -->

View File

@ -33,7 +33,7 @@ There are two main ways to have Nodes added to the {{< glossary_tooltip text="AP
1. The kubelet on a node self-registers to the control plane
2. You (or another human user) manually add a Node object
After you create a Node object, or the kubelet on a node self-registers, the
After you create a Node {{< glossary_tooltip text="object" term_id="object" >}}, or the kubelet on a node self-registers, the
control plane checks whether the new Node object is valid. For example, if you
try to create a Node from the following JSON manifest:

View File

@ -79,7 +79,7 @@ addressing, and it can be used in combination with other CNI plugins.
### CNI-Genie from Huawei
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Romana](https://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Weave-net](https://www.weave.works/products/weave-net/).
CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
@ -104,10 +104,6 @@ network complexity required to deploy Kubernetes at scale within AWS.
[Coil](https://github.com/cybozu-go/coil) is a CNI plugin designed for ease of integration, providing flexible egress networking.
Coil operates with a low overhead compared to bare metal, and allows you to define arbitrary egress NAT gateways for external networks.
### Contiv
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases.
### Contrail / Tungsten Fabric
[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
@ -130,6 +126,10 @@ With this toolset DANM is able to provide multiple separated network interfaces,
network that satisfies the Kubernetes requirements. Many
people have reported success with Flannel and Kubernetes.
### Hybridnet
[Hybridnet](https://github.com/alibaba/hybridnet) is an open source CNI plugin designed for hybrid clouds which provides both overlay and underlay networking for containers in one or more clusters. Overlay and underlay containers can run on the same node and have cluster-wide bidirectional network connectivity.
### Jaguar
[Jaguar](https://gitlab.com/sdnlab/jaguar) is an open source solution for Kubernetes's network based on OpenDaylight. Jaguar provides overlay network using vxlan and Jaguar CNIPlugin provides one IP address per pod.

View File

@ -1,7 +1,6 @@
---
reviewers:
title: Device Plugins
description: Use the Kubernetes device plugin framework to implement plugins for GPUs, NICs, FPGAs, InfiniBand, and similar resources that require vendor-specific setup.
description: Device plugins let you configure your cluster with support for devices or resources that require vendor-specific setup, such as GPUs, NICs, FPGAs, or non-volatile main memory.
content_type: concept
weight: 20
---
@ -48,12 +47,14 @@ For example, after a device plugin registers `hardware-vendor.example/foo` with
and reports two healthy devices on a node, the node status is updated
to advertise that the node has 2 "Foo" devices installed and available.
Then, users can request devices in a
[Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
specification as they request other types of resources, with the following limitations:
Then, users can request devices as part of a Pod specification
(see [`container`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)).
Requesting extended resources is similar to how you manage requests and limits for
other resources, with the following differences:
* Extended resources are only supported as integer resources and cannot be overcommitted.
* Devices cannot be shared among Containers.
* Devices cannot be shared between containers.
### Example {#example-pod}
Suppose a Kubernetes cluster is running a device plugin that advertises resource `hardware-vendor.example/foo`
on certain nodes. Here is an example of a pod requesting this resource to run a demo workload:
@ -174,7 +175,7 @@ a Kubernetes release with a newer device plugin API version, upgrade your device
to support both versions before upgrading these nodes. Taking that approach will
ensure the continuous functioning of the device allocations during the upgrade.
## Monitoring Device Plugin Resources
## Monitoring device plugin resources
{{< feature-state for_k8s_version="v1.15" state="beta" >}}
@ -310,7 +311,7 @@ DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
Support for the `PodResourcesLister service` requires `KubeletPodResources` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
It is enabled by default starting with Kubernetes 1.15 and is v1 since Kubernetes 1.20.
## Device Plugin integration with the Topology Manager
## Device plugin integration with the Topology Manager
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
@ -319,7 +320,7 @@ The Topology Manager is a Kubelet component that allows resources to be co-ordin
```gRPC
message TopologyInfo {
repeated NUMANode nodes = 1;
repeated NUMANode nodes = 1;
}
message NUMANode {
@ -338,6 +339,8 @@ pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.
## Device plugin examples {#examples}
{{% thirdparty-content %}}
Here are some examples of device plugin implementations:
* The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
@ -357,5 +360,5 @@ Here are some examples of device plugin implementations:
* Learn about [scheduling GPU resources](/docs/tasks/manage-gpus/scheduling-gpus/) using device plugins
* Learn about [advertising extended resources](/docs/tasks/administer-cluster/extended-resource-node/) on a node
* Read about using [hardware acceleration for TLS ingress](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes
* Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
* Read about using [hardware acceleration for TLS ingress](/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes

View File

@ -42,9 +42,10 @@ on every resource object.
| `app.kubernetes.io/managed-by` | The tool being used to manage the operation of an application | `helm` | string |
| `app.kubernetes.io/created-by` | The controller/user who created this resource | `controller-manager` | string |
To illustrate these labels in action, consider the following StatefulSet object:
To illustrate these labels in action, consider the following {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} object:
```yaml
# This is an excerpt
apiVersion: apps/v1
kind: StatefulSet
metadata:

View File

@ -106,7 +106,7 @@ description: "This priority class should be used for XYZ service pods only."
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
Pods with `PreemptionPolicy: Never` will be placed in the scheduling queue
Pods with `preemptionPolicy: Never` will be placed in the scheduling queue
ahead of lower-priority pods,
but they cannot preempt other pods.
A non-preempting pod waiting to be scheduled will stay in the scheduling queue,
@ -122,16 +122,16 @@ allowing other pods with lower priority to be scheduled before them.
Non-preempting pods may still be preempted by other,
high-priority pods.
`PreemptionPolicy` defaults to `PreemptLowerPriority`,
`preemptionPolicy` defaults to `PreemptLowerPriority`,
which will allow pods of that PriorityClass to preempt lower-priority pods
(as is existing default behavior).
If `PreemptionPolicy` is set to `Never`,
If `preemptionPolicy` is set to `Never`,
pods in that PriorityClass will be non-preempting.
An example use case is for data science workloads.
A user may submit a job that they want to be prioritized above other workloads,
but do not wish to discard existing work by preempting running pods.
The high priority job with `PreemptionPolicy: Never` will be scheduled
The high priority job with `preemptionPolicy: Never` will be scheduled
ahead of other queued pods,
as soon as sufficient cluster resources "naturally" become free.
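For example, a PriorityClass for such a workload might look like the following sketch; the name and value are placeholders:
```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting   # hypothetical name
value: 1000000
preemptionPolicy: Never               # queue ahead of lower-priority pods, but never preempt them
globalDefault: false
description: "This priority class will not cause other pods to be preempted."
```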

View File

@ -434,7 +434,7 @@ provisioner: example.com/external-nfs
parameters:
server: nfs-server.example.com
path: /share
readOnly: false
readOnly: "false"
```
* `server`: Server is the hostname or IP address of the NFS server.
@ -797,7 +797,7 @@ parameters:
storagePool: sp1
storageMode: ThinProvisioned
secretRef: sio-secret
readOnly: false
readOnly: "false"
fsType: xfs
```
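For reference, a workload consumes a StorageClass such as the ones above by naming it in a PersistentVolumeClaim. A minimal sketch follows; the class name `example-nfs` is an assumption, not defined on this page:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim               # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: example-nfs     # hypothetical StorageClass name
  resources:
    requests:
      storage: 1Gi
```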

View File

@ -615,7 +615,7 @@ spec:
```
The new Job itself will have a different uid from `a8f3d00d-c6d2-11e5-9f87-42010af00002`. Setting
`manualSelector: true` tells the system to that you know what you are doing and to allow this
`manualSelector: true` tells the system that you know what you are doing and to allow this
mismatch.
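A minimal sketch of where these fields sit in the new Job; the label value reuses the old Job's selector, and the names and image are placeholders:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: new-job                     # hypothetical name
spec:
  manualSelector: true              # acknowledge that the non-default selector is intentional
  selector:
    matchLabels:
      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002   # selector carried over from the old Job
  template:
    metadata:
      labels:
        controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
    spec:
      restartPolicy: Never
      containers:
        - name: example             # placeholder container
          image: busybox
          command: ["sleep", "5"]
```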
### Job tracking with finalizers

View File

@ -229,7 +229,7 @@ when both the following statements apply:
* All conditions specified in `readinessGates` are `True`.
When a Pod's containers are Ready but at least one custom condition is missing or
`False`, the kubelet sets the Pod's [condition](#pod-condition) to `ContainersReady`.
`False`, the kubelet sets the Pod's [condition](#pod-conditions) to `ContainersReady`.
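A minimal sketch of a Pod that declares a readiness gate; the condition type is a placeholder that an external controller would set:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                 # hypothetical name
spec:
  readinessGates:
    - conditionType: "www.example.com/feature-1"   # custom condition patched by an external controller
  containers:
    - name: app
      image: nginx                  # placeholder image
```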
## Container probes

View File

@ -249,6 +249,7 @@ Home | [All heading and subheading URLs](/docs/home/)
Setup | [All heading and subheading URLs](/docs/setup/)
Tutorials | [Kubernetes Basics](/docs/tutorials/kubernetes-basics/), [Hello Minikube](/docs/tutorials/hello-minikube/)
Site strings | [All site strings](#Site-strings-in-i18n) in a new localized TOML file
Releases | [All heading and subheading URLs](/releases)
Translated documents must reside in their own `content/**/` subdirectory, but otherwise follow the same URL path as the English source. For example, to prepare the [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) tutorial for translation into German, create a subfolder under the `content/de/` folder and copy the English source:

View File

@ -147,7 +147,7 @@ separately for reviewer status in SIG Docs.
To apply:
1. Open a pull request that adds your GitHub user name to a section of the
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS) file
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) file
in the `kubernetes/website` repository.
{{< note >}}
@ -219,7 +219,7 @@ separately for approver status in SIG Docs.
To apply:
1. Open a pull request adding yourself to a section of the
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS)
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
file in the `kubernetes/website` repository.
{{< note >}}

View File

@ -53,7 +53,7 @@ different Kubernetes components.
|---------|---------|-------|-------|-------|
| `APIListChunking` | `false` | Alpha | 1.8 | 1.8 |
| `APIListChunking` | `true` | Beta | 1.9 | |
| `APIPriorityAndFairness` | `false` | Alpha | 1.17 | 1.19 |
| `APIPriorityAndFairness` | `false` | Alpha | 1.18 | 1.19 |
| `APIPriorityAndFairness` | `true` | Beta | 1.20 | |
| `APIResponseCompression` | `false` | Alpha | 1.7 | 1.15 |
| `APIResponseCompression` | `true` | Beta | 1.16 | |
@ -167,8 +167,8 @@ different Kubernetes components.
| `NodeSwap` | `false` | Alpha | 1.22 | |
| `NonPreemptingPriority` | `false` | Alpha | 1.15 | 1.18 |
| `NonPreemptingPriority` | `true` | Beta | 1.19 | |
| `OpenAPIEnum` | `false` | Alpha | 1.23 | |
| `OpenAPIv3` | `false` | Alpha | 1.23 | |
| `OpenAPIEnums` | `false` | Alpha | 1.23 | |
| `OpenAPIV3` | `false` | Alpha | 1.23 | |
| `PodAndContainerStatsFromCRI` | `false` | Alpha | 1.23 | |
| `PodAffinityNamespaceSelector` | `false` | Alpha | 1.21 | 1.21 |
| `PodAffinityNamespaceSelector` | `true` | Beta | 1.22 | |
@ -784,7 +784,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes).
- `ExpandCSIVolumes`: Enable the expanding of CSI volumes.
- `ExpandedDNSConfig`: Enable kubelet and kube-apiserver to allow more DNS
search paths and longer list of DNS search paths. See
search paths and longer list of DNS search paths. This feature requires container
runtime support (containerd: v1.5.6 or higher, CRI-O: v1.22 or higher). See
[Expanded DNS Configuration](/docs/concepts/services-networking/dns-pod-service/#expanded-dns-configuration).
- `ExpandInUsePersistentVolumes`: Enable expanding in-use PVCs. See
[Resizing an in-use PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim).
@ -913,9 +914,9 @@ Each feature gate is designed for enabling/disabling a specific feature:
Must be used with `KubeletConfiguration.failSwapOn` set to false.
For more details, please see [swap memory](/docs/concepts/architecture/nodes/#swap-memory)
- `NonPreemptingPriority`: Enable `preemptionPolicy` field for PriorityClass and Pod.
- `OpenAPIEnum`: Enables populating "enum" fields of OpenAPI schemas in the
- `OpenAPIEnums`: Enables populating "enum" fields of OpenAPI schemas in the
spec returned from the API server.
- `OpenAPIv3`: Enables the API server to publish OpenAPI v3.
- `OpenAPIV3`: Enables the API server to publish OpenAPI v3.
- `PVCProtection`: Enable the prevention of a PersistentVolumeClaim (PVC) from
being deleted when it is still used by any Pod.
- `PodDeletionCost`: Enable the [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)

View File

@ -19,4 +19,4 @@ The Kubernetes Container Runtime Interface (CRI) defines the main
[gRPC](https://grpc.io) protocol for the communication between the
[cluster components](/docs/concepts/overview/components/#node-components)
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.

View File

@ -41,8 +41,6 @@ For control-plane nodes additional steps are performed:
1. Adding new local etcd member.
1. Adding this node to the ClusterStatus of the kubeadm cluster.
### Using join phases with kubeadm {#join-phases}
Kubeadm allows you to join a node to the cluster in phases using `kubeadm join phase`.

View File

@ -16,8 +16,7 @@ Performs a best effort revert of changes made by `kubeadm init` or `kubeadm join
`kubeadm reset` is responsible for cleaning up a node local file system from files that were created using
the `kubeadm init` or `kubeadm join` commands. For control-plane nodes `reset` also removes the local stacked
etcd member of this node from the etcd cluster and also removes this node's information from the kubeadm
`ClusterStatus` object. `ClusterStatus` is a kubeadm managed Kubernetes API object that holds a list of kube-apiserver endpoints.
etcd member of this node from the etcd cluster.
`kubeadm reset phase` can be used to execute the separate phases of the above workflow.
To skip a list of phases you can use the `--skip-phases` flag, which works in a similar way to

View File

@ -16,19 +16,20 @@ or upgrades for such nodes. The long term plan is to empower the tool
aspects.
{{< /note >}}
Kubeadm defaults to running a single member etcd cluster in a static pod managed
by the kubelet on the control plane node. This is not a high availability setup
as the etcd cluster contains only one member and cannot sustain any members
becoming unavailable. This task walks through the process of creating a high
availability etcd cluster of three members that can be used as an external etcd
when using kubeadm to set up a kubernetes cluster.
By default, kubeadm runs a local etcd instance on each control plane node.
It is also possible to treat the etcd cluster as external and provision
etcd instances on separate hosts. The differences between the two approaches are covered in the
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) page.
This task walks through the process of creating a high availability external
etcd cluster of three members that can be used by kubeadm during cluster creation.
## {{% heading "prerequisites" %}}
* Three hosts that can talk to each other over ports 2379 and 2380. This
* Three hosts that can talk to each other over TCP ports 2379 and 2380. This
document assumes these default ports. However, they are configurable through
the kubeadm config file.
* Each host must [have docker, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
* Each host must have systemd and a bash compatible shell installed.
* Each host must [have a container runtime, kubelet, and kubeadm installed](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
* Each host should have access to the Kubernetes container image registry (`k8s.gcr.io`) or list/pull the required etcd image using
`kubeadm config images list/pull`. This guide will set up etcd instances as
[static pods](/docs/tasks/configure-pod-container/static-pod/) managed by a kubelet.
@ -48,6 +49,11 @@ the certificates described below; no other cryptographic tooling is required for
this example.
{{< /note >}}
{{< note >}}
The examples below use IPv4 addresses but you can also configure kubeadm, the kubelet and etcd
to use IPv6 addresses. Dual-stack is supported by some Kubernetes options, but not by etcd. For more details
on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/).
{{< /note >}}
1. Configure the kubelet to be a service manager for etcd.
@ -59,8 +65,9 @@ this example.
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
# Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
Restart=always
EOF
@ -80,21 +87,34 @@ this example.
member running on it using the following script.
```sh
# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
# Update HOST0, HOST1 and HOST2 with the IPs of your hosts
export HOST0=10.0.0.6
export HOST1=10.0.0.7
export HOST2=10.0.0.8
# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
export NAME0="infra0"
export NAME1="infra1"
export NAME2="infra2"
# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1" "infra2")
HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})
for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
name: ${NAME}
localAPIEndpoint:
advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
@ -104,7 +124,7 @@ this example.
peerCertSANs:
- "${HOST}"
extraArgs:
initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
initial-cluster-state: new
name: ${NAME}
listen-peer-urls: https://${HOST}:2380

View File

@ -37,8 +37,11 @@ The upgrade workflow at high level is the following:
### Additional information
- [Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/) before kubelet MINOR version
upgrades is required. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads.
- The instructions below outline when to drain each node during the upgrade process.
If you are performing a **minor** version upgrade for any kubelet, you **must**
first drain the node (or nodes) that you are upgrading. In the case of control plane nodes,
they could be running CoreDNS Pods or other critical workloads. For more information see
[Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
- All containers are restarted after upgrade, because the container spec hash value is changed.
<!-- steps -->

View File

@ -52,7 +52,7 @@ For example: on COS images, Docker exposes its Unix domain socket at
Here's a sample shell script to find Pods that have a mount directly mapping the
Docker socket. This script outputs the namespace and name of the pod. You can
remove the grep `/var/run/docker.sock` to review other mounts.
remove the `grep '/var/run/docker.sock'` to review other mounts.
```bash
kubectl get pods --all-namespaces \

View File

@ -26,19 +26,18 @@ Reload your shell and verify that bash-completion is correctly installed by typi
### Enable kubectl autocompletion
#### Bash
You now need to ensure that the kubectl completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
- Source the completion script in your `~/.bashrc` file:
```bash
echo 'source <(kubectl completion bash)' >>~/.bashrc
```
- Add the completion script to the `/etc/bash_completion.d` directory:
```bash
kubectl completion bash >/etc/bash_completion.d/kubectl
```
{{< tabs name="kubectl_bash_autocompletion" >}}
{{< tab name="User" codelang="bash" >}}
echo 'source <(kubectl completion bash)' >>~/.bashrc
{{< /tab >}}
{{< tab name="System" codelang="bash" >}}
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
{{< /tab >}}
{{< /tabs >}}
If you have an alias for kubectl, you can extend shell completion to work with that alias:

View File

@ -47,12 +47,6 @@ Before walking through each tutorial, you may want to bookmark the
* [Running ZooKeeper, A CP Distributed System](/docs/tutorials/stateful-application/zookeeper/)
## Clusters
* [AppArmor](/docs/tutorials/clusters/apparmor/)
* [seccomp](/docs/tutorials/clusters/seccomp/)
## Services
* [Using Source IP](/docs/tutorials/services/source-ip/)
@ -61,7 +55,8 @@ Before walking through each tutorial, you may want to bookmark the
* [Apply Pod Security Standards at Cluster level](/docs/tutorials/security/cluster-level-pss/)
* [Apply Pod Security Standards at Namespace level](/docs/tutorials/security/ns-level-pss/)
* [AppArmor](/docs/tutorials/security/apparmor/)
* [seccomp](/docs/tutorials/security/seccomp/)
## {{% heading "whatsnext" %}}
If you would like to write a tutorial, see

View File

@ -1,5 +0,0 @@
---
title: "Clusters"
weight: 60
---

View File

@ -43,12 +43,12 @@ Kubernetes est une solution open-source qui vous permet de tirer parti de vos in
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Voir la video (en)</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Venez au KubeCon NA Los Angeles, USA du 11 au 15 Octobre 2021</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">Venez au KubeCon Detroit, Michigan, USA du 24 au 28 Octobre 2022</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Venez au KubeCon EU Valence, Espagne du 15 au 20 Mai 2022</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Venez au KubeCon EU Valence, Espagne + Virtuel du 16 au 20 Mai 2022</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@ -58,4 +58,4 @@ Kubernetes est une solution open-source qui vous permet de tirer parti de vos in
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}
{{< blocks/case-studies >}}

View File

@ -527,5 +527,5 @@ Pour plus d'informations sur Minikube, voir la [proposition](https://git.k8s.io/
Contributions, questions, and comments are all welcome and encouraged!
The minikube developers are in the #minikube channel on the Kubernetes [Slack](https://kubernetes.slack.com) (get an invitation [here](http://slack.kubernetes.io/)).
We also have the [kubernetes-dev Google Groups](https://groups.google.com/forum/#!forum/kubernetes-dev) mailing list.
We also have the [dev@kubernetes Google Groups](https://groups.google.com/a/kubernetes.io/g/dev/) mailing list.
If you post to the list, please prefix your subject with "minikube:".

View File

@ -263,12 +263,6 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
### OpenVSwitch
[OpenVSwitch](https://www.openvswitch.org/) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.
### OVN (Open Virtual Networking)
OVN is an opensource network virtualization solution developed by the

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,620 @@
---
title: kube-apiserver Audit Configuration (v1)
content_type: tool-reference
package: audit.k8s.io/v1
auto_generated: true
---
## Resource Types
- [Event](#audit-k8s-io-v1-Event)
- [EventList](#audit-k8s-io-v1-EventList)
- [Policy](#audit-k8s-io-v1-Policy)
- [PolicyList](#audit-k8s-io-v1-PolicyList)
## `Event` {#audit-k8s-io-v1-Event}
**Appears in:**
- [EventList](#audit-k8s-io-v1-EventList)
Event captures all the information that can be included in an API audit log.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>audit.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>Event</code></td></tr>
<tr><td><code>level</code> <B>[Required]</B><br/>
<a href="#audit-k8s-io-v1-Level"><code>Level</code></a>
</td>
<td>
AuditLevel at which event was generated</td>
</tr>
<tr><td><code>auditID</code> <B>[Required]</B><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/types#UID"><code>k8s.io/apimachinery/pkg/types.UID</code></a>
</td>
<td>
Unique audit ID, generated for each request.</td>
</tr>
<tr><td><code>stage</code> <B>[Required]</B><br/>
<a href="#audit-k8s-io-v1-Stage"><code>Stage</code></a>
</td>
<td>
Stage of the request handling when this event instance was generated.</td>
</tr>
<tr><td><code>requestURI</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
RequestURI is the request URI as sent by the client to a server.</td>
</tr>
<tr><td><code>verb</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
Verb is the kubernetes verb associated with the request.
For non-resource requests, this is the lower-cased HTTP method.</td>
</tr>
<tr><td><code>user</code> <B>[Required]</B><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
Authenticated user information.</td>
</tr>
<tr><td><code>impersonatedUser</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#userinfo-v1-authentication"><code>authentication/v1.UserInfo</code></a>
</td>
<td>
Impersonated user information.</td>
</tr>
<tr><td><code>sourceIPs</code><br/>
<code>[]string</code>
</td>
<td>
Source IPs, from where the request originated and intermediate proxies.</td>
</tr>
<tr><td><code>userAgent</code><br/>
<code>string</code>
</td>
<td>
UserAgent records the user agent string reported by the client.
Note that the UserAgent is provided by the client, and must not be trusted.</td>
</tr>
<tr><td><code>objectRef</code><br/>
<a href="#audit-k8s-io-v1-ObjectReference"><code>ObjectReference</code></a>
</td>
<td>
Object reference this request is targeted at.
Does not apply for List-type requests, or non-resource requests.</td>
</tr>
<tr><td><code>responseStatus</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#status-v1-meta"><code>meta/v1.Status</code></a>
</td>
<td>
The response status, populated even when the ResponseObject is not a Status type.
For successful responses, this will only include the Code and StatusSuccess.
For non-status type error responses, this will be auto-populated with the error Message.</td>
</tr>
<tr><td><code>requestObject</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
</td>
<td>
API object from the request, in JSON format. The RequestObject is recorded as-is in the request
(possibly re-encoded as JSON), prior to version conversion, defaulting, admission or
merging. It is an external versioned object type, and may not be a valid object on its own.
Omitted for non-resource requests. Only logged at Request Level and higher.</td>
</tr>
<tr><td><code>responseObject</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Unknown"><code>k8s.io/apimachinery/pkg/runtime.Unknown</code></a>
</td>
<td>
API object returned in the response, in JSON. The ResponseObject is recorded after conversion
to the external type, and serialized as JSON. Omitted for non-resource requests. Only logged
at Response Level.</td>
</tr>
<tr><td><code>requestReceivedTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
Time the request reached the apiserver.</td>
</tr>
<tr><td><code>stageTimestamp</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#microtime-v1-meta"><code>meta/v1.MicroTime</code></a>
</td>
<td>
Time the request reached current audit stage.</td>
</tr>
<tr><td><code>annotations</code><br/>
<code>map[string]string</code>
</td>
<td>
Annotations is an unstructured key value map stored with an audit event that may be set by
plugins invoked in the request serving chain, including authentication, authorization and
admission plugins. Note that these annotations are for the audit event, and do not correspond
to the metadata.annotations of the submitted object. Keys should uniquely identify the informing
component to avoid name collisions (e.g. podsecuritypolicy.admission.k8s.io/policy). Values
should be short. Annotations are included in the Metadata level.</td>
</tr>
</tbody>
</table>
## `EventList` {#audit-k8s-io-v1-EventList}
EventList is a list of audit Events.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>audit.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>EventList</code></td></tr>
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>items</code> <B>[Required]</B><br/>
<a href="#audit-k8s-io-v1-Event"><code>[]Event</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>
## `Policy` {#audit-k8s-io-v1-Policy}
**Appears in:**
- [PolicyList](#audit-k8s-io-v1-PolicyList)
Policy defines the configuration of audit logging, and the rules for how different request
categories are logged.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>audit.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>Policy</code></td></tr>
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
</td>
<td>
ObjectMeta is included for interoperability with API infrastructure. Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field.</td>
</tr>
<tr><td><code>rules</code> <B>[Required]</B><br/>
<a href="#audit-k8s-io-v1-PolicyRule"><code>[]PolicyRule</code></a>
</td>
<td>
Rules specify the audit Level a request should be recorded at.
A request may match multiple rules, in which case the FIRST matching rule is used.
The default audit level is None, but can be overridden by a catch-all rule at the end of the list.
PolicyRules are strictly ordered.</td>
</tr>
<tr><td><code>omitStages</code><br/>
<a href="#audit-k8s-io-v1-Stage"><code>[]Stage</code></a>
</td>
<td>
OmitStages is a list of stages for which no events are created. Note that this can also
be specified per rule in which case the union of both are omitted.</td>
</tr>
</tbody>
</table>
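To make the `Policy` fields above concrete, the following is a minimal, illustrative policy that uses only `omitStages` and `rules`; the file path and the chosen levels are assumptions for this sketch, not recommendations.

```shell
# Sketch only: write an illustrative audit Policy (path and levels are assumptions).
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# No events are created for the RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Catch-all rule: record every request at the Metadata level.
  - level: Metadata
EOF
```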
## `PolicyList` {#audit-k8s-io-v1-PolicyList}
PolicyList is a list of audit Policies.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>audit.k8s.io/v1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>PolicyList</code></td></tr>
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta"><code>meta/v1.ListMeta</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>items</code> <B>[Required]</B><br/>
<a href="#audit-k8s-io-v1-Policy"><code>[]Policy</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>
## `GroupResources` {#audit-k8s-io-v1-GroupResources}
**Appears in:**
- [PolicyRule](#audit-k8s-io-v1-PolicyRule)
GroupResources represents resource kinds in an API group.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>group</code><br/>
<code>string</code>
</td>
<td>
Group is the name of the API group that contains the resources.
The empty string represents the core API group.</td>
</tr>
<tr><td><code>resources</code><br/>
<code>[]string</code>
</td>
<td>
Resources is a list of resources this rule applies to.
For example:
'pods' matches pods.
'pods/log' matches the log subresource of pods.
'&lowast;' matches all resources and their subresources.
'pods/&lowast;' matches all subresources of pods.
'&lowast;/scale' matches all scale subresources.
If wildcard is present, the validation rule will ensure resources do not
overlap with each other.
An empty list implies that all resources and subresources in this API group apply.</td>
</tr>
<tr><td><code>resourceNames</code><br/>
<code>[]string</code>
</td>
<td>
ResourceNames is a list of resource instance names that the policy matches.
Using this field requires Resources to be specified.
An empty list implies that every instance of the resource is matched.</td>
</tr>
</tbody>
</table>
## `Level` {#audit-k8s-io-v1-Level}
(Alias of `string`)
**Appears in:**
- [Event](#audit-k8s-io-v1-Event)
- [PolicyRule](#audit-k8s-io-v1-PolicyRule)
Level defines the amount of information logged during auditing
## `ObjectReference` {#audit-k8s-io-v1-ObjectReference}
**Appears in:**
- [Event](#audit-k8s-io-v1-Event)
ObjectReference contains enough information to let you inspect or modify the referred object.
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>resource</code><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>namespace</code><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>name</code><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>uid</code><br/>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/types#UID"><code>k8s.io/apimachinery/pkg/types.UID</code></a>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>apiGroup</code><br/>
<code>string</code>
</td>
<td>
APIGroup is the name of the API group that contains the referred object.
The empty string represents the core API group.</td>
</tr>
<tr><td><code>apiVersion</code><br/>
<code>string</code>
</td>
<td>
APIVersion is the version of the API group that contains the referred object.</td>
</tr>
<tr><td><code>resourceVersion</code><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
<tr><td><code>subresource</code><br/>
<code>string</code>
</td>
<td>
<span class="text-muted">No description provided.</span>
</td>
</tr>
</tbody>
</table>
## `PolicyRule` {#audit-k8s-io-v1-PolicyRule}
**Appears in:**
- [Policy](#audit-k8s-io-v1-Policy)
PolicyRule maps requests based off metadata to an audit Level.
Requests must match the rules of every field (an intersection of rules).
<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>level</code> <B>[Required]</B><br/>
<a href="#audit-k8s-io-v1-Level"><code>Level</code></a>
</td>
<td>
The Level that requests matching this rule are recorded at.</td>
</tr>
<tr><td><code>users</code><br/>
<code>[]string</code>
</td>
<td>
The users (by authenticated user name) this rule applies to.
An empty list implies every user.</td>
</tr>
<tr><td><code>userGroups</code><br/>
<code>[]string</code>
</td>
<td>
The user groups this rule applies to. A user is considered matching
if it is a member of any of the UserGroups.
An empty list implies every user group.</td>
</tr>
<tr><td><code>verbs</code><br/>
<code>[]string</code>
</td>
<td>
The verbs that match this rule.
An empty list implies every verb.</td>
</tr>
<tr><td><code>resources</code><br/>
<a href="#audit-k8s-io-v1-GroupResources"><code>[]GroupResources</code></a>
</td>
<td>
Resources that this rule matches. An empty list implies all kinds in all API groups.</td>
</tr>
<tr><td><code>namespaces</code><br/>
<code>[]string</code>
</td>
<td>
Namespaces that this rule matches.
The empty string "" matches non-namespaced resources.
An empty list implies every namespace.</td>
</tr>
<tr><td><code>nonResourceURLs</code><br/>
<code>[]string</code>
</td>
<td>
NonResourceURLs is a set of URL paths that should be audited.
&lowast;s are allowed, but only as the full, final step in the path.
Examples:
"/metrics" - Log requests for apiserver metrics
"/healthz&lowast;" - Log all health checks</td>
</tr>
<tr><td><code>omitStages</code><br/>
<a href="#audit-k8s-io-v1-Stage"><code>[]Stage</code></a>
</td>
<td>
OmitStages is a list of stages for which no events are created. Note that this can also
be specified policy wide in which case the union of both are omitted.
An empty list means no restrictions will apply.</td>
</tr>
</tbody>
</table>
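As an illustration of how the `PolicyRule` fields intersect (a request must match every populated field), the sketch below records pod mutations in the `kube-system` namespace at `RequestResponse` and everything else at `Metadata`; the path and levels are assumptions for the example.

```shell
# Sketch only: the first rule matches a request only if verbs, resources AND namespaces all match.
cat <<'EOF' > /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Pod mutations in kube-system, recorded with request and response bodies.
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
    namespaces: ["kube-system"]
  # The FIRST matching rule wins; this catch-all covers everything else.
  - level: Metadata
EOF
```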
## `Stage` {#audit-k8s-io-v1-Stage}
(Alias of `string`)
**Appears in:**
- [Event](#audit-k8s-io-v1-Event)
- [Policy](#audit-k8s-io-v1-Policy)
- [PolicyRule](#audit-k8s-io-v1-PolicyRule)
Stage defines the stages in request handling during which audit events may be generated.

View File

@ -0,0 +1,5 @@
---
title: "kubeadmによる管理"
weight: 10
---

View File

@ -0,0 +1,269 @@
---
title: kubeadmによる証明書管理
content_type: task
weight: 10
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.15" state="stable" >}}
[kubeadm](/docs/reference/setup-tools/kubeadm/)で生成されたクライアント証明書は1年で失効します。
このページでは、kubeadmで証明書の更新を管理する方法について説明します。
## {{% heading "prerequisites" %}}
[KubernetesにおけるPKI証明書と要件](/docs/setup/best-practices/certificates/)を熟知している必要があります。
<!-- steps -->
## カスタム証明書の使用 {#custom-certificates}
デフォルトでは、kubeadmはクラスターの実行に必要なすべての証明書を生成します。
独自の証明書を提供することで、この動作をオーバーライドできます。
そのためには、`--cert-dir`フラグまたはkubeadmの`ClusterConfiguration`の`certificatesDir`フィールドで指定された任意のディレクトリに配置する必要があります。
デフォルトは`/etc/kubernetes/pki`です。
`kubeadm init` を実行する前に与えられた証明書と秘密鍵のペアが存在する場合、kubeadmはそれらを上書きしません。
つまり、例えば既存のCAを`/etc/kubernetes/pki/ca.crt`と`/etc/kubernetes/pki/ca.key`にコピーすれば、kubeadmは残りの証明書に署名する際、このCAを使用できます。
## 外部CAモード {#external-ca-mode}
また、`ca.crt`ファイルのみを提供し、`ca.key`ファイルを提供しないことも可能です(これはルートCAファイルのみに有効で、他の証明書ペアには有効ではありません)。
他の証明書とkubeconfigファイルがすべて揃っている場合、kubeadmはこの状態を認識し、外部CAモードを有効にします。
kubeadmはディスク上のCAキーがなくても処理を進めます。
代わりに、Controller-managerをスタンドアロンで、`--controllers=csrsigner`と実行し、CA証明書と鍵を指し示します。
[PKI certificates and requirements](/docs/setup/best-practices/certificates/)には、外部CAを使用するためのクラスターのセットアップに関するガイダンスが含まれています。
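A quick way to see whether kubeadm will run in external CA mode is to look at the PKI directory: the CA certificate is present but the CA key is deliberately absent. The listing below is only a sketch; the exact set of files depends on your cluster.

```shell
# Sketch: in external CA mode the root CA key is intentionally missing from the PKI directory.
ls /etc/kubernetes/pki
# ca.crt  apiserver.crt  apiserver.key  front-proxy-ca.crt  ...  (no ca.key)
```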
## 証明書の有効期限の確認
`check-expiration`サブコマンドを使うと、証明書の有効期限を確認することができます。
```
kubeadm certs check-expiration
```
このような出力になります:
```
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 30, 2020 23:36 UTC   364d                                    no
apiserver                  Dec 30, 2020 23:36 UTC   364d            ca                      no
apiserver-etcd-client      Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 30, 2020 23:36 UTC   364d            ca                      no
controller-manager.conf    Dec 30, 2020 23:36 UTC   364d                                    no
etcd-healthcheck-client    Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
etcd-server                Dec 30, 2020 23:36 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 30, 2020 23:36 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 30, 2020 23:36 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 28, 2029 23:36 UTC   9y              no
etcd-ca                 Dec 28, 2029 23:36 UTC   9y              no
front-proxy-ca          Dec 28, 2029 23:36 UTC   9y              no
```
このコマンドは、`/etc/kubernetes/pki`フォルダ内のクライアント証明書と、kubeadmが使用するKUBECONFIGファイル(`admin.conf`,`controller-manager.conf`,`scheduler.conf`)に埋め込まれたクライアント証明書の有効期限/残余時間を表示します。
また、証明書が外部管理されている場合、kubeadmはユーザーに通知します。この場合、ユーザーは証明書の更新を手動または他のツールを使用して管理する必要があります。
{{< warning >}}
`kubeadm`は外部CAによって署名された証明書を管理することができません。
{{< /warning >}}
{{< note >}}
kubeadmは`/var/lib/kubelet/pki`以下にあるローテート可能な証明書でkubeletの[証明書の自動更新](/docs/tasks/tls/certificate-rotation/)を構成するので`kubelet.conf`は上記のリストに含まれません。
期限切れのkubeletクライアント証明書を修復するには、[Kubelet クライアント証明書のローテーションに失敗しました](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#kubelet-client-cert)を参照ください。
{{< /note >}}
{{< warning >}}
kubeadm version 1.17より前の`kubeadm init`で作成したノードでは、`kubelet.conf`の内容を手動で変更しなければならないという[bug](https://github.com/kubernetes/kubeadm/issues/1753)が存在します。
`kubeadm init`が終了したら、`client-certificate-data`と`client-key-data`を置き換えて、ローテーションされたkubeletクライアント証明書を指すように`kubelet.conf`を更新してください。
```yaml
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
```
{{< /warning >}}
## 証明書の自動更新
kubeadmはコントロールプレーンの[アップグレード](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)時にすべての証明書を更新します。
この機能は、最もシンプルなユースケースに対応するために設計されています。
証明書の更新に特別な要件がなく、Kubernetesのバージョンアップを定期的に行う場合(各アップグレードの間隔が1年未満)、kubeadmがクラスターを最新かつ適度に安全に保つための処理を行います。
{{< note >}}
安全性を維持するために、クラスターを頻繁にアップグレードすることがベストプラクティスです。
{{< /note >}}
証明書の更新に関してより複雑な要求がある場合は、`--certificate-renewal=false`を`kubeadm upgrade apply`や`kubeadm upgrade node`に渡して、デフォルトの動作から外れるようにすることができます。
{{< warning >}}
kubeadmバージョン1.17より前のバージョンでは、`kubeadm upgrade node`コマンドの`--certificate-renewal`のデフォルト値が`false`になっているという[bug](https://github.com/kubernetes/kubeadm/issues/1818)があります。
この場合、明示的に`--certificate-renewal=true`を設定する必要があります。
{{< /warning >}}
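For example, an upgrade that deliberately skips automatic certificate renewal might look like the sketch below; the target version is a placeholder.

```shell
# Sketch: upgrade without letting kubeadm renew the certificates (version is a placeholder).
kubeadm upgrade apply v1.23.0 --certificate-renewal=false
```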
## 手動による証明書更新
`kubeadm certs renew` コマンドを使えば、いつでも証明書を手動で更新することができます。
このコマンドは`/etc/kubernetes/pki`に格納されているCA(またはfront-proxy-CA)の証明書と鍵を使って更新を行います。
コマンド実行後、コントロールプレーンのPodを再起動する必要があります。
これは、現在すべてのコンポーネントと証明書について動的な証明書のリロードがサポートされていないため、必要な作業です。
[スタティックPod](/docs/tasks/configure-pod-container/static-pod/)はローカルkubeletによって管理され、API Serverによって管理されないため、kubectlで削除および再起動することはできません。
スタティックPodを再起動するには、一時的に`/etc/kubernetes/manifests/`からマニフェストファイルを削除して20秒間待ちます([KubeletConfiguration struct](/docs/reference/config-api/kubelet-config.v1beta1/)の`fileCheckFrequency`値を参照してください)。
マニフェストディレクトリにPodが無くなると、kubeletはPodを終了します。
その後ファイルを戻して、さらに`fileCheckFrequency`期間後に、kubeletはPodを再作成し、コンポーネントの証明書更新を完了することができます。
{{< warning >}}
HAクラスターを実行している場合、このコマンドはすべての制御プレーンードで実行する必要があります。
{{< /warning >}}
{{< note >}}
`certs renew`は、属性(Common Name、Organization、SANなど)の信頼できるソースとして、kubeadm-config ConfigMapではなく、既存の証明書を使用します。両者を同期させておくことが強く推奨されます。
{{< /note >}}
Kubernetesの証明書は通常1年後に有効期限を迎えます。
`kubeadm certs renew`は以下のオプションを提供します:
- `--csr-only`を使用すると、証明書署名要求を生成して外部CAとの証明書を更新することができます(実際にはその場で証明書を更新しません)。詳しくは次の段落を参照してください。
- また、すべての証明書を更新するのではなく、1つの証明書だけを更新することも可能です(この後のスケッチを参照してください)。
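As a concrete sketch of the two modes mentioned above, you can renew every certificate at once or target a single one; run the command on each control plane node and restart the control plane Pods afterwards.

```shell
# Renew all certificates managed by kubeadm on this node.
kubeadm certs renew all

# Or renew a single certificate, for example the API server serving certificate.
kubeadm certs renew apiserver
```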
## Kubernetes certificates APIによる証明書の更新
ここでは、Kubernetes certificates APIを使用して手動で証明書更新を実行する方法について詳しく説明します。
{{< caution >}}
これらは、組織の証明書インフラをkubeadmで構築されたクラスターに統合する必要があるユーザー向けの上級者向けのトピックです。
kubeadmのデフォルトの設定で満足できる場合は、代わりにkubeadmに証明書を管理させる必要があります。
{{< /caution >}}
### 署名者の設定
Kubernetesの認証局は、そのままでは機能しません。
[cert-manager](https://cert-manager.io/docs/configuration/ca/)などの外部署名者を設定するか、組み込みの署名者を使用することができます。
ビルトインサイナーは[`kube-controller-manager`](/docs/reference/command-line-tools-reference/kube-controller-manager/)に含まれるものです。
ビルトインサイナーを有効にするには、`--cluster-signing-cert-file`と`--cluster-signing-key-file`フラグを渡す必要があります。
新しいクラスターを作成する場合は、kubeadm[設定ファイル](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3)を使用します。
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
extraArgs:
cluster-signing-cert-file: /etc/kubernetes/pki/ca.crt
cluster-signing-key-file: /etc/kubernetes/pki/ca.key
```
### 証明書署名要求の作成 (CSR)
Kubernetes APIでのCSR作成については、[Create CertificateSigningRequest](/docs/reference/access-authn-authz/certificate-signing-requests/#create-certificatesigningrequest)を参照ください。
## 外部CAによる証明書の更新
ここでは、外部認証局を利用して手動で証明書更新を行う方法について詳しく説明します。
外部CAとの連携を強化するために、kubeadmは証明書署名要求(CSR)を生成することもできます。
CSRとは、クライアント用の署名付き証明書をCAに要求することを表します。
kubeadmの用語では、通常ディスク上のCAによって署名される証明書をCSRとして生成することができます。しかし、CAはCSRとして生成することはできません。
### 証明書署名要求の作成 (CSR)
`kubeadm certs renew --csr-only`で証明書署名要求を作成することができます。
CSRとそれに付随する秘密鍵の両方が出力されます。
ディレクトリを`--csr-dir`で渡すと、指定した場所にCSRを出力することができます。
`--csr-dir`を指定しない場合は、デフォルトの証明書ディレクトリ(`/etc/kubernetes/pki`)が使用されます。
証明書は`kubeadm certs renew --csr-only`で更新することができます。
`kubeadm init`と同様に、`--csr-dir`フラグで出力先ディレクトリを指定することができます。
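Putting the `--csr-only` and `--csr-dir` flags together, a hypothetical run that writes all CSRs (and their private keys) into a scratch directory could look like this; the directory is an assumption.

```shell
# Sketch: generate CSRs instead of renewing certificates in place (output directory is an assumption).
kubeadm certs renew all --csr-only --csr-dir=/tmp/kubeadm-csrs
ls /tmp/kubeadm-csrs
```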
CSRには、証明書の名前、ドメイン、IPが含まれますが、用途は指定されません。
証明書を発行する際に、[正しい証明書の使用法](/docs/setup/best-practices/certificates/#all-certificates)を指定するのはCAの責任です。
* `openssl`では、[`openssl ca`コマンド](https://superuser.com/questions/738612/openssl-ca-keyusage-extension)を使って行います。
* `cfssl`では、[configファイルのusages](https://github.com/cloudflare/cfssl/blob/master/doc/cmd/cfssl.txt#L170)で指定します。
お好みの方法で証明書に署名した後、証明書と秘密鍵をPKIディレクトリ(デフォルトでは`/etc/kubernetes/pki`)にコピーする必要があります。
## 認証局(CA)のローテーション {#certificate-authority-rotation}
Kubeadmは、CA証明書のローテーションや交換を最初からサポートしているわけではありません。
CAの手動ローテーションや交換についての詳細は、[manual rotation of CA certificates](/docs/tasks/tls/manual-rotation-of-ca-certificates/)を参照してください。
## 署名付きkubeletサービング証明書の有効化 {#kubelet-serving-certs}
デフォルトでは、kubeadmによって展開されるkubeletサービング証明書は自己署名されています。
これは、[metrics-server](https://github.com/kubernetes-sigs/metrics-server)のような外部サービスからキューブレットへの接続がTLSで保護されないことを意味します。
新しいkubeadmクラスター内のkubeletが適切に署名されたサービング証明書を取得するように設定するには、`kubeadm init`に以下の最小限の設定を渡す必要があります。
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
```
すでにクラスターを作成している場合は、以下の手順で適応させる必要があります。
- `kube-system`ネームスペースにある`kubelet-config-{{< skew latestVersion >}}` ConfigMapを見つけて編集します。
そのConfigMapの`kubelet`キーの値として[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)ドキュメントを指定します。KubeletConfigurationドキュメントを編集し、`serverTLSBootstrap: true`を設定します(このリストの後のスケッチも参照してください)。
- 各ノードで、`/var/lib/kubelet/config.yaml`に`serverTLSBootstrap: true`フィールドを追加し、`systemctl restart kubelet`でkubeletを再起動します。
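If you take the ConfigMap route, the edit might look like the sketch below; the ConfigMap name varies with the cluster version, so treat the name here as a placeholder.

```shell
# Sketch: edit the kubelet ConfigMap (the name is a placeholder and depends on your cluster version);
# set serverTLSBootstrap: true under the "kubelet" key, then restart the kubelet on every node.
kubectl edit configmap -n kube-system kubelet-config
```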
`serverTLSBootstrap: true`フィールドを指定すると、`certificates.k8s.io` APIに証明書を要求することで、kubeletサービング証明書のブートストラップが有効になります。
既知の制限として、これらの証明書のCSR(Certificate Signing Request)は、kube-controller-managerのデフォルトの署名者([`kubernetes.io/kubelet-serving`](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers)を参照)では自動的に承認されません。
承認には、ユーザーまたはサードパーティーのコントローラーによる操作が必要です。
これらのCSRは、以下を使用して表示できます:
```shell
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending
```
承認するためには、次のようにします:
```shell
kubectl certificate approve <CSR-name>
```
デフォルトでは、これらのサービング証明書は1年後に失効します。
Kubeadmは`KubeletConfiguration`フィールド`rotateCertificates`を`true`に設定します。これは有効期限が切れる間際に、サービング証明書のための新しいCSRセットを作成し、ローテーションを完了するために承認する必要があることを意味します。
詳しくは[Certificate Rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation)をご覧ください。
これらのCSRを自動的に承認するためのソリューションをお探しの場合は、以下をお勧めします。
クラウドプロバイダーに連絡し、ードの識別をアウトオブバンドのメカニズムで行うCSRの署名者がいるかどうか尋ねてください。
{{% thirdparty-content %}}
サードパーティーのカスタムコントローラーを使用することができます。
- [kubelet-csr-approver](https://github.com/postfinance/kubelet-csr-approver)
このようなコントローラーは、CSRのCommonNameを検証するだけでなく、要求されたIPやドメイン名も検証しなければ、安全なメカニズムとは言えません。これにより、kubeletクライアント証明書にアクセスできる悪意のあるアクターが、任意のIPやドメイン名に対してサービング証明書を要求するCSRを作成することを防ぐことができます。

View File

@ -0,0 +1,45 @@
---
title: ノードで使用されているコンテナランタイムの確認
content_type: task
weight: 10
---
<!-- overview -->
このページでは、クラスター内のノードが使用している[コンテナランタイム](/docs/setup/production-environment/container-runtimes/)を確認する手順を概説しています。
クラスターの実行方法によっては、ノード用のコンテナランタイムが事前に設定されている場合と、設定する必要がある場合があります。
マネージドKubernetesサービスを使用している場合、ードに設定されているコンテナランタイムを確認するためのベンダー固有の方法があるかもしれません。
このページで説明する方法は、`kubectl`の実行が許可されていればいつでも動作するはずです。
## {{% heading "prerequisites" %}}
`kubectl`をインストールし、設定します。詳細は[ツールのインストール](/ja/docs/tasks/tools/#kubectl)の項を参照してください。
## ノードで使用されているコンテナランタイムの確認
ノードの情報を取得して表示するには`kubectl`を使用します:
```shell
kubectl get nodes -o wide
```
出力は以下のようなものです。列`CONTAINER-RUNTIME`には、ランタイムとそのバージョンが出力されます。
```none
# For dockershim
NAME STATUS VERSION CONTAINER-RUNTIME
node-1 Ready v1.16.15 docker://19.3.1
node-2 Ready v1.16.15 docker://19.3.1
node-3 Ready v1.16.15 docker://19.3.1
```
```none
# For containerd
NAME STATUS VERSION CONTAINER-RUNTIME
node-1 Ready v1.19.6 containerd://1.4.1
node-2 Ready v1.19.6 containerd://1.4.1
node-3 Ready v1.19.6 containerd://1.4.1
```
コンテナランタイムについては、[コンテナランタイム](/docs/setup/production-environment/container-runtimes/)のページで詳細を確認することができます。

View File

@ -0,0 +1,233 @@
---
content_type: concept
title: 監査
---
<!-- overview -->
Kubernetesの監査はクラスター内の一連の行動を記録するセキュリティに関連した時系列の記録を提供します。
クラスターはユーザー、Kubernetes APIを使用するアプリケーション、
およびコントロールプレーン自体によって生成されたアクティビティなどを監査します。
監査により、クラスター管理者は以下の質問に答えることができます:
- 何が起きたのか?
- いつ起こったのか?
- 誰がそれを始めたのか?
- 何のために起こったのか?
- それはどこで観察されたのか?
- それはどこから始まったのか?
- それはどこへ向かっていたのか?
<!-- body -->
監査記録のライフサイクルは[kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)コンポーネントの中で始まります。
各リクエストの実行の各段階で、監査イベントが生成されます。
ポリシーに従って前処理され、バックエンドに書き込まれます。 ポリシーが何を記録するかを決定し、
バックエンドがその記録を永続化します。現在のバックエンドの実装はログファイルやWebhookなどがあります。
各リクエストは関連する _stage_ で記録されます。
定義されたステージは以下の通りです:
- `RequestReceived` - 監査ハンドラーがリクエストを受信すると同時に生成されるイベントのステージ。
つまり、ハンドラーチェーンに委譲される前に生成されるイベントのステージです。
- `ResponseStarted` - レスポンスヘッダーが送信された後、レスポンスボディが送信される前のステージです。
このステージは長時間実行されるリクエスト(watchなど)でのみ発生します。
- `ResponseComplete` - レスポンスボディの送信が完了して、それ以上のバイトは送信されません。
- `Panic` - パニックが起きたときに発生するイベント。
{{< note >}}
[Audit Event configuration](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)の設定は[Event](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#event-v1-core)APIオブジェクトとは異なります。
{{< /note >}}
監査ログ機能は、リクエストごとに監査に必要なコンテキストが保存されるため、APIサーバーのメモリー消費量が増加します。
メモリーの消費量は、監査ログ機能の設定によって異なります。
## 監査ポリシー
監査ポリシーはどのようなイベントを記録し、どのようなデータを含むべきかについてのルールを定義します。
監査ポリシーのオブジェクト構造は、[`audit.k8s.io` API group](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)で定義されています。
イベントが処理されると、そのイベントは順番にルールのリストと比較されます。
最初のマッチングルールは、イベントの監査レベルを設定します。
定義されている監査レベルは:
- `None` - ルールに一致するイベントを記録しません。
- `Metadata` - リクエストのメタデータ(リクエストしたユーザー、タイムスタンプ、リソース、動作など)を記録しますが、リクエストやレスポンスのボディは記録しません。
- `Request` - ログイベントのメタデータとリクエストボディは表示されますが、レスポンスボディは表示されません。
これは非リソースリクエストには適用されません。
- `RequestResponse` - イベントのメタデータ、リクエストとレスポンスのボディを記録しますが、
非リソースリクエストには適用されません。
`audit-policy-file`フラグを使って、ポリシーを記述したファイルを `kube-apiserver`に渡すことができます。
このフラグが省略された場合イベントは記録されません。
監査ポリシーファイルでは、`rules`フィールドが必ず指定されることに注意してください。
ルールがない(0)ポリシーは不当なものとして扱われます。
以下は監査ポリシーファイルの例です:
{{< codenew file="audit/audit-policy.yaml" >}}
最小限の監査ポリシーファイルを使用して、すべてのリクエストを `Metadata`レベルで記録することができます。
```yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```
独自の監査プロファイルを作成する場合は、Google Container-Optimized OSの監査プロファイルを出発点として使用できます。
監査ポリシーファイルを生成する[configure-helper.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)スクリプトを確認することができます。
スクリプトを直接見ることで、監査ポリシーファイルのほとんどを見ることができます。
また、定義されているフィールドの詳細については、[`Policy` configuration reference](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)を参照できます。
## 監査バックエンド
監査バックエンドは監査イベントを外部ストレージに永続化します。
kube-apiserverには2つのバックエンドが用意されています。
- イベントをファイルシステムに書き込むログバックエンド
- 外部のHTTP APIにイベントを送信するWebhookバックエンド
いずれの場合も、監査イベントはKubernetes API[`audit.k8s.io` API group](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)で定義されている構造に従います。
{{< note >}}
パッチの場合、リクエストボディはパッチ操作を含むJSON配列であり、適切なKubernetes APIオブジェクトを含むJSONオブジェクトではありません。
例えば、以下のリクエストボディは`/apis/batch/v1/namespaces/some-namespace/jobs/some-job-name`に対する有効なパッチリクエストです。
```json
[
{
"op": "replace",
"path": "/spec/parallelism",
"value": 0
},
{
"op": "remove",
"path": "/spec/template/spec/containers/0/terminationMessagePolicy"
}
]
```
{{< /note >}}
### ログバックエンド
ログバックエンドは監査イベントを[JSONlines](https://jsonlines.org/)形式のファイルに書き込みます。
以下の `kube-apiserver` フラグを使ってログ監査バックエンドを設定できます。
- `--audit-log-path` は、ログバックエンドが監査イベントを書き込む際に使用するログファイルのパスを指定します。
このフラグを指定しないと、ログバックエンドは無効になります。`-` は標準出力を意味します。
- `--audit-log-maxage` は、古い監査ログファイルを保持する最大日数を定義します。
- `--audit-log-maxbackup`は、保持する監査ログファイルの最大数を定義します。
- `--audit-log-maxsize` は、監査ログファイルがローテーションされるまでの最大サイズをメガバイト単位で定義します。
クラスターのコントロールプレーンでkube-apiserverをPodとして動作させている場合は、監査記録が永続化されるように、ポリシーファイルとログファイルの場所に`hostPath`をマウントすることを忘れないでください。
例えば:
```shell
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--audit-log-path=/var/log/audit.log
```
それからボリュームをマウントします:
```yaml
...
volumeMounts:
- mountPath: /etc/kubernetes/audit-policy.yaml
name: audit
readOnly: true
- mountPath: /var/log/audit.log
name: audit-log
readOnly: false
```
最後に`hostPath`を設定します:
```yaml
...
- name: audit
hostPath:
path: /etc/kubernetes/audit-policy.yaml
type: File
- name: audit-log
hostPath:
path: /var/log/audit.log
type: FileOrCreate
```
### Webhookバックエンド
Webhook監査バックエンドは、監査イベントをリモートのWeb APIに送信しますが、
これは認証手段を含むKubernetes APIの形式であると想定されます。
Webhook監査バックエンドを設定するには、以下のkube-apiserverフラグを使用します。
- `--audit-webhook-config-file` は、Webhookの設定ファイルのパスを指定します。
webhookの設定は、事実上特化した[kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters)です。
- `--audit-webhook-initial-backoff` は、最初に失敗したリクエストの後、再試行するまでに待つ時間を指定します。
それ以降のリクエストは、指数関数的なバックオフで再試行されます。
Webhookの設定ファイルは、kubeconfig形式でサービスのリモートアドレスと接続に使用する認証情報を指定します。
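As a sketch, the webhook backend flags might be wired up like this; both file paths are assumptions for the example.

```shell
# Sketch: enable the webhook audit backend (file paths are assumptions).
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-webhook-config-file=/etc/kubernetes/audit-webhook.kubeconfig \
  --audit-webhook-initial-backoff=5s
```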
## イベントバッチ {#batching}
ログバックエンドとwebhookバックエンドの両方がバッチ処理をサポートしています。
webhookを例に、利用可能なフラグの一覧を示します。
ログバックエンドで同じフラグを取得するには、フラグ名の`webhook`を`log`に置き換えてください。
デフォルトでは、バッチングは`webhook`では有効で、`log`では無効です。
同様に、デフォルトではスロットリングは `webhook` で有効で、`log`では無効です。
- `--audit-webhook-mode` は、バッファリング戦略を定義します。以下のいずれかとなります。
- `batch` - イベントをバッファリングして、非同期にバッチ処理します。これがデフォルトです。
- `blocking` - 個々のイベントを処理する際に、APIサーバーの応答をブロックします。
- `blocking-strict` - `blocking`と同じですが、`RequestReceived`ステージでの監査ログの記録に失敗した場合、kube-apiserverへのリクエスト全体が失敗します。
以下のフラグは `batch` モードでのみ使用されます:
- `--audit-webhook-batch-buffer-size`は、バッチ処理を行う前にバッファリングするイベントの数を定義します。
入力イベントの割合がバッファをオーバーフローすると、イベントはドロップされます。
- `--audit-webhook-batch-max-size`は、1つのバッチに入れるイベントの最大数を定義します。
- `--audit-webhook-batch-max-wait`は、キュー内のイベントを無条件にバッチ処理するまでの最大待機時間を定義します。
- `--audit-webhook-batch-throttle-qps`は、1秒あたりに生成されるバッチの最大平均数を定義します。
- `--audit-webhook-batch-throttle-burst`は、許可された QPS が低い場合に、同じ瞬間に生成されるバッチの最大数を定義します。
## パラメーターチューニング
パラメーターは、APIサーバーの負荷に合わせて設定してください。
例えば、kube-apiserverが毎秒100件のリクエストを受け取り、それぞれのリクエストが`ResponseStarted`と`ResponseComplete`の段階でのみ監査されるとします。毎秒≅200の監査イベントが発生すると考えてください。
1つのバッチに最大100個のイベントがある場合、スロットリングレベルを少なくとも2クエリ/秒に設定する必要があります。
バックエンドがイベントを書き込むのに最大で5秒かかる場合、5秒分のイベントを保持するようにバッファーサイズを設定する必要があります。
10バッチ、または1000イベントとなります。
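The worked example above translates into flags roughly like the following; the numbers match the example scenario (100 requests per second, two audited stages, about 5 seconds of backend latency) and are not a general recommendation.

```shell
# Sketch matching the example: ~200 events/s, batches of up to 100 events,
# at least 2 batches per second, and a buffer able to absorb ~5 seconds of events.
kube-apiserver \
  --audit-webhook-mode=batch \
  --audit-webhook-batch-max-size=100 \
  --audit-webhook-batch-throttle-qps=2 \
  --audit-webhook-batch-buffer-size=1000
```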
しかし、ほとんどの場合デフォルトのパラメーターで十分であり、手動で設定する必要はありません。
kube-apiserverが公開している以下のPrometheusメトリクスや、ログを見て監査サブシステムの状態を監視することができます。
- `apiserver_audit_event_total`メトリックには、エクスポートされた監査イベントの合計数が含まれます。
- `apiserver_audit_error_total`メトリックには、エクスポート中にエラーが発生してドロップされたイベントの総数が含まれます。
### ログエントリー・トランケーション {#truncate}
logバックエンドとwebhookバックエンドは、ログに記録されるイベントのサイズを制限することをサポートしています。
例として、logバックエンドで利用可能なフラグの一覧を以下に示します
- `--audit-log-truncate-enabled`は、イベントとバッチの切り捨てを有効にするかどうかを指定します。
- `--audit-log-truncate-max-batch-size`は、バックエンドに送信されるバッチの最大サイズ(バイト単位)を指定します。
- `--audit-log-truncate-max-event-size`は、バックエンドに送信される監査イベントの最大サイズ(バイト単位)を指定します。
デフォルトでは、`webhook`と`log`の両方で切り捨ては無効になっていますが、クラスター管理者は `audit-log-truncate-enabled`または`audit-webhook-truncate-enabled`を設定して、この機能を有効にする必要があります。
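For instance, truncation for the log backend could be enabled like this; the size limit is an arbitrary value chosen for the illustration.

```shell
# Sketch: enable truncation for the log backend (the size limit is an arbitrary example value).
kube-apiserver \
  --audit-log-truncate-enabled=true \
  --audit-log-truncate-max-event-size=102400
```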
## {{% heading "whatsnext" %}}
* [Mutating webhook auditing annotations](/docs/reference/access-authn-authz/extensible-admission-controllers/#mutating-webhook-auditing-annotations).
* [`Event`](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Event)
* [`Policy`](/docs/reference/config-api/apiserver-audit.v1/#audit-k8s-io-v1-Policy)

View File

@ -0,0 +1,407 @@
---
title: crictlによるKubernetesードのデバッグ
content_type: task
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.11" state="stable" >}}
`crictl`はCRI互換のコンテナランタイム用のコマンドラインインターフェイスです。
これを使って、Kubernetesード上のコンテナランタイムやアプリケーションの検査やデバッグを行うことができます。
`crictl`とそのソースコードは[cri-tools](https://github.com/kubernetes-sigs/cri-tools)リポジトリにホストされています。
## {{% heading "prerequisites" %}}
`crictl`にはCRIランタイムを搭載したLinuxが必要です。
<!-- steps -->
## crictlのインストール
cri-toolsの[リリースページ](https://github.com/kubernetes-sigs/cri-tools/releases)から、いくつかの異なるアーキテクチャ用の圧縮アーカイブ`crictl`をダウンロードできます。
お使いのKubernetesのバージョンに対応するバージョンをダウンロードしてください。
それを解凍してシステムパス上の`/usr/local/bin/`などの場所に移動します。
## 一般的な使い方
`crictl`コマンドにはいくつかのサブコマンドとランタイムフラグがあります。
詳細は`crictl help`または`crictl <subcommand> help`を参照してください。
`crictl`はデフォルトでは`unix:///var/run/dockershim.sock`に接続します。
他のランタイムの場合は、複数の異なる方法でエンドポイントを設定することができます:
- フラグ`--runtime-endpoint`と`--image-endpoint`の設定により
- 環境変数`CONTAINER_RUNTIME_ENDPOINT`と`IMAGE_SERVICE_ENDPOINT`の設定により
- 設定ファイル`--config=/etc/crictl.yaml`でエンドポイントの設定により
また、サーバーに接続する際のタイムアウト値を指定したり、デバッグを有効/無効にしたりすることもできます。
これには、設定ファイルで`timeout`や`debug`を指定するか、`--timeout`や`--debug`のコマンドラインフラグを使用します。
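For example, pointing `crictl` at containerd for a single invocation, or for the whole shell session, might look like this; the socket path assumes a default containerd installation.

```shell
# Sketch: talk to containerd instead of the default endpoint (socket path is an assumption).
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps

# Or set it once for the current shell session.
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl ps
```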
現在の設定を表示または編集するには、`/etc/crictl.yaml`の内容を表示または編集します。
```shell
cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 10
debug: true
```
## crictlコマンドの例
以下の例では、いくつかの`crictl`コマンドとその出力例を示しています。
{{< warning >}}
実行中のKubernetesクラスターに`crictl`を使ってポッドのサンドボックスやコンテナを作成しても、Kubeletは最終的にそれらを削除します。`crictl` は汎用のワークフローツールではなく、デバッグに便利なツールです。
{{< /warning >}}
### podsの一覧
すべてのポッドをリストアップ:
```shell
crictl pods
```
出力はこのようになります:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
926f1b5a1d33a About a minute ago Ready sh-84d7dcf559-4r2gq default 0
4dccb216c4adb About a minute ago Ready nginx-65899c769f-wv2gp default 0
a86316e96fa89 17 hours ago Ready kube-proxy-gblk4 kube-system 0
919630b8f81f1 17 hours ago Ready nvidia-device-plugin-zgbbv kube-system 0
```
Podを名前でリストアップします:
```shell
crictl pods --name nginx-65899c769f-wv2gp
```
出力はこのようになります:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0
```
Podをラベルでリストアップします:
```shell
crictl pods --label run=nginx
```
出力はこのようになります:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
4dccb216c4adb 2 minutes ago Ready nginx-65899c769f-wv2gp default 0
```
### イメージの一覧
すべてのイメージをリストアップします:
```shell
crictl images
```
出力はこのようになります:
```
IMAGE TAG IMAGE ID SIZE
busybox latest 8c811b4aec35f 1.15MB
k8s-gcrio.azureedge.net/hyperkube-amd64 v1.10.3 e179bbfe5d238 665MB
k8s-gcrio.azureedge.net/pause-amd64 3.1 da86e6ba6ca19 742kB
nginx latest cd5239a0906a6 109MB
```
イメージをリポジトリでリストアップします:
```shell
crictl images nginx
```
出力はこのようになります:
```
IMAGE TAG IMAGE ID SIZE
nginx latest cd5239a0906a6 109MB
```
イメージのIDのみをリストアップします:
```shell
crictl images -q
```
出力はこのようになります:
```
sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a
sha256:e179bbfe5d238de6069f3b03fccbecc3fb4f2019af741bfff1233c4d7b2970c5
sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
sha256:cd5239a0906a6ccf0562354852fae04bc5b52d72a2aff9a871ddb6bd57553569
```
### List containers
すべてのコンテナをリストアップします:
```shell
crictl ps -a
```
出力はこのようになります:
```
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 7 minutes ago Running sh 1
9c5951df22c78 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 8 minutes ago Exited sh 0
87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 8 minutes ago Running nginx 0
1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 18 hours ago Running kube-proxy 0
```
ランニングコンテナをリストアップします:
```
crictl ps
```
出力はこのようになります:
```
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
1f73f2d81bf98 busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 6 minutes ago Running sh 1
87d3992f84f74 nginx@sha256:d0a8828cccb73397acb0073bf34f4d7d8aa315263f1e7806bf8c55d8ac139d5f 7 minutes ago Running nginx 0
1941fb4da154f k8s-gcrio.azureedge.net/hyperkube-amd64@sha256:00d814b1f7763f4ab5be80c58e98140dfc69df107f253d7fdd714b30a714260a 17 hours ago Running kube-proxy 0
```
### 実行中のコンテナでコマンドの実行
```shell
crictl exec -i -t 1f73f2d81bf98 ls
```
出力はこのようになります:
```
bin dev etc home proc root sys tmp usr var
```
### コンテナログの取得
すべてのコンテナログを取得します:
```shell
crictl logs 87d3992f84f74
```
出力はこのようになります:
```
10.240.0.96 - - [06/Jun/2018:02:45:49 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
10.240.0.96 - - [06/Jun/2018:02:45:50 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
10.240.0.96 - - [06/Jun/2018:02:45:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
```
最新の`N`行のログのみを取得します:
```shell
crictl logs --tail=1 87d3992f84f74
```
出力はこのようになります:
```
10.240.0.96 - - [06/Jun/2018:02:45:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.47.0" "-"
```
### Podサンドボックスの実行
`crictl`を使ってPodサンドボックスを実行することは、コンテナのランタイムをデバッグするのに便利です。
稼働中のKubernetesクラスタでは、サンドボックスは最終的にKubeletによって停止され、削除されます。
1. 以下のようなJSONファイルを作成します:
```json
{
"metadata": {
"name": "nginx-sandbox",
"namespace": "default",
"attempt": 1,
"uid": "hdishd83djaidwnduwk28bcsb"
},
"logDirectory": "/tmp",
"linux": {
}
}
```
2. JSONを適用してサンドボックスを実行するには、`crictl runp`コマンドを使用します:
```shell
crictl runp pod-config.json
```
サンドボックスのIDが返されます。
### コンテナの作成
コンテナの作成に`crictl`を使うと、コンテナのランタイムをデバッグするのに便利です。
稼働中のKubernetesクラスタでは、サンドボックスは最終的にKubeletによって停止され、削除されます。
1. busyboxイメージをプルします:
```shell
crictl pull busybox
Image is up to date for busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
```
2. Podとコンテナのコンフィグを作成します:
**Pod config**:
```json
{
"metadata": {
"name": "nginx-sandbox",
"namespace": "default",
"attempt": 1,
"uid": "hdishd83djaidwnduwk28bcsb"
},
"log_directory": "/tmp",
"linux": {
}
}
```
**Container config**:
```json
{
"metadata": {
"name": "busybox"
},
"image":{
"image": "busybox"
},
"command": [
"top"
],
"log_path":"busybox.log",
"linux": {
}
}
```
3. 先に作成されたPodのID、コンテナの設定ファイル、Podの設定ファイルを渡して、コンテナを作成します。コンテナのIDが返されます。
```shell
crictl create f84dd361f8dc51518ed291fbadd6db537b0496536c1d2d6c05ff943ce8c9a54f container-config.json pod-config.json
```
4. すべてのコンテナをリストアップし、新しく作成されたコンテナの状態が`Created`に設定されていることを確認します:
```shell
crictl ps -a
```
出力はこのようになります:
```
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
3e025dd50a72d busybox 32 seconds ago Created busybox 0
```
### コンテナの起動
コンテナを起動するには、そのコンテナのIDを`crictl start`に渡します:
```shell
crictl start 3e025dd50a72d956c4f14881fbb5b1080c9275674e95fb67f965f6478a957d60
```
出力はこのようになります:
```
3e025dd50a72d956c4f14881fbb5b1080c9275674e95fb67f965f6478a957d60
```
コンテナの状態が「Running」に設定されていることを確認します:
```shell
crictl ps
```
出力はこのようになります:
```
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT
3e025dd50a72d busybox About a minute ago Running busybox 0
```
<!-- discussion -->
詳しくは[kubernetes-sigs/cri-tools](https://github.com/kubernetes-sigs/cri-tools)をご覧ください。
## docker cliからcrictlへのマッピング
以下のマッピング表の正確なバージョンは、`docker cli v1.40`と`crictl v1.19.0`のものです。
この一覧はすべてを網羅しているわけではないことに注意してください。
たとえば、`docker cli`の実験的なコマンドは含まれていません。
{{< note >}}
CRICTLの出力形式はDocker CLIと似ていますが、いくつかのCLIでは列が欠けています。
{{< /note >}}
### デバッグ情報の取得
{{< table caption="mapping from docker cli to crictl - retrieve debugging information" >}}
docker cli | crictl | 説明 | サポートされていない機能
-- | -- | -- | --
`attach` | `attach` | 実行中のコンテナにアタッチ | `--detach-keys`, `--sig-proxy`
`exec` | `exec` | 実行中のコンテナでコマンドの実行 | `--privileged`, `--user`, `--detach-keys`
`images` | `images` | イメージのリストアップ |  
`info` | `info` | システム全体の情報の表示 |  
`inspect` | `inspect`, `inspecti` | コンテナ、イメージ、タスクの低レベルの情報を返します |  
`logs` | `logs` | コンテナのログを取得します | `--details`
`ps` | `ps` | コンテナのリストアップ |  
`stats` | `stats` | コンテナのリソース使用状況をライブで表示 | Column: NET/BLOCK I/O, PIDs
`version` | `version` | ランタイム(Docker、ContainerD、その他)のバージョン情報を表示します |  
{{< /table >}}
### 変更を行います
{{< table caption="mapping from docker cli to crictl - perform changes" >}}
docker cli | crictl | 説明 | サポートされていない機能
-- | -- | -- | --
`create` | `create` | 新しいコンテナを作成します |  
`kill` | `stop` (timeout = 0) | 1つ以上の実行中のコンテナを停止します | `--signal`
`pull` | `pull` | レジストリーからイメージやリポジトリをプルします | `--all-tags`, `--disable-content-trust`
`rm` | `rm` | 1つまたは複数のコンテナを削除します |  
`rmi` | `rmi` | 1つまたは複数のイメージを削除します |  
`run` | `run` | 新しいコンテナでコマンドを実行 |  
`start` | `start` | 停止した1つまたは複数のコンテナを起動 | `--detach-keys`
`stop` | `stop` | 実行中の1つまたは複数のコンテナの停止 |  
`update` | `update` | 1つまたは複数のコンテナの構成を更新 | `--restart`、`--blkio-weight`とその他
{{< /table >}}
### crictlでのみ対応
{{< table caption="mapping from docker cli to crictl - supported only in crictl" >}}
crictl | 説明
-- | --
`imagefsinfo` | イメージファイルシステムの情報を返します
`inspectp` | 1つまたは複数のPodの状態を表示します
`port-forward` | ローカルポートをPodに転送します
`runp` | 新しいPodを実行します
`rmp` | 1つまたは複数のPodを削除します
`stopp` | 稼働中の1つまたは複数のPodを停止します
{{< /table >}}

View File

@ -0,0 +1,257 @@
---
title: Secretsで安全にクレデンシャルを配布する
content_type: task
weight: 50
min-kubernetes-server-version: v1.6
---
<!-- overview -->
このページでは、パスワードや暗号化キーなどの機密データをPodに安全に注入する方法を紹介します。
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
### 機密データをbase64でエンコードする
ユーザー名`my-app`とパスワード`39528$vdg7Jb`の2つの機密データが必要だとします。
まず、base64エンコーディングツールを使って、ユーザ名とパスワードをbase64表現に変換します。
ここでは、手軽に入手できるbase64プログラムを使った例を紹介します:
```shell
echo -n 'my-app' | base64
echo -n '39528$vdg7Jb' | base64
```
出力結果によると、ユーザ名のbase64表現は`bXktYXBw`で、パスワードのbase64表現は`Mzk1MjgkdmRnN0pi`です。
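You can check that the encoding round-trips by decoding the two values again; this is only a sanity check.

```shell
# Sketch: decode the values to confirm they round-trip to the original username and password.
echo 'bXktYXBw' | base64 --decode
echo 'Mzk1MjgkdmRnN0pi' | base64 --decode
```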
{{< caution >}}
OSから信頼されているローカルツールを使用することで、外部ツールのセキュリティリスクを低減することができます。
{{< /caution >}}
<!-- steps -->
## Secretを作成する
以下はユーザー名とパスワードを保持するSecretを作成するために使用できる設定ファイルです:
{{< codenew file="pods/inject/secret.yaml" >}}
1. Secret を作成する
```shell
kubectl apply -f https://k8s.io/examples/pods/inject/secret.yaml
```
1. Secretの情報を取得する
```shell
kubectl get secret test-secret
```
出力:
```
NAME TYPE DATA AGE
test-secret Opaque 2 1m
```
1. Secretの詳細な情報を取得する:
```shell
kubectl describe secret test-secret
```
出力:
```
Name: test-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 13 bytes
username: 7 bytes
```
### kubectlでSecretを作成する
base64エンコードの手順を省略したい場合は、`kubectl create secret`コマンドで同じSecretを作成することができます。
例えば:
```shell
kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=39528$vdg7Jb'
```
先ほどの詳細なアプローチでは、各ステップを明示的に実行し、何が起こっているかを示していますが、`kubectl create secret`の方が便利です。
## Volumeにある機密情報をアクセスするPodを作成する
これはPodの作成に使用できる設定ファイルです。
{{< codenew file="pods/inject/secret-pod.yaml" >}}
1. Podを作成する:
```shell
kubectl apply -f https://k8s.io/examples/pods/inject/secret-pod.yaml
```
1. Podの`STATUS`が`Running`であるのを確認する:
```shell
kubectl get pod secret-test-pod
```
出力:
```
NAME READY STATUS RESTARTS AGE
secret-test-pod 1/1 Running 0 42m
```
1. Podの中にあるコンテナにシェルを実行する
```shell
kubectl exec -i -t secret-test-pod -- /bin/bash
```
1. 機密データは `/etc/secret-volume` にマウントされたボリュームを介してコンテナに公開されます。
ディレクトリ `/etc/secret-volume` 中のファイルの一覧を確認する:
```shell
# Run this in the shell inside the container
ls /etc/secret-volume
```
`password`と`username` 2つのファイル名が出力される:
```
password username
```
1. `username`と`password`ファイルの中身を表示する:
```shell
# Run this in the shell inside the container
echo "$( cat /etc/secret-volume/username )"
echo "$( cat /etc/secret-volume/password )"
```
出力:
```
my-app
39528$vdg7Jb
```
## Secretでコンテナの環境変数を定義する
### 単一のSecretでコンテナの環境変数を定義する
* Secretの中でkey-valueペアで環境変数を定義する:
```shell
kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
```
* Secretで定義された`backend-username`の値をPodの環境変数`SECRET_USERNAME`に割り当てます。
{{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}
* Podを作成する:
```shell
kubectl create -f https://k8s.io/examples/pods/inject/pod-single-secret-env-variable.yaml
```
* コンテナの環境変数`SECRET_USERNAME`の中身を表示する:
```shell
kubectl exec -i -t env-single-secret -- /bin/sh -c 'echo $SECRET_USERNAME'
```
出力:
```
backend-admin
```
### 複数のSecretからコンテナの環境変数を定義する
* 前述の例と同様に、まずSecretを作成します:
```shell
kubectl create secret generic backend-user --from-literal=backend-username='backend-admin'
kubectl create secret generic db-user --from-literal=db-username='db-admin'
```
* Podの中で環境変数を定義する:
{{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}
* Podを作成する:
```shell
kubectl create -f https://k8s.io/examples/pods/inject/pod-multiple-secret-env-variable.yaml
```
* コンテナの環境変数を表示する:
```shell
kubectl exec -i -t envvars-multiple-secrets -- /bin/sh -c 'env | grep _USERNAME'
```
出力:
```
DB_USERNAME=db-admin
BACKEND_USERNAME=backend-admin
```
## Secretのすべてのkey-valueペアを環境変数として設定する
{{< note >}}
この機能はKubernetes v1.6以降で利用可能です。
{{< /note >}}
* 複数のkey-valueペアを含むSecretを作成する
```shell
kubectl create secret generic test-secret --from-literal=username='my-app' --from-literal=password='39528$vdg7Jb'
```
* envFromを使用してSecretのすべてのデータをコンテナの環境変数として定義します。SecretのキーがPodの環境変数名になります。
{{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}
* Podを作成する:
```shell
kubectl create -f https://k8s.io/examples/pods/inject/pod-secret-envFrom.yaml
```
* `username`と`password`コンテナの環境変数を表示する
```shell
kubectl exec -i -t envfrom-secret -- /bin/sh -c 'echo "username: $username\npassword: $password\n"'
```
出力:
```
username: my-app
password: 39528$vdg7Jb
```
### 参考文献
* [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
* [Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
* [Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
## {{% heading "whatsnext" %}}
* [Secrets](/docs/concepts/configuration/secret/)についてもっと知る。
* [Volumes](/docs/concepts/storage/volumes/)について知る。

View File

@ -135,7 +135,7 @@ status:
loadBalancer: {}
```
`.spec.ipFamilies`内の配列の1番目の要素に`IPv6`を明示的に指定した、次のようなServiceを作成してみます。Kubernetesは`service-cluster-ip-range`で設定したIPv6の範囲からcluster IPを割り当てて、`.spec.ipFamilyPolicy`を`SingleStack`に設定します。
`.spec.ipFamilies`内の配列の1番目の要素に`IPv6`を明示的に指定した、次のようなServiceを作成してみます。Kubernetesは`service-cluster-ip-range`で設定したIPv6の範囲からcluster IPを割り当てて、`.spec.ipFamilyPolicy`を`SingleStack`に設定します。
{{< codenew file="service/networking/dual-stack-ipfamilies-ipv6.yaml" >}}
@ -173,7 +173,7 @@ status:
loadBalancer: {}
```
`.spec.ipFamiliePolicy`に`PreferDualStack`を明示的に指定した、次のようなServiceを作成してみます。Kubernetesは(クラスターでデュアルスタックを有効化しているため)IPv4およびIPv6のアドレスの両方を割り当て、`.spec.ClusterIPs`のリストから、`.spec.ipFamilies`配列の最初の要素のアドレスファミリーに基づいた`.spec.ClusterIP`を設定します。
`.spec.ipFamiliePolicy`に`PreferDualStack`を明示的に指定した、次のようなServiceを作成してみます。Kubernetesは(クラスターでデュアルスタックを有効化しているため)IPv4およびIPv6のアドレスの両方を割り当て、`.spec.ClusterIPs`のリストから、`.spec.ipFamilies`配列の最初の要素のアドレスファミリーに基づいた`.spec.ClusterIP`を設定します。
{{< codenew file="service/networking/dual-stack-preferred-svc.yaml" >}}

View File

@ -35,7 +35,7 @@ AppArmorを利用すれば、コンテナに許可することを制限したり
gke-test-default-pool-239f5d02-xwux: v1.4.0
```
2. AppArmorカーネルモジュールが有効であること。LinuxカーネルがAppArmorプロファイルを強制するためには、AppArmorカーネルモジュールのインストールと有効化が必須です。UbuntuやSUSEなどのディストリビューションではデフォルトで有効化されますが、他の多くのディストリビューションでのサポートはオプションです。モジュールが有効になっているかチェックするには、次のように`/sys/module/apparmor/parameters/enabled`ファイルを確認します。
2. AppArmorカーネルモジュールが有効であること。LinuxカーネルがAppArmorプロファイルを強制するためには、AppArmorカーネルモジュールのインストールと有効化が必須です。UbuntuやSUSEなどのディストリビューションではデフォルトで有効化されますが、他の多くのディストリビューションでのサポートはオプションです。モジュールが有効になっているかチェックするには、次のように`/sys/module/apparmor/parameters/enabled`ファイルを確認します。
```shell
cat /sys/module/apparmor/parameters/enabled

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
name: envvars-multiple-secrets
spec:
containers:
- name: envars-test-container
image: nginx
env:
- name: BACKEND_USERNAME
valueFrom:
secretKeyRef:
name: backend-user
key: backend-username
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-user
key: db-username

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: Pod
metadata:
name: envfrom-secret
spec:
containers:
- name: envars-test-container
image: nginx
envFrom:
- secretRef:
name: test-secret

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
name: env-single-secret
spec:
containers:
- name: envars-test-container
image: nginx
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: backend-user
key: backend-username

View File

@ -39,8 +39,8 @@ Google이 일주일에 수십억 개의 컨테이너들을 운영하게 해준
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
<div class="light-text">
<h2>150+ 마이크로서비스를 쿠버네티스로 마이그레이션하는 도전</h2>
<p>By Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<p>Sarah Wells, Financial Times 운영 및 안정성 기술 담당 이사</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">비디오 보기</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Attend KubeCon North America on October 11-15, 2021</a>

View File

@ -144,7 +144,7 @@ LGTM은 "Looks good to me"의 약자이며 풀 리퀘스트가 기술적으로
지원하려면, 다음을 수행한다.
1. `kubernetes/website` 리포지터리 내
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS) 파일의 섹션에
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) 파일의 섹션에
여러분의 GitHub 사용자 이름을 추가하는 풀 리퀘스트를 연다.
{{< note >}}
@ -216,7 +216,7 @@ PR은 자동으로 병합된다. SIG Docs 승인자는 추가적인 기술 리
지원하려면 다음을 수행한다.
1. `kubernetes/website` 리포지터리 내
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS)
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
파일의 섹션에 자신을 추가하는 풀 리퀘스트를 연다.
{{< note >}}

File diff suppressed because it is too large

View File

@ -2,7 +2,7 @@
title: ConfigMap
id: configmap
date: 2021-08-24
full_link: /docs/concepts/configuration/configmap
full_link: /pt-br/docs/concepts/configuration/configmap
short_description: >
Um objeto da API usado para armazenar dados não-confidenciais em pares chave-valor. Pode ser consumido como variáveis de ambiente, argumentos de linha de comando, ou arquivos de configuração em um volume.

View File

@ -0,0 +1,17 @@
---
title: Variáveis de Ambiente de Contêineres
id: container-env-variables
date: 2021-11-20
full_link: /pt-br/docs/concepts/containers/container-environment/
short_description: >
Variáveis de ambiente de contêineres são pares nome=valor que trazem informações úteis para os contêineres rodando dentro de um Pod.
aka:
tags:
- fundamental
---
Variáveis de ambiente de contêineres são pares nome=valor que trazem informações úteis para os contêineres rodando dentro de um {{< glossary_tooltip text="pod" term_id="Pod" >}}
<!--more-->
Variáveis de ambiente de contêineres fornecem informações requeridas pela aplicação conteinerizada, junto com informações sobre recursos importantes para o {{< glossary_tooltip text="contêiner" term_id="container" >}}. Por exemplo, detalhes do sistema de arquivos, informações sobre o contêiner, e outros recursos do cluster, como endpoints de serviços.

View File

@ -0,0 +1,19 @@
---
title: Contêiner
id: container
date: 2018-04-12
full_link: /docs/concepts/containers/
short_description: >
Uma imagem executável leve e portável que contém software e todas as suas dependências.
aka:
tags:
- fundamental
- workload
---
Uma imagem executável leve e portável que contém software e todas as suas dependências.
<!--more-->
Contêineres desacoplam aplicações da infraestrutura da máquina em que estas rodam para tornar a instalação mais fácil em diferentes ambientes de nuvem e de
sistemas operacionais, e para facilitar o escalonamento das aplicações.

View File

@ -2,7 +2,7 @@
title: Secret
id: secret
date: 2021-08-24
full_link: /docs/concepts/configuration/secret/
full_link: /pt-br/docs/concepts/configuration/secret/
short_description: >
Armazena dados sensíveis, como senhas, tokens OAuth e chaves SSH.

View File

@ -41,12 +41,12 @@ Kubernetes — это проект с открытым исходным кодо
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Смотреть видео</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Посетите KubeCon в Северной Америке, 11-15 октября 2021 года</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Посетите KubeCon в Европе, 16-20 мая 2022 года</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton" button id="desktopKCButton">Посетите KubeCon в Европе, 17-20 мая 2022 года</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Посетите KubeCon в Северной Америке, 24-28 октября 2022 года</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>

View File

@ -62,12 +62,12 @@ Kubernetes - проект з відкритим вихідним кодом. В
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Переглянути відео</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Відвідайте KubeCon у Північній Америці, 11-15 жовтня 2021 року</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe-2022/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">Відвідайте KubeCon в Європі, 17-20 травня 2022 року</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna21" button id="desktopKCButton">Відвідайте KubeCon у Північній Америці, 24-28 жовтня 2022 року</a>
</div>
<div id="videoPlayer">

View File

@ -40,6 +40,8 @@ application has to take ports as flags, the API servers have to know how to
insert dynamic port numbers into configuration blocks, services have to know
how to find each other, etc. Rather than deal with this, Kubernetes takes a
different approach.
To learn about the Kubernetes networking model, see [here](/docs/concepts/services-networking/).
-->
Kubernetes 的宗旨就是在应用之间共享机器。
通常来说,共享机器需要两个应用之间不能使用相同的端口,但是在多个应用开发者之间
@ -49,80 +51,7 @@ Kubernetes 的宗旨就是在应用之间共享机器。
而 API 服务器还需要知道如何将动态端口数值插入到配置模块中,服务也需要知道如何找到对方等等。
与其去解决这些问题Kubernetes 选择了其他不同的方法。
<!--
## The Kubernetes network model
Every `Pod` gets its own IP address. This means you do not need to explicitly
create links between `Pods` and you almost never need to deal with mapping
container ports to host ports. This creates a clean, backwards-compatible
model where `Pods` can be treated much like VMs or physical hosts from the
perspectives of port allocation, naming, service discovery, load balancing,
application configuration, and migration.
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
* pods on a node can communicate with all pods on all nodes without NAT
* agents on a node (e.g. system daemons, kubelet) can communicate with all
pods on that node
Note: For those platforms that support `Pods` running in the host network (e.g.
Linux):
* pods in the host network of a node can communicate with all pods on all
nodes without NAT
-->
## Kubernetes 网络模型 {#the-kubernetes-network-model}
每一个 `Pod` 都有它自己的IP地址这就意味着你不需要显式地在每个 `Pod` 之间创建链接,
你几乎不需要处理容器端口到主机端口之间的映射。
这将创建一个干净的、向后兼容的模型,在这个模型里,从端口分配、命名、服务发现、
负载均衡、应用配置和迁移的角度来看,`Pod` 可以被视作虚拟机或者物理主机。
Kubernetes 对所有网络设施的实施,都需要满足以下的基本要求(除非有设置一些特定的网络分段策略):
* 节点上的 Pod 可以不通过 NAT 和其他任何节点上的 Pod 通信
* 节点上的代理比如系统守护进程、kubelet可以和节点上的所有Pod通信
备注:仅针对那些支持 `Pods` 在主机网络中运行的平台(比如Linux):
* 那些运行在节点的主机网络里的 Pod 可以不通过 NAT 和所有节点上的 Pod 通信
<!--
This model is not only less complex overall, but it is principally compatible
with the desire for Kubernetes to enable low-friction porting of apps from VMs
to containers. If your job previously ran in a VM, your VM had an IP and could
talk to other VMs in your project. This is the same basic model.
Kubernetes IP addresses exist at the `Pod` scope - containers within a `Pod`
share their network namespaces - including their IP address and MAC address.
This means that containers within a `Pod` can all reach each other's ports on
`localhost`. This also means that containers within a `Pod` must coordinate port
usage, but this is no different from processes in a VM. This is called the
-->
这个模型不仅不复杂,而且还和 Kubernetes 的实现廉价的从虚拟机向容器迁移的初衷相兼容,
如果你的工作开始是在虚拟机中运行的,你的虚拟机有一个 IP
这样就可以和其他的虚拟机进行通信,这是基本相同的模型。
Kubernetes 的 IP 地址存在于 `Pod` 范围内 - 容器共享它们的网络命名空间 - 包括它们的 IP 地址和 MAC 地址。
这就意味着 `Pod` 内的容器都可以通过 `localhost` 到达各个端口。
这也意味着 `Pod` 内的容器都需要相互协调端口的使用,但是这和虚拟机中的进程似乎没有什么不同,
这也被称为“一个 Pod 一个 IP”模型。
<!--
How this is implemented is a detail of the particular container runtime in use.
It is possible to request ports on the `Node` itself which forward to your `Pod`
(called host ports), but this is a very niche operation. How that forwarding is
implemented is also a detail of the container runtime. The `Pod` itself is
blind to the existence or non-existence of host ports.
-->
如何实现这一点是正在使用的容器运行时的特定信息。
也可以在 `node` 本身通过端口去请求你的 `Pod`(称之为主机端口),
但这是一个很特殊的操作。转发方式如何实现也是容器运行时的细节。
`Pod` 自己并不知道这些主机端口是否存在。
要了解 Kubernetes 网络模型,请参阅[此处](/zh/docs/concepts/services-networking/)。
<!--
## How to implement the Kubernetes networking model
@ -167,39 +96,6 @@ Open vSwitch 是一个高性能可编程的虚拟交换机,支持 Linux 和 Wi
Open vSwitch 使 Antrea 能够以高性能和高效的方式实现 Kubernetes 的网络策略。
借助 Open vSwitch 可编程的特性Antrea 能够在 Open vSwitch 之上实现广泛的联网、安全功能和服务。
<!--
### AOS from Apstra
[AOS](https://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment.
AOS has a rich set of REST API endpoints that enable Kubernetes to quickly change the network policy based on application requirements. Further enhancements will integrate the AOS Graph model used for the network design with the workload provisioning, enabling an end to end management system for both private and public clouds.
AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux.
Details on how the AOS system works can be accessed here: https://www.apstra.com/products/how-it-works/
-->
### Apstra 的 AOS
[AOS](https://www.apstra.com/products/aos/) 是一个基于意图的网络系统,
可以通过一个简单的集成平台创建和管理复杂的数据中心环境。
AOS 利用高度可扩展的分布式设计来消除网络中断,同时将成本降至最低。
AOS 参考设计当前支持三层连接的主机,这些主机消除了旧的两层连接的交换问题。
这些三层连接的主机可以是 LinuxDebian、Ubuntu、CentOS系统
它们直接在机架式交换机TOR的顶部创建 BGP 邻居关系。
AOS 自动执行路由邻接,然后提供对 Kubernetes 部署中常见的路由运行状况注入RHI的精细控制。
AOS 具有一组丰富的 REST API 端点,这些端点使 Kubernetes 能够根据应用程序需求快速更改网络策略。
进一步的增强功能将用于网络设计的 AOS Graph 模型与工作负载供应集成在一起,
从而为私有云和公共云提供端到端管理系统。
AOS 支持使用包括 Cisco、Arista、Dell、Mellanox、HPE 在内的制造商提供的通用供应商设备,
以及大量白盒系统和开放网络操作系统,例如 Microsoft SONiC、Dell OPX 和 Cumulus Linux。
想要更详细地了解 AOS 系统是如何工作的可以点击这里https://www.apstra.com/products/how-it-works/
<!--
### AWS VPC CNI for Kubernetes
@ -243,32 +139,6 @@ Pod 可以通过 Express Route 或者 站点到站点的 VPN 来连接到对等
Azure CNI 可以在
[Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni) 中获得。
<!--
### Big Cloud Fabric from Big Switch Networks
[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud native networking architecture, designed to run Kubernetes in private cloud/on-premises environments. Using unified physical & virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies & container traffic monitoring.
With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside with VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
-->
### Big Switch Networks 的 Big Cloud Fabric
[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) 是一个基于云原生的网络架构,
旨在在私有云或者本地环境中运行 Kubernetes。
它使用统一的物理和虚拟 SDNBig Cloud Fabric 解决了固有的容器网络问题,
比如负载均衡、可见性、故障排除、安全策略和容器流量监控。
在 Big Cloud Fabric 的虚拟 Pod 多租户架构的帮助下,容器编排系统
(比如 Kubernetes、RedHat OpenShift、Mesosphere DC/OS 和 Docker Swarm
将与 VM 本地编排系统(比如 VMware、OpenStack 和 Nutanix进行本地集成。
客户将能够安全地互联任意数量的这些集群,并且在需要时启用他们之间的租户间通信。
在最新的 [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html) 中,
BCF 被 Gartner 评为有远见者visionary
其中一个 BCF 的 Kubernetes 本地部署(包括运行在不同地理区域多个数据中心上的
Kubernetes、DC/OS 和 VMware在[这里](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/)有所介绍。
<!--
### Calico
@ -300,7 +170,7 @@ Cilium 支持 L7/HTTP可以在 L3-L7 上通过使用与网络分离的基于
<!--
### CNI-Genie from Huawei
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Romana](https://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Weave-net](https://www.weave.works/products/weave-net/).
CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
-->
@ -313,7 +183,6 @@ CNI-Genie also supports [assigning multiple IP addresses to a pod](https://githu
[CNI 插件](https://github.com/containernetworking/cni#3rd-party-plugins)运行的任何实现,比如
[Flannel](https://github.com/coreos/flannel#flannel)、
[Calico](https://docs.projectcalico.org/)、
[Romana](https://romana.io)、
[Weave-net](https://www.weave.works/products/weave-net/)。
CNI-Genie 还支持[将多个 IP 地址分配给 Pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multi-ip-addresses-per-pod)
@ -362,18 +231,6 @@ Coil operates with a low overhead compared to bare metal, and allows you to defi
[Coil](https://github.com/cybozu-go/coil) 是一个为易于集成、提供灵活的出站流量网络而设计的 CNI 插件。
与裸机相比Coil 的额外操作开销低,并允许针对外部网络的出站流量任意定义 NAT 网关。
<!--
### Contiv
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](https://contiv.io) is all open sourced.
-->
### Contiv
[Contiv](https://github.com/contiv/netplugin)
为各种使用场景提供可配置的网络(使用 BGP 的原生 L3、使用 VXLAN 的覆盖网络、
经典 L2以及 Cisco-SDN/ACI。
[Contiv](https://contiv.io) 是完全开源的。
<!--
### Contrail/Tungsten Fabric
@ -425,83 +282,15 @@ people have reported success with Flannel and Kubernetes.
Kubernetes 所需要的覆盖网络。已经有许多人报告了使用 Flannel 和 Kubernetes 的成功案例。
<!--
### Google Compute Engine (GCE)
### Hybridnet
For the Google Compute Engine cluster configuration scripts, [advanced
routing](https://cloud.google.com/vpc/docs/routes) is used to
assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that
subnet will be routed directly to the VM by the GCE network fabric. This is in
addition to the "main" IP address assigned to the VM, which is NAT'ed for
outbound internet access. A linux bridge (called `cbr0`) is configured to exist
on that subnet, and is passed to docker's `-bridge` flag.
Docker is started with:
```shell
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
This bridge is created by Kubelet (controlled by the `--network-plugin=kubenet`
flag) according to the `Node`'s `.spec.podCIDR`.
Docker will now allocate IPs from the `cbr-cidr` block. Containers can reach
each other and `Nodes` over the `cbr0` bridge. Those IPs are all routable
within the GCE project network.
GCE itself does not know anything about these IPs, though, so it will not NAT
them for outbound internet traffic. To achieve that an iptables rule is used
to masquerade (aka SNAT - to make it seem as if packets came from the `Node`
itself) traffic that is bound for IPs outside the GCE project network
(10.0.0.0/8).
```shell
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
```
Lastly IP forwarding is enabled in the kernel (so the kernel will process
packets for bridged containers):
```shell
sysctl net.ipv4.ip_forward=1
```
The result of all this is that all `Pods` can reach each other and can egress
traffic to the internet.
[Hybridnet](https://github.com/alibaba/hybridnet) is an open source CNI plugin designed for hybrid clouds which provides both overlay and underlay networking for containers in one or more clusters. Overlay and underlay containers can run on the same node and have cluster-wide bidirectional network connectivity.
-->
### Google Compute Engine (GCE)
### Hybridnet
对于 Google Compute Engine 的集群配置脚本,
使用[高级路由](https://cloud.google.com/vpc/docs/routes)为每个虚机分配一个子网(默认为 `/24`,即 254 个 IP
发往该子网的任何流量都会由 GCE 网络结构直接路由到对应的虚机。
这是对分配给虚机的“主” IP 地址的补充,主 IP 地址经过 NAT 转换以用于出站的互联网访问。
一个名为 `cbr0` 的 Linux 网桥被配置在该子网上,并通过 Docker 的 `--bridge` 参数传递给 Docker。
Docker 会以这样的参数启动:
```shell
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
这个网桥是由 Kubelet`--network-plugin=kubenet` 参数控制)根据节点的 `.spec.podCIDR` 参数创建的。
Docker 将会从 `cbr-cidr` 块分配 IP。
容器之间可以通过 `cbr0` 网桥相互访问,也可以访问节点。
这些 IP 都可以在 GCE 的网络中被路由。
而 GCE 本身并不知道这些 IP所以不会对访问外网的流量进行 NAT。
为了实现此目的,使用了一条 `iptables` 规则,对发往 GCE 项目网络10.0.0.0/8之外
IP 的流量进行伪装又称为 SNAT使数据包看起来好像是来自节点本身。
```shell
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
```
最后,在内核中启用了 IP 转发(因此内核将处理桥接容器的数据包):
```shell
sysctl net.ipv4.ip_forward=1
```
所有这些的结果是所有 Pod 都可以互相访问,并且可以将流量发送到互联网。
[Hybridnet](https://github.com/alibaba/hybridnet) 是一个为混合云设计的开源 CNI 插件,
它为一个或多个集群中的容器同时提供覆盖overlay网络和底层underlay网络。
覆盖网络和底层网络的容器可以运行在同一个节点上,并具有集群范围的双向网络连通性。
<!--
### Jaguar
@ -581,9 +370,9 @@ Lars Kellogg-Stedman.
<!--
### Multus (a Multi Network plugin)
[Multus](https://github.com/Intel-Corp/multus-cni) is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
Multus is a Multi CNI plugin to support the Multi Networking feature in Kubernetes using CRD based network objects in Kubernetes.
Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (eg. [Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and 3rd party plugins (eg. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition to it, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes with both cloud native and NFV based applications in Kubernetes.
Multus supports all [reference plugins](https://github.com/containernetworking/plugins) (eg. [Flannel](https://github.com/containernetworking/cni.dev/blob/main/content/plugins/v0.9/meta/flannel.md), [DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp), [Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)) that implement the CNI specification and 3rd party plugins (eg. [Calico](https://github.com/projectcalico/cni-plugin), [Weave](https://github.com/weaveworks/weave), [Cilium](https://github.com/cilium/cilium), [Contiv](https://github.com/contiv/netplugin)). In addition to it, Multus supports [SRIOV](https://github.com/hustcat/sriov-cni), [DPDK](https://github.com/Intel-Corp/sriov-cni), [OVS-DPDK & VPP](https://github.com/intel/vhost-user-net-plugin) workloads in Kubernetes with both cloud native and NFV based applications in Kubernetes.
-->
### Multus (a Multi Network plugin)
@ -591,7 +380,7 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p
使用 Kubernetes 中基于 CRD 的网络对象来支持实现 Kubernetes 多网络系统。
Multus 支持所有[参考插件](https://github.com/containernetworking/plugins)(比如:
[Flannel](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel)、
[Flannel](https://github.com/containernetworking/cni.dev/blob/main/content/plugins/v0.9/meta/flannel.md)、
[DHCP](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp)、
[Macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)
来实现 CNI 规范和第三方插件(比如:
@ -623,28 +412,6 @@ NSX-T 可以为多云及多系统管理程序环境提供网络虚拟化,并
以及 NSX-T 与基于容器的 CaaS/PaaS 平台(例如 Pivotal Container ServicePKS和 OpenShift之间的集成。
<!--
### Nuage Networks VCS (Virtualized Cloud Services)
[Nuage](https://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications.The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
-->
### Nuage Networks VCS (Virtualized Cloud Services)
[Nuage](https://www.nuagenetworks.net) 提供了一个高度可扩展的基于策略的软件定义网络SDN平台。
Nuage 使用开源的 Open vSwitch 作为数据平面,以及基于开放标准构建具有丰富功能的 SDN 控制器。
Nuage 平台使用覆盖层在 Kubernetes Pod 和非 Kubernetes 环境VM 和裸机服务器)之间提供基于策略的无缝联网。
Nuage 的策略抽象模型在设计时就考虑到了应用程序,并且可以轻松声明应用程序的细粒度策略。
该平台的实时分析引擎可为 Kubernetes 应用程序提供可见性和安全性监控。
<!--
### OpenVSwitch
[OpenVSwitch](https://www.openvswitch.org/) is a somewhat more mature but also
complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.
### OVN (Open Virtual Networking)
OVN is an opensource network virtualization solution developed by the
@ -653,29 +420,12 @@ stateful ACLs, load-balancers etc to build different virtual networking
topologies. The project has a specific Kubernetes plugin and documentation
at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
-->
### OpenVSwitch
[OpenVSwitch](https://www.openvswitch.org/) 是一种构建覆盖网络的方式,相对更为成熟,但也更为复杂。
它得到了网络领域若干“大厂”的支持。
### OVN (开放式虚拟网络)
OVN 是一个由 Open vSwitch 社区开发的开源的网络虚拟化解决方案。
它支持创建逻辑交换机、逻辑路由器、有状态 ACL、负载均衡器等以构建不同的虚拟网络拓扑。
该项目提供了专门的 Kubernetes 插件和文档,参见 [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)。
<!--
### Romana
[Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
-->
### Romana
[Romana](https://romana.io) 是一个开源网络和安全自动化解决方案。
它可以让你在没有覆盖网络的情况下部署 Kubernetes。
Romana 支持 Kubernetes [网络策略](/zh/docs/concepts/services-networking/network-policies/)
来提供跨网络命名空间的隔离。
<!--
### Weave Net from Weaveworks

View File

@ -29,9 +29,8 @@ When you deploy Kubernetes, you get a cluster.
This document outlines the various components you need to have for
a complete and working Kubernetes cluster.
Here's the diagram of a Kubernetes cluster with all the components tied together.
{{< figure src="/images/docs/components-of-kubernetes.svg" alt="Components of Kubernetes" caption="The components of a Kubernetes cluster" class="diagram-large" >}}
![Components of Kubernetes](/images/docs/components-of-kubernetes.svg)
-->
<!-- overview -->
当你部署完 Kubernetes, 即拥有了一个完整的集群。
@ -39,9 +38,7 @@ Here's the diagram of a Kubernetes cluster with all the components tied together
本文档概述了交付正常运行的 Kubernetes 集群所需的各种组件。
这张图表展示了包含所有相互关联组件的 Kubernetes 集群。
![Kubernetes 组件](/images/docs/components-of-kubernetes.svg)
{{< figure src="/images/docs/components-of-kubernetes.svg" alt="Kubernetes 的组件" caption="Kubernetes 集群的组件" class="diagram-large" >}}
<!-- body -->

View File

@ -177,7 +177,7 @@ description: "此优先级类应仅用于 XYZ 服务 Pod。"
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
Pods with `PreemptionPolicy: Never` will be placed in the scheduling queue
Pods with `preemptionPolicy: Never` will be placed in the scheduling queue
ahead of lower-priority pods,
but they cannot preempt other pods.
A non-preempting pod waiting to be scheduled will stay in the scheduling queue,
@ -197,7 +197,7 @@ high-priority pods.
{{< feature-state for_k8s_version="v1.19" state="beta" >}}
配置了 `PreemptionPolicy: Never` 的 Pod 将被放置在调度队列中较低优先级 Pod 之前,
配置了 `preemptionPolicy: Never` 的 Pod 将被放置在调度队列中较低优先级 Pod 之前,
但它们不能抢占其他 Pod。等待调度的非抢占式 Pod 将留在调度队列中,直到有足够的可用资源,
它才可以被调度。非抢占式 Pod像其他 Pod 一样,受调度程序回退的影响。
这意味着如果调度程序尝试这些 Pod 并且无法调度它们,它们将以更低的频率被重试,
@ -206,26 +206,26 @@ high-priority pods.
非抢占式 Pod 仍可能被其他高优先级 Pod 抢占。
<!--
`PreemptionPolicy` defaults to `PreemptLowerPriority`,
`preemptionPolicy` defaults to `PreemptLowerPriority`,
which will allow pods of that PriorityClass to preempt lower-priority pods
(as is existing default behavior).
If `PreemptionPolicy` is set to `Never`,
If `preemptionPolicy` is set to `Never`,
pods in that PriorityClass will be non-preempting.
An example use case is for data science workloads.
A user may submit a job that they want to be prioritized above other workloads,
but do not wish to discard existing work by preempting running pods.
The high priority job with `PreemptionPolicy: Never` will be scheduled
The high priority job with `preemptionPolicy: Never` will be scheduled
ahead of other queued pods,
as soon as sufficient cluster resources "naturally" become free.
-->
`PreemptionPolicy` 默认为 `PreemptLowerPriority`
`preemptionPolicy` 默认为 `PreemptLowerPriority`
这将允许该 PriorityClass 的 Pod 抢占较低优先级的 Pod现有默认行为也是如此
如果 `PreemptionPolicy` 设置为 `Never`,则该 PriorityClass 中的 Pod 将是非抢占式的。
如果 `preemptionPolicy` 设置为 `Never`,则该 PriorityClass 中的 Pod 将是非抢占式的。
数据科学工作负载是一个示例用例。用户可以提交他们希望优先于其他工作负载的作业,
但不希望因为抢占运行中的 Pod 而导致现有工作被丢弃。
设置为 `PreemptionPolicy: Never` 的高优先级作业将在其他排队的 Pod 之前被调度,
设置为 `preemptionPolicy: Never` 的高优先级作业将在其他排队的 Pod 之前被调度,
只要足够的集群资源“自然地”变得可用。
<!-- ### Example Non-preempting PriorityClass -->
@ -664,4 +664,4 @@ kubelet 使用优先级来确定
[默认限制优先级消费](/zh/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)
* 了解 [Pod 干扰](/zh/docs/concepts/workloads/pods/disruptions/)
* 了解 [API 发起的驱逐](/zh/docs/concepts/scheduling-eviction/api-eviction/)
* 了解[节点压力驱逐](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)
* 了解[节点压力驱逐](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)

View File

@ -269,7 +269,7 @@ To apply:
<!--
1. Open a pull request that adds your GitHub user name to a section of the
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS) file
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) file
in the `kubernetes/website` repository.
{{< note >}}
@ -282,7 +282,7 @@ If approved, a SIG Docs lead adds you to the appropriate GitHub team. Once added
[@k8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) assigns and suggests you as a reviewer on new pull requests.
-->
1. 发起 PR将你的 GitHub 用户名添加到 `kubernetes/website` 仓库中
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS)
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
文件的特定节。
{{< note >}}
@ -383,7 +383,7 @@ When you meet the [requirements](https://github.com/kubernetes/community/blob/ma
<!--
To apply:
1. Open a pull request adding yourself to a section of the [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS) file in the `kubernetes/website` repository.
1. Open a pull request adding yourself to a section of the [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES) file in the `kubernetes/website` repository.
{{< note >}}
If you aren't sure where to add yourself, add yourself to `sig-docs-en-owners`.
@ -396,7 +396,7 @@ If approved, a SIG Docs lead adds you to the appropriate GitHub team. Once added
申请流程如下:
1. 发起一个 PR将自己添加到 `kubernetes/website` 仓库中
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS)
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/main/OWNERS_ALIASES)
文件的对应节区。
{{< note >}}

View File

@ -107,9 +107,9 @@ read that resource will fail until it is deleted or a valid decryption key is pr
Name | Encryption | Strength | Speed | Key Length | Other Considerations
-----|------------|----------|-------|------------|---------------------
`identity` | None | N/A | N/A | N/A | Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written.
`aescbc` | AES-CBC with PKCS#7 padding | Strongest | Fast | 32-byte | The recommended choice for encryption at rest but may be slightly slower than `secretbox`.
`secretbox` | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review.
`aesgcm` | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented.
`aescbc` | AES-CBC with PKCS#7 padding | Weak | Fast | 32-byte | Not recommended due to CBC's vulnerability to padding oracle attacks.
`kms` | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with PKCS#7 padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. [Configure the KMS provider](/docs/tasks/administer-cluster/kms-provider/)
Each provider supports multiple keys - the keys are tried in order for decryption, and if the provider
@ -119,9 +119,9 @@ is the first provider, the first key is used for encryption.
名称 | 加密类型 | 强度 | 速度 | 密钥长度 | 其它事项
-----|------------|----------|-------|------------|---------------------
`identity` | 无 | N/A | N/A | N/A | 不加密写入的资源。当设置为第一个 provider 时,资源将在新值写入时被解密。
`aescbc` | 填充 PKCS#7 的 AES-CBC | 最强 | 快 | 32字节 | 建议使用的加密项,但可能比 `secretbox` 稍微慢一些。
`secretbox` | XSalsa20 和 Poly1305 | 强 | 更快 | 32字节 | 较新的标准,在需要高度评审的环境中可能不被接受。
`aesgcm` | 带有随机数的 AES-GCM | 每写入 200k 次后必须轮换 | 最快 | 16、24 或者 32 字节 | 除非实施了自动密钥轮换方案,否则不建议使用。
`aescbc` | 填充 PKCS#7 的 AES-CBC | 弱 | 快 | 32 字节 | 由于 CBC 容易受到填充预言攻击Padding Oracle Attack不推荐使用。
`kms` | 使用信封加密方案:数据使用带有 PKCS#7 填充的 AES-CBC 通过数据加密密钥DEK加密DEK 根据 Key Management ServiceKMS中的配置通过密钥加密密钥Key Encryption KeysKEK加密 | 最强 | 快 | 32字节 | 建议使用第三方工具进行密钥管理。为每个加密生成新的 DEK并由用户控制 KEK 轮换来简化密钥轮换。[配置 KMS 提供程序](/zh/docs/tasks/administer-cluster/kms-provider/)
每个 provider 都支持多个密钥 - 在解密时会按顺序使用密钥,如果是第一个 provider则第一个密钥用于加密。
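<!--
As an illustration only, here is a minimal sketch of an `EncryptionConfiguration`
that uses `aescbc` as the first provider for writes and keeps `identity` last so
that existing, unencrypted data can still be read. The file path and key name are
placeholders chosen for this example, not required values.
-->
下面给出一个最小的 `EncryptionConfiguration` 示意(仅作演示,并非规定的配置):
`aescbc` 作为第一个 provider 用于加密新写入的数据,`identity` 放在最后以便读取尚未加密的历史数据。
其中的文件路径和密钥名称只是本示例假设的占位值。

```shell
# 生成一个随机的 32 字节密钥,并写出配置文件(路径为示例值)
BASE64_KEY=$(head -c 32 /dev/urandom | base64)
cat <<EOF > /tmp/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # 列表中的第一个 provider 用于加密新写入的数据
      - aescbc:
          keys:
            - name: key1
              secret: ${BASE64_KEY}
      # identity 放在最后,用于读取尚未加密的历史数据
      - identity: {}
EOF
```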

View File

@ -52,35 +52,44 @@ dependency on Docker:
当用了替代的容器运行时之后Docker 命令可能不工作,甚至产生意外的输出。
这才是判定你是否依赖于 Docker 的方法。
<!--
1. Make sure no privileged Pods execute Docker commands.
2. Check that scripts and apps running on nodes outside of Kubernetes
<!--
1. Make sure no privileged Pods execute Docker commands (like `docker ps`),
restart the Docker service (commands such as `systemctl restart docker.service`),
or modify Docker-specific files such as `/etc/docker/daemon.json`.
1. Check for any private registries or image mirror settings in the Docker
configuration file (like `/etc/docker/daemon.json`). Those typically need to
be reconfigured for another container runtime.
1. Check that scripts and apps running on nodes outside of your Kubernetes
infrastructure do not execute Docker commands. It might be:
- SSH to nodes to troubleshoot;
- Node startup scripts;
- Monitoring and security agents installed on nodes directly.
3. Third-party tools that perform above mentioned privileged operations. See
1. Third-party tools that perform above mentioned privileged operations. See
[Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
for more information.
4. Make sure there is no indirect dependencies on dockershim behavior.
1. Make sure there is no indirect dependencies on dockershim behavior.
This is an edge case and unlikely to affect your application. Some tooling may be configured
to react to Docker-specific behaviors, for example, raise alert on specific metrics or search for
a specific log message as part of troubleshooting instructions.
If you have such tooling configured, test the behavior on test
cluster before migration.
-->
1. 确认没有特权 Pod 执行 docker 命令。
2. 检查 Kubernetes 基础架构外部节点上的脚本和应用,确认它们没有执行 Docker 命令。可能的命令有:
-->
1. 确认没有特权 Pod 执行 Docker 命令(如 `docker ps`)、重新启动 Docker
服务(如 `systemctl restart docker.service`),或修改 `/etc/docker/daemon.json`
等 Docker 特有的文件(可参考本节末尾的检查示例)。
2. 检查 Docker 配置文件(如 `/etc/docker/daemon.json`中的私有镜像仓库或镜像mirror站点设置。
这些设置通常需要针对其他容器运行时重新配置。
3. 检查并确保在 Kubernetes 基础设施之外的节点上运行的脚本和应用程序没有执行 Docker 命令。
可能的情况如:
- SSH 到节点排查故障;
- 节点启动脚本;
- 直接安装在节点上的监视和安全代理。
3. 检查执行了上述特权操作的第三方工具。详细操作请参考:
[从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/)
4. 确认没有对 dockershim 行为的间接依赖。这是一种极端情况,不太可能影响你的应用。
- 直接安装在节点上的监控和安全代理。
4. 检查执行上述特权操作的第三方工具。详细操作请参考:
[从 dockershim 迁移遥测和安全代理](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents)
5. 确认没有对 dockershim 行为的间接依赖。这是一种极端情况,不太可能影响你的应用。
一些工具很可能被配置为使用了 Docker 特性,比如,基于特定指标发警报,或者在故障排查指令的一个环节中搜索特定的日志信息。
如果你有此类配置的工具,需要在迁移之前,在测试集群上完成功能验证。
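<!--
The following commands are only a rough, illustrative way to perform some of the
checks above; they are not an exhaustive or mandated procedure.
-->
下面的命令只是对上述部分检查的一种粗略示意,并非完整或强制的检查流程:

```shell
# 粗略检查是否有 Pod 直接挂载了 Docker 套接字(这通常意味着对 Docker 的直接依赖)
kubectl get pods --all-namespaces -o yaml | grep -n "docker.sock"

# 登录到某个节点后,检查是否存在 Docker 特有的配置文件
test -f /etc/docker/daemon.json && cat /etc/docker/daemon.json
```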
<!--
## Dependency on Docker explained {#role-of-dockershim}
-->

View File

@ -0,0 +1,83 @@
---
title: 查明节点上所使用的容器运行时
content_type: task
weight: 10
---
<!--
title: Find Out What Container Runtime is Used on a Node
content_type: task
reviewers:
- SergeyKanzhelev
weight: 10
-->
<!-- overview -->
<!--
This page outlines steps to find out what [container runtime](/docs/setup/production-environment/container-runtimes/)
the nodes in your cluster use.
-->
本页面描述查明集群中节点所使用的[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
的步骤。
<!--
Depending on the way you run your cluster, the container runtime for the nodes may
have been pre-configured or you need to configure it. If you're using a managed
Kubernetes service, there might be vendor-specific ways to check what container runtime is
configured for the nodes. The method described on this page should work whenever
the execution of `kubectl` is allowed.
-->
取决于你运行集群的方式,节点所使用的容器运行时可能是事先配置好的,
也可能需要你来配置。如果你在使用托管的 Kubernetes 服务,
可能存在特定于厂商的方法来检查节点上配置的容器运行时。
本页描述的方法应该在能够执行 `kubectl` 的场合下都可以工作。
## {{% heading "prerequisites" %}}
<!--
Install and configure `kubectl`. See [Install Tools](/docs/tasks/tools/#kubectl) section for details.
-->
安装并配置 `kubectl`。参见[安装工具](/zh/docs/tasks/tools/#kubectl) 节了解详情。
<!--
## Find out the container runtime used on a Node
Use `kubectl` to fetch and show node information:
-->
## 查明节点所使用的容器运行时
使用 `kubectl` 来读取并显示节点信息:
```shell
kubectl get nodes -o wide
```
<!--
The output is similar to the following. The column `CONTAINER-RUNTIME` outputs
the runtime and its version.
-->
输出如下面所示。`CONTAINER-RUNTIME` 列给出容器运行时及其版本。
```none
# For dockershim
NAME STATUS VERSION CONTAINER-RUNTIME
node-1 Ready v1.16.15 docker://19.3.1
node-2 Ready v1.16.15 docker://19.3.1
node-3 Ready v1.16.15 docker://19.3.1
```
```none
# For containerd
NAME STATUS VERSION CONTAINER-RUNTIME
node-1 Ready v1.19.6 containerd://1.4.1
node-2 Ready v1.19.6 containerd://1.4.1
node-3 Ready v1.19.6 containerd://1.4.1
```
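<!--
If you only want the runtime field itself, the following JSONPath query is one
possible (illustrative) way to print each node's name and container runtime version.
-->
如果你只想输出运行时字段本身,下面的 JSONPath 查询是一种可行的写法(仅作示意):

```shell
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```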
<!--
Find out more information about container runtimes
on [Container Runtimes](/docs/setup/production-environment/container-runtimes/) page.
-->
你可以在[容器运行时](/zh/docs/setup/production-environment/container-runtimes/)
页面找到与容器运行时相关的更多信息。

View File

@ -0,0 +1,5 @@
---
title: "安全"
weight: 40
---

View File

@ -0,0 +1,444 @@
---
title: 在集群级别应用 Pod 安全标准
content_type: tutorial
weight: 10
---
<!--
title: Apply Pod Security Standards at the Cluster Level
content_type: tutorial
weight: 10
-->
{{% alert title="Note" %}}
<!-- This tutorial applies only for new clusters. -->
本教程仅适用于新集群。
{{% /alert %}}
<!--
Pod Security admission (PSA) is enabled by default in v1.23 and later, as it has
[graduated to beta](/blog/2021/12/09/pod-security-admission-beta/).
Pod Security
is an admission controller that carries out checks against the Kubernetes
[Pod Security Standards](docs/concepts/security/pod-security-standards/) when new pods are
created. This tutorial shows you how to enforce the `baseline` Pod Security
Standard at the cluster level which applies a standard configuration
to all namespaces in a cluster.
To apply Pod Security Standards to specific namespaces, refer to [Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss).
-->
Pod 安全准入PSA在 v1.23 及更高版本默认启用,
因为它[升级到测试版beta](/blog/2021/12/09/pod-security-admission-beta/)。
Pod 安全准入是在创建 Pod 时应用
[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)的准入控制器。
本教程将向你展示如何在集群级别实施 `baseline` Pod 安全标准,
即将同一个标准配置应用到集群中的所有名字空间。
要将 Pod 安全标准应用于特定名字空间,
请参阅[在名字空间级别应用 Pod 安全标准](/zh/docs/tutorials/security/ns-level-pss)。
## {{% heading "prerequisites" %}}
<!--
Install the following on your workstation:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
-->
在你的工作站中安装以下内容:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
<!--
## Choose the right Pod Security Standard to apply
[Pod Security Admission](/docs/concepts/security/pod-security-admission/)
lets you apply built-in [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
with the following modes: `enforce`, `audit`, and `warn`.
To gather information that helps you to choose the Pod Security Standards
that are most appropriate for your configuration, do the following:
-->
## 正确选择要应用的 Pod 安全标准 {#choose-the-right-pod-security-standard-to-apply}
[Pod 安全准入](/zh/docs/concepts/security/pod-security-admission/)
允许你使用以下模式应用内置的
[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/):
`enforce`、`audit` 和 `warn`
要收集信息以便选择最适合你的配置的 Pod 安全标准,请执行以下操作:
<!--
1. Create a cluster with no Pod Security Standards applied:
-->
1. 创建一个没有应用 Pod 安全标准的集群:
```shell
kind create cluster --name psa-wo-cluster-pss --image kindest/node:v1.23.0
```
<!-- The output is similar to this: -->
输出类似于:
```
Creating cluster "psa-wo-cluster-pss" ...
✓ Ensuring node image (kindest/node:v1.23.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-wo-cluster-pss"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-wo-cluster-pss
Thanks for using kind! 😊
```
<!--
1. Set the kubectl context to the new cluster:
-->
2. 将 kubectl 上下文设置为新集群:
```shell
kubectl cluster-info --context kind-psa-wo-cluster-pss
```
<!-- The output is similar to this: -->
输出类似于:
```
Kubernetes control plane is running at https://127.0.0.1:61350
CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
<!--
1. Get a list of namespaces in the cluster:
-->
3. 获取集群中的名字空间列表:
```shell
kubectl get ns
```
<!-- The output is similar to this: -->
输出类似于:
```
NAME STATUS AGE
default Active 9m30s
kube-node-lease Active 9m32s
kube-public Active 9m32s
kube-system Active 9m32s
local-path-storage Active 9m26s
```
<!--
1. Use `--dry-run=server` to understand what happens when different Pod Security Standards
are applied:
-->
4. 使用 `--dry-run=server` 来了解应用不同的 Pod 安全标准时会发生什么:
1. Privileged
```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=privileged
```
<!-- The output is similar to this: -->
输出类似于:
```
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
namespace/kube-system labeled
namespace/local-path-storage labeled
```
2. Baseline
```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=baseline
```
<!-- The output is similar to this: -->
输出类似于:
```
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
Warning: existing pods in namespace "kube-system" violate the new PodSecurity enforce level "baseline:latest"
Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes
Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged
namespace/kube-system labeled
namespace/local-path-storage labeled
```
3. Restricted
```shell
kubectl label --dry-run=server --overwrite ns --all \
pod-security.kubernetes.io/enforce=restricted
```
<!-- The output is similar to this: -->
输出类似于:
```
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
Warning: existing pods in namespace "kube-system" violate the new PodSecurity enforce level "restricted:latest"
Warning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities, runAsNonRoot != true, seccompProfile
Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true
Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged, allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types, runAsNonRoot != true, seccompProfile
namespace/kube-system labeled
Warning: existing pods in namespace "local-path-storage" violate the new PodSecurity enforce level "restricted:latest"
Warning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, seccompProfile
namespace/local-path-storage labeled
```
<!--
From the previous output, you'll notice that applying the `privileged` Pod Security Standard shows no warnings
for any namespaces. However, `baseline` and `restricted` standards both have
warnings, specifically in the `kube-system` namespace.
-->
从前面的输出中,你会注意到应用 `privileged` Pod 安全标准不会显示任何名字空间的警告。
然而,`baseline` 和 `restricted` 标准都有警告,特别是在 `kube-system` 名字空间中。
<!--
## Set modes, versions and standards
In this section, you apply the following Pod Security Standards to the `latest` version:
* `baseline` standard in `enforce` mode.
* `restricted` standard in `warn` and `audit` mode.
-->
## 设置模式、版本和标准 {#set-modes-versions-and-standards}
在本节中,你将把以下 Pod 安全标准应用于最新(`latest`)版本:
* 在 `enforce` 模式下的 `baseline` 标准。
* `warn``audit` 模式下的 `restricted` 标准。
<!--
The `baseline` Pod Security Standard provides a convenient
middle ground that allows keeping the exemption list short and prevents known
privilege escalations.
Additionally, to prevent pods from failing in `kube-system`, you'll exempt the namespace
from having Pod Security Standards applied.
When you implement Pod Security Admission in your own environment, consider the
following:
-->
`baseline` Pod 安全标准提供了一个便利的折中方案,既能保持豁免列表简短,又能防止已知的特权提升。
此外,为了防止 `kube-system` 中的 Pod 失败,你将豁免该名字空间,不对其应用 Pod 安全标准。
在你自己的环境中实施 Pod 安全准入时,请考虑以下事项:
<!--
1. Based on the risk posture applied to a cluster, a stricter Pod Security
Standard like `restricted` might be a better choice.
1. Exempting the `kube-system` namespace allows pods to run as
`privileged` in this namespace. For real world use, the Kubernetes project
strongly recommends that you apply strict RBAC
policies that limit access to `kube-system`, following the principle of least
privilege.
To implement the preceding standards, do the following:
1. Create a configuration file that can be consumed by the Pod Security
Admission Controller to implement these Pod Security Standards:
-->
1. 根据应用于集群的风险状况,更严格的 Pod 安全标准(如 `restricted`)可能是更好的选择。
2. 豁免 `kube-system` 名字空间将允许 Pod 在该名字空间中以特权(`privileged`)方式运行。
在实际使用中Kubernetes 项目强烈建议你遵循最小特权原则,
应用严格的 RBAC 策略来限制对 `kube-system` 的访问。
接下来,按照以下步骤实现上述标准:
3. 创建一个配置文件Pod 安全准入控制器可以使用该文件来实现这些 Pod 安全标准:
```
mkdir -p /tmp/pss
cat <<EOF > /tmp/pss/cluster-level-pss.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1beta1
kind: PodSecurityConfiguration
defaults:
enforce: "baseline"
enforce-version: "latest"
audit: "restricted"
audit-version: "latest"
warn: "restricted"
warn-version: "latest"
exemptions:
usernames: []
runtimeClasses: []
namespaces: [kube-system]
EOF
```
<!--
1. Configure the API server to consume this file during cluster creation:
-->
4. 在创建集群时配置 API 服务器使用此文件:
```
cat <<EOF > /tmp/pss/cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: ClusterConfiguration
apiServer:
extraArgs:
admission-control-config-file: /etc/config/cluster-level-pss.yaml
extraVolumes:
- name: accf
hostPath: /etc/config
mountPath: /etc/config
readOnly: false
pathType: "DirectoryOrCreate"
extraMounts:
- hostPath: /tmp/pss
containerPath: /etc/config
# optional: if set, the mount is read-only.
# default false
readOnly: false
# optional: if set, the mount needs SELinux relabeling.
# default false
selinuxRelabel: false
# optional: set propagation mode (None, HostToContainer or Bidirectional)
# see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
# default None
propagation: None
EOF
```
{{<note>}}
<!--
If you use Docker Desktop with KinD on macOS, you can
add `/tmp` as a Shared Directory under the menu item
**Preferences > Resources > File Sharing**.
-->
如果你在 macOS 上使用 Docker Desktop 和 KinD
你可以在菜单项 **Preferences > Resources > File Sharing**
下添加 `/tmp` 作为共享目录。
{{</note>}}
<!--
1. Create a cluster that uses Pod Security Admission to apply
these Pod Security Standards:
-->
5. 创建一个使用 Pod 安全准入的集群来应用这些 Pod 安全标准:
```shell
kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.23.0 --config /tmp/pss/cluster-config.yaml
```
<!-- The output is similar to this: -->
输出类似于:
```
Creating cluster "psa-with-cluster-pss" ...
✓ Ensuring node image (kindest/node:v1.23.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-with-cluster-pss"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-with-cluster-pss
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```
<!--
1. Point kubectl to the cluster
-->
6. 将 kubectl 指向集群
```shell
kubectl cluster-info --context kind-psa-with-cluster-pss
```
<!-- The output is similar to this: -->
输出类似于:
```
Kubernetes control plane is running at https://127.0.0.1:63855
CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
<!--
1. Create the following Pod specification for a minimal configuration in the default namespace:
-->
7. 创建以下 Pod 规约作为在 default 名字空间中的一个最小配置:
```
cat <<EOF > /tmp/pss/nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
EOF
```
<!--
1. Create the Pod in the cluster:
-->
8. 在集群中创建 Pod本步骤之后给出了一个可选的验证示例
```shell
kubectl apply -f /tmp/pss/nginx-pod.yaml
```
<!-- The output is similar to this: -->
输出类似于:
```
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
```
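<!--
(Optional, illustrative) Because the cluster-wide `enforce` level is `baseline`,
the Pod is created despite the `restricted` warning. You can confirm this as follows:
-->
(可选,仅作示意)由于集群级别的 `enforce` 模式是 `baseline`,尽管出现了 `restricted`
级别的警告,该 Pod 仍会被创建。你可以这样确认:

```shell
kubectl get pod nginx
```

输出中该 Pod 的 `STATUS` 通常应为 `Running`。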
<!--
## Clean up
Run `kind delete cluster --name psa-with-cluster-pss` and
`kind delete cluster --name psa-wo-cluster-pss` to delete the clusters you
created.
-->
## 清理 {#clean-up}
运行 `kind delete cluster --name psa-with-cluster-pss` 和
`kind delete cluster --name psa-wo-cluster-pss` 来删除你创建的集群。
## {{% heading "whatsnext" %}}
<!--
- Run a
[shell script](/examples/security/kind-with-cluster-level-baseline-pod-security.sh)
to perform all the preceding steps at once:
1. Create a Pod Security Standards based cluster level Configuration
2. Create a file to let API server consumes this configuration
3. Create a cluster that creates an API server with this configuration
4. Set kubectl context to this new cluster
5. Create a minimal pod yaml file
6. Apply this file to create a Pod in the new cluster
- [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
- [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
- [Apply Pod Security Standards at the namespace level](/docs/tutorials/security/ns-level-pss/)
-->
- 运行一个 [shell 脚本](/zh/examples/security/kind-with-cluster-level-baseline-pod-security.sh)
一次执行前面的所有步骤:
1. 创建一个基于 Pod 安全标准的集群级别配置
2. 创建一个文件让 API 服务器消费这个配置
3. 创建一个集群,用这个配置创建一个 API 服务器
4. 设置 kubectl 上下文为这个新集群
5. 创建一个最小的 Pod yaml 文件
6. 应用这个文件,在新集群中创建一个 Pod
- [Pod 安全准入](/zh/docs/concepts/security/pod-security-admission/)
- [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
- [在名字空间级别应用 Pod 安全标准](/zh/docs/tutorials/security/ns-level-pss/)

View File

@ -0,0 +1,243 @@
---
title: 在名字空间级别应用 Pod 安全标准
content_type: tutorial
weight: 10
---
<!--
title: Apply Pod Security Standards at the Namespace Level
content_type: tutorial
weight: 10
-->
{{% alert title="Note" %}}
<!-- This tutorial applies only for new clusters. -->
本教程仅适用于新集群。
{{% /alert %}}
<!--
Pod Security admission (PSA) is enabled by default in v1.23 and later, as it [graduated
to beta](/blog/2021/12/09/pod-security-admission-beta/). Pod Security Admission
is an admission controller that applies
[Pod Security Standards](docs/concepts/security/pod-security-standards/)
when pods are created. In this tutorial, you will enforce the `baseline` Pod Security Standard,
one namespace at a time.
You can also apply Pod Security Standards to multiple namespaces at once at the cluster
level. For instructions, refer to [Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss).
-->
Pod 安全准入PSA在 v1.23 及更高版本默认启用,
因为它[升级到测试版beta](/blog/2021/12/09/pod-security-admission-beta/)。
Pod 安全准入是在创建 Pod 时应用
[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)的准入控制器。
在本教程中,你将逐个名字空间地应用 `baseline` Pod 安全标准。
你还可以在集群级别一次性地将 Pod 安全标准应用于多个名字空间。
有关说明,请参阅[在集群级别应用 Pod 安全标准](/zh/docs/tutorials/security/cluster-level-pss)。
## {{% heading "prerequisites" %}}
<!--
Install the following on your workstation:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
-->
在你的工作站中安装以下内容:
- [KinD](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
<!--
## Create cluster
1. Create a `KinD` cluster as follows:
-->
## 创建集群 {#create-cluster}
1. 按照如下方式创建一个 `KinD` 集群:
```shell
kind create cluster --name psa-ns-level --image kindest/node:v1.23.0
```
<!-- The output is similar to this: -->
输出类似于:
```
Creating cluster "psa-ns-level" ...
✓ Ensuring node image (kindest/node:v1.23.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-psa-ns-level"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-ns-level
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
```
<!-- 1. Set the kubectl context to the new cluster: -->
2. 将 kubectl 上下文设置为新集群:
```shell
kubectl cluster-info --context kind-psa-ns-level
```
<!-- The output is similar to this: -->
输出类似于:
```
Kubernetes control plane is running at https://127.0.0.1:50996
CoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
<!--
## Create a namespace
Create a new namespace called `example`:
-->
## 创建名字空间 {#create-a-namespace}
创建一个名为 `example` 的新名字空间:
```shell
kubectl create ns example
```
<!-- The output is similar to this: -->
输出类似于:
```
namespace/example created
```
<!--
## Apply Pod Security Standards
1. Enable Pod Security Standards on this namespace using labels supported by
built-in Pod Security Admission. In this step we will warn on baseline pod
security standard as per the latest version (default value)
-->
## 应用 Pod 安全标准 {#apply-pod-security-standards}
1. 使用内置 Pod 安全准入所支持的标签在此名字空间上启用 Pod 安全标准。
在这一步中,我们将根据最新版本(默认值)对基线 Pod 安全标准发出警告。
```shell
kubectl label --overwrite ns example \
pod-security.kubernetes.io/warn=baseline \
pod-security.kubernetes.io/warn-version=latest
```
<!--
2. Multiple pod security standards can be enabled on any namespace, using labels.
Following command will `enforce` the `baseline` Pod Security Standard, but
`warn` and `audit` for `restricted` Pod Security Standards as per the latest
version (default value)
-->
2. 可以使用标签在任何名字空间上启用多个 Pod 安全标准。
以下命令将强制(`enforce` 执行基线(`baseline`Pod 安全标准,
但根据最新版本(默认值)对受限(`restricted`Pod 安全标准执行警告(`warn`)和审核(`audit`)。
```
kubectl label --overwrite ns example \
pod-security.kubernetes.io/enforce=baseline \
pod-security.kubernetes.io/enforce-version=latest \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/warn-version=latest \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/audit-version=latest
```
<!--
## Verify the Pod Security Standards
1. Create a minimal pod in `example` namespace:
-->
## 验证 Pod 安全标准 {#verify-the-pod-security-standards}
1. 在 `example` 名字空间中创建一个最小的 Pod
```shell
mkdir -p /tmp/pss
cat <<EOF > /tmp/pss/nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
EOF
```
<!--
2. Apply the pod spec to the cluster in `example` namespace:
-->
2. 将 Pod 规约应用到集群中的 `example` 名字空间中:
```shell
kubectl apply -n example -f /tmp/pss/nginx-pod.yaml
```
<!-- The output is similar to this: -->
输出类似于:
```
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
```
<!--
1. Apply the pod spec to the cluster in `default` namespace:
-->
3. 将 Pod 规约应用到集群中的 `default` 名字空间中:
```shell
kubectl apply -n default -f /tmp/pss/nginx-pod.yaml
```
<!-- Output is similar to this: -->
输出类似于:
```
pod/nginx created
```
<!--
The Pod Security Standards were applied only to the `example`
namespace. You could create the same Pod in the `default` namespace
with no warnings.
-->
以上 Pod 安全标准仅被应用到 `example` 名字空间。
你可以在没有警告的情况下在 `default` 名字空间中创建相同的 Pod。
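<!--
(Illustrative) To see which namespaces carry Pod Security Standard labels, you can
list the namespace labels; `example` should show the labels while `default` does not:
-->
(仅作示意)要查看哪些名字空间带有 Pod 安全标准标签,可以列出名字空间的标签;
`example` 应带有这些标签,而 `default` 没有:

```shell
kubectl get ns example default --show-labels
```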
<!--
## Clean up
Run `kind delete cluster --name psa-ns-level` to delete the cluster created.
-->
## 清理 {#clean-up}
运行 `kind delete cluster --name psa-ns-level` 删除创建的集群。
## {{% heading "whatsnext" %}}
<!--
- Run a
[shell script](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
to perform all the preceding steps all at once.
1. Create KinD cluster
2. Create new namespace
3. Apply `baseline` Pod Security Standard in `enforce` mode while applying
`restricted` Pod Security Standard also in `warn` and `audit` mode.
4. Create a new pod with the following pod security standards applied
- [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
- [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
- [Apply Pod Security Standards at the cluster level](/docs/tutorials/security/cluster-level-pss/)
-->
- 运行一个 [shell 脚本](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
一次执行所有前面的步骤。
1. 创建 KinD 集群
2. 创建新的名字空间
3. 在 `enforce` 模式下应用 `baseline` Pod 安全标准,
同时在 `warn``audit` 模式下应用 `restricted` Pod 安全标准。
4. 创建一个新 Pod并对其应用上述 Pod 安全标准
- [Pod 安全准入](/zh/docs/concepts/security/pod-security-admission/)
- [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
- [在集群级别应用 Pod 安全标准](/zh/docs/tutorials/security/cluster-level-pss/)

View File

@ -1,11 +1,12 @@
你必须拥有一个 Kubernetes 的集群,同时你的 Kubernetes 集群必须带有 kubectl 命令行工具。
如果你还没有集群,你可以通过 [Minikube](/zh/docs/tasks/tools/#minikube) 构建一
个你自己的集群,或者你可以使用下面任意一个 Kubernetes 工具构建:
建议在至少有两个节点的集群上运行本教程,且这些节点不作为控制平面主机。
如果你还没有集群,你可以通过 [Minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
构建一个你自己的集群,或者你可以使用下面任意一个 Kubernetes 工具构建:
<!--
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. If you do not already have a
be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create one by using
[minikube](/docs/tasks/tools/#minikube)
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one of these Kubernetes playgrounds:
-->

View File

@ -184,6 +184,12 @@ other = "購読する"
[ui_search_placeholder]
other = "検索"
[thirdparty_message]
other = """このセクションでは、Kubernetesが必要とする機能を提供するサードパーティープロジェクトにリンクしています。これらのプロジェクトはアルファベット順に記載されていて、Kubernetesプロジェクトの作者は責任を持ちません。このリストにプロジェクトを追加するには、変更を提出する前に<a href="/docs/contribute/style/content-guide/#third-party-content">content guide</a>をお読みください。<a href="#third-party-content-disclaimer">詳細はこちら。</a>"""
[thirdparty_message_disclaimer]
other = """<p>このページの項目は、Kubernetesが必要とする機能を提供するサードパーティー製品またはプロジェクトです。Kubernetesプロジェクトの作者は、それらのサードパーティー製品またはプロジェクトに責任を負いません。詳しくは、<a href="https://github.com/cncf/foundation/blob/master/website-guidelines.md" target="_blank">CNCFウェブサイトのガイドライン</a>をご覧ください。第三者のリンクを追加するような変更を提案する前に、<a href="/docs/contribute/style/content-guide/#third-party-content">コンテンツガイド</a>を読むべきです。</p>"""
[version_check_mustbe]
other = "作業するKubernetesサーバーは次のバージョンである必要があります: "

View File

@ -335,6 +335,8 @@
/docs/tutorials/kubernetes-basics/expose-intro/ /docs/tutorials/kubernetes-basics/expose/expose-intro/ 301
/docs/tutorials/kubernetes-basics/scale-interactive/ /docs/tutorials/kubernetes-basics/scale/scale-interactive/ 301
/docs/tutorials/kubernetes-basics/scale-intro/ /docs/tutorials/kubernetes-basics/scale/scale-intro/ 301
/docs/tutorials/clusters/apparmor/ /docs/tutorials/security/apparmor/ 301
/docs/tutorials/clusters/seccomp/ /docs/tutorials/security/seccomp/ 301
/ja/docs/tutorials/kubernetes-basics/scale-intro/ /ja/docs/tutorials/kubernetes-basics/scale/scale-intro/ 301
/ko/docs/tutorials/kubernetes-basics/scale-intro/ /ko/docs/tutorials/kubernetes-basics/scale/scale-intro/ 301
/docs/tutorials/kubernetes-basics/update-interactive/ /docs/tutorials/kubernetes-basics/update/update-interactive/ 301