Merge pull request #47366 from chanieljdan/merged-main-dev-1.31
Merge main branch into dev-1.31
commit 8bdcb806aa

@@ -0,0 +1,183 @@
---
layout: blog
title: "Spotlight on SIG API Machinery"
slug: sig-api-machinery-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/08/07/sig-api-machinery-spotlight-2024
date: 2024-08-07
author: "Frederico Muñoz (SAS Institute)"
---

We recently talked with [Federico Bongiovanni](https://github.com/fedebongio) (Google) and [David
Eads](https://github.com/deads2k) (Red Hat), Chairs of SIG API Machinery, to know a bit more about
this Kubernetes Special Interest Group.

## Introductions

**Frederico (FSM): Hello, and thank you for your time. To start with, could you tell us about
yourselves and how you got involved in Kubernetes?**

**David**: I started working on
[OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) (the Red Hat
distribution of Kubernetes) in the fall of 2014 and got involved pretty quickly in API Machinery. My
first PRs were fixing kube-apiserver error messages, and from there I branched out to `kubectl`
(_kubeconfigs_ are my fault!), `auth` ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) and `*Review` APIs are ports
from OpenShift), and `apps` (_workqueues_ and _sharedinformers_, for example). Don’t tell the others,
but API Machinery is still my favorite :)

**Federico**: I was not as early in Kubernetes as David, but now it's been more than six years. At
my previous company we were starting to use Kubernetes for our own products, and when I came across
the opportunity to work directly with Kubernetes I left everything and boarded the ship (no pun
intended). I joined Google and Kubernetes in early 2018, and have been involved ever since.

## SIG API Machinery's scope

**FSM: It only takes a quick look at the SIG API Machinery charter to see that it has quite a
significant scope, nothing less than the Kubernetes control plane. Could you describe this scope in
your own words?**

**David**: We own the `kube-apiserver` and how to efficiently use it. On the backend, that includes
its contract with backend storage and how it allows API schema evolution over time. On the
frontend, that includes schema best practices, serialization, client patterns, and controller
patterns on top of all of it.

**Federico**: Kubernetes has a lot of different components, but the control plane has a really
critical mission: it's your communication layer with the cluster, and it also owns all the extensibility
mechanisms that make Kubernetes so powerful. We can't make mistakes like a regression, or an
incompatible change, because the blast radius is huge.

**FSM: Given this breadth, how do you manage the different aspects of it?**

**Federico**: We try to organize the large amount of work into smaller areas. The working groups and
subprojects are part of it. Different people on the SIG have their own areas of expertise, and if
everything fails, we are really lucky to have people like David, Joe, and Stefan who really are "all
terrain", in a way that keeps impressing me even after all these years. But on the other hand this
is the reason why we need more people to help us carry the quality and excellence of Kubernetes from
release to release.

## An evolving collaboration model

**FSM: Was the existing model always like this, or did it evolve with time - and if so, what would
you consider the main changes and the reason behind them?**

**David**: API Machinery has evolved over time, both growing and contracting in scope. When trying
to satisfy client access patterns it’s very easy to add scope, both in terms of features and applying
them.

A good example of growing scope is the way that we identified a need to reduce memory utilization by
clients writing controllers and developed shared informers. In developing shared informers and the
controller patterns that use them (workqueues, error handling, and listers), we greatly reduced memory
utilization and eliminated many expensive lists. The downside: we grew a new set of capabilities to
support and effectively took ownership of that area from sig-apps.

For an example of more shared ownership: building out cooperative resource management (the goal of
server-side apply), `kubectl` expanded to take ownership of leveraging the server-side apply
capability. The transition isn’t yet complete, but [SIG
CLI](https://github.com/kubernetes/community/tree/master/sig-cli) manages that usage and owns it.

**FSM: And for the boundary between approaches, do you have any guidelines?**

**David**: I think much depends on the impact. If the impact is local in immediate effect, we advise
other SIGs and let them move at their own pace. If the impact is global in immediate effect without
a natural incentive, we’ve found a need to press for adoption directly.

**FSM: Still on that note, SIG Architecture has an API Governance subproject; is it mostly
independent from SIG API Machinery or are there important connection points?**

**David**: The projects have similar-sounding names and carry some impacts on each other, but have
different missions and scopes. API Machinery owns the how and API Governance owns the what. API
conventions, the API approval process, and the final say on individual k8s.io APIs belong to API
Governance. API Machinery owns the REST semantics and non-API-specific behaviors.

**Federico**: I really like how David put it: *"API Machinery owns the how and API Governance owns
the what"*: we don't own the actual APIs, but the actual APIs live through us.

## The challenges of Kubernetes popularity

**FSM: With the growth in Kubernetes adoption we have certainly seen increased demands on the
control plane: how is this felt, and how does it influence the work of the SIG?**

**David**: It’s had a massive influence on API Machinery. Over the years we have often responded to,
and many times enabled, the evolutionary stages of Kubernetes. As the central orchestration hub of
nearly all capability on Kubernetes clusters, we both lead and follow the community. In broad
strokes I see a few evolution stages for API Machinery over the years, with constantly high
activity.

1. **Finding purpose**: `pre-1.0` up until `v1.3` (up to our first 1000+ nodes/namespaces) or
   so. This time was characterized by rapid change. We went through five different versions of our
   schemas and rose to meet the need. We optimized for quick, in-tree API evolution (sometimes to
   the detriment of longer-term goals), and defined patterns for the first time.

2. **Scaling to meet the need**: `v1.3-1.9` (up to shared informers in controllers) or so. When we
   started trying to meet customer needs as we gained adoption, we found severe scale limitations in
   terms of CPU and memory. This was where we broadened API Machinery to include access patterns, but
   were still heavily focused on in-tree types. We built the watch cache, protobuf serialization,
   and shared caches.

3. **Fostering the ecosystem**: `v1.8-1.21` (up to CRD v1) or so. This was when we designed and wrote
   CRDs (the considered replacement for third-party resources), the immediate needs we knew were
   coming (admission webhooks), and evolution to best practices we knew we needed (API schemas).
   This enabled an explosion of early adopters willing to work very carefully within the constraints
   to enable their use cases for servicing pods. The adoption was very fast, sometimes outpacing
   our capability, and creating new problems.

4. **Simplifying deployments**: `v1.22+`. In the relatively recent past, we’ve been responding to
   pressures of running Kubernetes clusters at scale with large numbers of sometimes-conflicting ecosystem
   projects using our extension mechanisms. Lots of effort is now going into making platform
   extensions easier to write and safer to manage by people who don't hold PhDs in Kubernetes. This
   started with things like server-side apply and continues today with features like webhook match
   conditions and validating admission policies.

Work in API Machinery has a broad impact across the project and the ecosystem. It’s an exciting
area to work in for those able to make a significant time investment on a long time horizon.

## The road ahead

**FSM: With those different evolutionary stages in mind, what would you pinpoint as the top
priorities for the SIG at this time?**

**David:** **Reliability, efficiency, and capability**, in roughly that order.

With the increased usage of our `kube-apiserver` and extension mechanisms, we find that our first
set of extension mechanisms, while fairly complete in terms of capability, carry significant risks
in terms of potential misuse with a large blast radius. To mitigate these risks, we’re investing in
features that reduce the blast radius for accidents (webhook match conditions) and which provide
alternative mechanisms with lower risk profiles for most actions (validating admission policy).

At the same time, the increased usage has made us more aware of scaling limitations that we can
improve both server- and client-side. Efforts here include more efficient serialization (CBOR),
reduced etcd load (consistent reads from cache), and reduced peak memory usage (streaming lists).

And finally, the increased usage has highlighted some long-existing
gaps that we’re closing. Things like field selectors for CRDs, which
the [Batch Working Group](https://github.com/kubernetes/community/blob/master/wg-batch/README.md)
is eager to leverage, and which will eventually form the basis for a new way
to prevent trampoline pod attacks from exploited nodes.

## Joining the fun

**FSM: For anyone wanting to start contributing, what are your suggestions?**

**Federico**: SIG API Machinery is not an exception to the Kubernetes motto: **Chop Wood and Carry
Water**. There are multiple weekly meetings that are open to everybody, and there is always more
work to be done than people to do it.

I acknowledge that API Machinery is not easy, and the ramp-up will be steep. The bar is high,
because of the reasons we've been discussing: we carry a huge responsibility. But of course, with
passion and perseverance many people have ramped up through the years, and we hope more will come.

In terms of concrete opportunities, there is the SIG meeting every two weeks. Everyone is welcome to
attend and listen, see what the group talks about, see what's going on in this release, etc.

Also, twice a week, on Tuesday and Thursday, we have the public Bug Triage, where we go through
everything new from the last meeting. We've been keeping up this practice for more than 7 years
now. It's a great opportunity to volunteer to review code, fix bugs, improve documentation,
etc. On Tuesdays it's at 1 PM (PST) and Thursday is at an EMEA-friendly time (9:30 AM PST). We are
always looking to improve, and we hope to be able to provide more concrete opportunities to join and
participate in the future.

**FSM: Excellent, thank you! Any final comments you would like to share with our readers?**

**Federico**: As I mentioned, the first steps might be hard, but the reward is also greater. Working
on API Machinery is working on an area of huge impact (millions of users?), and your contributions
will have a direct outcome in the way that Kubernetes works and the way that it's used. For me
that's enough reward and motivation!

@@ -11,7 +11,8 @@ weight: 10

<!-- overview -->

Kubernetes (version 1.3 through to the latest {{< skew latestVersion >}}, and likely onwards) lets you use
[Container Network Interface](https://github.com/containernetworking/cni)
(CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your
cluster and that suits your needs. Different plugins are available (both open- and closed-source)
in the wider Kubernetes ecosystem.

@@ -146,8 +146,8 @@ and all resources with no labels with the `tier` key. One could filter for resources
excluding `frontend` using the comma operator: `environment=production,tier!=frontend`

One usage scenario for equality-based label requirements is for Pods to specify
node selection criteria. For example, the sample Pod below selects nodes where
the `accelerator` label exists and is set to `nvidia-tesla-p100`.

```yaml
apiVersion: v1
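# --- continuation sketch ---
# The rest of this manifest is cut off by the diff hunk above. The fields
# below are illustrative, following the node-selection pattern that the
# surrounding text describes.
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "registry.k8s.io/cuda-vector-add:v0.1"
  nodeSelector:
    accelerator: nvidia-tesla-p100   # equality-based node selection
```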

@@ -29,9 +29,9 @@ This guide outlines the requirements of each policy.

**The _Privileged_ policy is purposely-open, and entirely unrestricted.** This type of policy is
typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.

The Privileged policy is defined by an absence of restrictions. If you define a Pod where the Privileged
security policy applies, the Pod you define is able to bypass typical container isolation mechanisms.
For example, you can define a Pod that has access to the node's host network.
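
As an illustration, here is a minimal sketch of such a Pod (the name and image are
hypothetical, and the target namespace is assumed to have the Privileged policy applied):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-network-demo   # hypothetical name
spec:
  hostNetwork: true         # shares the node's network namespace; blocked by Baseline/Restricted
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sleep", "3600"]
```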

### Baseline

@@ -57,7 +57,7 @@ fail validation.
<tr>
<td style="white-space: nowrap">HostProcess</td>
<td>
<p>Windows Pods offer the ability to run <a href="/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess containers</a> which enables privileged access to the Windows host machine. Privileged access to the host is disallowed in the Baseline policy. {{< feature-state for_k8s_version="v1.26" state="stable" >}}</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.securityContext.windowsOptions.hostProcess</code></li>

@@ -304,7 +304,7 @@ applications, as well as lower-trust users. The following listed controls should be
enforced/disallowed:

{{< note >}}
In this table, wildcards (`*`) indicate all elements in a list. For example,
`spec.containers[*].securityContext` refers to the Security Context object for _all defined
containers_. If any of the listed containers fails to meet the requirements, the entire pod will
fail validation.

@@ -318,12 +318,12 @@ fail validation.
<td><strong>Policy</strong></td>
</tr>
<tr>
<td colspan="2"><em>Everything from the Baseline policy</em></td>
</tr>
<tr>
<td style="white-space: nowrap">Volume Types</td>
<td>
<p>The Restricted policy only permits the following volume types.</p>
<p><strong>Restricted Fields</strong></p>
<ul>
<li><code>spec.volumes[*]</code></li>

@@ -474,7 +474,7 @@ of individual policies are not defined here.

{{% thirdparty-content %}}

Other alternatives for enforcing policies are being developed in the Kubernetes ecosystem, such as:

- [Kubewarden](https://github.com/kubewarden)
- [Kyverno](https://kyverno.io/policies/pod-security/)

@@ -489,15 +489,14 @@ workloads. Specifically, many of the Pod `securityContext` fields
[have no effect on Windows](/docs/concepts/windows/intro/#compatibility-v1-pod-spec-containers-securitycontext).

{{< note >}}
Kubelets prior to v1.24 don't enforce the pod OS field, and if a cluster has nodes on versions earlier than v1.24 the Restricted policies should be pinned to a version prior to v1.25.
{{< /note >}}
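
For example, a sketch of such pinning via Pod Security Admission namespace labels
(the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: legacy-nodes-ns   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.24   # pinned prior to v1.25
```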

### Restricted Pod Security Standard changes

Another important change, made in Kubernetes v1.25, is that the _Restricted_ policy
has been updated to use the `pod.spec.os.name` field. Based on the OS name, certain policies that are specific
to a particular OS can be relaxed for the other OS.
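
For instance, a minimal sketch of a Pod that declares its OS explicitly (the name
and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-app   # hypothetical name
spec:
  os:
    name: windows   # lets admission skip Linux-specific Restricted checks
  containers:
  - name: app
    image: example.com/windows-sample:1.0   # placeholder image
```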

#### OS-specific policy controls

Restrictions on the following controls are only required if `.spec.os.name` is not `windows`:

- Privilege Escalation

@@ -512,10 +511,10 @@ the [documentation](/docs/concepts/workloads/pods/user-namespaces#integration-wi

## FAQ

### Why isn't there a profile between Privileged and Baseline?

The three profiles defined here have a clear linear progression from most secure (Restricted) to least
secure (Privileged), and cover a broad set of workloads. Privileges required above the Baseline
policy are typically very application specific, so we do not offer a standard profile in this
niche. This is not to say that the Privileged profile should always be used in this case, but that
policies in this space need to be defined on a case-by-case basis.

@@ -529,9 +528,9 @@ Containers at runtime. Security contexts are defined as part of the Pod and cont
in the Pod manifest, and represent parameters to the container runtime.

Security profiles are control plane mechanisms to enforce specific settings in the Security Context,
as well as other related parameters outside the Security Context. As of July 2021,
[Pod Security Policies](/docs/concepts/security/pod-security-policy/) are deprecated in favor of the
built-in [Pod Security Admission Controller](/docs/concepts/security/pod-security-admission/).

### What about sandboxed Pods?

@@ -136,7 +136,7 @@ picks a node for the Pod to run on. In any cluster where there is more than one
running nodes, you should set the
[kubernetes.io/os](/docs/reference/labels-annotations-taints/#kubernetes-io-os)
label correctly on each node, and define pods with a `nodeSelector` based on the operating system
label. The kube-scheduler assigns your pod to a node based on other criteria and may or may not
succeed in picking a suitable node placement where the node OS is right for the containers in that Pod.
The [Pod security standards](/docs/concepts/security/pod-security-standards/) also use this
field to avoid enforcing policies that aren't relevant to the operating system.
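
A minimal sketch of such a selector (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-app   # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/os: linux   # only schedule onto nodes labeled as Linux
  containers:
  - name: app
    image: nginx:1.27
```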

@@ -11,11 +11,11 @@ weight: 20
{{< feature-state for_k8s_version="v1.18" state="stable" >}}

Bootstrap tokens are simple bearer tokens that are meant to be used when
creating new clusters or joining new nodes to an existing cluster.
They were built to support [kubeadm](/docs/reference/setup-tools/kubeadm/), but can be used in other contexts
for users that wish to start clusters without `kubeadm`. They are also built to
work, via RBAC policy, with the
[kubelet TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system.

<!-- body -->

@@ -23,19 +23,19 @@ work, via RBAC policy, with the

Bootstrap Tokens are defined with a specific type
(`bootstrap.kubernetes.io/token`) of secrets that lives in the `kube-system`
namespace. These Secrets are then read by the Bootstrap Authenticator in the
API Server. Expired tokens are removed with the TokenCleaner controller in the
Controller Manager. The tokens are also used to create a signature for a
specific ConfigMap used in a "discovery" process through a BootstrapSigner
controller.

## Token Format

Bootstrap Tokens take the form of `abcdef.0123456789abcdef`.
More formally, they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.

The first part of the token is the "Token ID" and is considered public
information. It is used when referring to a token without leaking the secret
part used for authentication. The second part is the "Token Secret" and should
only be shared with trusted parties.

@@ -56,8 +56,8 @@ Authorization: Bearer 07401b.f395accd246ae52d
```

Tokens authenticate as the username `system:bootstrap:<token id>` and are members
of the group `system:bootstrappers`.
Additional groups may be specified in the token's Secret.

Expired tokens can be deleted automatically by enabling the `tokencleaner`
controller on the controller manager.

@@ -68,7 +68,7 @@ controller on the controller manager.

## Bootstrap Token Secret Format

Each valid token is backed by a secret in the `kube-system` namespace. You can
find the full design doc
[here](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md).

@@ -104,20 +104,19 @@ stringData:
```

The type of the secret must be `bootstrap.kubernetes.io/token` and the name must
be `bootstrap-token-<token id>`. It must also exist in the `kube-system` namespace.

The `usage-bootstrap-*` members indicate what this secret is intended to be used for.
A value must be set to `true` to be enabled.

* `usage-bootstrap-authentication` indicates that the token can be used to
  authenticate to the API server as a bearer token.
* `usage-bootstrap-signing` indicates that the token may be used to sign the
  `cluster-info` ConfigMap as described below.

The `expiration` field controls the expiry of the token. Expired tokens are
rejected when used for authentication and ignored during ConfigMap signing.
The expiry value is encoded as an absolute UTC time using RFC3339. Enable the
`tokencleaner` controller to automatically delete expired tokens.
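
Putting those fields together, a sketch of the shape of such a Secret (the token
values are this page's example values; the expiration timestamp is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # name must be "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "07401b"
  token-secret: "f395accd246ae52d"
  expiration: "2025-03-10T03:22:11Z"        # RFC3339, absolute UTC time
  usage-bootstrap-authentication: "true"    # usable as a bearer token
  usage-bootstrap-signing: "true"           # usable to sign cluster-info
```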

## Token Management with kubeadm

@@ -127,9 +126,9 @@ You can use the `kubeadm` tool to manage tokens on a running cluster. See the

## ConfigMap Signing

In addition to authentication, the tokens can be used to sign a ConfigMap.
This is used early in a cluster bootstrap process before the client trusts the API
server. The signed ConfigMap can be authenticated by the shared token.

Enable ConfigMap signing by enabling the `bootstrapsigner` controller on the
Controller Manager.
|
|||
|
||||
The ConfigMap that is signed is `cluster-info` in the `kube-public` namespace.
|
||||
The typical flow is that a client reads this ConfigMap while unauthenticated and
|
||||
ignoring TLS errors. It then validates the payload of the ConfigMap by looking
|
||||
ignoring TLS errors. It then validates the payload of the ConfigMap by looking
|
||||
at a signature embedded in the ConfigMap.
|
||||
|
||||
The ConfigMap may look like this:
|
||||
|
@ -168,15 +167,15 @@ data:
|
|||
```
|
||||
|
||||
The `kubeconfig` member of the ConfigMap is a config file with only the cluster
|
||||
information filled out. The key thing being communicated here is the
|
||||
`certificate-authority-data`. This may be expanded in the future.
|
||||
information filled out. The key thing being communicated here is the
|
||||
`certificate-authority-data`. This may be expanded in the future.
|
||||
|
||||
The signature is a JWS signature using the "detached" mode. To validate the
|
||||
The signature is a JWS signature using the "detached" mode. To validate the
|
||||
signature, the user should encode the `kubeconfig` payload according to JWS
|
||||
rules (base64 encoded while discarding any trailing `=`). That encoded payload
|
||||
is then used to form a whole JWS by inserting it between the 2 dots. You can
|
||||
rules (base64 encoded while discarding any trailing `=`). That encoded payload
|
||||
is then used to form a whole JWS by inserting it between the 2 dots. You can
|
||||
verify the JWS using the `HS256` scheme (HMAC-SHA256) with the full token (e.g.
|
||||
`07401b.f395accd246ae52d`) as the shared secret. Users _must_ verify that HS256
|
||||
`07401b.f395accd246ae52d`) as the shared secret. Users _must_ verify that HS256
|
||||
is used.
|
||||
|
||||
{{< warning >}}
|
||||
|

@@ -188,4 +187,3 @@ client relying on the signature to bootstrap TLS trust.

Consult the [kubeadm implementation details](/docs/reference/setup-tools/kubeadm/implementation-details/)
section for more information.

@@ -46,7 +46,7 @@ the `spec.request` field. The CertificateSigningRequest denotes the signer (the
recipient that the request is being made to) using the `spec.signerName` field.
Note that `spec.signerName` is a required key as of API version `certificates.k8s.io/v1`.
In Kubernetes v1.22 and later, clients may optionally set the `spec.expirationSeconds`
field to request a particular lifetime for the issued certificate. The minimum valid
value for this field is `600`, i.e. ten minutes.
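
For example, a sketch of a CSR that requests a one-day client certificate (the
name is hypothetical and the `request` value is a placeholder for a
base64-encoded PKCS#10 CSR):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-client-csr   # hypothetical name
spec:
  request: <base64-encoded PKCS#10 CSR>   # placeholder
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400   # one day; must be at least 600
  usages:
  - client auth
```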

Once created, a CertificateSigningRequest must be approved before it can be signed.

@@ -113,13 +113,15 @@ signed, a security certificate.

Any signer that is made available for use outside a particular cluster should provide information
about how the signer works, so that consumers can understand what that means for CertificateSigningRequests
and (if enabled) [ClusterTrustBundles](#cluster-trust-bundles).
This includes:

1. **Trust distribution**: how trust anchors (CA certificates or certificate bundles) are distributed.
1. **Permitted subjects**: any restrictions on and behavior when a disallowed subject is requested.
1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames,
   Email subjectAltNames, URI subjectAltNames etc, and behavior when a disallowed extension is requested.
1. **Permitted key usages / extended key usages**: any restrictions on and behavior
   when usages different than the signer-determined usages are specified in the CSR.
1. **Expiration/certificate lifetime**: whether it is fixed by the signer, configurable by the admin,
   determined by the CSR `spec.expirationSeconds` field, etc.,
   and the behavior when the signer-determined expiration is different from the CSR `spec.expirationSeconds` field.
1. **CA bit allowed/disallowed**: and behavior if a CSR contains a request for a CA certificate when the signer does not permit it.

@@ -140,12 +142,12 @@ certificate expiration or lifetime. The expiration or lifetime therefore has to be set
through the `spec.expirationSeconds` field of the CSR object. The built-in signers
use the `ClusterSigningDuration` configuration option, which defaults to 1 year
(the `--cluster-signing-duration` command-line flag of the kube-controller-manager),
as the default when no `spec.expirationSeconds` is specified. When `spec.expirationSeconds`
is specified, the minimum of `spec.expirationSeconds` and `ClusterSigningDuration` is
used.

{{< note >}}
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
{{< /note >}}
|
|||
signers. Failures for all of these are only reported in kube-controller-manager logs.
|
||||
|
||||
{{< note >}}
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
|
||||
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
|
||||
{{< /note >}}
|
||||
|
||||
Distribution of trust happens out of band for these signers. Any trust outside of those described above are strictly
|
||||
Distribution of trust happens out of band for these signers. Any trust outside of those described above are strictly
|
||||
coincidental. For instance, some distributions may honor `kubernetes.io/legacy-unknown` as client certificates for the
|
||||
kube-apiserver, but this is not a standard.
|
||||
None of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only
|
||||
None of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only
|
||||
guaranteed to verify a connection to the API server using the default service (`kubernetes.default.svc`).
|
||||
|
||||
### Custom signers
|
||||
|
@ -240,7 +242,8 @@ were marked as approved.
|
|||
{{< /note >}}
|
||||
|
||||
{{< note >}}
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22.
|
||||
Earlier versions of Kubernetes do not honor this field.
|
||||
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -386,7 +389,7 @@ this API.
|
|||
{{< /note >}}
|
||||
|
||||
A ClusterTrustBundles is a cluster-scoped object for distributing X.509 trust
|
||||
anchors (root certificates) to workloads within the cluster. They're designed
|
||||
anchors (root certificates) to workloads within the cluster. They're designed
|
||||
to work well with the [signer](#signers) concept from CertificateSigningRequests.
|
||||
|
||||
ClusterTrustBundles can be used in two modes:
|
||||
|
@ -395,8 +398,8 @@ ClusterTrustBundles can be used in two modes:
|
|||
### Common properties and validation {#ctb-common}
|
||||
|
||||
All ClusterTrustBundle objects have strong validation on the contents of their
|
||||
`trustBundle` field. That field must contain one or more X.509 certificates,
|
||||
DER-serialized, each wrapped in a PEM `CERTIFICATE` block. The certificates
|
||||
`trustBundle` field. That field must contain one or more X.509 certificates,
|
||||
DER-serialized, each wrapped in a PEM `CERTIFICATE` block. The certificates
|
||||
must parse as valid X.509 certificates.
|
||||
|
||||
Esoteric PEM features like inter-block data and intra-block headers are either
|
||||
|
@ -444,8 +447,8 @@ controller in the cluster, so they have several security features:
|
|||
`<signerNameDomain>/<signerNamePath>` or match a pattern such as
|
||||
`<signerNameDomain>/*`.
|
||||
* Signer-linked ClusterTrustBundles **must** be named with a prefix derived from
|
||||
their `spec.signerName` field. Slashes (`/`) are replaced with colons (`:`),
|
||||
and a final colon is appended. This is followed by an arbitrary name. For
|
||||
their `spec.signerName` field. Slashes (`/`) are replaced with colons (`:`),
|
||||
and a final colon is appended. This is followed by an arbitrary name. For
|
||||
example, the signer `example.com/mysigner` can be linked to a
|
||||
ClusterTrustBundle `example.com:mysigner:<arbitrary-name>`.
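
A sketch of such a signer-linked object, assuming the alpha `v1alpha1` API (the
trailing name segment and PEM payload are placeholders):

```yaml
apiVersion: certificates.k8s.io/v1alpha1   # alpha API
kind: ClusterTrustBundle
metadata:
  name: example.com:mysigner:production   # prefix derived from the signer name
spec:
  signerName: example.com/mysigner
  trustBundle: "<... PEM data ...>"
```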

@@ -468,8 +471,8 @@ spec:
  trustBundle: "<... PEM data ...>"
```

They are primarily intended for cluster configuration use cases.
Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the
customary grouping behavior of signer-linked ClusterTrustBundles.

Signer-unlinked ClusterTrustBundles have no `attest` verb requirement.
|
|||
|
||||
{{<feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
The contents of ClusterTrustBundles can be injected into the container filesystem, similar to ConfigMaps and Secrets. See the [clusterTrustBundle projected volume source](/docs/concepts/storage/projected-volumes#clustertrustbundle) for more details.
|
||||
The contents of ClusterTrustBundles can be injected into the container filesystem, similar to ConfigMaps and Secrets.
|
||||
See the [clusterTrustBundle projected volume source](/docs/concepts/storage/projected-volumes#clustertrustbundle) for more details.
|
||||
|
||||
<!-- TODO this should become a task page -->
|
||||
## How to issue a certificate for a user {#normal-user}
|
||||
|
@ -609,12 +613,13 @@ To test it, change the context to `myuser`:
|
|||
kubectl config use-context myuser
|
||||
```
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Read [Manage TLS Certificates in a Cluster](/docs/tasks/tls/managing-tls-in-a-cluster/)
|
||||
* View the source code for the kube-controller-manager built in [signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
|
||||
* View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
|
||||
* View the source code for the kube-controller-manager built in
|
||||
[signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
|
||||
* View the source code for the kube-controller-manager built in
|
||||
[approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
|
||||
* For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
|
||||
* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)
|
||||
* Read about the ClusterTrustBundle API:
|
||||
|
|
|

@@ -620,16 +620,16 @@ This allows the cluster to repair accidental modifications, and helps to keep roles and role bindings
up-to-date as permissions and subjects change in new Kubernetes releases.

To opt out of this reconciliation, set the `rbac.authorization.kubernetes.io/autoupdate`
annotation on a default cluster role or default cluster RoleBinding to `false`.
Be aware that missing default permissions and subjects can result in non-functional clusters.

Auto-reconciliation is enabled by default if the RBAC authorizer is active.
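
A sketch of opting one of the default bindings out of reconciliation (only the
relevant metadata is shown):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:basic-user   # one of the auto-reconciled defaults
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "false"   # opt out of reconciliation
```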

### API discovery roles {#discovery-roles}

Default cluster role bindings authorize unauthenticated and authenticated users to read API information
that is deemed safe to be publicly accessible (including CustomResourceDefinitions).
To disable anonymous unauthenticated access, add the `--anonymous-auth=false` flag to
the API server configuration.

To view the configuration of these roles via `kubectl` run:

@@ -47,11 +47,6 @@ must be defined for a policy to have an effect.
If a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave
`spec.paramKind` in `ValidatingAdmissionPolicy` unspecified.
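
As an illustration, a minimal sketch of a policy with no `paramKind` (the policy
name and the replica limit are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-policy.example.com   # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.replicas <= 5"   # CEL expression evaluated by the API server
```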

## {{% heading "prerequisites" %}}

- Ensure the `ValidatingAdmissionPolicy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
- Ensure that the `admissionregistration.k8s.io/v1beta1` API is enabled.

## Getting Started with Validating Admission Policy

Validating Admission Policy is part of the cluster control plane. You should write and deploy them

@@ -5,10 +5,12 @@ date: 2024-01-23
slug: kubernetes-separate-image-filesystem
author: >
  Kevin Hannon (Red Hat)
translator: >
  Taisuke Okamoto (IDC Frontier),
  [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba),
  nasa9084 (LY Corporation)
---

**Translators:** Taisuke Okamoto (IDC Frontier Inc), Junya Okabe (University of Tsukuba), nasa9084 (LY Corporation)

A common problem in running and operating Kubernetes clusters is running out of disk space.
When a node is provisioned, it is important to secure sufficient storage space for container images and running containers.
Typically, the [container runtime](/ja/docs/setup/production-environment/container-runtimes/) writes to `/var`.

@@ -7,8 +7,6 @@ author: >
  Frederico Muñoz (SAS Institute)
---

**Author**: Frederico Muñoz (SAS Institute)

Learning Kubernetes and the whole ecosystem of technologies around it is not without its challenges.
In this interview, we talk with [Carlos Santana of AWS](https://www.linkedin.com/in/csantanapr/) about how he created the [Kubernetes Book Club](https://community.cncf.io/kubernetes-virtual-book-club/) to take advantage of a community-based learning experience, what the club does, and how to join.

@@ -5,10 +5,12 @@ date: 2024-03-07
slug: cri-o-seccomp-oci-artifacts
author: >
  Sascha Grunert
translator: >
  Taisuke Okamoto (IDC Frontier),
  atoato88 (NEC),
  [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)
---

**Translators:** Taisuke Okamoto (IDC Frontier Inc), atoato88 (NEC Corporation), Junya Okabe (University of Tsukuba)

seccomp stands for secure computing mode; it has been available as a feature of the Linux kernel since version 2.6.12.
It can be used to sandbox the privileges of a process, restricting the calls it can make from user space to the kernel.
Kubernetes can automatically apply seccomp profiles loaded onto a node to your Pods and containers.

@@ -5,10 +5,11 @@ slug: diy-create-your-own-cloud-with-kubernetes-part-1
date: 2024-04-05T07:30:00+00:00
author: >
  Andrei Kvapil (Ænix)
translator: >
  [Taisuke Okamoto](https://github.com/b1gb4by) (IDC Frontier),
  [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)
---

**Translators:** [Taisuke Okamoto](https://github.com/b1gb4by) (IDC Frontier Inc), [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)

At Ænix, we have a deep affection for Kubernetes and dream that all modern technologies will soon start taking advantage of its remarkable patterns.
Have you ever thought about building your own cloud? I bet you have.
But is it possible to do so using only modern technologies and approaches, without leaving the comfort of the Kubernetes ecosystem?

@@ -5,10 +5,13 @@ slug: diy-create-your-own-cloud-with-kubernetes-part-2
date: 2024-04-05T07:35:00+00:00
author: >
  Andrei Kvapil (Ænix)
translator: >
  [Taisuke Okamoto](https://github.com/b1gb4by) ([IDC Frontier](https://www.idcf.jp/)),
  [Daiki Hayakawa(bells17)](https://github.com/bells17) ([3-shake](https://3-shake.com/)),
  [atoato88](https://github.com/atoato88) ([NEC](https://jpn.nec.com/index.html)),
  [Kaito Ii](https://github.com/kaitoii11) ([Hewlett Packard Enterprise](https://www.hpe.com/jp/ja/home.html))
---

**Translators:** [Taisuke Okamoto](https://github.com/b1gb4by) ([IDC Frontier Inc.](https://www.idcf.jp/)), [Daiki Hayakawa(bells17)](https://github.com/bells17) ([3-shake Inc.](https://3-shake.com/en/)), [atoato88](https://github.com/atoato88) ([NEC Corporation](https://jpn.nec.com/index.html)), [Kaito Ii](https://github.com/kaitoii11) ([Hewlett Packard Enterprise](https://www.hpe.com/jp/ja/home.html))

We are continuing our series of articles on how to build your own cloud using only the Kubernetes ecosystem.
In the [previous article](/ja/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/), we explained how to prepare a basic Kubernetes distribution based on Talos Linux and Flux CD.
In this article, we introduce several virtualization technologies for Kubernetes and prepare the environment needed to run virtual machines in Kubernetes, focusing mainly on storage and networking.

@@ -5,10 +5,12 @@ slug: diy-create-your-own-cloud-with-kubernetes-part-3
date: 2024-04-05T07:40:00+00:00
author: >
  Andrei Kvapil (Ænix)
translator: >
  [Taisuke Okamoto](https://github.com/b1gb4by) (IDC Frontier),
  [Daiki Hayakawa(bells17)](https://github.com/bells17) ([3-shake](https://3-shake.com/)),
  [atoato88](https://github.com/atoato88) ([NEC](https://jpn.nec.com/index.html))
---

**Translators:** [Taisuke Okamoto](https://github.com/b1gb4by) (IDC Frontier Inc), [Daiki Hayakawa(bells17)](https://github.com/bells17) ([3-shake Inc.](https://3-shake.com/en/)), [atoato88](https://github.com/atoato88) ([NEC Corporation](https://jpn.nec.com/index.html))

We are approaching the most interesting phase: running Kubernetes inside Kubernetes.
In this article, we cover technologies such as Kamaji and Cluster API, and their integration with KubeVirt.

@@ -8,10 +8,11 @@ author: >
  Michelle Au (Google),
  Walter Fender (Google),
  Michael McCune (Red Hat)
translator: >
  Taisuke Okamoto (IDC Frontier),
  [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)
---

**Translators:** Taisuke Okamoto (IDC Frontier Inc), [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)

Since Kubernetes v1.7, the Kubernetes project has pursued the ambitious goal of separating cloud provider integrations from the core Kubernetes components ([KEP-2395](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers/README.md)).
While these integrations played an important role in Kubernetes' early development and growth, their separation was driven by two key factors:
the growing complexity of maintaining native support for every cloud provider across millions of lines of Go code, and the desire to establish Kubernetes as a truly vendor-neutral platform.

@@ -6,10 +6,11 @@ canonicalUrl: https://www.kubernetes.dev/blog/2024/06/20/sig-node-spotlight-2024
date: 2024-06-20
author: >
  Arpit Agrawal
translator: >
  Taisuke Okamoto (IDC Frontier),
  [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)
---

**Translators:** [Taisuke Okamoto](https://github.com/b1gb4by) (IDC Frontier Inc), [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba)

In the world of container orchestration, [Kubernetes](/ja) reigns supreme, powering some of the most complex and dynamic applications across the globe.
Behind the scenes, a network of Special Interest Groups (SIGs) drives Kubernetes' innovation and stability.

@@ -10,8 +10,11 @@ author: >
  [Kaslin Fields](https://github.com/kaslin) (Google),
  [Tim Bannister](https://github.com/sftim) (The Scale Factory),
  and every contributor across the globe
translator: >
  [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba),
  [Daiki Hayakawa(bells17)](https://github.com/bells17) ([3-shake](https://3-shake.com/)),
  [Kaito Ii](https://github.com/kaitoii11) (Hewlett Packard Enterprise)
---
**Translators**: [Junya Okabe](https://github.com/Okabe-Junya) (University of Tsukuba), [Daiki Hayakawa(bells17)](https://github.com/bells17) ([3-shake Inc.](https://3-shake.com/en/)), [Kaito Ii](https://github.com/kaitoii11) (Hewlett Packard Enterprise)

![KCSEU 2024 group photo](kcseu2024.jpg)

@@ -0,0 +1,44 @@
---
title: Liveness, Readiness and Startup Probes
content_type: concept
weight: 40
---

<!-- overview -->

Kubernetes has various types of probes:

- [Liveness Probe](#liveness-probe)
- [Readiness Probe](#readiness-probe)
- [Startup Probe](#startup-probe)

<!-- body -->

## Liveness Probe {#liveness-probe}

Liveness probes determine when to restart a container.
For example, liveness probes can catch a deadlock where an application is running but unable to make progress.

If a container fails its liveness probe repeatedly, the kubelet restarts the container.

Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a liveness probe, you can either define `initialDelaySeconds` or use a [Startup Probe](#startup-probe).

## Readiness Probe {#readiness-probe}

Readiness probes determine when a container is ready to accept traffic. They are useful for waiting on applications that perform time-consuming initial tasks, such as establishing network connections, loading files, and warming caches.

If the readiness probe returns a failed state, Kubernetes removes the Pod from all matching service endpoints.

Readiness probes run on the container throughout its entire lifecycle.

## Startup Probe {#startup-probe}

Startup probes check whether the application within a container has started. They are used for containers that are slow to start, to prevent them from being killed by the kubelet before they are up and running.

If a startup probe is configured, it disables liveness and readiness checks until it succeeds.

Unlike readiness probes, which run periodically, startup probes run only at startup.
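
A minimal sketch of all three probes on one container (the Pod name, image, and
paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo   # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.27
    startupProbe:          # gates the other probes until it succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30 # allow up to 30 * 10s for a slow start
      periodSeconds: 10
    livenessProbe:         # the container is restarted if this keeps failing
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:        # the Pod is removed from Service endpoints on failure
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```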

* For more details, see [Configure Liveness, Readiness and Startup Probes](/ja/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes).

@@ -174,7 +174,7 @@ On Linux, any container in a Pod can enable privileged mode using the container's `privileged` flag

## static Pod

*Static Pods* are Pods managed directly by the kubelet daemon on a specific node, without the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} managing them. While most Pods are managed by the control plane (for example, a {{< glossary_tooltip text="Deployment" term_id="deployment" >}}), for static Pods the kubelet directly supervises each static Pod (and restarts it if it fails).

Static Pods are always bound to one {{< glossary_tooltip term_id="kubelet" >}} on a specific node. The main use for static Pods is to run a self-hosted control plane: in other words, using the kubelet to supervise the individual [control plane components](/ja/docs/concepts/overview/components/#control-plane-components).

@@ -0,0 +1,17 @@
---
title: Endpoints
id: endpoints
date: 2020-04-23
full_link:
short_description: >
  Endpoints track the IP addresses of the Pods that match a Service's selector.

aka:
tags:
- networking
---
Endpoints track the IP addresses of the Pods that match a {{< glossary_tooltip text="Service" term_id="selector" >}} selector.

<!--more-->
Endpoints can be configured manually on a {{< glossary_tooltip text="Service" term_id="selector" >}} without specifying a selector.
{{< glossary_tooltip text="EndpointSlice" term_id="endpoint-slice" >}} provides a scalable and extensible alternative.
@ -1,4 +1,110 @@
|
|||
---
|
||||
title: プロダクション環境
|
||||
title: "プロダクション環境"
|
||||
description: プロダクション品質のKubernetesクラスターを作成します。
|
||||
weight: 30
|
||||
no_list: true
|
||||
---
|
||||
<!-- overview -->
|
||||
|
||||
プロダクション環境向けのKubernetesクラスターには計画と準備が必要です。Kubernetesクラスターが重要なワークロードを動かしている場合、耐障害性のある構成にしなければいけません。このページはプロダクション環境で利用できるクラウターのセットアップをするための手順や既存のクラスターをプロダクション環境で利用できるように昇格するための手順を説明します。
|
||||
既にプロダクション環境のセットアップを理解している場合、[次の項目](#what-s-next)に進んでください。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## プロダクション環境の考慮事項
|
||||
|
||||
通常、プロダクション用のKubernetesクラスター環境は個人学習の環境や開発環境、テスト環境より多くの要件があります。プロダクション環境は多くのユーザーによるセキュアなアクセスや安定した可用性、変化する需要に適用するためのリソースが必要になる場合があります。
|
||||
|
||||
プロダクション用のKubernetes環境をどこに配置するか(オンプレミスまたはクラウド)、どの程度の管理を自分で行うか、それとも他に任せるかを決定する際には、以下の問題がKubernetesクラスターに対する要件にどのように影響を与えるかを考慮してください。
|
||||
|
||||
- *可用性*: 単一のマシンで動作するKubernetes[学習環境](/ja/docs/setup/#learning-environment)には単一障害点があります。高可用性のクラスターの作成するには下記の点を考慮する必要があります。
|
||||
- ワーカーノードからのコントロールプレーンの分離
|
||||
- 複数ノードへのコントロールプレーンのレプリケーション
|
||||
- クラスターの{{< glossary_tooltip term_id="kube-apiserver" text="APIサーバー" >}}へのトラフィックの負荷分散
|
||||
- 変化するワークロードに応じて、十分な数のワーカーノードが利用可能であること、または迅速に利用可能になること
|
||||
|
||||
- *スケール*: プロダクション用のKubernetes環境が安定した要求を受けることが予測できる場合、必要なキャパシティをセットアップすることができるかもしれません。しかし、時間の経過と共に成長する需要やシーズンや特別なイベントのようなことで大幅な変化を予測する場合、コントロールプレーンやワーカーノードへの多くのリクエストにより増加する負荷を軽減するスケールの方法や未使用のリソースを削減するためのスケールダウンの方法を計画する必要があリます。
|
||||
|
||||
- *セキュリティやアクセス管理*: 自身のKubernetes学習クラスターでは全管理者権限を持っています。しかし、重要なワークロードを保持していたり、複数のユーザーが利用する共有クラスターでは、誰がどのクラスターのリソースに対してアクセスできるかをより制限されたアプローチを必要とします。ユーザーやワークロードが必要なリソースへアクセスできることを実現するロールベースアクセス制御([RBAC](/ja/docs/reference/access-authn-authz/rbac/))や他のセキュリティメカニズムを使用し、ワークロードやクラスターを保護することができます。[ポリシー](/docs/concepts/policy/)や[コンテナリソース](/ja/docs/concepts/configuration/manage-resources-containers/)を管理することによってユーザーやワークロードがアクセスできるリソースの制限を設定できます。
|
||||
|
||||
自身のプロダクション環境のKubernetesを構築する前に、[ターンキークラウドソリューション](/ja/docs/setup/production-environment/turnkey-solutions/)や
|
||||
プロバイダーや他の[Kubernetesパートナー](/ja/partners/)へ仕事の一部や全てを委託することを考えてください。オプションには次のものが含まれます。
|
||||
|
||||
- *サーバーレス*: クラスターを全く管理せずに第三者の設備上でワークロードを実行します。CPU使用量やメモリ、ディスクリクエストなどの利用に応じて課金します。
|
||||
- *マネージドコントロールプレーン*: クラスターのコントロールプレーンのスケールと可用性やパッチとアップグレードの実行をプロバイダーに管理してもらいます。
|
||||
- *マネージドワーカーノード*: 需要に合わせてノードのプールを構成し、プロバイダーがワーカーノードが利用可能であることを保証し、需要に応じたアップグレードを実施できるようにします。
|
||||
- *統合*: ストレージ、コンテナレジストリ、認証方法、開発ツールなどの他の必要なサービスとKubernetesを統合するプロバイダーも存在します。
|
||||
|
||||
プロダクション用のKubernetesクラスターを自身で構築する場合でもパートナーと連携する場合でもクラスターの*コントロールプレーン*、*ワーカーノード*、*ユーザーアクセス*、および*ワークロードリソース*に関連する要件を評価するために以下のセクションのレビューを行なってください。
|
||||
|
||||
## プロダクション環境のクラスターのセットアップ
|
||||
|
||||
プロダクション環境向けのKubernetesクラスターでは、コントロールプレーンが異なる方法で複数のコンピューターに分散されたサービスからクラスターを管理します。一方で、各ワーカーノードは単一のエンティティとして表され、KubernetesのPodを実行するように設定されています。
|
||||
|
||||
### プロダクション環境のコントロールプレーン
|
||||
|
||||
最もシンプルなKubernetesクラスターはすべてのコントロールプレーンとワーカーノードサービスが同一のマシン上で稼働しています。[Kubernetesコンポーネント](/ja/docs/concepts/overview/components/)の図に示すようにワーカーノードの追加によって環境をスケールさせることができます。クラスターが短時間の稼働や深刻な問題が起きたときに破棄してもよい場合は、同一マシン上での構成で要件を満たしているかもしれません。
|
||||
|
||||
永続性や高可用性のクラスターが必要であれば、コントロールプレーンの拡張方法を考えなければいけません。設計上、単一のマシンで動作するコントロールプレーンサービスは高可用性ではありません。クラスターを常に稼働させ、何か問題が発生した場合に修復できる保証が重要な場合は、以下のステップを考えてください。
|
||||
|
||||
- *デプロイツールの選択*: kubeadm、kopsやkubesprayなどのツールを使ってコントロールプレーンをデプロイできます。これらのデプロイメント方法を使用したプロダクション環境向けののデプロイのヒントを学ぶために[デプロイツールによるKubernetesのインストール](/ja/docs/setup/production-environment/tools/)をご覧になってください。異なる[コンテナランタイム](/ja/docs/setup/production-environment/container-runtimes/)をデプロイに使用することができます。
|
||||
- *証明書の管理*: コントロールプレーンサービス間の安全な通信は証明書を使用して実装されています。証明書はデプロイ時に自動で生成したり、独自の認証局を使用し生成することができます。詳細は[PKI証明書と要件](/ja/docs/setup/best-practices/certificates/)をご覧ください。
|
||||
- *APIサーバー用のロードバランサーの構成*: 外部からのAPIリクエストを異なるノード上で稼働しているAPIサーバーサービスインスタンスに分散させるためにロードバランサーを設定します。詳細は [外部ロードバランサーの作成](/docs/tasks/access-application-cluster/create-external-load-balancer/)をご覧ください。
|
||||
- *etcdサービスの分離とバックアップ*: etcdサービスは他のコントロールプレーンサービスと同じマシン上で動作させることも、追加のセキュリティと可用性のために別のマシン上で動作させることもできます。etcdはクラスターの構成データを格納しており、必要に応じてデータベースを修復できるようにするためにetcdデータベースのバックアップは定期的に行うべきです。etcdの構成と使用に関する詳細は[etcd FAQ](https://etcd.io/docs/v3.5/faq/)をご覧ください。また、[Kubernetes向けのetcdクラスターの運用](/ja/docs/tasks/administer-cluster/configure-upgrade-etcd/)と[kubeadmを使用した高可用性etcdクラスターのセットアップ](/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/)もご覧ください。
|
||||
- *複数のコントロールプレーンシステムの作成*: 高可用性のためにコントロールプレーンは単一のマシンに限定されるべきではありません。コントロールプレーンサービスはinitサービス(systemdなど)によって実行される場合、各サービスは少なくとも3台のマシンで実行されるべきです。しかし、Kubernetes内でPodとしてコントロールプレーンサービスを実行することで、リクエストしたサービスのレプリカ数が常に利用可能であることが保証されます。スケジューラーは耐障害性が備わっているべきですが、高可用性は必要ありません。一部のデプロイメントツールはKubernetesサービスのリーダー選出のために[Raft](https://raft.github.io/)コンセンサスアルゴリズムを設定しています。プライマリが失われた場合、別のサービスが自らを選出して引き継ぎます。
|
||||
- *複数ゾーンへの配置*: クラスターを常に利用可能に保つことが重要である場合、複数のデータセンターにまたがって実行されるクラスターを作成することを検討してください。クラウド環境ではゾーンと呼ばれます。ゾーンのグループはリージョンと呼ばれます。同リージョンで複数のゾーンにクラスターを分散させることで、一つのゾーンが利用不可能になったとしても、クラスタが機能し続ける可能性を向上できます。詳細は、[複数ゾーンでの稼働](/ja/docs/setup/best-practices/multiple-zones/)をご覧ください。
|
||||
- *継続的な機能の管理*: クラスターを長期間稼働する計画がある場合、正常性とセキュリティを維持するために行うべきタスクがあります。例えば、kubeadmを使用してインストールした場合、[証明書管理](/ja/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)や[kubeadmによるクラスターのアップグレード](s/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)を支援するドキュメントがあります。より多くのKubernetes管理タスクのリストについては、[クラスターの管理](/ja/docs/tasks/administer-cluster/)をご覧ください。
To learn about the options available when you run control plane services, see the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/), [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/), and [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) component pages. For highly available control plane examples, see [Options for Highly Available topology](/ja/docs/setup/production-environment/tools/kubeadm/ha-topology/), [Creating Highly Available clusters with kubeadm](/ja/docs/setup/production-environment/tools/kubeadm/high-availability/), and [Operating etcd clusters for Kubernetes](/ja/docs/tasks/administer-cluster/configure-upgrade-etcd/). For making an etcd backup plan, see [Backing up an etcd cluster](/ja/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster).
### Production worker nodes

Production-quality workloads need to be resilient, and anything they rely on needs to be resilient as well (such as CoreDNS). Whether you manage your own control plane or have a cloud provider do it for you, you still need to consider how you want to manage your worker nodes (also referred to simply as *nodes*).
- *Configure nodes*: Nodes can be physical or virtual machines. If you want to create and manage your own nodes, install a supported operating system, then add and run the appropriate [node services](/ja/docs/concepts/overview/components/#node-components). Consider:
  - The demands of your workloads when you set up nodes, by having appropriate memory, CPU, and disk speed and storage capacity available.
  - Whether generic computer systems will do, or whether you have workloads that need GPU processors, Windows nodes, or VM isolation.
- *Validate nodes*: See [Valid node setup](/ja/docs/setup/best-practices/node-conformance/) for information on how to ensure that a node meets the requirements to join a Kubernetes cluster.
- *Add nodes to the cluster*: If you are managing your own cluster, you can add nodes by setting up your own machines and either adding them manually or having them register themselves to the cluster's apiserver. See the [Nodes](/ja/docs/concepts/architecture/nodes/) section for how to set up Kubernetes to add nodes in these ways.
- *Scale nodes*: Have a plan for expanding the capacity your cluster will eventually need. See [Considerations for large clusters](/ja/docs/setup/best-practices/cluster-large/) to help determine how many nodes you need, based on the number of Pods and containers you need to run. If you manage nodes yourself, this can mean purchasing and installing your own physical equipment.
- *Autoscale nodes*: See [Cluster autoscaling](/ja/docs/concepts/cluster-administration/cluster-autoscaling) to learn about the tools available to automatically manage your nodes and the capacity they provide.
- *Set up node health checks*: For important workloads, you want to make sure that the nodes, and the Pods running on those nodes, are healthy. Using the [Node Problem Detector](/ja/docs/tasks/debug/debug-cluster/monitor-node-health/) daemon, you can ensure your nodes are healthy; a sketch of deploying it follows this list.
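As a sketch of the health-check step above, Node Problem Detector is typically run as a DaemonSet so that one copy lands on every node. The image tag and the privileged/volume settings below are illustrative assumptions to adapt to your cluster:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-problem-detector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-problem-detector
  template:
    metadata:
      labels:
        app: node-problem-detector
    spec:
      containers:
      - name: node-problem-detector
        image: registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.19  # assumed tag
        securityContext:
          privileged: true          # needs node-level access to read kernel logs
        volumeMounts:
        - name: kmsg
          mountPath: /dev/kmsg      # watch kernel messages for node problems
          readOnly: true
      volumes:
      - name: kmsg
        hostPath:
          path: /dev/kmsg
```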
## Production user management

In production, you may be moving from a model where you or a small group of people access the cluster to one where dozens or hundreds of people may access it. In a learning environment or platform prototype, you might have a single administrative account for everything you do. In production, you will want multiple accounts with different levels of access to different namespaces.

Taking on a production-quality cluster means deciding how you want to selectively allow access by other users. In particular, you need to select strategies for validating the identities of those who try to access your cluster (authentication) and for deciding whether they have permission to do what they are asking (authorization):
- *Authentication*: The apiserver can authenticate users using client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth. You can choose which authentication methods to use. Using plugins, the apiserver can also leverage your organization's existing authentication methods, such as LDAP or Kerberos. See [Authentication](/ja/docs/reference/access-authn-authz/authentication/) for a description of these different ways of authenticating Kubernetes users.
- *Authorization*: When you set out to authorize your regular users, you will probably choose between RBAC and ABAC authorization. See the [Authorization Overview](/docs/reference/access-authn-authz/authorization/) to review the different modes for authorizing user accounts (as well as service account access to your cluster).
- *Role-based access control* ([RBAC](/ja/docs/reference/access-authn-authz/rbac/)): Lets you assign access to your cluster by allowing specific sets of permissions to authenticated users. Permissions can be assigned for a specific namespace (Role) or across the entire cluster (ClusterRole). Using RoleBindings and ClusterRoleBindings, those permissions can then be attached to particular users; a minimal sketch follows this list.
- *Attribute-based access control* ([ABAC](/ja/docs/reference/access-authn-authz/abac/)): Lets you create policies based on resource attributes in the cluster, and allows or denies access based on those attributes. Each line of a policy file identifies versioning properties (apiVersion and kind) and a map of spec properties to match the subject (user or group), resource properties, non-resource properties (/version or /apis), and readonly. See [Examples](/docs/reference/access-authn-authz/abac/#examples) for details.
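To make the RBAC item concrete, here is a minimal sketch that grants a single user read-only access to Pods in one namespace; the namespace and user names are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a            # placeholder namespace
  name: pod-reader
rules:
- apiGroups: [""]              # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                   # placeholder user, as authenticated by the API server
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting the same permissions cluster-wide follows the same pattern with a ClusterRole and ClusterRoleBinding.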
There are several considerations to keep in mind when setting up authentication and authorization on your production Kubernetes cluster:
- *Set the authorization mode*: When the Kubernetes API server ([kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)) starts, the supported authorization modes must be set using the *--authorization-mode* flag. For example, that flag in the *kube-apiserver.yaml* file (in */etc/kubernetes/manifests*) could be set to Node,RBAC, allowing Node and RBAC authorization for authenticated requests; a static Pod manifest fragment illustrating this follows the list.
- *Create user certificates and role bindings (RBAC)*: If you are using RBAC authorization, users can create a certificate signing request (CSR) that is signed by the cluster CA, and you can then bind Roles and ClusterRoles to each user. See [Certificate Signing Requests](/docs/reference/access-authn-authz/certificate-signing-requests/) for details.
- *Create policies that combine attributes (ABAC)*: If you are using ABAC authorization, you can assign combinations of attributes to form policies that authorize selected users or groups to access particular resources (such as a Pod), namespaces, or apiGroups. For more information, see [Examples](/docs/reference/access-authn-authz/abac/#examples).
- *Consider admission controllers*: Additional forms of authentication for requests coming through the API server include [Webhook Token Authentication](/ja/docs/reference/access-authn-authz/authentication/#webhook-token-authentication). Webhooks and other special authentication types need to be enabled by adding admission controllers to the API server.
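As a sketch of the authorization-mode item, the relevant fragment of the API server's static Pod manifest might look like the following; every flag other than the ones shown, and the image tag, are elided or assumed:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment; most flags elided)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.31.0    # assumed version
    command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC                 # evaluate the Node authorizer first, then RBAC
    - --enable-admission-plugins=NodeRestriction
```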
## Set limits on workload resources

Demands from production workloads can cause pressure both inside and outside of the Kubernetes control plane. Consider these items when setting up for the needs of your cluster's workloads:
- *Set namespace limits*: Set per-namespace quotas on things like memory and CPU; a minimal ResourceQuota sketch follows this list. See [Manage Memory, CPU, and API Resources](/ja/docs/tasks/administer-cluster/manage-resources/) for details. You can also set [Hierarchical Namespaces](/blog/2020/08/14/introducing-hierarchical-namespaces/) for inheriting limits.
- *Prepare for DNS demand*: If you expect workloads to massively scale up, your DNS service must be ready to scale up as well. See [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/).
- *Create additional service accounts*: User accounts determine what users can do on a cluster, while a service account defines Pod access within a particular namespace. By default, a Pod takes on the default service account from its namespace. See [Managing Service Accounts](/docs/reference/access-authn-authz/service-accounts-admin/) for information on creating a new service account. For example, you might want to:
  - Add secrets that a Pod could use to pull images from a particular container registry. See [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/) for an example.
  - Assign RBAC permissions to a service account. See [ServiceAccount permissions](/ja/docs/reference/access-authn-authz/rbac/#service-account-permissions) for details.
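For the namespace-limits item above, a minimal ResourceQuota sketch follows; the namespace and the numbers are placeholders to size against your own workloads:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a            # placeholder namespace
spec:
  hard:
    requests.cpu: "4"          # total CPU all Pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"            # total CPU limit across the namespace
    limits.memory: 16Gi
    pods: "20"                 # cap on the number of Pods
```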
## {{% heading "whatsnext" %}}

- Decide whether you want to build your own production Kubernetes or obtain one from available [Turnkey Cloud Solutions](/ja/docs/setup/production-environment/turnkey-solutions/) or [Kubernetes Partners](/ja/partners/)
- If you choose to build your own cluster, plan how you want to handle [certificates](/ja/docs/setup/best-practices/certificates/) and set up high availability for features such as [etcd](/ja/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) and the [API server](/docs/setup/production-environment/tools/kubeadm/ha-topology/)
- Choose from the [kubeadm](/ja/docs/setup/production-environment/tools/kubeadm/), [kops](https://kops.sigs.k8s.io/), or [Kubespray](https://kubespray.io/) deployment methods
- Configure user management by deciding on your [authentication](/ja/docs/reference/access-authn-authz/authentication/) and [authorization](/docs/reference/access-authn-authz/authorization/) methods
- Prepare for application workloads by setting up [resource limits](/ja/docs/tasks/administer-cluster/manage-resources/), [DNS autoscaling](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/), and [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/)
@ -223,7 +223,7 @@ If you are using kubeadm, manually [configure the kubelet cgroup driver]

This section contains the necessary steps to install CRI-O as a container runtime.

To install CRI-O, follow the [CRI-O installation instructions](https://github.com/cri-o/packaging/blob/main/README.md#usage).

#### cgroup driver
@ -2,6 +2,12 @@
layout: blog
title: 'gRPC Load Balancing on Kubernetes without Tears'
date: 2018-11-07
author: >
  William Morgan (Buoyant)
translator:
  송원석 (쏘카)
  김상홍 (국민대)
  손석호 (ETRI)
---

**Author**: William Morgan (Buoyant)
@ -15,8 +21,8 @@ date: 2018-11-07

![](/images/blog/grpc-load-balancing-with-linkerd/Screenshot2018-11-0116-c4d86100-afc1-4a08-a01c-16da391756dd.34.36.png)

The `voting` service shown here consists of several Pods, yet Kubernetes's CPU graph clearly shows that only one of the Pods is doing any actual work (because only one Pod is receiving traffic). Why is that?

In this blog post, we describe why this happens, and

@ -27,15 +33,15 @@ how you can easily fix it by adding gRPC load balancing to any Kubernetes app

First, let's look at why we need to do something special for gRPC.

gRPC is an increasingly common choice for application developers. Compared to alternative protocols such as JSON-over-HTTP, gRPC offers some significant benefits: dramatically lower (de)serialization costs, automatic type checking, formalized APIs, and less TCP management overhead.

However, gRPC also breaks the standard connection-level load balancing (connection-level load balancing), including what's provided by Kubernetes. This is because gRPC is built on HTTP/2, and HTTP/2 is designed to use a single long-lived TCP connection, across which all requests are *multiplexed* (meaning multiple requests can be active on the same connection at any point in time). Normally, this is great, as it reduces the overhead of connection management. However, it also means that (as you might imagine) connection-level balancing isn't very useful. Once
@ -53,9 +59,9 @@ HTTP/1.1 has several features that naturally result in cycling of TCP connections,
there's nothing more to do.

To understand why, let's take a closer look at HTTP/1.1. In contrast to HTTP/2, HTTP/1.1 cannot multiplex requests: only one HTTP request can be active at a time per TCP connection. For example, the client requests `GET /foo` and then waits until the server responds. While that request-response cycle is happening, no other requests can be issued on that connection.

Usually, we want lots of requests happening in parallel. Therefore,
@ -74,34 +80,34 @@ For concurrent HTTP/1.1 requests, we need to make multiple HTTP/1.1 connections,

![](/images/blog/grpc-load-balancing-with-linkerd/Stereo-09aff9d7-1c98-4a0a-9184-9998ed83a531.png)

In network terms, rather than making decisions at L3/L4, we must make them at L5/L7: that is, we must understand the protocol sent over the TCP connections.

How do we accomplish this? There are a couple of options. First, our application code could manually maintain its own load balancing pool of destinations, and we could configure our gRPC client to [use that load balancing pool](https://godoc.org/google.golang.org/grpc/balancer). This approach gives us the most control, but it can be very complex in environments like Kubernetes, where the pool changes over time as Pods get rescheduled. In this case, our application would have to watch the Pods and the Kubernetes API and keep itself up to date.

Alternatively, in Kubernetes, we could deploy our app as a [headless service](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스). In this case, Kubernetes [will create multiple A records](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스) in the DNS entry for the service. If our gRPC client is sufficiently advanced, it can automatically maintain the load balancing pool from those DNS entries. But this approach not only restricts us to certain gRPC clients, it is also rarely possible to use only headless services, so the constraint is significant. A minimal headless Service sketch appears below.
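For reference, a headless Service is an ordinary Service with `clusterIP: None`; here is a minimal sketch (the service name, selector, and port are placeholders) of what causes Kubernetes to publish one A record per backing Pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: voting              # placeholder name for a gRPC backend
spec:
  clusterIP: None           # "headless": DNS returns the individual Pod IPs
  selector:
    app: voting
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
```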
Finally, we can take a third approach: use a lightweight proxy.
# gRPC load balancing on Kubernetes with Linkerd

[Linkerd](https://linkerd.io) is a [CNCF](https://cncf.io)-hosted *service mesh* for Kubernetes. Most relevant to our purposes, Linkerd also functions as a *service sidecar*, where it can be applied to a single service, even without cluster-wide permissions. Adding Linkerd to a service means adding a tiny, ultra-fast proxy to each Pod, and these proxies watch the Kubernetes API and do gRPC load balancing automatically. Our deployment
@ -111,30 +117,30 @@

Using Linkerd has a couple of advantages. First, it works with services written in any language, with any gRPC client, and with any deployment model (headless or not). Because Linkerd's proxies are completely transparent, they auto-detect HTTP/2 and HTTP/1.x and do L7 load balancing, and they pass all other traffic through as pure TCP. This means that everything will *just work*.

Second, Linkerd's load balancing is very sophisticated. Not only does Linkerd maintain a watch on the Kubernetes API and automatically update the load balancing pool as Pods get rescheduled, Linkerd also uses an *exponentially-weighted moving average* of response latencies to automatically send requests to the fastest Pods. If one Pod slows down, even momentarily, Linkerd will shift traffic away from it. This can reduce end-to-end latencies.

Finally, Linkerd's Rust-based proxies are incredibly small and fast: they support <1ms of p99 latency and require <10mb of RSS per Pod, meaning they have almost no impact on system performance.

# gRPC load balancing in 60 seconds

Linkerd is very easy to try. Just follow the steps in the [Linkerd Getting Started instructions](https://linkerd.io/2/getting-started/): install the CLI on your laptop, then install the control plane and "mesh" your service (injecting the proxies into each Pod) on your cluster. Linkerd will be running on your service in no time, and you should see proper gRPC balancing immediately.

After installing Linkerd, let's take a look at our sample `voting` service again.

![](/images/blog/grpc-load-balancing-with-linkerd/Screenshot2018-11-0116-24b8ee81-144c-4eac-b73d-871bbf0ea22e.57.42.png)
@ -143,14 +149,14 @@ After installing Linkerd, let's take a look at our sample `voting` service
is receiving traffic, without a single line of code changed. Voila, gRPC load balancing as if by magic!

Linkerd also gives us built-in traffic-level dashboards, so there is no more need to guess what is happening from CPU charts. Here is a Linkerd graph showing the success rate, request volume, and latency percentiles of each Pod.

![](/images/blog/grpc-load-balancing-with-linkerd/Screenshot2018-11-0212-15ed0448-5424-4e47-9828-20032de868b5.08.38.png)

We can see that each Pod is getting around 5 RPS. We can also see that, while the load balancing problem is solved, there is still some work to do on the success rate for this service. (The demo app is deliberately built with a failure as an exercise for the reader. Using the Linkerd dashboard,
@ -158,9 +164,9 @@ Linkerd graph.

# Conclusion

If you are interested in adding gRPC load balancing to your Kubernetes services, no matter what language they are written in, what gRPC client you are using, or how they are deployed, you can use Linkerd to add gRPC load balancing in just a few commands.

There is a lot more to Linkerd, including security, reliability, and debugging features

@ -170,4 +176,4 @@ you can add gRPC load balancing

Linkerd is a [CNCF](https://cncf.io) project, [hosted on GitHub](https://github.com/linkerd/linkerd2), and you can meet the community on [Slack](https://slack.linkerd.io), [Twitter](https://twitter.com/linkerd), and the [mailing list](https://lists.cncf.io/g/cncf-linkerd-users). Come check it out and join the fun!
@ -4,6 +4,20 @@ title: "Don't Panic: Kubernetes and Docker"
date: 2020-12-02
slug: dont-panic-kubernetes-and-docker
evergreen: true
author: >
  Jorge Castro,
  Duffie Cooley,
  Kat Cosgrove,
  Justin Garrison,
  Noah Kantrowitz,
  Bob Killen,
  Rey Lejano,
  Dan “POP” Papandrea,
  Jeffrey Sica,
  Davanum “Dims” Srinivas
translator:
  박재화(삼성SDS),
  손석호(ETRI)
---

**Update:** _Kubernetes support for Docker via dockershim has been removed.

@ -12,10 +26,6 @@ evergreen: true

---

**Authors:** Jorge Castro, Duffie Cooley, Kat Cosgrove, Justin Garrison, Noah Kantrowitz, Bob Killen, Rey Lejano, Dan “POP” Papandrea, Jeffrey Sica, Davanum “Dims” Srinivas

**Translation:** 박재화(삼성SDS), 손석호(ETRI)

Kubernetes is [deprecating Docker](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation) as a container runtime after v1.20.
@ -4,12 +4,14 @@ title: 'Kubernetes 1.22: Reaching New Peaks'
date: 2021-08-04
slug: kubernetes-1-22-release-announcement
evergreen: true
author: >
  [Kubernetes 1.22 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.22/release-team.md)
translator: >
  [손석호(ETRI)](https://github.com/seokho-son),
  [서지훈(ETRI)](https://github.com/jihoon-seo),
  [Kubernetes Docs Korean localization team](https://kubernetes.slack.com/archives/CA1MMR86S)
---

**Author:** [Kubernetes 1.22 Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.22/release-team.md)

**Translation:** [손석호(ETRI)](https://github.com/seokho-son), [서지훈(ETRI)](https://github.com/jihoon-seo), [Kubernetes Docs Korean localization team](https://kubernetes.slack.com/archives/CA1MMR86S)

We're pleased to announce the release of Kubernetes 1.22, the second release of 2021!

This release consists of 53 enhancements: 13 enhancements have graduated to stable, 24 enhancements are moving to beta, and 16 are entering alpha. Also, three features have been deprecated.
@ -3,12 +3,15 @@ layout: blog
title: "Kubernetes 1.24: gRPC container probes in beta"
date: 2022-05-13
slug: grpc-probes-now-in-beta
author: >
  Sergey Kanzhelev (Google)
translator:
  송원석 (쏘카),
  손석호 (ETRI),
  김상홍 (국민대),
  김보배 (11번가)
---

**Author**: Sergey Kanzhelev (Google)

**Translation**: 송원석 (쏘카), 손석호 (ETRI), 김상홍 (국민대), 김보배 (11번가)

With Kubernetes 1.24, the gRPC probes functionality entered beta and is available by default. You can now configure startup, liveness, and readiness probes for your gRPC app without exposing any HTTP endpoint, and you don't need an executable either. Kubernetes can natively connect to your workload via gRPC and query its status.
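As a minimal sketch of what this looks like in a Pod spec (the image, port, and timings here are illustrative assumptions; the server must implement the standard gRPC Health Checking Protocol):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-app
spec:
  containers:
  - name: server
    image: example.com/grpc-server:latest   # placeholder image
    ports:
    - containerPort: 9090
    readinessProbe:
      grpc:
        port: 9090              # kubelet issues gRPC health checks against this port
      initialDelaySeconds: 5
      periodSeconds: 10
```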
@ -22,7 +25,7 @@ Even before gRPC support was added, Kubernetes already allowed you to probe a container
by checking whether a TCP connection could be established.

For most apps, these checks are enough. If your app provides a gRPC endpoint for a health (or readiness) check, it is easy to repurpose the `exec` probe to use it for gRPC health checking. In the blog article [Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/), Ahmet Alp Balkan described how to do that, and it is a mechanism that still works today.
@ -122,7 +125,7 @@ Conditions:
  PodScheduled   True
```

Now let's change the health endpoint status to NOT_SERVING. In order to call the http port of the Pod, create a port forward:

```shell

@ -177,4 +180,4 @@ Conditions:

## Summary

Kubernetes is a popular workload orchestration platform and adds features based on feedback and demand. Features like gRPC probe support are a minor improvement that will make life easier for many app developers and make apps more resilient. Try the feature out today and give feedback, before it graduates to GA.
@ -118,7 +118,6 @@ kubectl edit SampleDB/example-database # manually change some settings

* [kube-rs](https://kube.rs/) (Rust)
* using [kubebuilder](https://book.kubebuilder.io/)
* [KubeOps](https://buehler.github.io/dotnet-operator-sdk/) (.NET operator SDK)
* [KUDO](https://kudo.dev/) (Kubernetes Universal Declarative Operator)
* implementing your own with [Metacontroller](https://metacontroller.github.io/metacontroller/intro.html) together with WebHooks
* the [Operator Framework](https://operatorframework.io)
@ -244,7 +244,7 @@ If you are using kubeadm,

This section contains the necessary steps to install CRI-O as a container runtime.

To install CRI-O, follow the [CRI-O installation instructions](https://github.com/cri-o/packaging/blob/main/README.md#usage).

#### cgroup driver
@ -2,8 +2,6 @@ apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
@ -15,7 +15,7 @@ spec:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 500
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13

@ -43,7 +43,7 @@ spec:
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
@ -64,12 +64,16 @@ Kubernetes is an open source project. It
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch the video</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe, March 19-22, 2024</a>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-open-source-summit-ai-dev-china/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon China, August 21-23</a>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America, November 12-15, 2024</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-india/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon India, December 11-12</a>

</div>
<div id="videoPlayer">
@ -31,7 +31,7 @@ Kubernetes provides Containers with lifecycle hooks.

## Overview {#overview}

Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular, Kubernetes provides Containers with lifecycle hooks. The hooks enable Containers to be aware of events in their management lifecycle and run code implemented in a handler when the corresponding lifecycle hook is executed.
@ -79,8 +79,8 @@ The Pod's termination grace period countdown begins before the `PreStop` hook is executed,

A more detailed description of the termination behavior can be found in [Termination of Pods](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination).
@ -88,7 +88,7 @@

### Hook handler implementations {#hook-handler-implementations}

Containers can access a hook by implementing and registering a handler for that hook. There are three types of hook handlers that can be implemented for Containers:
@ -115,21 +115,20 @@

### Hook handler execution {#hook-handler-execution}

When a Container lifecycle management hook is called, the Kubernetes management system executes the handler according to the hook action: `httpGet`, `tcpSocket`, and `sleep` are executed by the kubelet process, and `exec` is executed in the container.
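As a minimal sketch of registering handlers of two of these kinds (the command and the HTTP endpoint are illustrative assumptions, not values from this page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:               # runs inside the container, in parallel with its ENTRYPOINT
          command: ["/bin/sh", "-c", "echo Hello from postStart > /usr/share/message"]
      preStop:
        httpGet:            # executed by the kubelet before the container is stopped
          path: /shutdown   # assumed endpoint exposed by the app
          port: 80
```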
Hook handler calls are synchronous within the context of the Pod containing the Container. The `PostStart` hook handler call is initiated when a container is created, meaning the container ENTRYPOINT and the `PostStart` hook are triggered simultaneously. However, if the `PostStart` hook takes too long to execute or if it hangs, it can prevent the container from transitioning to a `running` state.
`PreStop` hooks are not executed asynchronously from the signal
@ -176,7 +175,7 @@ which means that a hook may be called multiple times for any given event,

### Hook delivery guarantees {#hook-delivery-guarantees}

Hook delivery is intended to be **at least once**, which means that a hook may be called multiple times for any given event, such as for `PostStart` or `PreStop`. It is up to the hook implementation to handle this correctly.
@ -205,7 +204,7 @@ and for `PreStop`, this is the `FailedPreStopHook` event.

### Debugging hook handlers {#debugging-hook-handlers}

The logs for a hook handler are not exposed in Pod events. If a handler fails for some reason, it broadcasts an event. To generate a failed `FailedPostStartHook` event yourself, modify the [lifecycle-events.yaml](https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/lifecycle-events.yaml) file to change the postStart command to "badcommand" and apply it. Here is some example output of the resulting events you see from running `kubectl describe pod lifecycle-demo`:

@ -237,7 +236,5 @@ Events:

* Learn more about the [Container environment](/zh-cn/docs/concepts/containers/container-environment/).
* Get hands-on experience [attaching handlers to Container lifecycle events](/zh-cn/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
@ -315,8 +315,9 @@

## CustomResourceDefinitions

The [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify, and the Kubernetes API serves and handles the storage of your custom resource. The name of the CRD object itself must be a valid [DNS subdomain name](/zh-cn/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) derived from the defined resource name and its API group; see [how to create a CRD](/zh-cn/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions#create-a-customresourcedefinition) for more details. Further, the name of an object whose kind/resource is defined by a CRD must also be a valid DNS subdomain name.
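To illustrate the naming rule, here is a minimal CRD sketch; the group and kind are hypothetical, and the object name is exactly the plural resource name joined with the group:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```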
This frees you from writing your own API server to handle the custom resource,
@ -434,6 +437,7 @@ Aggregated APIs offer more advanced API features and customization of other features.

| Feature | Description | CRDs | Aggregated API |
| ------- | ----------- | ---- | -------------- |
| strategic-merge-patch | The new endpoints support PATCH with `Content-Type: application/strategic-merge-patch+json`. Useful for updating objects that may be modified both locally, and by the server. For more information, see ["Update API Objects in Place Using kubectl patch"](/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) | No | Yes |
| Protocol Buffers | The new resource supports clients that want to use Protocol Buffers | No | Yes |
| OpenAPI Schema | Is there an OpenAPI (swagger) schema for the types that can be dynamically fetched from the server? Is the user protected from misspelling field names by ensuring only allowed fields are set? Are types enforced (in other words, don't put an `int` in a `string` field?) | Yes, based on the [OpenAPI v3.0 validation](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation) schema (GA in 1.16). | Yes |
| Instance Name | Does this extension mechanism impose any constraints on the names of objects whose kind/resource is defined this way? | Yes, such an object's name must be a valid DNS subdomain name. | No |

### Common Features
@ -16,12 +16,15 @@ weight: 10

<!-- overview -->

Kubernetes (version 1.3 through to the latest {{< skew latestVersion >}}, and likely onwards) lets you use [Container Network Interface](https://github.com/containernetworking/cni) (CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your cluster and that suits your needs. Different plugins are available (both open- and closed-source) in the wider Kubernetes ecosystem.
@ -244,11 +244,11 @@ excluding `frontend` using the comma operator: `environment=production,tier!=frontend`

One usage scenario for equality-based label requirements is for Pods to specify node selection criteria. For example, the sample Pod below selects nodes where the `accelerator` label exists and is set to `nvidia-tesla-p100`.

```yaml
apiVersion: v1
@ -1,7 +1,7 @@
---
title: Pod Security Standards
description: >
  A detailed look at the different policy levels defined in the Pod Security Standards.
content_type: concept
weight: 15
---
@ -52,17 +52,16 @@ The Pod Security Standards define three different *policies* to broadly cover the security spectrum.

**The _Privileged_ policy is purposely-open, and entirely unrestricted.** This type of policy is typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.

The Privileged policy is defined by an absence of restrictions. If you define a Pod where the Privileged security policy applies, the Pod you define is able to bypass typical container isolation mechanisms. For example, you can define a Pod that has access to the node's host network.
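As a sketch of how a policy level is applied in practice with the built-in Pod Security Admission controller, labels on a namespace select the level; the namespace name below is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: infra-system          # placeholder namespace for trusted infrastructure workloads
  labels:
    # enforce the Privileged level: no restrictions are applied
    pod-security.kubernetes.io/enforce: privileged
    # additionally warn when a Pod would not satisfy the stricter Baseline level
    pod-security.kubernetes.io/warn: baseline
```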
### Baseline
@ -99,9 +98,15 @@ fail validation.
<tr>
  <td style="white-space: nowrap">HostProcess</td>
  <td>
    <p>
    Windows Pods offer the ability to run <a href="/zh-cn/docs/tasks/configure-pod-container/create-hostprocess-pod">HostProcess containers</a> which enables privileged access to the Windows host machine. Privileged access to the host is disallowed in the Baseline policy.
    {{< feature-state for_k8s_version="v1.26" state="stable" >}}
    </p>
    <p><strong>Restricted Fields</strong></p>
    <ul>
      <li><code>spec.securityContext.windowsOptions.hostProcess</code></li>
@ -206,7 +211,8 @@ fail validation.
    <p><strong>Allowed Values</strong></p>
    <ul>
      <li>Undefined/nil</li>
      <li>Known list (not supported by the built-in <a href="/zh-cn/docs/concepts/security/pod-security-admission/">Pod Security Admission controller</a>)</li>
      <li><code>0</code></li>
    </ul>
  </td>
@ -215,14 +221,14 @@ fail validation.
  <td style="white-space: nowrap">AppArmor</td>
  <td>
    <p>
    On supported hosts, the <code>RuntimeDefault</code> AppArmor profile is applied by default. The baseline policy should prevent overriding or disabling the default AppArmor profile, or restrict overrides to an allowed set of profiles.
    </p>
    <p><strong>Restricted Fields</strong></p>
    <ul>
      <li><code>spec.securityContext.appArmorProfile.type</code></li>
      <li><code>spec.containers[*].securityContext.appArmorProfile.type</code></li>
      <li><code>spec.initContainers[*].securityContext.appArmorProfile.type</code></li>
@ -250,11 +256,11 @@ fail validation.
  <td style="white-space: nowrap">SELinux</td>
  <td>
    <p>
    Setting the SELinux type is restricted, and setting a custom SELinux user or role option is forbidden.
    </p>
    <p><strong>Restricted Fields</strong></p>
    <ul>
      <li><code>spec.securityContext.seLinuxOptions.type</code></li>
@ -327,12 +333,12 @@ fail validation.
  <td style="white-space: nowrap">Sysctls</td>
  <td>
    <p>
    Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed "safe" subset. A sysctl is considered safe if it is namespaced in the container or the Pod, and it is isolated from other Pods or processes on the same Node.
    </p>
    <p><strong>Restricted Fields</strong></p>
    <ul>
      <li><code>spec.securityContext.sysctls[*].name</code></li>
@ -370,7 +376,7 @@ enforced/disallowed:

{{< note >}}
In this table, wildcards (`*`) indicate all elements in a list. For example, `spec.containers[*].securityContext` refers to the Security Context object for _all defined containers_. If any of the listed containers fails to meet the requirements, the entire pod will fail validation.
@ -388,16 +394,16 @@ fail validation.
  <td><strong>Policy</strong></td>
</tr>
<tr>
  <td colspan="2"><em>Everything from the Baseline policy</em></td>
</tr>
<tr>
  <td style="white-space: nowrap">Volume Types</td>
  <td>
    <p>
    In addition to restricting HostPath volumes, the Restricted policy limits usage of non-core volume types to those defined through PersistentVolumes. The Restricted policy only permits the following volume types.
    </p>
    <p><strong>Restricted Fields</strong></p>
    <ul>

@ -427,7 +433,7 @@ fail validation.
      <li><code>spec.initContainers[*].securityContext.allowPrivilegeEscalation</code></li>
      <li><code>spec.ephemeralContainers[*].securityContext.allowPrivilegeEscalation</code></li>
    </ul>
    <p><strong>Allowed Values</strong></p>
    <ul>
      <li><code>false</code></li>
    </ul>
@ -574,7 +580,7 @@ of individual policies are not defined here.

{{% thirdparty-content %}}

Other alternatives for enforcing policies are being developed in the Kubernetes ecosystem, such as:
@ -601,21 +607,21 @@ Windows in Kubernetes has some limitations and differentiators from standard Linux-based workloads.

{{< note >}}
Kubelets prior to v1.24 don't enforce the pod OS field, and if a cluster has nodes on versions earlier than v1.24 the Restricted policies should be pinned to a version prior to v1.25.
{{< /note >}}

### Restricted Pod Security Standard changes {#restricted-pod-security-standard-changes}

Another important change, made in Kubernetes v1.25, is that the _Restricted_ policy has been updated to use the `pod.spec.os.name` field. Based on the OS name, certain policies that are specific to a particular OS can be relaxed for the other OS.
|
||||
|
@ -629,6 +635,7 @@ Restrictions on the following controls are only required if `.spec.os.name` is n
|
|||
#### OS 特定的策略控制
|
||||
|
||||
仅当 `.spec.os.name` 不是 `windows` 时,才需要对以下控制进行限制:
|
||||
|
||||
- 特权提升
|
||||
- Seccomp
|
||||
- Linux 权能
|
||||
|
@ -644,32 +651,30 @@

User namespaces are a Linux-only feature to run workloads with increased isolation. For how user namespaces work in conjunction with the Pod Security Standards, see the [documentation](/zh-cn/docs/concepts/workloads/pods/user-namespaces#integration-with-pod-security-admission-checks) for Pods that use user namespaces.

## FAQ {#faq}

### Why isn't there a profile between Privileged and Baseline? {#why-isnt-there-a-profile-between-privileged-and-baseline}

The three profiles defined here have a clear linear progression from most secure (Restricted) to least secure (Privileged), and cover a broad set of workloads. Privileges required above the Baseline policy are typically very application specific, so we do not offer a standard profile in this niche. This is not to say that the Privileged profile should always be used in this case, but that policies in this space need to be defined on a case-by-case basis.

SIG Auth may reconsider this position in the future, should a clear need for other profiles arise.
@ -681,7 +686,7 @@

### What's the difference between a security profile and a security context? {#whats-the-difference-between-security-profile-and-security-context}

[Security contexts](/zh-cn/docs/tasks/configure-pod-container/security-context/) configure Pods and Containers at runtime. Security contexts are defined as part of the Pod and container specifications in the Pod manifest, and represent parameters to the container runtime.

@ -689,12 +694,12 @@

Security profiles are control plane mechanisms to enforce specific settings in the Security Context, as well as other related parameters outside the Security Context. As of July 2021, [Pod Security Policies](/zh-cn/docs/concepts/security/pod-security-policy/) are deprecated in favor of the built-in [Pod Security Admission Controller](/zh-cn/docs/concepts/security/pod-security-admission/).
@ -29,7 +29,7 @@ A _Pod_ (as in a pod of whales or pea pod) is a group of one or more

A *Pod* (as in a pod of whales or pea pod) is a group of one or more {{< glossary_tooltip text="containers" term_id="container" >}}, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.

@ -37,7 +37,7 @@ analogous to cloud applications executed on the same logical host.

In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
@ -246,11 +246,11 @@ Kubernetes. In the future, this list may be expanded.

In Kubernetes v{{< skew currentVersion >}}, the value of `.spec.os.name` does not affect how the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} picks a node for the Pod to run on. In any cluster where there is more than one operating system for running nodes, you should set the [kubernetes.io/os](/zh-cn/docs/reference/labels-annotations-taints/#kubernetes-io-os) label correctly on each node, and define Pods with a `nodeSelector` based on the operating system label. The kube-scheduler assigns your Pod to a node based on other criteria and may or may not succeed in picking a suitable node placement where the node OS is right for the containers in that Pod.

@ -259,8 +259,8 @@ field to avoid enforcing policies that aren't relevant to the operating system.

These are the two operating systems supported today by Kubernetes. In the future, this list may be expanded. The [Pod security standards](/zh-cn/docs/concepts/security/pod-security-standards/) also use this field to avoid enforcing policies that aren't relevant to the operating system.
@ -305,7 +305,7 @@ PodTemplates are specifications for creating Pods, and are included in workload resources

### Pod templates {#pod-templates}

Controllers for {{< glossary_tooltip text="workload" term_id="workload" >}} resources usually create Pods from a *Pod template* and manage those Pods on your behalf.

Pod templates are specifications for creating Pods, and are included in workload resources such as [Deployments](/zh-cn/docs/concepts/workloads/controllers/deployment/),
@ -581,7 +581,7 @@ Pods, the kubelet directly supervises each static Pod (and restarts it if it fails).

## Static Pods {#static-pods}

*Static Pods* are managed directly by the `kubelet` daemon on a specific node, without the {{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} observing them. Whereas most Pods are managed by the control plane (for example, a {{< glossary_tooltip text="Deployment" term_id="deployment" >}}), for static Pods, the `kubelet` directly supervises each static Pod (and restarts it if it fails).
@ -17,18 +17,18 @@ weight: 20

Bootstrap tokens are a simple bearer token that is meant to be used when creating new clusters or joining new nodes to an existing cluster. They were built to support [kubeadm](/zh-cn/docs/reference/setup-tools/kubeadm/), but can be used in other contexts for users that wish to start clusters without `kubeadm`. They are also built to work, via RBAC policy, with the [kubelet TLS Bootstrapping](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system.

<!-- body -->
@ -37,9 +37,9 @@ work, via RBAC policy, with the

Bootstrap Tokens are defined with a specific type (`bootstrap.kubernetes.io/token`) of secret that lives in the `kube-system` namespace. These Secrets are then read by the Bootstrap Authenticator in the API Server. Expired tokens are removed by the TokenCleaner controller in the Controller Manager. The tokens are also used to create a signature for a specific ConfigMap used in a "discovery" process through a BootstrapSigner controller.
@ -53,11 +53,11 @@ The BootstrapSigner controller also uses this ConfigMap.

## Token Format

Bootstrap Tokens take the form of `abcdef.0123456789abcdef`. More formally, they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.

The first part of the token is the "Token ID" and is considered public information. It is used when referring to a token without leaking the secret part used for authentication. The second part is the "Token Secret" and should only be shared with trusted parties.
@ -97,8 +97,8 @@ Authorization: Bearer 07401b.f395accd246ae52d
```

Tokens authenticate as the username `system:bootstrap:<token id>` and are members of the group `system:bootstrappers`. Additional groups may be specified in the token's Secret.

Expired tokens can be deleted automatically by enabling the `tokencleaner` controller on the controller manager.
@ -115,7 +115,7 @@ controller on the controller manager.

## Bootstrap Token Secret Format

Each valid token is backed by a secret in the `kube-system` namespace. You can find the full design doc [here](https://github.com/kubernetes/design-proposals-archive/blob/main/cluster-lifecycle/bootstrap-discovery.md).
@ -161,11 +161,10 @@ stringData:

The type of the secret must be `bootstrap.kubernetes.io/token` and the name must be `bootstrap-token-<token id>`. It must also exist in the `kube-system` namespace.

The `usage-bootstrap-*` members indicate what this secret is intended to be used for. A value must be set to `true` to be enabled.
@ -183,9 +182,9 @@ authenticate to the API server as a bearer token,
as described below.

The `expiration` field controls the expiry of the token. Expired tokens are rejected when used for authentication and ignored during ConfigMap signing. The expiry value is encoded as an absolute UTC time using RFC3339. Enable the `tokencleaner` controller to automatically delete expired tokens.
@ -208,9 +207,9 @@ You can use the `kubeadm` tool to manage tokens on a running cluster. See the

## ConfigMap Signing

In addition to authentication, the tokens can be used to sign a ConfigMap. This is used early in a cluster bootstrap process, before the client trusts the API server. The signed ConfigMap can be authenticated by the shared token.

Enable ConfigMap signing by enabling the `bootstrapsigner` controller on the Controller Manager.
@ -230,7 +229,7 @@

The ConfigMap that is signed is `cluster-info` in the `kube-public` namespace. The typical flow is that a client reads this ConfigMap while unauthenticated and ignoring TLS errors. It then validates the payload of the ConfigMap by looking at a signature embedded in the ConfigMap.

The ConfigMap may look like this:
@ -265,19 +264,19 @@ data:
|
|||
|
||||
<!--
|
||||
The `kubeconfig` member of the ConfigMap is a config file with only the cluster
|
||||
information filled out. The key thing being communicated here is the
|
||||
`certificate-authority-data`. This may be expanded in the future.
|
||||
-->
|
||||
ConfigMap 的 `kubeconfig` 成员是一个填好了集群信息的配置文件。
|
||||
这里主要交换的信息是 `certificate-authority-data`。在将来可能会有扩展。
|
||||
|
||||
<!--
|
||||
The signature is a JWS signature using the "detached" mode. To validate the
|
||||
signature, the user should encode the `kubeconfig` payload according to JWS
|
||||
rules (base64 encoded while discarding any trailing `=`). That encoded payload
|
||||
is then used to form a whole JWS by inserting it between the 2 dots. You can
|
||||
verify the JWS using the `HS256` scheme (HMAC-SHA256) with the full token (e.g.
|
||||
`07401b.f395accd246ae52d`) as the shared secret. Users _must_ verify that HS256
|
||||
is used.
|
||||
-->
|
||||
签名是一个使用 “detached” 模式生成的 JWS 签名。
|
||||
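A minimal sketch of that verification with standard tools, assuming GNU coreutils and that the signature is stored under the `jws-kubeconfig-<token-id>` key of the ConfigMap (the token value is the illustrative one above):

```shell
TOKEN="07401b.f395accd246ae52d"     # full token: <token-id>.<token-secret>

kubeconfig="$(kubectl -n kube-public get configmap cluster-info \
  -o jsonpath='{.data.kubeconfig}')"

# base64url-encode the payload per JWS rules: strip '=' padding, map +/ to -_
payload="$(printf '%s' "$kubeconfig" | base64 -w0 | tr -d '=' | tr '+/' '-_')"

# A detached JWS has the form <header>..<signature>
jws="$(kubectl -n kube-public get configmap cluster-info \
  -o jsonpath="{.data.jws-kubeconfig-${TOKEN%%.*}}")"
header="${jws%%..*}"
signature="${jws##*..}"

# Recompute HMAC-SHA256 over <header>.<payload> using the full token as the key.
# Also base64-decode $header and confirm it declares "alg":"HS256".
expected="$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac "$TOKEN" -binary \
  | base64 -w0 | tr -d '=' | tr '+/' '-_')"

[ "$expected" = "$signature" ] && echo "signature ok" || echo "signature MISMATCH"
```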
|
|
|
@ -73,7 +73,7 @@ the `spec.request` field. The CertificateSigningRequest denotes the signer (the
|
|||
recipient that the request is being made to) using the `spec.signerName` field.
|
||||
Note that `spec.signerName` is a required key after API version `certificates.k8s.io/v1`.
|
||||
In Kubernetes v1.22 and later, clients may optionally set the `spec.expirationSeconds`
|
||||
field to request a particular lifetime for the issued certificate. The minimum valid
|
||||
value for this field is `600`, i.e. ten minutes.
|
||||
-->
|
||||
### 请求签名流程 {#request-signing-process}
|
||||
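A sketch of such a request with a requested lifetime (name and signer illustrative; `$CSR_B64` is assumed to hold a base64-encoded PKCS#10 request):

```shell
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-csr
spec:
  request: ${CSR_B64}
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400   # one day; must be at least 600 (ten minutes)
  usages:
  - client auth
EOF
```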
|
@ -223,7 +223,7 @@ signed, a security certificate.
|
|||
|
||||
Any signer that is made available for use outside of a particular cluster should provide information
|
||||
about how the signer works, so that consumers can understand what that means for CertificateSigningRequests
|
||||
and (if enabled) [ClusterTrustBundles](#cluster-trust-bundles).
|
||||
This includes:
|
||||
-->
|
||||
## 签名者 {#signers}
|
||||
|
@ -237,8 +237,10 @@ This includes:
|
|||
<!--
|
||||
1. **Trust distribution**: how trust anchors (CA certificates or certificate bundles) are distributed.
|
||||
1. **Permitted subjects**: any restrictions on and behavior when a disallowed subject is requested.
|
||||
1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames, Email subjectAltNames, URI subjectAltNames etc, and behavior when a disallowed extension is requested.
|
||||
1. **Permitted key usages / extended key usages**: any restrictions on and behavior when usages different than the signer-determined usages are specified in the CSR.
|
||||
1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames,
|
||||
Email subjectAltNames, URI subjectAltNames etc, and behavior when a disallowed extension is requested.
|
||||
1. **Permitted key usages / extended key usages**: any restrictions on and behavior
|
||||
when usages different than the signer-determined usages are specified in the CSR.
|
||||
1. **Expiration/certificate lifetime**: whether it is fixed by the signer, configurable by the admin, determined by the CSR `spec.expirationSeconds` field, etc
|
||||
and the behavior when the signer-determined expiration is different from the CSR `spec.expirationSeconds` field.
|
||||
1. **CA bit allowed/disallowed**: and behavior if a CSR contains a request for a CA certificate when the signer does not permit it.
|
||||
|
@ -281,7 +283,7 @@ certificate expiration or lifetime. The expiration or lifetime therefore has to
|
|||
through the `spec.expirationSeconds` field of the CSR object. The built-in signers
|
||||
use the `ClusterSigningDuration` configuration option, which defaults to 1 year,
|
||||
(the `--cluster-signing-duration` command-line flag of the kube-controller-manager)
|
||||
as the default when no `spec.expirationSeconds` is specified. When `spec.expirationSeconds`
|
||||
is specified, the minimum of `spec.expirationSeconds` and `ClusterSigningDuration` is
|
||||
used.
|
||||
-->
|
||||
|
@ -294,7 +296,7 @@ PKCS#10 签名请求格式并没有一种标准的方法去设置证书的过期
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
|
||||
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
|
||||
-->
|
||||
`spec.expirationSeconds` 字段是在 Kubernetes v1.22 中加入的。早期的 Kubernetes 版本并不认识该字段。
|
||||
|
@ -426,7 +428,7 @@ kube-controller-manager 为每个内置签名者实现了[控制平面签名](#s
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
|
||||
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
|
||||
-->
|
||||
`spec.expirationSeconds` 字段是在 Kubernetes v1.22 中加入的,早期的 Kubernetes 版本并不认识该字段,
|
||||
|
@ -434,10 +436,10 @@ v1.22 版本之前的 Kubernetes API 服务器会在创建对象的时候忽略
|
|||
{{< /note >}}
|
||||
|
||||
<!--
|
||||
Distribution of trust happens out of band for these signers. Any trust outside of those described above is strictly
|
||||
coincidental. For instance, some distributions may honor `kubernetes.io/legacy-unknown` as client certificates for the
|
||||
kube-apiserver, but this is not a standard.
|
||||
None of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only
|
||||
guaranteed to verify a connection to the API server using the default service (`kubernetes.default.svc`).
|
||||
-->
|
||||
对于这些签名者,信任的分发发生在带外(out of band)。上述信任之外的任何信任都是完全巧合的。
|
||||
|
@ -474,7 +476,8 @@ kube-controller-manager 签名所有标记为 approved 的 CSR。
|
|||
|
||||
{{< note >}}
|
||||
<!--
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.
|
||||
The `spec.expirationSeconds` field was added in Kubernetes v1.22.
|
||||
Earlier versions of Kubernetes do not honor this field.
|
||||
Kubernetes API servers prior to v1.22 will silently drop this field when the object is created.
|
||||
-->
|
||||
`spec.expirationSeconds` 字段是在 Kubernetes v1.22 中加入的,早期的 Kubernetes 版本并不认识该字段,
|
||||
|
@ -703,7 +706,7 @@ this API.
|
|||
|
||||
<!--
|
||||
A ClusterTrustBundle is a cluster-scoped object for distributing X.509 trust
|
||||
anchors (root certificates) to workloads within the cluster. They're designed
|
||||
to work well with the [signer](#signers) concept from CertificateSigningRequests.
|
||||
|
||||
ClusterTrustBundles can be used in two modes:
|
||||
|
@ -719,8 +722,8 @@ ClusterTrustBundle 可以使用两种模式:
|
|||
### Common properties and validation {#ctb-common}
|
||||
|
||||
All ClusterTrustBundle objects have strong validation on the contents of their
|
||||
`trustBundle` field. That field must contain one or more X.509 certificates,
|
||||
DER-serialized, each wrapped in a PEM `CERTIFICATE` block. The certificates
|
||||
must parse as valid X.509 certificates.
|
||||
|
||||
Esoteric PEM features like inter-block data and intra-block headers are either
|
||||
|
@ -796,8 +799,8 @@ controller in the cluster, so they have several security features:
|
|||
`<signerNameDomain>/<signerNamePath>` or match a pattern such as
|
||||
`<signerNameDomain>/*`.
|
||||
* Signer-linked ClusterTrustBundles **must** be named with a prefix derived from
|
||||
their `spec.signerName` field. Slashes (`/`) are replaced with colons (`:`),
|
||||
and a final colon is appended. This is followed by an arbitrary name. For
|
||||
example, the signer `example.com/mysigner` can be linked to a
|
||||
ClusterTrustBundle `example.com:mysigner:<arbitrary-name>`.
|
||||
-->
|
||||
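A sketch of such a signer-linked bundle (signer and certificate contents illustrative; the alpha API version shown may differ in your release):

```yaml
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
  name: "example.com:mysigner:foo"   # prefix derived from spec.signerName
spec:
  signerName: example.com/mysigner
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    ...one or more DER-serialized certificates, PEM-wrapped...
    -----END CERTIFICATE-----
```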
|
@ -841,8 +844,8 @@ spec:
|
|||
```
|
||||
|
||||
<!--
|
||||
They are primarily intended for cluster configuration use cases. Each
|
||||
signer-unlinked ClusterTrustBundle is an independent object, in contrast to the
|
||||
They are primarily intended for cluster configuration use cases.
|
||||
Each signer-unlinked ClusterTrustBundle is an independent object, in contrast to the
|
||||
customary grouping behavior of signer-linked ClusterTrustBundles.
|
||||
|
||||
Signer-unlinked ClusterTrustBundles have no `attest` verb requirement.
|
||||
|
@ -869,7 +872,8 @@ ClusterTrustBundle 的名称**必须不**包含英文冒号(`:`)。
|
|||
{{<feature-state for_k8s_version="v1.29" state="alpha" >}}
|
||||
|
||||
<!--
|
||||
The contents of ClusterTrustBundles can be injected into the container filesystem, similar to ConfigMaps and Secrets. See the [clusterTrustBundle projected volume source](/docs/concepts/storage/projected-volumes#clustertrustbundle) for more details.
|
||||
The contents of ClusterTrustBundles can be injected into the container filesystem, similar to ConfigMaps and Secrets.
|
||||
See the [clusterTrustBundle projected volume source](/docs/concepts/storage/projected-volumes#clustertrustbundle) for more details.
|
||||
-->
|
||||
ClusterTrustBundle 的内容可以注入到容器文件系统,这与 ConfigMap 和 Secret 类似。
|
||||
更多细节参阅 [ClusterTrustBundle 投射卷源](/zh-cn/docs/concepts/storage/projected-volumes#clustertrustbundle)。
|
||||
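A sketch of a Pod consuming a bundle that way (reusing the illustrative `example.com:mysigner:foo` bundle from above; image and mount path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ctb-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: trust-anchors
      mountPath: /etc/ssl/custom
      readOnly: true
  volumes:
  - name: trust-anchors
    projected:
      sources:
      - clusterTrustBundle:
          name: "example.com:mysigner:foo"
          path: bundle.pem   # written as one concatenated PEM file
```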
|
@ -1074,8 +1078,10 @@ kubectl config use-context myuser
|
|||
|
||||
<!--
|
||||
* Read [Manage TLS Certificates in a Cluster](/docs/tasks/tls/managing-tls-in-a-cluster/)
|
||||
* View the source code for the kube-controller-manager built in [signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
|
||||
* View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
|
||||
* View the source code for the kube-controller-manager built in
|
||||
[signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
|
||||
* View the source code for the kube-controller-manager built in
|
||||
[approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
|
||||
* For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
|
||||
* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)
|
||||
* Read about the ClusterTrustBundle API:
|
||||
|
|
|
@ -1088,7 +1088,7 @@ Allowed values are `Exact` or `Equivalent`.
|
|||
|
||||
<!--
|
||||
* `Exact` means a request should be intercepted only if it exactly matches a specified rule.
|
||||
* `Equivalent` means a request should be intercepted if modifies a resource listed in `rules`,
|
||||
* `Equivalent` means a request should be intercepted if it modifies a resource listed in `rules`,
|
||||
even via another API group or version.
|
||||
|
||||
In the example given above, the webhook that only registered for `apps/v1` could use `matchPolicy`:
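(The page's own example continues beyond this excerpt; a minimal sketch of such a registration, with an illustrative webhook name:)

```yaml
webhooks:
- name: my-webhook.example.com
  matchPolicy: Equivalent   # also intercept equivalent requests via other groups/versions
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"]
    scope: "Namespaced"
```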
|
||||
|
|
|
@ -1148,7 +1148,7 @@ This allows the cluster to repair accidental modifications, and helps to keep ro
|
|||
up-to-date as permissions and subjects change in new Kubernetes releases.
|
||||
|
||||
To opt out of this reconciliation, set the `rbac.authorization.kubernetes.io/autoupdate`
|
||||
annotation on a default cluster role or rolebinding to `false`.
|
||||
annotation on a default cluster role or default cluster RoleBinding to `false`.
|
||||
Be aware that missing default permissions and subjects can result in non-functional clusters.
|
||||
|
||||
Auto-reconciliation is enabled by default if the RBAC authorizer is active.
|
||||
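For example, to exempt one of the default bindings from reconciliation (`system:basic-user` is just one of the defaults):

```shell
kubectl annotate clusterrolebinding system:basic-user \
  rbac.authorization.kubernetes.io/autoupdate=false --overwrite
```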
|
@ -1160,7 +1160,7 @@ Auto-reconciliation is enabled by default if the RBAC authorizer is active.
|
|||
这种自动协商机制允许集群去修复一些不小心发生的修改,
|
||||
并且有助于保证角色和角色绑定在新的发行版本中有权限或主体变更时仍然保持最新。
|
||||
|
||||
如果要禁止此功能,请将默认 ClusterRole 以及 ClusterRoleBinding 的
|
||||
如果要禁止此功能,请将默认 ClusterRole 以及默认 ClusterRoleBinding 的
|
||||
`rbac.authorization.kubernetes.io/autoupdate` 注解设置成 `false`。
|
||||
注意,缺少默认权限和角色绑定主体可能会导致集群无法正常工作。
|
||||
|
||||
|
@ -1169,9 +1169,9 @@ Auto-reconciliation is enabled by default if the RBAC authorizer is active.
|
|||
<!--
|
||||
### API discovery roles {#discovery-roles}
|
||||
|
||||
Default role bindings authorize unauthenticated and authenticated users to read API information
|
||||
Default cluster role bindings authorize unauthenticated and authenticated users to read API information
|
||||
that is deemed safe to be publicly accessible (including CustomResourceDefinitions).
|
||||
To disable anonymous unauthenticated access, add `--anonymous-auth=false` to
|
||||
To disable anonymous unauthenticated access, add the `--anonymous-auth=false` flag to
|
||||
the API server configuration.
|
||||
|
||||
To view the configuration of these roles via `kubectl` run:
|
||||
|
@ -1179,8 +1179,8 @@ To view the configuration of these roles via `kubectl` run:
|
|||
### API 发现角色 {#discovery-roles}
|
||||
|
||||
无论是经过身份验证的还是未经过身份验证的用户,
|
||||
默认的角色绑定都授权他们读取被认为是可安全地公开访问的 API(包括 CustomResourceDefinitions)。
|
||||
如果要禁用匿名的未经过身份验证的用户访问,请在 API 服务器配置中中添加
|
||||
默认的集群角色绑定都授权他们读取被认为是可安全地公开访问的 API(包括 CustomResourceDefinitions)。
|
||||
如果要禁用匿名的未经过身份验证的用户访问,请在 API 服务器配置中添加
|
||||
`--anonymous-auth=false` 的配置选项。
|
||||
|
||||
通过运行命令 `kubectl` 可以查看这些角色的配置信息:
|
||||
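For instance, to inspect one of the default discovery roles (`system:discovery` is one of them):

```shell
kubectl get clusterrole system:discovery -o yaml
```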
|
|
|
@ -109,7 +109,7 @@ Kubernetes 区分用户账号和服务账号的概念,主要基于以下原因
|
|||
<!--
|
||||
## Bound service account tokens
|
||||
-->
|
||||
## 绑定的服务账户令牌 {#bound-service-account-tokens}
|
||||
## 绑定的服务账号令牌 {#bound-service-account-tokens}
|
||||
|
||||
<!--
|
||||
ServiceAccount tokens can be bound to API objects that exist in the kube-apiserver.
|
||||
|
@ -279,6 +279,27 @@ Here's an example of how that looks for a launched Pod:
|
|||
|
||||
以下示例演示如何查找已启动的 Pod:
|
||||
|
||||
<!--
|
||||
```yaml
|
||||
...
|
||||
- name: kube-api-access-<random-suffix>
|
||||
projected:
|
||||
sources:
|
||||
- serviceAccountToken:
|
||||
path: token # must match the path the app expects
|
||||
- configMap:
|
||||
items:
|
||||
- key: ca.crt
|
||||
path: ca.crt
|
||||
name: kube-root-ca.crt
|
||||
- downwardAPI:
|
||||
items:
|
||||
- fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: metadata.namespace
|
||||
path: namespace
|
||||
```
|
||||
-->
|
||||
```yaml
|
||||
...
|
||||
- name: kube-api-access-<随机后缀>
|
||||
|
@ -497,7 +518,7 @@ ensures a ServiceAccount named "default" exists in every active namespace.
|
|||
-->
|
||||
## 控制平面细节 {#control-plane-details}
|
||||
|
||||
### ServiceAccount 控制器 {#serviceaccount-controller}
|
||||
|
||||
ServiceAccount 控制器管理名字空间内的 ServiceAccount,
|
||||
并确保每个活跃的名字空间中都存在名为 `default` 的 ServiceAccount。
|
||||
|
@ -595,7 +616,7 @@ it does the following when a Pod is created:
|
|||
<!--
|
||||
### Legacy ServiceAccount token tracking controller
|
||||
-->
|
||||
### 传统 ServiceAccount 令牌追踪控制器
|
||||
### 传统 ServiceAccount 令牌追踪控制器 {#legacy-serviceaccount-token-tracking-controller}
|
||||
|
||||
{{< feature-state feature_gate_name="LegacyServiceAccountTokenTracking" >}}
|
||||
|
||||
|
@ -607,12 +628,12 @@ account tokens began to be monitored by the system.
|
|||
-->
|
||||
此控制器在 `kube-system` 命名空间中生成名为
|
||||
`kube-apiserver-legacy-service-account-token-tracking` 的 ConfigMap。
|
||||
这个 ConfigMap 记录了系统开始监视传统服务账户令牌的时间戳。
|
||||
这个 ConfigMap 记录了系统开始监视传统服务账号令牌的时间戳。
|
||||
|
||||
<!--
|
||||
### Legacy ServiceAccount token cleaner
|
||||
-->
|
||||
### 传统 ServiceAccount 令牌清理器
|
||||
### 传统 ServiceAccount 令牌清理器 {#legacy-serviceaccount-token-cleaner}
|
||||
|
||||
{{< feature-state feature_gate_name="LegacyServiceAccountTokenCleanUp" >}}
|
||||
|
||||
|
@ -713,6 +734,9 @@ kubelet 确保该卷包含允许容器作为正确 ServiceAccount 进行身份
|
|||
|
||||
以下示例演示如何查找已启动的 Pod:
|
||||
|
||||
<!--
|
||||
# decimal equivalent of octal 0644
|
||||
-->
|
||||
```yaml
|
||||
...
|
||||
- name: kube-api-access-<random-suffix>
|
||||
|
@ -871,6 +895,9 @@ Otherwise, first find the Secret for the ServiceAccount.
|
|||
-->
|
||||
否则,先找到 ServiceAccount 所用的 Secret。
|
||||
|
||||
<!--
|
||||
# This assumes that you already have a namespace named 'examplens'
|
||||
-->
|
||||
```shell
|
||||
# 此处假设你已有一个名为 'examplens' 的名字空间
|
||||
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
|
||||
|
|
|
@ -18,7 +18,6 @@ content_type: concept
|
|||
<!--
|
||||
This page provides an overview of Validating Admission Policy.
|
||||
-->
|
||||
|
||||
本页面提供验证准入策略(Validating Admission Policy)的概述。
|
||||
|
||||
<!-- body -->
|
||||
|
@ -63,7 +62,6 @@ A policy is generally made up of three resources:
|
|||
A native type such as ConfigMap or a CRD defines the schema of a parameter resource.
|
||||
`ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource.
|
||||
-->
|
||||
|
||||
- `ValidatingAdmissionPolicy` 描述策略的抽象逻辑(想想看:“这个策略确保一个特定标签被设置为一个特定值”)。
|
||||
|
||||
- 一个 `ValidatingAdmissionPolicyBinding` 将上述资源联系在一起,并提供作用域。
|
||||
|
@ -86,22 +84,12 @@ If a `ValidatingAdmissionPolicy` does not need to be configured via parameters,
|
|||
如果 `ValidatingAdmissionPolicy` 不需要参数配置,不设置 `ValidatingAdmissionPolicy` 中的
|
||||
`spec.paramKind` 即可。
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
<!--
|
||||
- Ensure the `ValidatingAdmissionPolicy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled.
|
||||
- Ensure that the `admissionregistration.k8s.io/v1beta1` API is enabled.
|
||||
-->
|
||||
- 确保 `ValidatingAdmissionPolicy` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)被启用。
|
||||
- 确保 `admissionregistration.k8s.io/v1beta1` API 被启用。
|
||||
|
||||
<!--
|
||||
## Getting Started with Validating Admission Policy
|
||||
|
||||
Validating Admission Policy is part of the cluster control-plane. You should write and deploy them
|
||||
with great caution. The following describes how to quickly experiment with Validating Admission Policy.
|
||||
-->
|
||||
|
||||
## 开始使用验证准入策略 {#getting-started-with-validating-admission-policy}
|
||||
|
||||
验证准入策略是集群控制平面的一部分。你应该非常谨慎地编写和部署它们。下面介绍如何快速试验验证准入策略。
|
||||
|
@ -179,9 +167,9 @@ The supported `validationActions` are:
|
|||
as a [warning](/blog/2020/09/03/warnings/).
|
||||
- `Audit`: Validation failure is included in the audit event for the API request.
|
||||
-->
|
||||
- `Deny`: 验证失败会导致请求被拒绝。
|
||||
- `Warn`: 验证失败会作为[警告](/blog/2020/09/03/warnings/)报告给请求客户端。
|
||||
- `Audit`: 验证失败会包含在 API 请求的审计事件中。
|
||||
- `Deny`:验证失败会导致请求被拒绝。
|
||||
- `Warn`:验证失败会作为[警告](/zh-cn/blog/2020/09/03/warnings/)报告给请求客户端。
|
||||
- `Audit`:验证失败会包含在 API 请求的审计事件中。
|
||||
|
||||
<!--
|
||||
For example, to both warn clients about a validation failure and to audit the
|
||||
|
@ -199,6 +187,7 @@ API response body and the HTTP warning headers.
|
|||
-->
|
||||
`Deny` 和 `Warn` 不能一起使用,因为这种组合会不必要地将验证失败重复输出到
|
||||
API 响应体和 HTTP 警告头中。
|
||||
|
||||
<!--
|
||||
A `validation` that evaluates to false is always enforced according to these
|
||||
actions. Failures defined by the `failurePolicy` are enforced
|
||||
|
@ -211,9 +200,9 @@ otherwise the failures are ignored.
|
|||
|
||||
<!--
|
||||
See [Audit Annotations: validation failures](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure) for more details about the validation failure audit annotation.
|
||||
-->
|
||||
有关验证失败审计注解的详细信息,请参见
|
||||
[审计注解:验证失败](/zh-cn/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation_failure)。
|
||||
-->
|
||||
有关验证失败审计注解的详细信息,
|
||||
请参见[审计注解:验证失败](/zh-cn/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation_failure)。
|
||||
|
||||
<!--
|
||||
### Parameter resources
|
||||
|
@ -225,7 +214,7 @@ and then a policy binding ties a policy by name (via policyName) to a particular
|
|||
If parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy
|
||||
with parameter configuration.
|
||||
-->
|
||||
### 参数资源
|
||||
### 参数资源 {#parameter-resources}
|
||||
|
||||
参数资源允许策略配置与其定义分开。
|
||||
一个策略可以定义 paramKind,给出参数资源的 GVK,
|
||||
|
@ -394,7 +383,9 @@ CEL 提供了 `has()` 方法,它检查传递给它的键是否存在。CEL 还
|
|||
|
||||
结合这两者,我们可以提供一种验证可选参数的方法:
|
||||
|
||||
`!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)`
|
||||
```
|
||||
!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)
|
||||
```
|
||||
|
||||
<!--
|
||||
Here, we first check that the optional parameter is present with `!has(params.optionalNumber)`.
|
||||
|
@ -500,7 +491,7 @@ admission policy are handled. Allowed values are `Ignore` or `Fail`.
|
|||
|
||||
Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:
|
||||
-->
|
||||
### 失效策略
|
||||
### 失效策略 {#failure-policy}
|
||||
|
||||
`failurePolicy` 定义了如何处理错误配置和准入策略的 CEL 表达式取值为 error 的情况。
|
||||
|
||||
|
@ -520,7 +511,14 @@ Note that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:
|
|||
To learn more, see the [CEL language specification](https://github.com/google/cel-spec).
|
||||
CEL expressions have access to the contents of the Admission request/response, organized into CEL
|
||||
variables as well as some other useful variables:
|
||||
-->
|
||||
### 检查表达式 {#validation-expression}
|
||||
|
||||
`spec.validations[i].expression` 代表将使用 CEL 来计算表达式。
|
||||
要了解更多信息,请参阅 [CEL 语言规范](https://github.com/google/cel-spec)。
|
||||
CEL 表达式可以访问按 CEL 变量来组织的 Admission 请求/响应的内容,以及其他一些有用的变量:
|
||||
|
||||
<!--
|
||||
- 'object' - The object from the incoming request. The value is null for DELETE requests.
|
||||
- 'oldObject' - The existing object. The value is null for CREATE requests.
|
||||
- 'request' - Attributes of the [admission request](/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest).
|
||||
|
@ -533,12 +531,6 @@ variables as well as some other useful variables:
|
|||
- `authorizer.requestResource` - A shortcut for an authorization check configured with the request
|
||||
resource (group, resource, (subresource), namespace, name).
|
||||
-->
|
||||
### 检查表达式
|
||||
|
||||
`spec.validations[i].expression` 代表将使用 CEL 来计算表达式。
|
||||
要了解更多信息,请参阅 [CEL 语言规范](https://github.com/google/cel-spec)。
|
||||
CEL 表达式可以访问按 CEL 变量来组织的 Admission 请求/响应的内容,以及其他一些有用的变量 :
|
||||
|
||||
- 'object' - 来自传入请求的对象。对于 DELETE 请求,该值为 null。
|
||||
- 'oldObject' - 现有对象。对于 CREATE 请求,该值为 null。
|
||||
- 'request' - [准入请求](/zh-cn/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest)的属性。
|
||||
|
@ -567,7 +559,7 @@ Concatenation on arrays with x-kubernetes-list-type use the semantics of the lis
|
|||
列表类型为 "set" 或 "map" 的数组上的等价关系比较会忽略元素顺序,即 [1, 2] == [2, 1]。
|
||||
使用 x-kubernetes-list-type 连接数组时使用列表类型的语义:
|
||||
|
||||
- 'set': `X + Y` 执行并集,其中 `X` 中所有元素的数组位置被保留,`Y` 中不相交的元素被追加,保留其元素的偏序关系。
|
||||
- 'set':`X + Y` 执行并集,其中 `X` 中所有元素的数组位置被保留,`Y` 中不相交的元素被追加,保留其元素的偏序关系。
|
||||
- 'map':`X + Y` 执行合并,保留 `X` 中所有键的数组位置,但是当 `X` 和 `Y` 的键集相交时,其值被 `Y` 的值覆盖。
|
||||
`Y` 中键值不相交的元素被追加,保留其元素之间的偏序关系。
|
||||
|
||||
|
@ -662,7 +654,7 @@ the request is determined as follows:
|
|||
|
||||
For example, here is an admission policy with an audit annotation:
|
||||
-->
|
||||
### 审计注解
|
||||
### 审计注解 {#audit-annotations}
|
||||
|
||||
`auditAnnotations` 可用于在 API 请求的审计事件中包括审计注解。
|
||||
|
||||
|
@ -733,7 +725,7 @@ message expression must evaluate to a string.
|
|||
For example, to better inform the user of the reason of denial when the policy refers to a parameter,
|
||||
we can have the following validation:
|
||||
-->
|
||||
### 消息表达式
|
||||
### 消息表达式 {#message-expression}
|
||||
|
||||
为了在策略拒绝请求时返回更友好的消息,我们在 `spec.validations[i].messageExpression`
|
||||
中使用 CEL 表达式来构造消息。
|
||||
|
@ -768,8 +760,7 @@ Note that static message is validated against multi-line strings.
|
|||
这比静态消息 "too many replicas" 更具说明性。
|
||||
|
||||
如果既定义了消息表达式,又在 `spec.validations[i].message` 中定义了静态消息,
|
||||
则消息表达式优先于静态消息。
|
||||
但是,如果消息表达式求值失败,则将使用静态消息。
|
||||
则消息表达式优先于静态消息。但是,如果消息表达式求值失败,则将使用静态消息。
|
||||
此外,如果消息表达式求值为多行字符串,则会丢弃求值结果并使用静态消息(如果存在)。
|
||||
请注意,静态消息也要检查是否存在多行字符串。
|
||||
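A sketch of that precedence inside a validation (values illustrative, echoing the replica-limit example used on this page):

```yaml
validations:
- expression: "object.spec.replicas <= params.maxReplicas"
  messageExpression: "'got ' + string(object.spec.replicas) + ' replicas, limit is ' + string(params.maxReplicas)"
  message: "too many replicas"   # fallback if messageExpression fails to evaluate
```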
|
||||
|
@ -786,7 +777,7 @@ and an empty `status.typeChecking` means that no errors were detected.
|
|||
|
||||
For example, given the following policy definition:
|
||||
-->
|
||||
### 类型检查
|
||||
### 类型检查 {#type-checking}
|
||||
|
||||
创建或更新策略定义时,验证过程将解析它包含的表达式,在发现错误时报告语法错误并拒绝该定义。
|
||||
之后,引用的变量将根据 `spec.matchConstraints` 的匹配类型检查类型错误,包括缺少字段和类型混淆。
|
||||
|
@ -855,7 +846,7 @@ Type Checking has the following limitation:
|
|||
|
||||
- 没有通配符匹配。
|
||||
如果 `spec.matchConstraints.resourceRules` 中的任何一个 `apiGroups`、`apiVersions`
|
||||
或 `resources` 包含 "\*",则不会检查与 "\*" 匹配的类型。
|
||||
或 `resources` 包含 `"\*"`,则不会检查与 `"\*"` 匹配的类型。
|
||||
- 匹配的类型数量最多为 10 种。这是为了防止手动指定过多类型的策略消耗过多计算资源。
|
||||
按升序处理组、版本,然后是资源,忽略第 11 个及其之后的组合。
|
||||
- 类型检查不会以任何方式影响策略行为。即使类型检查检测到错误,策略也将继续评估。
|
||||
|
@ -870,7 +861,7 @@ If an expression grows too complicated, or part of the expression is reusable an
|
|||
you can extract some part of the expressions into variables. A variable is a named expression that can be referred later
|
||||
in `variables` in other expressions.
|
||||
-->
|
||||
### 变量组合
|
||||
### 变量组合 {#variable-composition}
|
||||
|
||||
如果表达式变得太复杂,或者表达式的一部分可重用且进行评估时计算开销较大,可以将表达式的某些部分提取为变量。
|
||||
变量是一个命名表达式,后期可以在其他表达式中的 `variables` 中引用。
|
||||
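A self-contained sketch of a named variable reused in a validation (policy name, registry, and expressions illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "image-policy.example.com"
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  variables:
  - name: containers
    expression: "object.spec.template.spec.containers"
  validations:
  - expression: "variables.containers.all(c, c.image.startsWith('registry.example.com/'))"
    message: "all images must come from registry.example.com"
```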
|
|
|
@ -8,11 +8,7 @@ _build:
|
|||
stages:
|
||||
- stage: alpha
|
||||
defaultValue: false
|
||||
fromVersion: "1.26"
|
||||
toVersion: "1.26"
|
||||
- stage: beta
|
||||
defaultValue: false
|
||||
fromVersion: "1.27"
|
||||
fromVersion: "1.25"
|
||||
---
|
||||
<!--
|
||||
Enable support for the kubelet to receive container life cycle events from the
|
||||
|
|
|
@ -15,7 +15,7 @@ stages:
|
|||
Extend the kubelet's pod resources gRPC endpoint to
|
||||
include resources allocated in `ResourceClaims` via `DynamicResourceAllocation` API.
|
||||
See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) for more details.
|
||||
with informations about the allocatable resources, enabling clients to properly
|
||||
with information about the allocatable resources, enabling clients to properly
|
||||
track the free compute resources on a node.
|
||||
-->
|
||||
扩展 kubelet 的 Pod 资源 gRPC 端点以包括通过
|
||||
|
|
|
@ -1,23 +0,0 @@
|
|||
---
|
||||
title: KubeletPodResourcesDynamicResources
|
||||
content_type: feature_gate
|
||||
_build:
|
||||
list: never
|
||||
render: false
|
||||
|
||||
stages:
|
||||
- stage: alpha
|
||||
defaultValue: false
|
||||
fromVersion: "1.27"
|
||||
---
|
||||
|
||||
<!--
|
||||
Extend the kubelet's pod resources gRPC endpoint to
|
||||
to include resources allocated in `ResourceClaims` via `DynamicResourceAllocation` API.
|
||||
See [resource allocation reporting](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources) for more details.
|
||||
with information about the allocatable resources, enabling clients to properly
|
||||
track the free compute resources on a node.
|
||||
-->
|
||||
扩展 kubelet 的 Pod 资源 gRPC 端点,通过 `DynamicResourceAllocation` API 把已分配的资源算入 `ResourceClaims` 中。
|
||||
有关更多细节,参见[资源分配报告](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#monitoring-device-plugin-resources)。
|
||||
使用可分配资源的信息,使客户端能够正确跟踪节点上的空闲计算资源。
|
|
@ -33,6 +33,36 @@ Kubernetes API 中不使用以下注解。当你在集群中[启用审计](/zh-c
|
|||
{{</note>}}
|
||||
|
||||
<!-- body -->
|
||||
|
||||
<!--
|
||||
## k8s.io/deprecated
|
||||
|
||||
Example: `k8s.io/deprecated: "true"`
|
||||
|
||||
Value **must** be "true" or "false". The value "true" indicates that the
|
||||
request used a deprecated API version.
|
||||
-->
|
||||
## k8s.io/deprecated {#k8s-io-deprecated}
|
||||
|
||||
例子:`k8s.io/deprecated: "true"`
|
||||
|
||||
值**必须**为 "true" 或 "false"。值为 "true" 时表示该请求使用了已弃用的 API 版本。
|
||||
|
||||
<!--
|
||||
## k8s.io/removed-release
|
||||
|
||||
Example: `k8s.io/removed-release: "1.22"`
|
||||
|
||||
Value **must** be in the format "<major>.<minor>". It is set to the target removal release
|
||||
on requests made to deprecated API versions that have a target removal release.
|
||||
-->
|
||||
## k8s.io/removed-release {#k8s-io-removed-release}
|
||||
|
||||
例子:`k8s.io/removed-release: "1.22"`
|
||||
|
||||
值**必须**为 "<major>.<minor>" 的格式。当请求使用了已弃用的 API 版本时,
|
||||
该值会被设置为目标移除的版本。
|
||||
|
||||
<!--
|
||||
## pod-security.kubernetes.io/exempt
|
||||
|
||||
|
|
|
@ -782,10 +782,15 @@ either be a snapshot file from a previous backup operation, or from a remaining
|
|||
|
||||
<!--
|
||||
If `<data-dir-location>` is the same folder as before, delete it and stop the etcd process before restoring the cluster.
|
||||
Otherwise, change etcd configuration and restart the etcd process after restoration to have it use the new data directory.
|
||||
Otherwise, change etcd configuration and restart the etcd process after restoration to have it use the new data directory:
|
||||
first change `/etc/kubernetes/manifests/etcd.yaml`'s `volumes.hostPath.path` for `name: etcd-data` to `<data-dir-location>`,
|
||||
then execute `kubectl -n kube-system delete pod <name-of-etcd-pod>` or `systemctl restart kubelet.service` (or both).
|
||||
-->
|
||||
如果 `<data-dir-location>` 与之前的文件夹相同,请先删除此文件夹并停止 etcd 进程,再恢复集群。
|
||||
否则,在恢复后更改 etcd 配置并重启 etcd 进程将使用新的数据目录。
|
||||
否则,在恢复后更改 etcd 配置并重启 etcd 进程将使用新的数据目录:
|
||||
首先将 `/etc/kubernetes/manifests/etcd.yaml` 中 `name: etcd-data` 对应条目的
|
||||
`volumes.hostPath.path` 改为 `<data-dir-location>`,
|
||||
然后执行 `kubectl -n kube-system delete pod <name-of-etcd-pod>` 或 `systemctl restart kubelet.service`(或两段命令都执行)。
|
||||
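A sketch of that flow on a kubeadm-style cluster (paths and names vary per cluster; `<data-dir-location>` is the placeholder used above):

```shell
# Restore the snapshot into the new data directory
etcdctl snapshot restore snapshot.db --data-dir <data-dir-location>

# Point the static-pod manifest's etcd-data hostPath at the new directory:
# edit /etc/kubernetes/manifests/etcd.yaml -> volumes.hostPath.path

kubectl -n kube-system delete pod <name-of-etcd-pod>
# or, if the kubelet does not recreate it promptly:
systemctl restart kubelet.service
```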
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
|
|
@ -80,7 +80,7 @@ kubectl get pods dnsutils
|
|||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
dnsutils 1/1 Running 0 <some-time>
|
||||
```
|
||||
|
||||
|
|
|
@ -833,6 +833,7 @@ them. The list of masked and read-only paths are as follows:
|
|||
- `/proc/sched_debug`
|
||||
- `/proc/scsi`
|
||||
- `/sys/firmware`
|
||||
- `/sys/devices/virtual/powercap`
|
||||
|
||||
<!--
|
||||
- Read-Only Paths:
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1alpha1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicyBinding
|
||||
metadata:
|
||||
name: "demo-binding-test.example.com"
|
||||
|
@ -8,4 +8,4 @@ spec:
|
|||
matchResources:
|
||||
namespaceSelector:
|
||||
matchLabels:
|
||||
environment: test
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicy
|
||||
metadata:
|
||||
name: "demo-policy.example.com"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicyBinding
|
||||
metadata:
|
||||
name: "replicalimit-binding-nontest"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicyBinding
|
||||
metadata:
|
||||
name: "replicalimit-binding-test.example.com"
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicy
|
||||
spec:
|
||||
...
|
||||
failurePolicy: Ignore # The default is "Fail"
|
||||
failurePolicy: Ignore # 默认为 "Fail"
|
||||
validations:
|
||||
- expression: "object.spec.xyz == params.x"
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicy
|
||||
metadata:
|
||||
name: "replicalimit-policy.example.com"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicy
|
||||
metadata:
|
||||
name: "replica-policy.example.com"
|
||||
|
@ -10,6 +10,6 @@ spec:
|
|||
operations: ["CREATE", "UPDATE"]
|
||||
resources: ["deployments","replicasets"]
|
||||
validations:
|
||||
- expression: "object.replicas > 1" # should be "object.spec.replicas > 1"
|
||||
- expression: "object.replicas > 1" # 应为 "object.spec.replicas > 1"
|
||||
message: "must be replicated"
|
||||
reason: Invalid
|
|
@ -1,4 +1,4 @@
|
|||
apiVersion: admissionregistration.k8s.io/v1beta1
|
||||
apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingAdmissionPolicy
|
||||
metadata:
|
||||
name: "deploy-replica-policy.example.com"
|
||||
|
@ -10,6 +10,6 @@ spec:
|
|||
operations: ["CREATE", "UPDATE"]
|
||||
resources: ["deployments"]
|
||||
validations:
|
||||
- expression: "object.replicas > 1" # should be "object.spec.replicas > 1"
|
||||
- expression: "object.replicas > 1" # 应为 "object.spec.replicas > 1"
|
||||
message: "must be replicated"
|
||||
reason: Invalid
|
||||
|
|
|
@ -409,6 +409,9 @@ other = "这篇文章已经一年多了,较旧的文章可能包含过时的
|
|||
[patch_release]
|
||||
other = "补丁版本"
|
||||
|
||||
[post_byline_by]
|
||||
other = "作者"
|
||||
|
||||
[post_create_issue]
|
||||
other = "登记一个问题"
|
||||
|
||||
|
@ -568,6 +571,9 @@ other = """<p>本页面中的条目引用了第三方产品或项目,这些产
|
|||
[thirdparty_message_vendor]
|
||||
other = """本页面中的条目引用了 Kubernetes 外部的供应商。Kubernetes 项目的开发人员不对这些第三方产品(项目)负责。要将供应商、产品或项目添加到此列表中,请在提交更改之前阅读<a href="/zh-cn/docs/contribute/style/content-guide/#third-party-content">内容指南</a>。<a href="#third-party-content-disclaimer">更多信息。</a>"""
|
||||
|
||||
[translated_by]
|
||||
other = "译者"
|
||||
|
||||
[ui_search_placeholder]
|
||||
other = "搜索"
|
||||
|
||||
|
|
|
@ -68,31 +68,41 @@
|
|||
else return "";
|
||||
}
|
||||
|
||||
if (getCookie("is_china") === "") {
|
||||
$.ajax({
|
||||
url: "https://ipinfo.io?token=796e43f4f146b1",
|
||||
dataType: "jsonp",
|
||||
success: function (response) {
|
||||
if (response.country == 'CN') {
|
||||
window.renderPageFindSearchResults()
|
||||
document.cookie = "is_china=true;" + path + expires
|
||||
} else {
|
||||
window.renderGoogleSearchResults()
|
||||
document.cookie = "is_china=false;" + path + expires;
|
||||
}
|
||||
},
|
||||
error: function () {
|
||||
window.renderPageFindSearchResults()
|
||||
document.cookie = "is_china=true;" + path + expires;
|
||||
},
|
||||
timeout: 3000
|
||||
});
|
||||
} else if (getCookie("is_china") == "true") {
|
||||
window.addEventListener('DOMContentLoaded', (event) => {
|
||||
window.renderPageFindSearchResults()
|
||||
});
|
||||
} else {
|
||||
window.renderGoogleSearchResults()
|
||||
async function checkBlockedSite(url) {
|
||||
const controller = new AbortController();
|
||||
const timeout = setTimeout(() => {
|
||||
controller.abort();
|
||||
}, 5000); // Timeout set to 5000ms (5 seconds)
|
||||
|
||||
try {
|
||||
const response = await fetch(url, { method: 'HEAD', mode: 'no-cors', signal: controller.signal });
|
||||
// If we reach this point, the site is accessible (since mode: 'no-cors' doesn't allow us to check response.ok)
|
||||
clearTimeout(timeout);
|
||||
return false;
|
||||
} catch (error) {
|
||||
// If an error occurs, it's likely the site is blocked
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
async function loadSearch() {
|
||||
if (getCookie("can_google") === "") {
|
||||
const isGoogleBlocked = await checkBlockedSite("https://www.google.com/favicon.ico");
|
||||
if ( isGoogleBlocked ) {
|
||||
// Google is blocked.
|
||||
document.cookie = "can_google=false;" + path + expires
|
||||
window.renderPageFindSearchResults()
|
||||
} else {
|
||||
// Google is not blocked.
|
||||
document.cookie = "can_google=true;" + path + expires
|
||||
window.renderGoogleSearchResults()
|
||||
}
|
||||
} else if (getCookie("can_google") == "false") {
|
||||
window.renderPageFindSearchResults()
|
||||
} else {
|
||||
window.renderGoogleSearchResults()
|
||||
}
|
||||
}
|
||||
|
||||
window.onload = loadSearch;
|
||||
|
||||
|
|