**Author:** Tabitha Sable (Kubernetes SIG Security)
{{% pageinfo color="primary" %}}
**Update:** *With the release of Kubernetes v1.25, PodSecurityPolicy has been removed.*
*You can read more information about the removal of PodSecurityPolicy in the
[Kubernetes 1.25 release notes](/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes).*
{{% /pageinfo %}}

PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week.
This starts the countdown to its removal, but doesn’t change anything else.
PodSecurityPolicy will continue to be fully functional for several more releases before being removed completely.
In the meantime, we are developing a replacement for PSP that covers key use cases more easily and sustainably.

What are Pod Security Policies? Why did we need them? Why are they going away, and what’s next?
How does this affect you? These key questions come to mind as we prepare to say goodbye to PSP,
so let’s walk through them together. We’ll start with an overview of how features get removed from Kubernetes.

## What does deprecation mean in Kubernetes?

Whenever a Kubernetes feature is set to go away, our [deprecation policy](/docs/reference/using-api/deprecation-policy/)
is our guide. First the feature is marked as deprecated, then after enough time has passed, it can finally be removed.

Kubernetes 1.21 starts the deprecation process for PodSecurityPolicy. As with all feature deprecations,
PodSecurityPolicy will continue to be fully functional for several more releases.
The current plan is to remove PSP from Kubernetes in the 1.25 release.

Until then, PSP is still PSP. There will be at least a year during which the newest Kubernetes releases will
still support PSP, and nearly two years until PSP will pass fully out of all supported Kubernetes versions.

## What is PodSecurityPolicy?

[PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) is
a built-in [admission controller](/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/)
that allows a cluster administrator to control security-sensitive aspects of the Pod specification.

First, one or more PodSecurityPolicy resources are created in a cluster to define the requirements Pods must meet.
Then, RBAC rules are created to control which PodSecurityPolicy applies to a given pod.
If a pod meets the requirements of its PSP, it will be admitted to the cluster as usual.
In some cases, PSP can also modify Pod fields, effectively creating new defaults for those fields.
If a Pod does not meet the PSP requirements, it is rejected, and cannot run.

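To make this concrete, here is a sketch (not from the original post; all names are illustrative) of a minimal PSP that forbids privileged containers, together with the RBAC that authorizes a namespace's service account to use it:

```yaml
# Hypothetical PSP: forbid privileged containers, allow only a few volume types
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted   # illustrative name
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ['configMap', 'secret', 'emptyDir']
---
# RBAC: grant "use" on that PSP...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-restricted-psp-user
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['example-restricted']
---
# ...to the default service account in an example namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-restricted-psp-user
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-restricted-psp-user
subjects:
- kind: ServiceAccount
  name: default
  namespace: demo
```

Pods created by that service account would then be checked (and possibly defaulted) against `example-restricted` at admission time.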

One more important thing to know about PodSecurityPolicy: it’s not the same as
[PodSecurityContext](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context).

A part of the Pod specification, PodSecurityContext (and its per-container counterpart `SecurityContext`)
is the collection of fields that specify many of the security-relevant settings for a Pod.
The security context dictates to the kubelet and container runtime how the Pod should actually be run.
In contrast, the PodSecurityPolicy only constrains (or defaults) the values that may be set on the security context.

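For illustration (a hypothetical Pod, not from the original post), security context fields live directly in the Pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo   # illustrative name
spec:
  securityContext:              # Pod-level PodSecurityContext
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sleep", "3600"]
    securityContext:            # per-container SecurityContext
      allowPrivilegeEscalation: false
```

A PSP doesn't run anything; it only decides whether values like these are allowed, or fills them in when they're absent.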

The deprecation of PSP does not affect PodSecurityContext in any way.

## Why did we need PodSecurityPolicy?

In Kubernetes, we define resources such as Deployments, StatefulSets, and Services that
represent the building blocks of software applications. The various controllers inside
a Kubernetes cluster react to these resources, creating further Kubernetes resources or
configuring some software or hardware to accomplish our goals.


In most Kubernetes clusters,
RBAC (Role-Based Access Control) [rules](/docs/reference/access-authn-authz/rbac/#role-and-clusterrole)
control access to these resources. `list`, `get`, `create`, `edit`, and `delete` are
the sorts of API operations that RBAC cares about,
but _RBAC does not consider what settings are being put into the resources it controls_.
For example, a Pod can be almost anything from a simple webserver to
a privileged command prompt offering full access to the underlying server node and all the data.
It’s all the same to RBAC: a Pod is a Pod is a Pod.

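A sketch of an RBAC Role (names are illustrative) shows why: a rule names only verbs and resources, never the contents of the Pod spec:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator   # illustrative name
  namespace: demo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list"]
  # There is no field here that can say "but only unprivileged Pods" --
  # that kind of inspection is admission control's job, not RBAC's.
```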

To control what sorts of settings are allowed in the resources defined in your cluster,
you need Admission Control in addition to RBAC. Since Kubernetes 1.3,
PodSecurityPolicy has been the built-in way to do that for security-related Pod fields.
Using PodSecurityPolicy, you can prevent “create Pod” from automatically meaning “root on every cluster node,”
without needing to deploy additional external admission controllers.

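For reference, PodSecurityPolicy is wired in like any other built-in admission controller, via the kube-apiserver `--enable-admission-plugins` flag (a sketch; the full flag set and how you edit it vary by cluster and distribution):

```shell
# Sketch: the PSP admission plugin enabled on the API server
# (all other kube-apiserver flags omitted for brevity)
kube-apiserver --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
```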
## Why is PodSecurityPolicy going away?

In the years since PodSecurityPolicy was first introduced, we have realized that
PSP has some serious usability problems that can’t be addressed without making breaking changes.


The way PSPs are applied to Pods has proven confusing to nearly everyone that has attempted to use them.
It is easy to accidentally grant broader permissions than intended,
and difficult to inspect which PSP(s) apply in a given situation. The “changing Pod defaults” feature can be handy,
but is only supported for certain Pod settings and it’s not obvious when they will or will not apply to your Pod.
Without a “dry run” or audit mode, it’s impractical to retrofit PSP to existing clusters safely,
and it’s impossible for PSP to ever be enabled by default.

For more information about these and other PSP difficulties, check out
SIG Auth’s KubeCon NA 2019 Maintainer Track session video:
{{< youtube "SFtHRmPuhEw?start=953" youtube-quote-sm >}}

Today, you’re not limited only to deploying PSP or writing your own custom admission controller.
Several external admission controllers are available that incorporate lessons learned from PSP to
provide a better user experience. [K-Rail](https://github.com/cruise-automation/k-rail),
[Kyverno](https://github.com/kyverno/kyverno/), and
[OPA/Gatekeeper](https://github.com/open-policy-agent/gatekeeper/) are all well-known, and each has its fans.


Although there are other good options available now, we believe there is still value in
having a built-in admission controller available as a choice for users. With this in mind,
we turn toward building what’s next, inspired by the lessons learned from PSP.

## What’s next?

Kubernetes SIG Security, SIG Auth, and a diverse collection of other community members
have been working together for months to ensure that what’s coming next is going to be awesome.
We have developed a Kubernetes Enhancement Proposal ([KEP 2579](https://github.com/kubernetes/enhancements/issues/2579))
and a prototype for a new feature, currently being called by the temporary name "PSP Replacement Policy."
We are targeting an Alpha release in Kubernetes 1.22.

PSP Replacement Policy starts with the realization that
since there is a robust ecosystem of external admission controllers already available,
PSP’s replacement doesn’t need to be all things to all people.
Simplicity of deployment and adoption is the key advantage a built-in admission controller has
compared to an external webhook, so we have focused on how to best utilize that advantage.

PSP Replacement Policy is designed to be as simple as practically possible
while providing enough flexibility to really be useful in production at scale.
It has soft rollout features to enable retrofitting it to existing clusters,
and is configurable enough that it can eventually be active by default.
It can be deactivated partially or entirely, to coexist with external admission controllers for advanced use cases.

## What does this mean for you?

What this all means for you depends on your current PSP situation.
If you’re already using PSP, there’s plenty of time to plan your next move.
Please review the PSP Replacement Policy KEP and think about how well it will suit your use case.

If you’re making extensive use of the flexibility of PSP with numerous PSPs and complex binding rules,
you will likely find the simplicity of PSP Replacement Policy too limiting.
Use the next year to evaluate the other admission controller choices in the ecosystem.
There are resources available to ease this transition,
such as the [Gatekeeper Policy Library](https://github.com/open-policy-agent/gatekeeper-library).

If your use of PSP is relatively simple, with a few policies and straightforward binding to
service accounts in each namespace, you will likely find PSP Replacement Policy to be a good match for your needs.
Evaluate your PSPs compared to the Kubernetes [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
to get a feel for where you’ll be able to use the Restricted, Baseline, and Privileged policies.
Please follow along with or contribute to the KEP and subsequent development,
and try out the Alpha release of PSP Replacement Policy when it becomes available.

If you’re just beginning your PSP journey, you will save time and effort by keeping it simple.
You can approximate the functionality of PSP Replacement Policy today by using the Pod Security Standards’ PSPs.
If you set the cluster default by binding a Baseline or Restricted policy to the `system:serviceaccounts` group,
and then make a more-permissive policy available as needed in certain
Namespaces [using ServiceAccount bindings](/docs/concepts/policy/pod-security-policy/#run-another-pod),
you will avoid many of the PSP pitfalls and have an easy migration to PSP Replacement Policy.
If your needs are much more complex than this, your effort is probably better spent adopting
one of the more fully-featured external admission controllers mentioned above.

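That cluster-wide default could be sketched like this, assuming a PSP named `baseline` that is modeled on the Baseline Pod Security Standard (all names here are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-baseline-user
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['baseline']   # a PSP modeled on the Baseline standard
---
# Binding to the system:serviceaccounts group makes this the cluster default
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-psp-baseline
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-baseline-user
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
```

A more-permissive PSP can then be granted per-Namespace with a RoleBinding to specific ServiceAccounts, as described above.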

We’re dedicated to making Kubernetes the best container orchestration tool we can,
and sometimes that means we need to remove longstanding features to make space for better things to come.
When that happens, the Kubernetes deprecation policy ensures you have plenty of time to plan your next move.
In the case of PodSecurityPolicy, several options are available to suit a range of needs and use cases.
Start planning ahead now for PSP’s eventual removal, and please consider contributing to its replacement! Happy securing!

**Acknowledgment:** It takes a wonderful group to make wonderful software.
Thanks are due to everyone who has contributed to the PSP replacement effort,
especially (in alphabetical order) Tim Allclair, Ian Coldwater, and Jordan Liggitt.
It’s been a joy to work with y’all on this.


queue excess requests, or
* `time-out`, indicating that the request was still in the queue
  when its queuing time limit expired.
* `cancelled`, indicating that the request is not purge locked
  and has been ejected from the queue.

* `apiserver_flowcontrol_dispatched_requests_total` is a counter
  vector (cumulative since server start) of requests that began


SIG Docs [Reviewers](/docs/contribute/participate/#reviewers) and
[Approvers](/docs/contribute/participate/#approvers) do a few extra things
when reviewing a change.

Every week a specific docs approver volunteers to triage and review pull requests.
This person is the "PR Wrangler" for the week. See the
[PR Wrangler scheduler](https://github.com/kubernetes/website/wiki/PR-Wranglers)
for more information. To become a PR Wrangler, attend the weekly SIG Docs meeting
and volunteer. Even if you are not on the schedule for the current week, you can
still review pull requests (PRs) that are not already under active review.

In addition to the rotation, a bot assigns reviewers and approvers
for the PR based on the owners for the affected files.
## Reviewing a PR

Kubernetes documentation follows the
[Kubernetes code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process).

Everything described in [Reviewing a pull request](/docs/contribute/review/reviewing-prs)
applies, but Reviewers and Approvers should also do the following:

- Using the `/assign` Prow command to assign a specific reviewer to a PR as needed.
  This is extra important when it comes to requesting technical review from code contributors.

{{< note >}}
Look at the `reviewers` field in the front-matter at the top of a Markdown file to see who can
provide technical review.
{{< /note >}}

- Making sure the PR follows the [Content](/docs/contribute/style/content-guide/)
  and [Style](/docs/contribute/style/style-guide/) guides; link the author to the
  relevant part of the guide(s) if it doesn't.
- Using the GitHub **Request Changes** option when applicable to suggest changes to the PR author.
- Changing your review status in GitHub using the `/approve` or `/lgtm` Prow commands,
  if your suggestions are implemented.

## Commit into another person's PR
[Prow](https://github.com/kubernetes/test-infra/blob/master/prow/README.md) is
the Kubernetes-based CI/CD system that runs jobs against pull requests (PRs). Prow
enables chatbot-style commands to handle GitHub actions across the Kubernetes
organization, like [adding and removing labels](#adding-and-removing-issue-labels),
closing issues, and assigning an approver. Enter Prow commands as GitHub comments
using the `/<command-name>` format.

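For example (illustrative usernames), a reviewer's comments on a PR might read:

```
/assign @jane-doe
/lgtm
```

Each command goes on its own line in a regular GitHub comment.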

The most common Prow commands reviewers and approvers use are:


## Triage and categorize issues

In general, SIG Docs follows the
[Kubernetes issue triage](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md)
process and uses the same labels.

This GitHub Issue [filter](https://github.com/kubernetes/website/issues?q=is%3Aissue+is%3Aopen+-label%3Apriority%2Fbacklog+-label%3Apriority%2Fimportant-longterm+-label%3Apriority%2Fimportant-soon+-label%3Atriage%2Fneeds-information+-label%3Atriage%2Fsupport+sort%3Acreated-asc)
finds issues that might need triage.

### Triaging an issue

1. Validate the issue
   - Make sure the issue is about website documentation. Some issues can be closed quickly by
     answering a question or pointing the reporter to a resource. See the
     [Support requests or code bug reports](#support-requests-or-code-bug-reports) section for details.
   - Assess whether the issue has merit.
   - Add the `triage/needs-information` label if the issue doesn't have enough
     detail to be actionable or the template is not filled out adequately.
   - Close the issue if it has both the `lifecycle/stale` and `triage/needs-information` labels.

2. Add a priority label (the
   [Issue Triage Guidelines](https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority)
   define priority labels in detail)

{{< table caption="Issue labels" >}}
Label | Description

In both cases, the label must already exist. If you try to add a label that does not exist, the command is
silently ignored.

For a list of all labels, see the [website repository's Labels section](https://github.com/kubernetes/website/labels).
Not all labels are used by SIG Docs.

### Issue lifecycle labels
### Dead link issues

If the dead link issue is in the API or `kubectl` documentation, assign them
`/priority critical-urgent` until the problem is fully understood. Assign all
other dead link issues `/priority important-longterm`, as they must be manually fixed.

### Blog issues
If this is a documentation issue, please re-open this issue.
```
### Squashing

As an approver, when you review pull requests (PRs), there are various cases
where you might do the following:

- Advise the contributor to squash their commits.
- Squash the commits for the contributor.
- Advise the contributor not to squash yet.
- Prevent squashing.

**Advising contributors to squash**: A new contributor might not know that they
should squash commits in their pull requests (PRs). If this is the case, advise
them to do so, provide links to useful information, and offer to arrange help if
they need it. Some useful links:

- [Opening pull requests and squashing your commits](/docs/contribute/new-content/open-a-pr#squashing-commits)
  for documentation contributors.
- [GitHub Workflow](https://www.k8s.dev/docs/guide/github-workflow/), including diagrams, for developers.

**Squashing commits for contributors**: If a contributor might have difficulty
squashing commits or there is time pressure to merge a PR, you can perform the
squash for them:

- The kubernetes/website repo is
  [configured to allow squashing for pull request merges](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/configuring-commit-squashing-for-pull-requests).
  Select the *Squash commits* button.
- In the PR, if the contributor enables maintainers to manage the PR, you can
  squash their commits and update their fork with the result. Before you squash,
  advise them to save and push their latest changes to the PR. After you squash,
  advise them to pull the squashed commit to their local clone.
- You can get GitHub to squash the commits by using a label so that Tide / GitHub
  performs the squash or by clicking the *Squash commits* button when you merge the PR.
|
||||
|
||||
**Advise contributors to avoid squashing**
|
||||
|
||||
- If one commit does something broken or unwise, and the last commit reverts this
|
||||
error, don't squash the commits. Even though the "Files changed" tab in the PR
|
||||
on GitHub and the Netlify preview will both look OK, merging this PR might create
|
||||
rebase or merge conflicts for other folks. Intervene as you see fit to avoid that
|
||||
risk to other contributors.
|
||||
|
||||
**Never squash**
|
||||
|
||||
- If you're launching a localization or releasing the docs for a new version,
|
||||
you are merging in a branch that's not from a user's fork, _never squash the commits_.
|
||||
Not squashing is essential because you must maintain the commit history for those files.
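The mechanics of squashing described above can be illustrated with a throwaway repository. This is a minimal sketch (all names here are illustrative); `git reset --soft` keeps the working tree and index, so one new commit can replace the last three:

```shell
# Sketch: squash the last three commits into one, in a scratch repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
for i in 1 2 3 4; do echo "$i" > "file$i"; git add "file$i"; git commit -qm "commit $i"; done

git reset --soft HEAD~3                  # keep the changes, drop the last 3 commits
git commit -qm "Address review feedback" # one commit now contains all three changes
git rev-list --count HEAD                # prints 2: the base commit plus the squash
```

On a real PR branch you would finish with `git push --force-with-lease` to update the pull request; contributors then pull the squashed commit to their local clone, as described above.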
@@ -77,6 +77,7 @@ operator to use or manage a cluster.

* [kubeconfig (v1)](/docs/reference/config-api/kubeconfig.v1/)
* [kube-apiserver admission (v1)](/docs/reference/config-api/apiserver-admission.v1/)
* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/),
  [kube-apiserver configuration (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/), and
  [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
* [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/)
* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/)
@@ -0,0 +1,268 @@
---
title: kube-apiserver Configuration (v1beta1)
content_type: tool-reference
package: apiserver.k8s.io/v1beta1
auto_generated: true
---
<p>Package v1beta1 is the v1beta1 version of the API.</p>

## Resource Types

- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)

## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration}

<p>EgressSelectorConfiguration provides versioned configuration for egress selector clients.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>apiserver.k8s.io/v1beta1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>EgressSelectorConfiguration</code></td></tr>
<tr><td><code>egressSelections</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-EgressSelection"><code>[]EgressSelection</code></a>
</td>
<td>
<p>egressSelections contains a list of egress selection client configurations.</p>
</td>
</tr>
</tbody>
</table>

## `Connection` {#apiserver-k8s-io-v1beta1-Connection}

**Appears in:**

- [EgressSelection](#apiserver-k8s-io-v1beta1-EgressSelection)

<p>Connection provides the configuration for a single egress selection client.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>proxyProtocol</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-ProtocolType"><code>ProtocolType</code></a>
</td>
<td>
<p>Protocol is the protocol used to connect from client to the konnectivity server.</p>
</td>
</tr>
<tr><td><code>transport</code><br/>
<a href="#apiserver-k8s-io-v1beta1-Transport"><code>Transport</code></a>
</td>
<td>
<p>Transport defines the transport configuration used to dial the konnectivity server.
This is required if ProxyProtocol is HTTPConnect or GRPC.</p>
</td>
</tr>
</tbody>
</table>

## `EgressSelection` {#apiserver-k8s-io-v1beta1-EgressSelection}

**Appears in:**

- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)

<p>EgressSelection provides the configuration for a single egress selection client.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>name is the name of the egress selection.
Currently supported values are "controlplane", "master", "etcd" and "cluster".
The "master" egress selector is deprecated in favor of "controlplane".</p>
</td>
</tr>
<tr><td><code>connection</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-Connection"><code>Connection</code></a>
</td>
<td>
<p>connection is the exact information used to configure the egress selection.</p>
</td>
</tr>
</tbody>
</table>

## `ProtocolType` {#apiserver-k8s-io-v1beta1-ProtocolType}

(Alias of `string`)

**Appears in:**

- [Connection](#apiserver-k8s-io-v1beta1-Connection)

<p>ProtocolType is a set of valid values for Connection.ProtocolType.</p>

## `TCPTransport` {#apiserver-k8s-io-v1beta1-TCPTransport}

**Appears in:**

- [Transport](#apiserver-k8s-io-v1beta1-Transport)

<p>TCPTransport provides the information to connect to the konnectivity server via TCP.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>url</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>URL is the location of the konnectivity server to connect to.
As an example it might be "https://127.0.0.1:8131".</p>
</td>
</tr>
<tr><td><code>tlsConfig</code><br/>
<a href="#apiserver-k8s-io-v1beta1-TLSConfig"><code>TLSConfig</code></a>
</td>
<td>
<p>TLSConfig is the config needed to use TLS when connecting to the konnectivity server.</p>
</td>
</tr>
</tbody>
</table>

## `TLSConfig` {#apiserver-k8s-io-v1beta1-TLSConfig}

**Appears in:**

- [TCPTransport](#apiserver-k8s-io-v1beta1-TCPTransport)

<p>TLSConfig provides the authentication information to connect to the konnectivity server.
Only used with TCPTransport.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>caBundle</code><br/>
<code>string</code>
</td>
<td>
<p>caBundle is the file location of the CA to be used to determine trust with the konnectivity server.
Must be absent/empty if TCPTransport.URL is prefixed with http://.
If absent while TCPTransport.URL is prefixed with https://, defaults to system trust roots.</p>
</td>
</tr>
<tr><td><code>clientKey</code><br/>
<code>string</code>
</td>
<td>
<p>clientKey is the file location of the client key to be used in mTLS handshakes with the konnectivity server.
Must be absent/empty if TCPTransport.URL is prefixed with http://.
Must be configured if TCPTransport.URL is prefixed with https://.</p>
</td>
</tr>
<tr><td><code>clientCert</code><br/>
<code>string</code>
</td>
<td>
<p>clientCert is the file location of the client certificate to be used in mTLS handshakes with the konnectivity server.
Must be absent/empty if TCPTransport.URL is prefixed with http://.
Must be configured if TCPTransport.URL is prefixed with https://.</p>
</td>
</tr>
</tbody>
</table>

## `Transport` {#apiserver-k8s-io-v1beta1-Transport}

**Appears in:**

- [Connection](#apiserver-k8s-io-v1beta1-Connection)

<p>Transport defines the transport configurations used to dial the konnectivity server.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>tcp</code><br/>
<a href="#apiserver-k8s-io-v1beta1-TCPTransport"><code>TCPTransport</code></a>
</td>
<td>
<p>TCP is the TCP configuration for communicating with the konnectivity server via TCP.
ProxyProtocol of GRPC is not supported with TCP transport at the moment.
Requires at least one of TCP or UDS to be set.</p>
</td>
</tr>
<tr><td><code>uds</code><br/>
<a href="#apiserver-k8s-io-v1beta1-UDSTransport"><code>UDSTransport</code></a>
</td>
<td>
<p>UDS is the UDS configuration for communicating with the konnectivity server via UDS.
Requires at least one of TCP or UDS to be set.</p>
</td>
</tr>
</tbody>
</table>

## `UDSTransport` {#apiserver-k8s-io-v1beta1-UDSTransport}

**Appears in:**

- [Transport](#apiserver-k8s-io-v1beta1-Transport)

<p>UDSTransport provides the information to connect to the konnectivity server via UDS.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>udsName</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
<p>UDSName is the name of the unix domain socket to connect to the konnectivity server.
This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket)</p>
</td>
</tr>
</tbody>
</table>
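To make the field reference concrete, here is a minimal sketch of an `EgressSelectorConfiguration` manifest that routes "cluster" egress traffic over a Unix domain socket; the socket path is illustrative, not a required value:

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster               # one of "controlplane", "etcd", "cluster"
  connection:
    proxyProtocol: GRPC       # transport is required for GRPC or HTTPConnect
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```

The kube-apiserver is pointed at a file like this with its `--egress-selector-config-file` flag.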
@@ -104,6 +104,53 @@ The cluster autoscaler never evicts Pods that have this annotation explicitly set to
`"false"`; you could set that on an important Pod that you want to keep running.
If this annotation is not set, then the cluster autoscaler follows its Pod-level behavior.

### config.kubernetes.io/local-config

Example: `config.kubernetes.io/local-config: "true"`

Used on: All objects

This annotation is used in manifests to mark an object as local configuration that should not be submitted to the Kubernetes API.

A value of `"true"` for this annotation declares that the object is only consumed by client-side tooling and should not be submitted to the API server.

A value of `"false"` can be used to declare that the object should be submitted to the API server even when it would otherwise be assumed to be local.

This annotation is part of the Kubernetes Resource Model (KRM) Functions Specification, which is used by Kustomize and similar third-party tools. For example, Kustomize removes objects with this annotation from its final build output.
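As an illustration, a client-side helper object might be marked local-only like this (the ConfigMap name and data are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: build-settings        # illustrative name
  annotations:
    config.kubernetes.io/local-config: "true"   # consumed by client-side tooling only
data:
  environment: staging
```

A tool such as Kustomize can read this object during a build but will omit it from the rendered output that gets applied to the cluster.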

### internal.config.kubernetes.io/* (reserved prefix) {#internal.config.kubernetes.io-reserved-wildcard}

Used on: All objects

This prefix is reserved for internal use by tools that act as orchestrators in accordance with the Kubernetes Resource Model (KRM) Functions Specification. Annotations with this prefix are internal to the orchestration process and are not persisted to the manifests on the filesystem. In other words, the orchestrator tool should set these annotations when reading files from the local filesystem and remove them when writing the output of functions back to the filesystem.

A KRM function **must not** modify annotations with this prefix, unless otherwise specified for a given annotation. This enables orchestrator tools to add additional internal annotations without requiring changes to existing functions.

### internal.config.kubernetes.io/path

Example: `internal.config.kubernetes.io/path: "relative/file/path.yaml"`

Used on: All objects

This annotation records the slash-delimited, OS-agnostic, relative path to the manifest file the object was loaded from. The path is relative to a fixed location on the filesystem, determined by the orchestrator tool.

This annotation is part of the Kubernetes Resource Model (KRM) Functions Specification, which is used by Kustomize and similar third-party tools.

A KRM function **should not** modify this annotation on input objects unless it is modifying the referenced files. A KRM function **may** include this annotation on objects it generates.

### internal.config.kubernetes.io/index

Example: `internal.config.kubernetes.io/index: "2"`

Used on: All objects

This annotation records the zero-indexed position of the YAML document that contains the object within the manifest file the object was loaded from. Note that YAML documents are separated by three dashes (`---`) and can each contain one object. When this annotation is not specified, a value of 0 is implied.

This annotation is part of the Kubernetes Resource Model (KRM) Functions Specification, which is used by Kustomize and similar third-party tools.

A KRM function **should not** modify this annotation on input objects unless it is modifying the referenced files. A KRM function **may** include this annotation on objects it generates.
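Putting the two annotations together: an orchestrator that reads the second YAML document of a file might hold the object in memory like this (file path, object name, and kind are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                   # illustrative
  annotations:
    internal.config.kubernetes.io/path: "apps/deploy.yaml"
    internal.config.kubernetes.io/index: "1"   # second YAML document in the file
```

Both annotations are stripped again before the orchestrator writes the function output back to `apps/deploy.yaml`.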

### kubernetes.io/arch

Example: `kubernetes.io/arch: "amd64"`

@@ -52,7 +52,7 @@ Kubespray provides the following utilities to help provision your environment:

* [Terraform](https://www.terraform.io/) scripts for the following cloud providers:
  * [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)
  * [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)
  * [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/equinix)

### (2/5) Compose an inventory file

@@ -1,4 +0,0 @@
---
title: Solutions indépendantes
weight: 50
---

@@ -1,4 +0,0 @@
---
title: Serveurs physiques
weight: 60
---

@@ -2,7 +2,7 @@
title: Istio
id: istio
date: 2018-04-12
full_link: https://istio.io/latest/about/service-mesh/#what-is-istio
short_description: >
  An open platform (not Kubernetes-specific) that provides a uniform way to integrate microservices, manage traffic flow, enforce policies, and aggregate telemetry data.
aka:
@@ -69,7 +69,7 @@ In the Pod template of a DaemonSet, [`RestartPolicy`](/ja/docs/concepts

If both of the above are specified, the two conditions are ANDed together, and only results that satisfy both are returned.

`spec.selector` must match `.spec.template.metadata.labels`. Configurations where these two values do not match are rejected by the API.
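The required selector/label match described above can be seen in this minimal sketch of a DaemonSet (the name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # illustrative
spec:
  selector:
    matchLabels:
      name: node-agent        # must match the template labels below
  template:
    metadata:
      labels:
        name: node-agent      # changing this without the selector is rejected by the API
    spec:
      containers:
      - name: agent
        image: registry.k8s.io/pause:3.9   # illustrative image
```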

### Running Pods on select Nodes {#running-pods-on-select-nodes}
@@ -22,22 +22,55 @@ weight: 10

Before you begin a review, keep the following in mind:

- Read the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) and ensure that you abide by it at all times.
- Be polite, considerate, and helpful.
- Comment on positive aspects of PRs as well as changes.
- Be empathetic and mindful of how your review may be received.
- Assume good intent and ask clarifying questions.
- If you are an experienced contributor, consider pairing with a new contributor whose work requires extensive changes.

## Review process

In general, review pull requests for content and style in English. Figure 1 outlines the steps for the review process. The details for each step follow.

**(Translator's note: SIG Docs ja also works in Japanese. Reviews of Japanese translations may be written in Japanese. However, the PR author and other contributors may not necessarily understand Japanese, so comment with care.)**

<!-- See https://github.com/kubernetes/website/issues/28808 for live-editor URL to this figure -->
<!-- You can also cut/paste the mermaid code into the live editor at https://mermaid-js.github.io/mermaid-live-editor to play around with it -->

{{< mermaid >}}
flowchart LR
    subgraph fourth[Start review]
    direction TB
    S[ ] -.-
    M[Add comments] --> N[Review changes]
    N --> O[Select Comment]
    end
    subgraph third[Select PR]
    direction TB
    T[ ] -.-
    J[Read description<br>and comments] --> K[View changes in<br>Netlify preview]
    end

  A[Review open<br>PR list] --> B[Filter open PRs<br>by label]
  B --> third --> fourth

classDef grey fill:#dddddd,stroke:#ffffff,stroke-width:px,color:#000000, font-size:15px;
classDef white fill:#ffffff,stroke:#000,stroke-width:px,color:#000,font-weight:bold
classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000
class A,B,J,K,M,N,O grey
class S,T spacewhite
class third,fourth white
{{</ mermaid >}}

Figure 1. Steps of the review process.

1. Go to [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). You see a list of every open pull request against the Kubernetes website and documentation.

2. Filter the open PRs using one or more of the following labels:

   - `cncf-cla: yes` (Recommended): PRs submitted by a contributor who has not signed the CLA cannot be merged. See [Sign the CLA](/docs/contribute/new-content/#sign-the-cla) for more information.
   - `language/en` (Recommended): Filters for English-language PRs only.
   - `size/<size>`: Filters for PRs of a certain size. If you're new to reviewing, start with smaller PRs.
@@ -47,12 +80,17 @@ weight: 10
   - Read the PR description to understand the changes made, and read any linked issues.
   - Read any comments by other reviewers.
   - Click the **Files changed** tab to see the files and lines changed.
   - Scroll to the PR's build check section at the bottom of the **Conversation** tab and preview the changes in the Netlify preview build. Here's a screenshot (this shows GitHub's desktop site; if you're reviewing on a tablet or smartphone device, the GitHub web UI is slightly different):
     {{< figure src="/images/docs/github_netlify_deploy_preview.png" alt="GitHub pull request details including link to Netlify preview" >}}
     To open the preview, click on the **Details** link of the **deploy/netlify** line in the list of checks.

4. Go to the **Files changed** tab to start your review.
   1. Click the `+` symbol next to the line you want to comment on.
   2. Fill in your comments about the line and click either **Add single comment** (if you have only one comment to make) or **Start a review** (if you have multiple comments to make).
   3. When finished, click **Review changes** at the top of the page. Here, you can add a summary of your review (and leave some positive comments for the contributor!). Always use "Comment".

   - Avoid clicking the "Request changes" button when finishing your review. If you want to block a PR from being merged before further changes are made, you can leave a "/hold" comment instead. Explain why you are setting a hold, and optionally specify the conditions under which you or other reviewers can remove the hold.
   - Avoid clicking the "Approve" button when finishing your review. Leaving a "/approve" comment is recommended in most cases.

## Reviewing checklist
@@ -61,6 +99,9 @@ weight: 10

### Language and grammar

- Are there any obvious errors in language or grammar? Is there a better way to phrase something?
  - Focus on the language and grammar of the parts of the page that the author is changing. Unless the author is clearly aiming to update the whole page, they have no obligation to fix every issue on the page.
  - When a PR updates an existing page, focus your review on the parts of the page being updated, and review that changed content for technical and editorial correctness. If you find errors on the page that don't directly relate to what the PR author is addressing, treat them as a separate issue (checking first that there isn't already an existing issue about them).
  - Watch out for pull requests that *move* content. If an author renames a page or combines two pages, we (Kubernetes SIG Docs) usually avoid asking that author to fix every grammar or spelling nit that could be spotted within the moved content.
- Are there any complex or archaic words which could be replaced with a simpler word?
- Are there any words, terms, or phrases in use which could be replaced with a non-discriminatory alternative?
- Does the word choice and its capitalization follow the [style guide](/docs/contribute/style/style-guide/)?
@@ -77,10 +118,16 @@ weight: 10
- Does the PR change or remove any page titles, slugs/aliases, or anchor links? If so, are there broken links as a result of this PR? Is there another option, like changing the page title without changing the slug?
- Does the PR introduce a new page? If so:
  - Is the page using the right [page content type](/docs/contribute/style/page-content-types/) and associated Hugo shortcodes?
  - Does the page appear correctly in the section's side navigation (or not appear at all, if intended)?
  - Should the page appear on the [Docs Home](/docs/home/) listing?
- Do the changes show up in the Netlify preview? Be particularly vigilant about lists, code blocks, tables, notes, and images.

### Other

- Watch out for [trivial edits](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits). If you see a change that you think is a trivial edit, point out that policy (it's still OK to accept the change if it is genuinely an improvement).
- Encourage authors who are fixing whitespace to do so in their first commit of a PR, and then add other changes after that. This makes both merges and reviews easier. Watch out especially for a trivial change that lands in a single commit along with a large amount of whitespace cleanup (and if you see that, encourage the author to fix it).

As a reviewer, if you identify small issues with a PR that aren't essential to its meaning, such as typos or incorrect whitespace, prefix your comments with `nit:`. This lets the author know that this part of your feedback is non-critical.

If you are considering approving a pull request and all the remaining feedback is marked as a nit, you can merge the PR anyway. In that case, it's often useful to open an issue about the remaining nits. Consider whether that new issue meets the requirements for being marked as a [Good First Issue](https://www.kubernetes.dev/docs/guide/help-wanted/#good-first-issue); if it does, these are a good source of such issues.

@@ -49,6 +49,7 @@ content_type: concept

| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | 1.23 |
| `AnyVolumeDataSource` | `true` | Beta | 1.24 | |
| `AppArmor` | `true` | Beta | 1.4 | |
| `ContainerCheckpoint` | `false` | Alpha | 1.25 | |
| `CPUManagerPolicyAlphaOptions` | `false` | Alpha | 1.23 | |
| `CPUManagerPolicyBetaOptions` | `true` | Beta | 1.23 | |
| `CPUManagerPolicyOptions` | `false` | Alpha | 1.22 | 1.22 |
@@ -354,6 +355,7 @@ because it is not practical to make further changes once they are GA

- `APIPriorityAndFairness`: Enables managing request concurrency with prioritization and fairness on each server (renamed from `RequestManagement`).
- `APIResponseCompression`: Compresses the API responses for `LIST` and `GET` requests.
- `AppArmor`: Enables AppArmor-based mandatory access control on Linux nodes when using Docker. See the [AppArmor tutorial](/ja/docs/tutorials/clusters/apparmor/) for details.
- `ContainerCheckpoint`: Enables the kubelet checkpoint API. See the [Kubelet Checkpoint API](/ja/docs/reference/node/kubelet-checkpoint-api/) for details.
- `AttachVolumeLimit`: Enables volume plugins to set a limit on the number of volumes that can be attached to a node.
- `BalanceAttachedNodeVolumes`: Includes a node's volume count in balanced resource allocation decisions during scheduling. Nodes with closer CPU and memory utilization and volume count are favored by the scheduler while making decisions.
- `BlockVolume`: Enables the definition and consumption of raw block devices in Pods. See [Raw Block Volume Support](/docs/concepts/storage/persistent-volumes/#raw-block-volume-support) for details.

@@ -4,13 +4,13 @@ id: service-catalog
date: 2018-04-12
full_link:
short_description: >
  A legacy extension API that enabled applications running in Kubernetes clusters to easily use external managed software offerings, such as a datastore service offered by a cloud provider.

aka:
tags:
- extension
---
A legacy extension API that enabled applications running in Kubernetes clusters to easily use external managed software offerings, such as a datastore service offered by a cloud provider.

<!--more-->
It provided a way to list, provision, and bind with external {{< glossary_tooltip text="managed services" term_id="managed-service" >}} without needing detailed knowledge about how those services would be created or managed.
@@ -66,6 +66,32 @@ kubectl [command] [TYPE] [NAME] [flags]

If you need help, run `kubectl help` from the terminal window.

## In-cluster authentication and namespace overrides {#in-cluster-authentication-and-namespace-overrides}

By default, `kubectl` first determines whether it is running within a Pod, and thus in a cluster. It starts by checking for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment variables, and the existence of a service account token file at `/var/run/secrets/kubernetes.io/serviceaccount/token`. If all three are found, in-cluster authentication is assumed.

To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set during in-cluster authentication, it overrides the default namespace from the service account token. Any manifests or tools relying on namespace defaulting are affected by this.

**`POD_NAMESPACE` environment variable**

If the `POD_NAMESPACE` environment variable is set, CLI operations on namespaced resources default to the variable value. For example, if the variable is set to `seattle`, `kubectl get pods` returns pods in the `seattle` namespace. This is because pods are a namespaced resource, and no namespace was provided in the command. Review the output of `kubectl api-resources` to determine whether a resource is namespaced.

Explicit use of `--namespace <value>` overrides this behavior.

**How kubectl handles ServiceAccount tokens**

If:
* there is a Kubernetes service account token file mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token`, and
* the `KUBERNETES_SERVICE_HOST` environment variable is set, and
* the `KUBERNETES_SERVICE_PORT` environment variable is set, and
* you don't explicitly specify a namespace on the kubectl command line,

then kubectl assumes it is running in your cluster, looks up the namespace of that ServiceAccount (this is the same as the namespace of the Pod), and acts against that namespace. This is different from what happens outside of a cluster; when kubectl runs outside a cluster and you don't specify a namespace, the kubectl command acts against the namespace set for the current context in your client configuration. To change the default namespace for your kubectl, you can use the following command:

```shell
kubectl config set-context --current --namespace=<namespace-name>
```
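The three in-cluster signals described above are easy to inspect by hand. This sketch simply reports each one; inside a Pod all three are normally present, while on a workstation usually none are:

```shell
# Sketch: report the three in-cluster signals that kubectl checks for.
for var in KUBERNETES_SERVICE_HOST KUBERNETES_SERVICE_PORT; do
  if [ -n "$(printenv "$var")" ]; then echo "$var: set"; else echo "$var: unset"; fi
done
token=/var/run/secrets/kubernetes.io/serviceaccount/token
if [ -f "$token" ]; then echo "token file: present"; else echo "token file: absent"; fi
```

Only when all three checks pass (and no namespace is given on the command line) does kubectl use in-cluster authentication and the ServiceAccount's namespace.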

## Operations

The following table includes short descriptions and the general syntax for all of the `kubectl` operations:
@@ -93,6 +119,7 @@ kubectl [command] [TYPE] [NAME] [flags]

`diff` | `kubectl diff -f FILENAME [flags]`| Display the diff between a file or stdin and the current configuration.
`drain` | `kubectl drain NODE [options]` | Drain a Node in preparation for maintenance.
`edit` | <code>kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags]</code> | Edit and update the definition of one or more resources on the server by using the default editor.
`events` | `kubectl events` | List events.
`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a Pod.
`explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources, for instance Pods, Nodes, or Services.
`expose` | <code>kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags]</code> | Expose a ReplicationController, Service, or Pod as a new Kubernetes service.
@ -122,59 +149,66 @@ kubectl [command] [TYPE] [NAME] [flags]
|
|||
|
||||
以下の表に、サポートされているすべてのリソースと、省略されたエイリアスの一覧を示します。
|
||||
|
||||
(この出力は`kubectl api-resources`から取得でき、Kubernetes 1.13.3時点で正確でした。)
|
||||
(この出力は`kubectl api-resources`から取得でき、Kubernetes 1.25.0時点で正確でした。)
|
||||
|
||||
| リソース名 | 短縮名 | APIグループ | 名前空間に属するか | リソースの種類 |
|
||||
| リソース名 | 短縮名 | APIバージョン | 名前空間に属するか | リソースの種類 |
|
||||
|---|---|---|---|---|
|
||||
| `bindings` | | | true | Binding|
|
||||
| `componentstatuses` | `cs` | | false | ComponentStatus |
|
||||
| `configmaps` | `cm` | | true | ConfigMap |
|
||||
| `endpoints` | `ep` | | true | Endpoints |
|
||||
| `limitranges` | `limits` | | true | LimitRange |
|
||||
| `namespaces` | `ns` | | false | Namespace |
|
||||
| `nodes` | `no` | | false | Node |
|
||||
| `persistentvolumeclaims` | `pvc` | | true | PersistentVolumeClaim |
|
||||
| `persistentvolumes` | `pv` | | false | PersistentVolume |
|
||||
| `pods` | `po` | | true | Pod |
|
||||
| `podtemplates` | | | true | PodTemplate |
|
||||
| `replicationcontrollers` | `rc` | | true| ReplicationController |
|
||||
| `resourcequotas` | `quota` | | true | ResourceQuota |
|
||||
| `secrets` | | | true | Secret |
|
||||
| `serviceaccounts` | `sa` | | true | ServiceAccount |
|
||||
| `services` | `svc` | | true | Service |
|
||||
| `mutatingwebhookconfigurations` | | admissionregistration.k8s.io | false | MutatingWebhookConfiguration |
|
||||
| `validatingwebhookconfigurations` | | admissionregistration.k8s.io | false | ValidatingWebhookConfiguration |
|
||||
| `customresourcedefinitions` | `crd`, `crds` | apiextensions.k8s.io | false | CustomResourceDefinition |
|
||||
| `apiservices` | | apiregistration.k8s.io | false | APIService |
|
||||
| `controllerrevisions` | | apps | true | ControllerRevision |
|
||||
| `daemonsets` | `ds` | apps | true | DaemonSet |
|
||||
| `deployments` | `deploy` | apps | true | Deployment |
|
||||
| `replicasets` | `rs` | apps | true | ReplicaSet |
|
||||
| `statefulsets` | `sts` | apps | true | StatefulSet |
|
||||
| `tokenreviews` | | authentication.k8s.io | false | TokenReview |
|
||||
| `localsubjectaccessreviews` | | authorization.k8s.io | true | LocalSubjectAccessReview |
|
||||
| `selfsubjectaccessreviews` | | authorization.k8s.io | false | SelfSubjectAccessReview |
|
||||
| `selfsubjectrulesreviews` | | authorization.k8s.io | false | SelfSubjectRulesReview |
|
||||
| `subjectaccessreviews` | | authorization.k8s.io | false | SubjectAccessReview |
|
||||
| `horizontalpodautoscalers` | `hpa` | autoscaling | true | HorizontalPodAutoscaler |
|
||||
| `cronjobs` | `cj` | batch | true | CronJob |
|
||||
| `jobs` | | batch | true | Job |
|
||||
| `certificatesigningrequests` | `csr` | certificates.k8s.io | false | CertificateSigningRequest |
|
||||
| `leases` | | coordination.k8s.io | true | Lease |
|
||||
| `events` | `ev` | events.k8s.io | true | Event |
|
||||
| `ingresses` | `ing` | extensions | true | Ingress |
|
||||
| `networkpolicies` | `netpol` | networking.k8s.io | true | NetworkPolicy |
|
||||
| `poddisruptionbudgets` | `pdb` | policy | true | PodDisruptionBudget |
|
||||
| `podsecuritypolicies` | `psp` | policy | false | PodSecurityPolicy |
|
||||
| `clusterrolebindings` | | rbac.authorization.k8s.io | false | ClusterRoleBinding |
|
||||
| `clusterroles` | | rbac.authorization.k8s.io | false | ClusterRole |
|
||||
| `rolebindings` | | rbac.authorization.k8s.io | true | RoleBinding |
|
||||
| `roles` | | rbac.authorization.k8s.io | true | Role |
|
||||
| `priorityclasses` | `pc` | scheduling.k8s.io | false | PriorityClass |
|
||||
| `csidrivers` | | storage.k8s.io | false | CSIDriver |
|
||||
| `csinodes` | | storage.k8s.io | false | CSINode |
|
||||
| `storageclasses` | `sc` | storage.k8s.io | false | StorageClass |
|
||||
| `volumeattachments` | | storage.k8s.io | false | VolumeAttachment |
|
||||
| `bindings` | | v1 | true | Binding |
|
||||
| `componentstatuses` | `cs` | v1 | false | ComponentStatus |
|
||||
| `configmaps` | `cm` | v1 | true | ConfigMap |
|
||||
| `endpoints` | `ep` | v1 | true | Endpoints |
|
||||
| `events` | `ev` | v1 | true | Event |
|
||||
| `limitranges` | `limits` | v1 | true | LimitRange |
|
||||
| `namespaces` | `ns` | v1 | false | Namespace |
|
||||
| `nodes` | `no` | v1 | false | Node |
|
||||
| `persistentvolumeclaims` | `pvc` | v1 | true | PersistentVolumeClaim |
|
||||
| `persistentvolumes` | `pv` | v1 | false | PersistentVolume |
|
||||
| `pods` | `po` | v1 | true | Pod |
|
||||
| `podtemplates` | | v1 | true | PodTemplate |
|
||||
| `replicationcontrollers` | `rc` | v1 | true | ReplicationController |
|
||||
| `resourcequotas` | `quota` | v1 | true | ResourceQuota |
|
||||
| `secrets` | | v1 | true | Secret |
|
||||
| `serviceaccounts` | `sa` | v1 | true | ServiceAccount |
|
||||
| `services` | `svc` | v1 | true | Service |
|
||||
| `mutatingwebhookconfigurations` | | admissionregistration.k8s.io/v1 | false | MutatingWebhookConfiguration |
|
||||
| `validatingwebhookconfigurations` | | admissionregistration.k8s.io/v1 | false | ValidatingWebhookConfiguration |
|
||||
| `customresourcedefinitions` | `crd,crds` | apiextensions.k8s.io/v1 | false | CustomResourceDefinition |
|
||||
| `apiservices` | | apiregistration.k8s.io/v1 | false | APIService |
|
||||
| `controllerrevisions` | | apps/v1 | true | ControllerRevision |
|
||||
| `daemonsets` | `ds` | apps/v1 | true | DaemonSet |
|
||||
| `deployments` | `deploy` | apps/v1 | true | Deployment |
|
||||
| `replicasets` | `rs` | apps/v1 | true | ReplicaSet |
|
||||
| `statefulsets` | `sts` | apps/v1 | true | StatefulSet |
|
||||
| `tokenreviews` | | authentication.k8s.io/v1 | false | TokenReview |
|
||||
| `localsubjectaccessreviews` | | authorization.k8s.io/v1 | true | LocalSubjectAccessReview |
|
||||
| `selfsubjectaccessreviews` | | authorization.k8s.io/v1 | false | SelfSubjectAccessReview |
|
||||
| `selfsubjectrulesreviews` | | authorization.k8s.io/v1 | false | SelfSubjectRulesReview |
|
||||
| `subjectaccessreviews` | | authorization.k8s.io/v1 | false | SubjectAccessReview |
|
||||
| `horizontalpodautoscalers` | `hpa` | autoscaling/v2 | true | HorizontalPodAutoscaler |
|
||||
| `cronjobs` | `cj` | batch/v1 | true | CronJob |
|
||||
| `jobs` | | batch/v1 | true | Job |
|
||||
| `certificatesigningrequests` | `csr` | certificates.k8s.io/v1 | false | CertificateSigningRequest |
|
||||
| `leases` | | coordination.k8s.io/v1 | true | Lease |
|
||||
| `endpointslices` | | discovery.k8s.io/v1 | true | EndpointSlice |
|
||||
| `events` | `ev` | events.k8s.io/v1 | true | Event |
|
||||
| `flowschemas` | | flowcontrol.apiserver.k8s.io/v1beta2 | false | FlowSchema |
|
||||
| `prioritylevelconfigurations` | | flowcontrol.apiserver.k8s.io/v1beta2 | false | PriorityLevelConfiguration |
|
||||
| `ingressclasses` | | networking.k8s.io/v1 | false | IngressClass |
|
||||
| `ingresses` | `ing` | networking.k8s.io/v1 | true | Ingress |
|
||||
| `networkpolicies` | `netpol` | networking.k8s.io/v1 | true | NetworkPolicy |
|
||||
| `runtimeclasses` | | node.k8s.io/v1 | false | RuntimeClass |
|
||||
| `poddisruptionbudgets` | `pdb` | policy/v1 | true | PodDisruptionBudget |
|
||||
| `podsecuritypolicies` | `psp` | policy/v1beta1 | false | PodSecurityPolicy |
|
||||
| `clusterrolebindings` | | rbac.authorization.k8s.io/v1 | false | ClusterRoleBinding |
|
||||
| `clusterroles` | | rbac.authorization.k8s.io/v1 | false | ClusterRole |
|
||||
| `rolebindings` | | rbac.authorization.k8s.io/v1 | true | RoleBinding |
|
||||
| `roles` | | rbac.authorization.k8s.io/v1 | true | Role |
|
||||
| `priorityclasses` | `pc` | scheduling.k8s.io/v1 | false | PriorityClass |
|
||||
| `csidrivers` | | storage.k8s.io/v1 | false | CSIDriver |
|
||||
| `csinodes` | | storage.k8s.io/v1 | false | CSINode |
|
||||
| `csistoragecapacities` | | storage.k8s.io/v1 | true | CSIStorageCapacity |
|
||||
| `storageclasses` | `sc` | storage.k8s.io/v1 | false | StorageClass |
|
||||
| `volumeattachments` | | storage.k8s.io/v1 | false | VolumeAttachment |
|
||||
|
||||
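On a live cluster, a table like the one above can be regenerated with `kubectl api-resources`; the exact rows depend on the cluster's Kubernetes version and any installed extensions, so the commands below are illustrative only and require access to a running cluster.

```shell
# List every resource type the API server knows about, including
# short names, API version, and whether the resource is namespaced.
kubectl api-resources

# Restrict the listing to namespaced resources in the "apps" API group.
kubectl api-resources --namespaced=true --api-group=apps
```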
## Output options
@ -0,0 +1,82 @@
---
content_type: "reference"
title: Kubelet Checkpoint API
weight: 10
---

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}

Checkpointing a container is the functionality to create a stateful copy of a running container.
Once you have a stateful copy of a container, you could move it to a different computer for debugging or similar purposes.

If you move the checkpointed container data to a computer that is able to restore it, that restored container continues to run at exactly the same point it was checkpointed at.
You can also inspect the saved data, provided that you have suitable tools for doing so.

Creating a checkpoint of a container might have security implications.
Typically a checkpoint contains all memory pages of all processes in the checkpointed container.
This means that everything that used to be in memory is now available on the local disk.
This includes all private data and possibly also keys used for encryption.
The underlying CRI implementation (the container runtime on that node) should create the checkpoint archive to be accessible only by the `root` user.
It is still important to remember that if the checkpoint archive is transferred to another system, all memory pages will be readable by the owner of the checkpoint archive.

## Operations {#operations}

### `post` checkpoint the specified container {#post-checkpoint}

Tell the kubelet to checkpoint a specific container from the specified Pod.

Consult the [Kubelet authentication/authorization reference](/docs/reference/access-authn-authz/kubelet-authn-authz) for more information about how access to the kubelet checkpoint interface is controlled.

The kubelet will request a checkpoint from the underlying {{<glossary_tooltip term_id="cri" text="CRI">}} implementation.
In the checkpoint request the kubelet will specify the name of the checkpoint archive as `checkpoint-<podFullName>-<containerName>-<timestamp>.tar` and also request to store the checkpoint archive in the `checkpoints` directory below its root directory (as defined by `--root-dir`).
This defaults to `/var/lib/kubelet/checkpoints`.

The checkpoint archive is in _tar_ format, and could be listed using an implementation of [`tar`](https://pubs.opengroup.org/onlinepubs/7908799/xcu/tar.html).
The contents of the archive depend on the underlying CRI implementation (the container runtime on that node).
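Because the checkpoint archive is an ordinary tar file, any tar implementation can list it. The snippet below is a self-contained sketch: it fabricates a stand-in archive, since a real one lives under `/var/lib/kubelet/checkpoints/` and its exact member layout depends on the CRI runtime.

```shell
# Build a stand-in archive; "spec.dump" is a placeholder member name,
# not a guaranteed part of any runtime's checkpoint layout.
mkdir -p checkpoint-demo-dir
echo '{}' > checkpoint-demo-dir/spec.dump
tar --create --file checkpoint-demo.tar -C checkpoint-demo-dir spec.dump

# Listing works the same way on a real checkpoint archive.
tar --list --file checkpoint-demo.tar
# → spec.dump
```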
#### HTTP request {#post-checkpoint-request}

POST /checkpoint/{namespace}/{pod}/{container}

#### Parameters {#post-checkpoint-params}

- **namespace** (*in path*): string, required

  {{< glossary_tooltip term_id="namespace" >}}

- **pod** (*in path*): string, required

  {{< glossary_tooltip term_id="pod" >}}

- **container** (*in path*): string, required

  {{< glossary_tooltip term_id="container" >}}

- **timeout** (*in query*): integer

  Timeout in seconds to wait until the checkpoint creation is finished.
  If zero or no timeout is specified, the default {{<glossary_tooltip term_id="cri" text="CRI">}} timeout value will be used.
  Checkpoint creation time depends directly on the memory used by the container.
  The more memory a container uses, the more time is required to create the corresponding checkpoint.
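As a sketch, a direct call to this endpoint could look like the following. The node address, the credential paths, and the `default`/`my-pod`/`my-container` names are assumptions for illustration; the caller must be authorized for the kubelet API, and 10250 is only the kubelet's default serving port.

```shell
# Hypothetical direct request to a node's kubelet.
# --cacert/--cert/--key point at credentials the kubelet accepts.
curl -X POST "https://<node-address>:10250/checkpoint/default/my-pod/my-container" \
  --cacert /path/to/kubelet-ca.crt \
  --cert /path/to/client.crt \
  --key /path/to/client.key

# An optional timeout (in seconds) can be passed as a query parameter:
#   POST /checkpoint/default/my-pod/my-container?timeout=30
```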
#### Response {#post-checkpoint-response}

200: OK

401: Unauthorized

404: Not Found (if the `ContainerCheckpoint` feature gate is disabled)

404: Not Found (if the specified `namespace`, `pod` or `container` cannot be found)

500: Internal Server Error (if the CRI implementation encountered an error during checkpointing (see error message for further details))

500: Internal Server Error (if the CRI implementation does not implement the checkpoint CRI API (see error message for further details))

{{< comment >}}
TODO: Add more information about return codes once CRI implementations have checkpoint/restore implemented.
This TODO cannot be fixed before the release, as the CRI implementations need a merged Kubernetes change to implement the new ContainerCheckpoint CRI API call.
Fixing this TODO has to wait until after the 1.25 release.
{{< /comment >}}
@ -1,6 +0,0 @@
---
title: "Service Catalog"
description: Install the Service Catalog extension API.
weight: 150
---
@ -1,118 +0,0 @@
---
title: Install Service Catalog using Helm
content_type: task
---

<!-- overview -->
{{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}}

Use [Helm](https://helm.sh/) to install Service Catalog on your Kubernetes cluster. Up-to-date information on this process can be found in the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/install.md) repository.

## {{% heading "prerequisites" %}}

* Understand the key concepts of [Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/).
* Service Catalog requires a Kubernetes cluster running version 1.7 or higher.
* Cluster DNS must be enabled on your Kubernetes cluster.
  * If you are using a cloud-based Kubernetes cluster or {{< glossary_tooltip text="Minikube" term_id="minikube" >}}, cluster DNS is already enabled.
  * If you are using `hack/local-up-cluster.sh`, ensure that the `KUBE_ENABLE_CLUSTER_DNS` environment variable is set, then run the install script.
* [Install and set up kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher, and configure it to connect to your cluster.
* Install [Helm](https://helm.sh/) v2.7.0 or newer.
  * Follow the [Helm install instructions](https://helm.sh/docs/intro/install/).
  * If you already have an appropriate version of Helm installed, run `helm init` to install Tiller, the server-side component of Helm.

<!-- steps -->
## Add the service-catalog Helm repository

Once Helm is installed, add the *service-catalog* Helm repository to your local machine by running the following command:

```shell
helm repo add svc-cat https://kubernetes-sigs.github.io/service-catalog
```

Check that it was added successfully by running the following command (with Helm v3, use `helm search repo service-catalog` instead):

```shell
helm search service-catalog
```

If it succeeded, the output should look like this:

```
NAME                 CHART VERSION  APP VERSION  DESCRIPTION
svc-cat/catalog      0.2.1                       service-catalog API server and controller-manager helm chart
svc-cat/catalog-v0.2 0.2.2                       service-catalog API server and controller-manager helm chart
```
## Enable RBAC

Your Kubernetes cluster must have RBAC enabled, which requires your Tiller Pod to have `cluster-admin` access.

When using Minikube v0.25 or older, you must run Minikube with RBAC explicitly enabled:

```shell
minikube start --extra-config=apiserver.Authorization.Mode=RBAC
```

When using Minikube v0.26 or newer, run:

```shell
minikube start
```

With Minikube v0.26 or newer, do not specify `--extra-config`.
The flag has since been changed to `--extra-config=apiserver.authorization-mode`, and Minikube now enables RBAC by default.
Specifying the older flag may cause the start command to hang.

If you are using `hack/local-up-cluster.sh`, set the `AUTHORIZATION_MODE` environment variable as follows:

```
AUTHORIZATION_MODE=Node,RBAC hack/local-up-cluster.sh -O
```

By default, `helm init` installs the Tiller Pod into the `kube-system` namespace, with Tiller configured to use the `default` ServiceAccount.

{{< note >}}
If you used the `--tiller-namespace` or `--service-account` flags when running `helm init`, the `--serviceaccount` flag in the following command needs to reference the appropriate namespace and ServiceAccount.
{{< /note >}}

To give Tiller `cluster-admin` access:

```shell
kubectl create clusterrolebinding tiller-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default
```
## Install Service Catalog in your Kubernetes cluster

Install Service Catalog from the root of the Helm repository using the following command:

{{< tabs name="helm-versions" >}}
{{% tab name="Helm version 3" %}}
```shell
helm install catalog svc-cat/catalog --namespace catalog
```
{{% /tab %}}
{{% tab name="Helm version 2" %}}
```shell
helm install svc-cat/catalog --name catalog --namespace catalog
```
{{% /tab %}}
{{< /tabs >}}

## {{% heading "whatsnext" %}}

* View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers)
* Explore the [kubernetes-sigs/service-catalog](https://github.com/kubernetes-sigs/service-catalog) project
@ -1,68 +0,0 @@
---
title: Install Service Catalog using SC
content_type: task
---

<!-- overview -->
{{< glossary_definition term_id="service-catalog" length="all" prepend="Service Catalog is" >}}

You can use the GCP [Service Catalog Installer](https://github.com/GoogleCloudPlatform/k8s-service-catalog#installation) tool to easily install or uninstall Service Catalog on your Kubernetes cluster, linking it to Google Cloud projects.

Service Catalog itself can work with any kind of managed service, not just Google Cloud.

## {{% heading "prerequisites" %}}

* Understand the key concepts of [Service Catalog](/docs/concepts/extend-kubernetes/service-catalog/).
* Install [Go 1.6+](https://golang.org/dl/) and set the `GOPATH`.
* Install the [cfssl](https://github.com/cloudflare/cfssl) tool, which is needed for generating SSL artifacts.
* Service Catalog requires a Kubernetes cluster running version 1.7 or higher.
* [Install and set up kubectl](/docs/tasks/tools/install-kubectl/) v1.7 or higher, and configure it to connect to your cluster.
* The kubectl user must be bound to the *cluster-admin* role in order to install Service Catalog. To ensure that this is true, run the following command:

      kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user-name>
<!-- steps -->
## Install `sc` in your local environment

The installer runs on your local computer as a CLI tool named `sc`.

Install it using `go get`:

```shell
go get github.com/GoogleCloudPlatform/k8s-service-catalog/installer/cmd/sc
```

`sc` should now be installed in your `GOPATH/bin` directory.

## Install Service Catalog in your Kubernetes cluster

First, verify that all dependencies have been installed by running:

```shell
sc check
```

If the check passes, it should return:

```
Dependency check passed. You are good to go.
```

Next, run the install command and specify the `storageclass` that you want to use for the backup:

```shell
sc install --etcd-backup-storageclass "standard"
```

## Uninstall Service Catalog

If you would like to uninstall Service Catalog from your Kubernetes cluster, run the following command with the `sc` tool:

```shell
sc uninstall
```

## {{% heading "whatsnext" %}}

* View [sample service brokers](https://github.com/openservicebrokerapi/servicebroker/blob/master/gettingStarted.md#sample-service-brokers).
* Explore the [kubernetes-incubator/service-catalog](https://github.com/kubernetes-incubator/service-catalog) project.
@ -0,0 +1,139 @@
---
title: Assign Extended Resources to a Container
content_type: task
weight: 40
---

<!-- overview -->

{{< feature-state state="stable" >}}

This page shows how to assign extended resources to a Container.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

Before you do this exercise, do the exercise in
[Advertise Extended Resources for a Node](/docs/tasks/administer-cluster/extended-resource-node/).
That will configure one of your nodes to advertise a *dongle* resource.

<!-- steps -->
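For reference, the prerequisite task advertises the `example.com/dongle` resource by PATCHing the node's status through the API server, along these lines; `<your-node-name>` is a placeholder for one of your node names, and a running cluster is required.

```shell
# In one terminal, open a proxy to the API server:
kubectl proxy

# In another terminal, advertise a capacity of four dongles on the node.
# Note the "~1" escaping of "/" in the JSON-Patch path (RFC 6902).
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/<your-node-name>/status
```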
## Assign an extended resource to a Pod

To request an extended resource, include the `resources:requests` field in your
Container manifest. Extended resources are fully qualified
with any domain outside of `*.kubernetes.io/`. Valid extended resource names
have the form `example.com/foo`, where `example.com` is replaced with your
organization's domain and `foo` is a descriptive resource name.

Here is the configuration file for a Pod that has one Container:

{{< codenew file="pods/resource/extended-resource-pod.yaml" >}}

In the configuration file, you can see that the Container requests 3 *dongles*.

Create a Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod.yaml
```

Verify that the Pod is running:

```shell
kubectl get pod extended-resource-demo
```

Describe the Pod:

```shell
kubectl describe pod extended-resource-demo
```

The output shows the *dongle* requests:

```yaml
Limits:
  example.com/dongle: 3
Requests:
  example.com/dongle: 3
```
## Attempt to create a second Pod

Here is the configuration file for a Pod that has one Container.
The Container requests two *dongles*.

{{< codenew file="pods/resource/extended-resource-pod-2.yaml" >}}

Kubernetes will not be able to satisfy the request for two *dongles*, because the first Pod
used three of the four available *dongles*.

Attempt to create a Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/resource/extended-resource-pod-2.yaml
```

Describe the Pod:

```shell
kubectl describe pod extended-resource-demo-2
```

The output shows that the Pod cannot be scheduled, because no node has
2 *dongles* available:

```
Conditions:
  Type    Status
  PodScheduled  False
...
Events:
  ...
  ... Warning   FailedScheduling  pod (extended-resource-demo-2) failed to fit in any node
fit failure summary on nodes : Insufficient example.com/dongle (1)
```

View the Pod status:

```shell
kubectl get pod extended-resource-demo-2
```

The output shows that the Pod was created, but not scheduled to run on a node.
It has a status of Pending:

```
NAME                       READY   STATUS    RESTARTS   AGE
extended-resource-demo-2   0/1     Pending   0          6m
```
## Clean up

Delete the Pods that you created for this exercise:

```shell
kubectl delete pod extended-resource-demo
kubectl delete pod extended-resource-demo-2
```

## {{% heading "whatsnext" %}}

### For application developers

* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)
* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)

### For cluster administrators

* [Advertise Extended Resources for a Node](/docs/tasks/administer-cluster/extended-resource-node/)
@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo-2
spec:
  containers:
  - name: extended-resource-demo-2-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 2
      limits:
        example.com/dongle: 2
@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 3
      limits:
        example.com/dongle: 3
@ -14,12 +14,11 @@ url: /blog/2015/04/Borg-Predecessor-To-Kubernetes
Google has been running containerized workloads in production for more than a decade. Whether it's service jobs like web front-ends and stateful servers, infrastructure systems like [Bigtable](http://research.google.com/archive/bigtable.html) and [Spanner](http://research.google.com/archive/spanner.html), or batch frameworks like [MapReduce](http://research.google.com/archive/mapreduce.html) and [Millwheel](http://research.google.com/pubs/pub41378.html), virtually everything at Google runs as a container. Today, we took the wraps off of Borg, Google’s long-rumored internal container-oriented cluster-management system, publishing details at the academic computer systems conference [Eurosys](http://eurosys2015.labri.fr/). You can find the paper [here](https://research.google.com/pubs/pub43438.html).
-->
十多年来,谷歌一直在生产中运行容器化工作负载。
无论是像网络前端和有状态服务器之类的工作,像 [Bigtable](http://research.google.com/archive/bigtable.html) 和
[Spanner](http://research.google.com/archive/spanner.html)一样的基础架构系统,或是像
[MapReduce](http://research.google.com/archive/mapreduce.html) 和 [Millwheel](http://research.google.com/pubs/pub41378.html)一样的批处理框架,
Google 的几乎一切都是以容器的方式运行的。今天,我们揭开了 Borg 的面纱,Google 传闻已久的面向容器的内部集群管理系统,并在学术计算机系统会议 [Eurosys](http://eurosys2015.labri.fr/) 上发布了详细信息。你可以在 [此处](https://research.google.com/pubs/pub43438.html) 找到论文。

<!--
Kubernetes traces its lineage directly from Borg. Many of the developers at Google working on Kubernetes were formerly developers on the Borg project. We've incorporated the best ideas from Borg in Kubernetes, and have tried to address some pain points that users identified with Borg over the years.
-->
@ -33,15 +32,15 @@ To give you a flavor, here are four Kubernetes features that came from our exper
Kubernetes 中的以下四个功能特性源于我们从 Borg 获得的经验:

<!--
1) [Pods](/docs/concepts/workloads/pods/). A pod is the unit of scheduling in Kubernetes. It is a resource envelope in which one or more containers run. Containers that are part of the same pod are guaranteed to be scheduled together onto the same machine, and can share state via local volumes.
-->
1) [Pods](/zh-cn/docs/concepts/workloads/pods/)。
Pod 是 Kubernetes 中调度的单位。
它是一个或多个容器在其中运行的资源封装。
保证属于同一 Pod 的容器可以一起调度到同一台计算机上,并且可以通过本地卷共享状态。

<!--
Borg has a similar abstraction, called an alloc (short for “resource allocation”). Popular uses of allocs in Borg include running a web server that generates logs alongside a lightweight log collection process that ships the log to a cluster filesystem (not unlike fluentd or logstash); running a web server that serves data from a disk directory that is populated by a process that reads data from a cluster filesystem and prepares/stages it for the web server (not unlike a Content Management System); and running user-defined processing functions alongside a storage shard.
-->
Borg 有一个类似的抽象,称为 alloc(“资源分配”的缩写)。
Borg 中 alloc 的常见用法包括运行 Web 服务器,该服务器生成日志,一起部署一个轻量级日志收集进程,
@ -54,12 +53,10 @@ Pods not only support these use cases, but they also provide an environment simi
-->
Pod 不仅支持这些用例,而且还提供类似于在单个 VM 中运行多个进程的环境 -- Kubernetes 用户可以在 Pod 中部署多个位于同一地点的协作过程,而不必放弃一个应用程序一个容器的部署模型。

<!--
2) [Services](/docs/concepts/services-networking/service/). Although Borg’s primary role is to manage the lifecycles of tasks and machines, the applications that run on Borg benefit from many other cluster services, including naming and load balancing. Kubernetes supports naming and load balancing using the service abstraction: a service has a name and maps to a dynamic set of pods defined by a label selector (see next section). Any container in the cluster can connect to the service using the service name.
-->
2) [服务](/zh-cn/docs/concepts/services-networking/service/)。
尽管 Borg 的主要角色是管理任务和计算机的生命周期,但是在 Borg 上运行的应用程序还可以从许多其它集群服务中受益,包括命名和负载均衡。
Kubernetes 使用服务抽象支持命名和负载均衡:带名字的服务,会映射到由标签选择器定义的一组动态 Pod 集(请参阅下一节)。
集群中的任何容器都可以使用服务名称链接到服务。
@ -68,30 +65,24 @@ Under the covers, Kubernetes automatically load-balances connections to the serv
-->
在幕后,Kubernetes 会自动在与标签选择器匹配到 Pod 之间对与服务的连接进行负载均衡,并跟踪 Pod 在哪里运行,由于故障,它们会随着时间的推移而重新安排。

<!--
3) [Labels](/docs/concepts/overview/working-with-objects/labels/). A container in Borg is usually one replica in a collection of identical or nearly identical containers that correspond to one tier of an Internet service (e.g. the front-ends for Google Maps) or to the workers of a batch job (e.g. a MapReduce). The collection is called a Job, and each replica is called a Task. While the Job is a very useful abstraction, it can be limiting. For example, users often want to manage their entire service (composed of many Jobs) as a single entity, or to uniformly manage several related instances of their service, for example separate canary and stable release tracks. At the other end of the spectrum, users frequently want to reason about and control subsets of tasks within a Job -- the most common example is during rolling updates, when different subsets of the Job need to have different configurations.
-->
3) [标签](/zh-cn/docs/concepts/overview/working-with-objects/labels/)。
Borg 中的容器通常是一组相同或几乎相同的容器中的一个副本,该容器对应于 Internet 服务的一层(例如 Google Maps 的前端)或批处理作业的工人(例如 MapReduce)。
该集合称为 Job,每个副本称为任务。
尽管 Job 是一个非常有用的抽象,但它可能是有限的。
例如,用户经常希望将其整个服务(由许多 Job 组成)作为一个实体进行管理,或者统一管理其服务的几个相关实例,例如单独的 Canary 和稳定的发行版。
另一方面,用户经常希望推理和控制 Job 中的任务子集 -- 最常见的示例是在滚动更新期间,此时作业的不同子集需要具有不同的配置。

<!--
Kubernetes supports more flexible collections than Borg by organizing pods using labels, which are arbitrary key/value pairs that users attach to pods (and in fact to any object in the system). Users can create groupings equivalent to Borg Jobs by using a “job:\<jobname\>” label on their pods, but they can also use additional labels to tag the service name, service instance (production, staging, test), and in general, any subset of their pods. A label query (called a “label selector”) is used to select which set of pods an operation should be applied to. Taken together, labels and [replication controllers](/docs/concepts/workloads/controllers/replicationcontroller/) allow for very flexible update semantics, as well as for operations that span the equivalent of Borg Jobs.
-->
通过使用标签组织 Pod,Kubernetes 比 Borg 支持更灵活的集合,标签是用户附加到 Pod(实际上是系统中的任何对象)的任意键/值对。
用户可以通过在其 Pod 上使用 “job:\<jobname\>” 标签来创建与 Borg Jobs 等效的分组,但是他们还可以使用其他标签来标记服务名称、服务实例(生产、登台、测试)以及一般而言其 Pod 的任何子集。
标签查询(称为“标签选择器”)用于选择操作应用于哪一组 Pod。
结合起来,标签和[复制控制器](/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/)允许非常灵活的更新语义,以及跨等效于 Borg Jobs 的操作。

<!--
4) IP-per-Pod. In Borg, all tasks on a machine use the IP address of that host, and thus share the host’s port space. While this means Borg can use a vanilla network, it imposes a number of burdens on infrastructure and application developers: Borg must schedule ports as a resource; tasks must pre-declare how many ports they need, and take as start-up arguments which ports to use; the Borglet (node agent) must enforce port isolation; and the naming and RPC systems must handle ports as well as IP addresses.
@ -99,7 +90,6 @@ Kubernetes supports more flexible collections than Borg by organizing pods using
4) 每个 Pod 一个 IP。在 Borg 中,计算机上的所有任务都使用该主机的 IP 地址,从而共享主机的端口空间。
虽然这意味着 Borg 可以使用普通网络,但是它给基础结构和应用程序开发人员带来了许多负担:Borg 必须将端口作为资源进行调度;任务必须预先声明它们需要多少个端口,并将要使用的端口作为启动参数;Borglet(节点代理)必须强制端口隔离;命名和 RPC 系统必须处理端口以及 IP 地址。

<!--
Thanks to the advent of software-defined overlay networks such as [flannel](https://coreos.com/blog/introducing-rudder/) or those built into [public clouds](https://cloud.google.com/compute/docs/networking), Kubernetes is able to give every pod and service its own IP address. This removes the infrastructure complexity of managing ports, and allows developers to choose any ports they want rather than requiring their software to adapt to the ones chosen by the infrastructure. The latter point is crucial for making it easy to run off-the-shelf open-source applications on Kubernetes--pods can be treated much like VMs or physical hosts, with access to the full port space, oblivious to the fact that they may be sharing the same physical machine with other pods.
-->
@ -107,11 +97,8 @@ Thanks to the advent of software-defined overlay networks such as [flannel](http
|
|||
这消除了管理端口的基础架构的复杂性,并允许开发人员选择他们想要的任何端口,而不需要其软件适应基础架构选择的端口。
|
||||
后一点对于使现成的易于运行 Kubernetes 上的开源应用程序至关重要 -- 可以将 Pod 视为 VMs 或物理主机,可以访问整个端口空间,他们可能与其他 Pod 共享同一台物理计算机,这一事实已被忽略。
|
||||
|
||||
|
||||
<!--
|
||||
With the growing popularity of container-based microservice architectures, the lessons Google has learned from running such systems internally have become of increasing interest to the external DevOps community. By revealing some of the inner workings of our cluster manager Borg, and building our next-generation cluster manager as both an open-source project (Kubernetes) and a publicly available hosted service ([Google Container Engine](http://cloud.google.com/container-engine)), we hope these lessons can benefit the broader community outside of Google and advance the state-of-the-art in container scheduling and cluster management.
|
||||
-->
|
||||
随着基于容器的微服务架构的日益普及,Google 从内部运行此类系统所汲取的经验教训已引起外部 DevOps 社区越来越多的兴趣。
|
||||
通过揭示集群管理器 Borg 的一些内部工作原理,并将下一代集群管理器同时构建为开源项目(Kubernetes)和公开可用的托管服务([Google Container Engine](http://cloud.google.com/container-engine)),我们希望这些经验教训能使 Google 之外的广大社区受益,并推动容器调度和集群管理领域最先进技术的发展。
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -13,6 +13,21 @@ title: "PodSecurityPolicy Deprecation: Past, Present, and Future"
|
|||
-->
|
||||
作者:Tabitha Sable(Kubernetes SIG Security)
|
||||
|
||||
{{% pageinfo color="primary" %}}
|
||||
<!--
|
||||
**Update:** *With the release of Kubernetes v1.25, PodSecurityPolicy has been removed.*
|
||||
-->
|
||||
**更新:随着 Kubernetes v1.25 的发布,PodSecurityPolicy 已被移除。**
|
||||
|
||||
<!--
|
||||
*You can read more information about the removal of PodSecurityPolicy in the
|
||||
[Kubernetes 1.25 release notes](/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes).*
|
||||
-->
|
||||
**你可以在 [Kubernetes 1.25 发布说明](/zh-cn/blog/2022/08/23/kubernetes-v1-25-release/#pod-security-changes)
|
||||
中阅读有关移除 PodSecurityPolicy 的更多信息。**
|
||||
|
||||
{{% /pageinfo %}}
|
||||
|
||||
<!--
|
||||
PodSecurityPolicy (PSP) is being deprecated in Kubernetes 1.21, to be released later this week.
|
||||
This starts the countdown to its removal, but doesn’t change anything else.
|
||||
|
|
@ -128,7 +143,7 @@ configuring some software or hardware to accomplish our goals.
|
|||
In most Kubernetes clusters,
|
||||
RBAC (Role-Based Access Control) [rules](/docs/reference/access-authn-authz/rbac/#role-and-clusterrole)
|
||||
control access to these resources. `list`, `get`, `create`, `edit`, and `delete` are
|
||||
the sorts of API operations that RBAC cares about,
|
||||
but _RBAC does not consider what settings are being put into the resources it controls_.
|
||||
For example, a Pod can be almost anything from a simple webserver to
|
||||
a privileged command prompt offering full access to the underlying server node and all the data.
|
||||
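A minimal sketch of what such an RBAC rule looks like (names here are hypothetical): it grants API verbs on Pods, but says nothing about what those Pods may contain:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-editor        # hypothetical name
  namespace: dev          # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    # RBAC controls these API operations, not the content of the Pod spec:
    verbs: ["list", "get", "create", "update", "delete"]
```

A subject bound to this Role can create any Pod at all in `dev`, from a simple webserver to a fully privileged container.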
|
|
|
|||
|
|
@ -69,7 +69,7 @@ Andy 是 Kubernetes 中国社区的活跃成员。
|
|||
## [Shiming Zhang](https://github.com/wzshiming)
|
||||
|
||||
<!--
|
||||
Shiming Zhang is a Software Engineer working on Kubernetes for DaoCloud in Shanghai, China.
|
||||
|
||||
He has mostly been involved with SIG Node as a reviewer. His major contributions have mainly been bug fixes and feature improvements in an ongoing [KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2712-pod-priority-based-graceful-node-shutdown), all revolving around SIG Node.
|
||||
|
||||
|
|
@ -128,11 +128,11 @@ Jintao Zhang 目前受聘于 API7,他专注于 Ingress 和服务网格。
|
|||
在为 Kubernetes 做贡献之前,Jintao 是 Docker 相关开源项目的长期贡献者。
|
||||
|
||||
<!--
|
||||
Currently Jintao is a reviewer for the [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) project.
|
||||
Currently Jintao is a maintainer for the [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) project.
|
||||
|
||||
He suggests keeping track of job opportunities at open source companies so that you can find one that allows you to contribute full time. For new contributors Jintao says that if anyone wants to make a significant contribution to an open source project, then they should choose the project based on their interests and should generously invest time.
|
||||
-->
|
||||
目前 Jintao 是 [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) 项目的 Reviewer。
|
||||
目前 Jintao 是 [ingress-nginx](https://kubernetes.github.io/ingress-nginx/) 项目的 Maintainer。
|
||||
|
||||
他建议关注开源公司的工作机会,这样你就可以找到一个可以让你全职贡献的机会。
|
||||
对于新的贡献者们,Jintao 表示如果有人想为一个开源项目做重大贡献,
|
||||
|
|
|
|||
|
|
@ -19,8 +19,10 @@ slug: retroactive-default-storage-class
|
|||
**译者:** Michael Yao (DaoCloud)
|
||||
|
||||
<!--
|
||||
The v1.25 release of Kubernetes introduced an alpha feature to change how a default StorageClass was assigned to a PersistentVolumeClaim (PVC).
|
||||
With the feature enabled, you no longer need to create a default StorageClass first and PVC second to assign the class. Additionally, any PVCs without a StorageClass assigned can be updated later.
|
||||
The v1.25 release of Kubernetes introduced an alpha feature to change how a default
|
||||
StorageClass was assigned to a PersistentVolumeClaim (PVC). With the feature enabled,
|
||||
you no longer need to create a default StorageClass first and PVC second to assign the
|
||||
class. Additionally, any PVCs without a StorageClass assigned can be updated later.
|
||||
This feature was graduated to beta in Kubernetes 1.26.
|
||||
-->
|
||||
Kubernetes v1.25 引入了一个 Alpha 特性来更改默认 StorageClass 被分配到 PersistentVolumeClaim (PVC) 的方式。
|
||||
|
|
@ -28,7 +30,8 @@ Kubernetes v1.25 引入了一个 Alpha 特性来更改默认 StorageClass 被分
|
|||
此外,任何未分配 StorageClass 的 PVC 都可以在后续被更新。此特性在 Kubernetes 1.26 中已进阶至 Beta。
|
||||
|
||||
<!--
|
||||
You can read [retroactive default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment) in the Kubernetes documentation for more details about how to use that,
|
||||
You can read [retroactive default StorageClass assignment](/docs/concepts/storage/persistent-volumes/#retroactive-default-storageclass-assignment)
|
||||
in the Kubernetes documentation for more details about how to use that,
|
||||
or you can read on to learn about why the Kubernetes project is making this change.
|
||||
-->
|
||||
有关如何使用的更多细节,请参阅 Kubernetes
|
||||
|
|
@ -38,7 +41,9 @@ or you can read on to learn about why the Kubernetes project is making this chan
|
|||
<!--
|
||||
## Why did StorageClass assignment need improvements
|
||||
|
||||
Users might already be familiar with a similar feature that assigns default StorageClasses to **new** PVCs at the time of creation. This is currently handled by the [admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass).
|
||||
Users might already be familiar with a similar feature that assigns default StorageClasses
|
||||
to **new** PVCs at the time of creation. This is currently handled by the
|
||||
[admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass).
|
||||
-->
|
||||
## 为什么 StorageClass 赋值需要改进 {#why-did-sc-assignment-need-improvements}
|
||||
|
||||
|
|
@ -46,10 +51,10 @@ Users might already be familiar with a similar feature that assigns default Stor
|
|||
这个目前由[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)处理。
|
||||
|
||||
<!--
|
||||
But what if there wasn't a default StorageClass defined at the time of PVC creation?
|
||||
Users would end up with a PVC that would never be assigned a class.
|
||||
As a result, no storage would be provisioned, and the PVC would be somewhat "stuck" at this point.
|
||||
Generally, two main scenarios could result in "stuck" PVCs and cause problems later down the road.
|
||||
Let's take a closer look at each of them.
|
||||
-->
|
||||
但是,如果在创建 PVC 时没有定义默认 StorageClass 会怎样?
|
||||
|
|
@ -66,9 +71,11 @@ With the alpha feature enabled, there were two options admins had when they want
|
|||
启用这个 Alpha 特性后,管理员想要更改默认 StorageClass 时会有两个选项:
|
||||
|
||||
<!--
|
||||
1. Creating a new StorageClass as default before removing the old one associated with the PVC.
|
||||
This would result in having two defaults for a short period.
|
||||
At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to <code>null</code> (implying default StorageClass), the newest default StorageClass would be chosen and assigned to this PVC.
|
||||
1. Creating a new StorageClass as default before removing the old one associated with the PVC.
|
||||
This would result in having two defaults for a short period.
|
||||
At this point, if a user were to create a PersistentVolumeClaim with storageClassName set to
|
||||
<code>null</code> (implying default StorageClass), the newest default StorageClass would be
|
||||
chosen and assigned to this PVC.
|
||||
-->
|
||||
1. 在移除与 PVC 关联的旧 StorageClass 之前,创建一个新的 StorageClass 作为默认值。
|
||||
这将导致在短时间内出现两个默认值。此时,如果用户要创建一个 PersistentVolumeClaim,
|
||||
|
|
@ -77,9 +84,11 @@ At this point, if a user were to create a PersistentVolumeClaim with storageClas
|
|||
|
||||
<!--
|
||||
2. Removing the old default first and creating a new default StorageClass.
|
||||
This would result in having no default for a short time.
|
||||
Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName set to <code>null</code> (implying default StorageClass), the PVC would be in <code>Pending</code> state forever.
|
||||
The user would have to fix this by deleting the PVC and recreating it once the default StorageClass was available.
|
||||
This would result in having no default for a short time.
|
||||
Subsequently, if a user were to create a PersistentVolumeClaim with storageClassName
|
||||
set to <code>null</code> (implying default StorageClass), the PVC would be in
|
||||
<code>Pending</code> state forever. The user would have to fix this by deleting
|
||||
the PVC and recreating it once the default StorageClass was available.
|
||||
-->
|
||||
2. 先移除旧的默认值再创建一个新的默认 StorageClass。这将导致短时间内没有默认值。
|
||||
接下来如果用户创建一个 PersistentVolumeClaim,并将 storageClassName 设置为 <code>null</code>
|
||||
|
|
@ -89,8 +98,10 @@ The user would have to fix this by deleting the PVC and recreating it once the d
|
|||
<!--
|
||||
### Resource ordering during cluster installation
|
||||
|
||||
If a cluster installation tool needed to create resources that required storage, for example, an image registry, it was difficult to get the ordering right.
|
||||
This is because any Pods that required storage would rely on the presence of a default StorageClass and would fail to be created if it wasn't defined.
|
||||
If a cluster installation tool needed to create resources that required storage,
|
||||
for example, an image registry, it was difficult to get the ordering right.
|
||||
This is because any Pods that required storage would rely on the presence of
|
||||
a default StorageClass and would fail to be created if it wasn't defined.
|
||||
-->
|
||||
### 集群安装期间的资源顺序 {#resource-ordering-during-cluster-installation}
|
||||
|
||||
|
|
@ -101,8 +112,10 @@ This is because any Pods that required storage would rely on the presence of a d
|
|||
<!--
|
||||
## What changed
|
||||
|
||||
We've changed the PersistentVolume (PV) controller to assign a default StorageClass to any unbound PersistentVolumeClaim that has the storageClassName set to <code>null</code>.
|
||||
We've also modified the PersistentVolumeClaim admission within the API server to allow the change of values from an unset value to an actual StorageClass name.
|
||||
We've changed the PersistentVolume (PV) controller to assign a default StorageClass
|
||||
to any unbound PersistentVolumeClaim that has the storageClassName set to <code>null</code>.
|
||||
We've also modified the PersistentVolumeClaim admission within the API server to allow
|
||||
the change of values from an unset value to an actual StorageClass name.
|
||||
-->
|
||||
## 发生了什么变化 {#what-changed}
|
||||
|
||||
|
|
@ -113,7 +126,10 @@ storageClassName 设置为 `null` 且未被绑定的所有 PersistentVolumeClaim
|
|||
<!--
|
||||
### Null `storageClassName` versus `storageClassName: ""` - does it matter? { #null-vs-empty-string }
|
||||
|
||||
Before this feature was introduced, those values were equal in terms of behavior. Any PersistentVolumeClaim with the storageClassName set to <code>null</code> or <code>""</code> would bind to an existing PersistentVolume resource with storageClassName also set to <code>null</code> or <code>""</code>.
|
||||
Before this feature was introduced, those values were equal in terms of behavior.
|
||||
Any PersistentVolumeClaim with the storageClassName set to <code>null</code> or <code>""</code>
|
||||
would bind to an existing PersistentVolume resource with storageClassName also set to
|
||||
<code>null</code> or <code>""</code>.
|
||||
-->
|
||||
### Null `storageClassName` 与 `storageClassName: ""` - 有什么影响? {#null-vs-empty-string}
|
||||
|
||||
|
|
@ -123,7 +139,10 @@ Before this feature was introduced, those values were equal in terms of behavior
|
|||
|
||||
<!--
|
||||
With this new feature enabled we wanted to maintain this behavior but also be able to update the StorageClass name.
|
||||
With these constraints in mind, the feature changes the semantics of <code>null</code>. If a default StorageClass is present, <code>null</code> would translate to "Give me a default" and <code>""</code> would mean "Give me PersistentVolume that also has <code>""</code> StorageClass name." In the absence of a StorageClass, the behavior would remain unchanged.
|
||||
With these constraints in mind, the feature changes the semantics of <code>null</code>.
|
||||
If a default StorageClass is present, <code>null</code> would translate to "Give me a default" and
|
||||
<code>""</code> would mean "Give me PersistentVolume that also has <code>""</code> StorageClass name."
|
||||
In the absence of a StorageClass, the behavior would remain unchanged.
|
||||
-->
|
||||
启用此新特性时,我们希望保持此行为,但也希望能够更新 StorageClass 名称。
|
||||
考虑到这些限制,此特性更改了 `null` 的语义。
|
||||
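A minimal illustration of the two spellings (names are hypothetical):

```yaml
# storageClassName omitted (null): "give me the default StorageClass" -
# with this feature enabled, a default can also be assigned retroactively.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# storageClassName: "" - explicitly request a PersistentVolume whose
# storageClassName is also "". Never subject to default assignment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-no-class
spec:
  storageClassName: ""
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```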
|
|
@ -132,7 +151,8 @@ With these constraints in mind, the feature changes the semantics of <code>null<
|
|||
所以行为将保持不变。
|
||||
|
||||
<!--
|
||||
Summarizing the above, we've changed the semantics of <code>null</code> so that its behavior depends on the presence or absence of a definition of default StorageClass.
|
||||
Summarizing the above, we've changed the semantics of <code>null</code> so that
|
||||
its behavior depends on the presence or absence of a definition of default StorageClass.
|
||||
|
||||
The tables below show all these cases to better describe when PVC binds and when its StorageClass gets updated.
|
||||
-->
|
||||
|
|
@ -179,7 +199,9 @@ The tables below show all these cases to better describe when PVC binds and when
|
|||
<!--
|
||||
## How to use it
|
||||
|
||||
If you want to test the feature whilst it's alpha, you need to enable the relevant feature gate in the kube-controller-manager and the kube-apiserver. Use the `--feature-gates` command line argument:
|
||||
If you want to test the feature whilst it's alpha, you need to enable the relevant
|
||||
feature gate in the kube-controller-manager and the kube-apiserver.
|
||||
Use the `--feature-gates` command line argument:
|
||||
-->
|
||||
## 如何使用 {#how-to-use-it}
|
||||
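As a sketch, enabling the gate means passing the same flag to both components (`RetroactiveDefaultStorageClass` is the feature gate name used by this feature; verify it against your Kubernetes version's feature-gates reference):

```shell
# Sketch only: enable the alpha feature gate on both components.
kube-apiserver --feature-gates=RetroactiveDefaultStorageClass=true ...
kube-controller-manager --feature-gates=RetroactiveDefaultStorageClass=true ...
```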
|
||||
|
|
@ -218,7 +240,9 @@ If you would like to see the feature in action and verify it works fine in your
|
|||
```
|
||||
|
||||
<!--
|
||||
2. Create the PersistentVolumeClaim when there is no default StorageClass. The PVC won't provision or bind (unless there is an existing, suitable PV already present) and will remain in <code>Pending</code> state.
|
||||
2. Create the PersistentVolumeClaim when there is no default StorageClass.
|
||||
The PVC won't provision or bind (unless there is an existing, suitable PV already present)
|
||||
and will remain in <code>Pending</code> state.
|
||||
-->
|
||||
2. 在没有默认 StorageClass 时创建 PersistentVolumeClaim。
|
||||
PVC 不会制备或绑定(除非当前已存在一个合适的 PV),PVC 将保持在 `Pending` 状态。
|
||||
|
|
@ -226,7 +250,7 @@ If you would like to see the feature in action and verify it works fine in your
|
|||
```
|
||||
$ kubectl get pvc
|
||||
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
|
||||
pvc-1 Pending
|
||||
```
|
||||
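For illustration, a PVC of roughly this shape (the name is hypothetical; `storageClassName` is deliberately left unset, i.e. null) is what stays `Pending` until a default StorageClass appears:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName intentionally omitted (null): with the feature enabled,
  # the PV controller retroactively assigns the default class once one exists.
```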
|
||||
<!--
|
||||
|
|
@ -253,7 +277,10 @@ If you would like to see the feature in action and verify it works fine in your
|
|||
<!--
|
||||
### New metrics
|
||||
|
||||
To help you see that the feature is working as expected we also introduced a new <code>retroactive_storageclass_total</code> metric to show how many times that the PV controller attempted to update PersistentVolumeClaim, and <code>retroactive_storageclass_errors_total</code> to show how many of those attempts failed.
|
||||
To help you see that the feature is working as expected we also introduced a new
|
||||
<code>retroactive_storageclass_total</code> metric to show how many times the
|
||||
PV controller attempted to update PersistentVolumeClaim, and
|
||||
<code>retroactive_storageclass_errors_total</code> to show how many of those attempts failed.
|
||||
-->
|
||||
### 新指标 {#new-metrics}
|
||||
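To spot-check the counters, you can grep a scrape of the kube-controller-manager `/metrics` endpoint. The sample values below are made up; in a real cluster you would pipe the actual scrape (for example, `kubectl get --raw /metrics` against the component's endpoint) into the same `grep`:

```shell
# Made-up sample of a /metrics scrape; replace printf with a real scrape.
printf '%s\n' \
  'retroactive_storageclass_total 12' \
  'retroactive_storageclass_errors_total 0' |
grep '^retroactive_storageclass'
```

A non-zero errors counter is the signal to look at the PV controller logs.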
|
||||
|
|
@ -264,7 +291,8 @@ To help you see that the feature is working as expected we also introduced a new
|
|||
<!--
|
||||
## Getting involved
|
||||
|
||||
We always welcome new contributors so if you would like to get involved you can join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
|
||||
We always welcome new contributors so if you would like to get involved you can
|
||||
join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
|
||||
|
||||
If you would like to share feedback, you can do so on our [public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
|
||||
|
||||
|
|
|
|||
|
|
@ -19,9 +19,16 @@ slug: k8s-gcr-io-freeze-announcement
|
|||
**译者**:Michael Yao (Daocloud)
|
||||
|
||||
<!--
|
||||
The Kubernetes project runs a community-owned image registry called `registry.k8s.io` to host its container images. On the 3rd of April 2023, the old registry `k8s.gcr.io` will be frozen and no further images for Kubernetes and related subprojects will be pushed to the old registry.
|
||||
The Kubernetes project runs a community-owned image registry called `registry.k8s.io`
|
||||
to host its container images. On the 3rd of April 2023, the old registry `k8s.gcr.io`
|
||||
will be frozen and no further images for Kubernetes and related subprojects will be
|
||||
pushed to the old registry.
|
||||
|
||||
This registry `registry.k8s.io` replaced the old one and has been generally available for several months. We have published a [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/) about its benefits to the community and the Kubernetes project. This post also announced that future versions of Kubernetes will not be available in the old registry. Now that time has come.
|
||||
This registry `registry.k8s.io` replaced the old one and has been generally available
|
||||
for several months. We have published a [blog post](/blog/2022/11/28/registry-k8s-io-faster-cheaper-ga/)
|
||||
about its benefits to the community and the Kubernetes project. This post also
|
||||
announced that future versions of Kubernetes will not be available in the old
|
||||
registry. Now that time has come.
|
||||
-->
|
||||
Kubernetes 项目运行一个名为 `registry.k8s.io`、由社区管理的镜像仓库来托管其容器镜像。
|
||||
2023 年 4 月 3 日,旧仓库 `k8s.gcr.io` 将被冻结,Kubernetes 及其相关子项目的镜像将不再推送到这个旧仓库。
|
||||
|
|
@ -32,7 +39,9 @@ Kubernetes 项目带来的好处。这篇博客再次宣布后续版本的 Kuber
|
|||
|
||||
<!--
|
||||
What does this change mean for contributors:
|
||||
- If you are a maintainer of a subproject, you will need to update your manifests and Helm charts to use the new registry.
|
||||
|
||||
- If you are a maintainer of a subproject, you will need to update your manifests
|
||||
and Helm charts to use the new registry.
|
||||
-->
|
||||
这次变更对贡献者意味着:
|
||||
|
||||
|
|
@ -40,10 +49,18 @@ What does this change mean for contributors:
|
|||
|
||||
<!--
|
||||
What does this change mean for end users:
|
||||
|
||||
- 1.27 Kubernetes release will not be published to the old registry.
|
||||
- Patch releases for 1.24, 1.25, and 1.26 will no longer be published to the old registry from April. Please read the timelines below for details of the final patch releases in the old registry.
|
||||
- Starting in 1.25, the default image registry has been set to `registry.k8s.io`. This value is overridable in `kubeadm` and `kubelet` but setting it to `k8s.gcr.io` will fail for new releases after April as they won’t be present in the old registry.
|
||||
- If you want to increase the reliability of your cluster and remove dependency on the community-owned registry or you are running Kubernetes in networks where external traffic is restricted, you should consider hosting local image registry mirrors. Some cloud vendors may offer hosted solutions for this.
|
||||
- Patch releases for 1.24, 1.25, and 1.26 will no longer be published to the old
|
||||
registry from April. Please read the timelines below for details of the final
|
||||
patch releases in the old registry.
|
||||
- Starting in 1.25, the default image registry has been set to `registry.k8s.io`.
|
||||
This value is overridable in `kubeadm` and `kubelet` but setting it to `k8s.gcr.io`
|
||||
will fail for new releases after April as they won’t be present in the old registry.
|
||||
- If you want to increase the reliability of your cluster and remove dependency on
|
||||
the community-owned registry or you are running Kubernetes in networks where
|
||||
external traffic is restricted, you should consider hosting local image registry
|
||||
mirrors. Some cloud vendors may offer hosted solutions for this.
|
||||
-->
|
||||
这次变更对终端用户意味着:
|
||||
|
||||
|
|
@ -77,7 +94,8 @@ What does this change mean for end users:
|
|||
<!--
|
||||
## What's next
|
||||
|
||||
Please make sure your cluster does not have dependencies on old image registry. For example, you can run this command to list the images used by pods:
|
||||
Please make sure your cluster does not have dependencies on the old image registry.
|
||||
For example, you can run this command to list the images used by pods:
|
||||
-->
|
||||
## 下一步 {#whats-next}
|
||||
|
||||
|
|
@ -91,14 +109,19 @@ uniq -c
|
|||
```
|
||||
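As a hedged sketch (the exact pipeline may differ from the one in this announcement), counting image usage reduces to splitting the jsonpath output on whitespace and running it through `sort | uniq -c`. Here the `kubectl` call is replaced with a made-up sample so the text-processing tail can be shown end to end:

```shell
# In a real cluster, IMAGES would come from something like:
#   kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"
IMAGES='k8s.gcr.io/pause:3.9 registry.k8s.io/etcd:3.5.6-0 k8s.gcr.io/pause:3.9'
# Split on whitespace, then count occurrences of each distinct image.
printf '%s' "$IMAGES" | tr -s '[[:space:]]' '\n' | sort | uniq -c
```

Any line counted under `k8s.gcr.io` is a dependency to migrate.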
|
||||
<!--
|
||||
There may be other dependencies on the old image registry. Make sure you review any potential dependencies to keep your cluster healthy and up to date.
|
||||
There may be other dependencies on the old image registry. Make sure you review
|
||||
any potential dependencies to keep your cluster healthy and up to date.
|
||||
-->
|
||||
旧的镜像仓库可能存在其他依赖项。请确保你检查了所有潜在的依赖项,以保持集群健康和最新。
|
||||
|
||||
<!--
|
||||
## Acknowledgments
|
||||
|
||||
__Change is hard__, and evolving our image-serving platform is needed to ensure a sustainable future for the project. We strive to make things better for everyone using Kubernetes. Many contributors from all corners of our community have been working long and hard to ensure we are making the best decisions possible, executing plans, and doing our best to communicate those plans.
|
||||
__Change is hard__, and evolving our image-serving platform is needed to ensure
|
||||
a sustainable future for the project. We strive to make things better for everyone
|
||||
using Kubernetes. Many contributors from all corners of our community have been
|
||||
working long and hard to ensure we are making the best decisions possible,
|
||||
executing plans, and doing our best to communicate those plans.
|
||||
-->
|
||||
## 致谢 {#acknowledgments}
|
||||
|
||||
|
|
@ -107,7 +130,14 @@ __改变是艰难的__,但只有镜像服务平台演进才能确保 Kubernete
|
|||
确保我们能够做出尽可能最好的决策、履行计划并尽最大努力传达这些计划。
|
||||
|
||||
<!--
|
||||
Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine, Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen, and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle, Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, Tim Allclair from the Security Response Committee. Also a big thank you to our friends acting as liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson Jr. from Google.
|
||||
Thanks to Aaron Crickenberger, Arnaud Meukam, Benjamin Elder, Caleb Woodbine,
|
||||
Davanum Srinivas, Mahamed Ali, and Tim Hockin from SIG K8s Infra, Brian McQueen,
|
||||
and Sergey Kanzhelev from SIG Node, Lubomir Ivanov from SIG Cluster Lifecycle,
|
||||
Adolfo García Veytia, Jeremy Rickard, Sascha Grunert, and Stephen Augustus from
|
||||
SIG Release, Bob Killen and Kaslin Fields from SIG Contribex, Tim Allclair from
|
||||
the Security Response Committee. Also a big thank you to our friends acting as
|
||||
liaisons with our cloud provider partners: Jay Pipes from Amazon and Jon Johnson
|
||||
Jr. from Google.
|
||||
-->
|
||||
衷心感谢:
|
||||
|
||||
|
|
|
|||
|
|
@ -402,6 +402,20 @@ Kubernetes v1.25 版本还稳定了对 DaemonSet Pod 的浪涌支持,
|
|||
其实现是为了最大限度地减少部署期间 DaemonSet 的停机时间。
|
||||
`DaemonSetUpdateSurge` 特性门控将在 Kubernetes v1.27 中被移除。
|
||||
|
||||
<!--
|
||||
### Removal of `--container-runtime` command line argument
|
||||
-->
|
||||
### 移除 `--container-runtime` 命令行参数
|
||||
|
||||
<!--
|
||||
The kubelet accepts a deprecated command line argument, `--container-runtime`, and the only
|
||||
valid value would be `remote` after dockershim code is removed. Kubernetes v1.27 will remove
|
||||
that argument, which has been deprecated since the v1.24 release.
|
||||
-->
|
||||
kubelet 接受一个已弃用的命令行参数 `--container-runtime`,
|
||||
并且在移除 dockershim 代码后,唯一有效的值将是 `remote`。
|
||||
Kubernetes v1.27 将移除该参数,该参数自 v1.24 版本以来已被弃用。
|
||||
|
||||
<!--
|
||||
## Looking ahead
|
||||
|
||||
|
|
|
|||
|
|
@ -918,11 +918,14 @@ poorly-behaved workloads that may be harming system health.
|
|||
queue excess requests, or
|
||||
* `time-out`, indicating that the request was still in the queue
|
||||
when its queuing time limit expired.
|
||||
* `cancelled`, indicating that the request is not purge locked
|
||||
and has been ejected from the queue.
|
||||
-->
|
||||
* `queue-full`,表明已经有太多请求排队,
|
||||
* `concurrency-limit`,表示将 PriorityLevelConfiguration 配置为
|
||||
`Reject` 而不是 `Queue`,或者
|
||||
* `time-out`,表示当请求的排队时间限制到期时,该请求仍在队列中。
|
||||
* `cancelled`,表示该请求未被清除锁定,已从队列中移除。
|
||||
|
||||
<!--
|
||||
* `apiserver_flowcontrol_dispatched_requests_total` is a counter
|
||||
|
|
|
|||
|
|
@ -91,9 +91,11 @@ Kubernetes 区分用户账号和服务账号的概念,主要基于以下原因
|
|||
onboard human users makes it easier for workloads to follow the principle of
|
||||
least privilege.
|
||||
-->
|
||||
- 通常情况下,集群的用户账号可能会从企业数据库进行同步,创建新用户需要特殊权限,并且涉及到复杂的业务流程。
|
||||
- 通常情况下,集群的用户账号可能会从企业数据库进行同步,
|
||||
创建新用户需要特殊权限,并且涉及到复杂的业务流程。
|
||||
服务账号创建有意做得更轻量,允许集群用户为了具体的任务按需创建服务账号。
|
||||
将 ServiceAccount 的创建与新用户注册的步骤分离开来,使工作负载更易于遵从权限最小化原则。
|
||||
将 ServiceAccount 的创建与新用户注册的步骤分离开来,
|
||||
使工作负载更易于遵从权限最小化原则。
|
||||
<!--
|
||||
- Auditing considerations for humans and service accounts may differ; the separation
|
||||
makes that easier to achieve.
|
||||
|
|
@ -186,7 +188,8 @@ There is no specific mechanism to invalidate a token issued via TokenRequest. If
|
|||
trust a bound service account token for a Pod, you can delete that Pod. Deleting a Pod expires
|
||||
its bound service account tokens.
|
||||
-->
|
||||
没有特定的机制可以使通过 TokenRequest 签发的令牌无效。如果你不再信任为某个 Pod 绑定的服务账号令牌,
|
||||
没有特定的机制可以使通过 TokenRequest 签发的令牌无效。
|
||||
如果你不再信任为某个 Pod 绑定的服务账号令牌,
|
||||
你可以删除该 Pod。删除 Pod 将使其绑定的服务账号令牌过期。
|
||||
{{< /note >}}
|
||||
|
||||
|
|
@ -211,9 +214,10 @@ The tokens obtained using this method have bounded lifetimes, and are automatica
|
|||
invalidated when the Pod they are mounted into is deleted.
|
||||
-->
|
||||
在包括 Kubernetes v{{< skew currentVersion >}} 在内最近的几个版本中,使用
|
||||
[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API
|
||||
[直接获得](#bound-service-account-token-volume) API 凭据,并使用投射卷挂载到 Pod 中。
|
||||
使用这种方法获得的令牌具有绑定的生命周期,当挂载的 Pod 被删除时这些令牌将自动失效。
|
||||
[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
|
||||
API [直接获得](#bound-service-account-token-volume) API 凭据,
|
||||
并使用投射卷挂载到 Pod 中。使用这种方法获得的令牌具有绑定的生命周期,
|
||||
当挂载的 Pod 被删除时这些令牌将自动失效。
|
||||
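A bound token of this kind is mounted through a `serviceAccountToken` projected volume source; a minimal sketch (the Pod name, audience-free setup, and path are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo            # hypothetical name
spec:
  serviceAccountName: default
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: my-token           # file name under the mountPath
              expirationSeconds: 3600  # kubelet rotates the token before expiry
```

Deleting `token-demo` invalidates the token projected at `/var/run/secrets/tokens/my-token`.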
|
||||
<!--
|
||||
You can still [manually create](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount) a Secret to hold a service account token; for example, if you need a token that never expires.
|
||||
|
|
@ -223,7 +227,8 @@ Once you manually create a Secret and link it to a ServiceAccount, the Kubernete
|
|||
你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)
|
||||
Secret 来保存服务账号令牌;例如在你需要一个永不过期的令牌的时候。
|
||||
|
||||
一旦你手动创建一个 Secret 并将其关联到 ServiceAccount,Kubernetes 控制平面就会自动将令牌填充到该 Secret 中。
|
||||
一旦你手动创建一个 Secret 并将其关联到 ServiceAccount,
|
||||
Kubernetes 控制平面就会自动将令牌填充到该 Secret 中。
|
||||
|
||||
{{< note >}}
|
||||
<!--
|
||||
|
|
@ -239,6 +244,15 @@ to obtain short-lived API access tokens is recommended instead.
|
|||
<!--
|
||||
## Control plane details
|
||||
|
||||
A ServiceAccount controller manages the ServiceAccounts inside namespaces, and
|
||||
ensures a ServiceAccount named "default" exists in every active namespace.
|
||||
-->
|
||||
## 控制平面细节 {#control-plane-details}
|
||||
|
||||
ServiceAccount 控制器管理名字空间内的 ServiceAccount,
|
||||
并确保每个活跃的名字空间中都存在名为 `default` 的 ServiceAccount。
|
||||
|
||||
<!--
|
||||
### Token controller
|
||||
|
||||
The service account token controller runs as part of `kube-controller-manager`.
|
||||
|
|
@ -251,16 +265,14 @@ This controller acts asynchronously. It:
|
|||
- watches for Secret deletion and removes a reference from the corresponding
|
||||
ServiceAccount if needed.
|
||||
-->
|
||||
## 控制平面细节 {#control-plane-details}
|
||||
|
||||
### 令牌控制器 {#token-controller}
|
||||
|
||||
服务账号令牌控制器作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。
|
||||
其职责包括:
|
||||
|
||||
- 监测 ServiceAccount 的删除并删除所有相应的服务账号令牌 Secret。
|
||||
- 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,如有需要,
|
||||
向 Secret 中添加令牌。
|
||||
- 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,
|
||||
如有需要,向 Secret 中添加令牌。
|
||||
- 监测服务账号令牌 Secret 的删除,如有需要,从相应的 ServiceAccount 中移除引用。
|
||||
|
||||
<!--
|
||||
|
|
@ -271,9 +283,10 @@ Similarly, you must pass the corresponding public key to the `kube-apiserver`
using the `--service-account-key-file` flag. The public key will be used to
verify the tokens during authentication.
-->
你必须通过 `--service-account-private-key-file` 标志为
`kube-controller-manager` 的令牌控制器传入一个服务账号私钥文件。
该私钥用于为所生成的服务账号令牌签名。同样地,你需要通过
`--service-account-key-file` 标志将对应的公钥通知给
kube-apiserver。公钥用于在身份认证过程中校验令牌。
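这一对标志可以按如下方式简要示意地配置(密钥文件路径仅为假设值,
例如 kubeadm 部署的集群中通常为 `/etc/kubernetes/pki/sa.key` 与 `sa.pub`,
请以你的集群实际配置为准):

```shell
# 示意:令牌控制器使用私钥签发令牌,API 服务器使用对应公钥校验令牌
kube-controller-manager \
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  # ……其余标志从略

kube-apiserver \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  # ……其余标志从略
```

两个文件必须是同一密钥对,否则控制器签发的令牌将无法通过 API 服务器的校验。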

<!--
@ -345,7 +358,8 @@ If you want to use the TokenRequest API from `kubectl`, see
你使用 ServiceAccount 的
[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
子资源为该 ServiceAccount 获取有时间限制的令牌。
你不需要调用它来获取在容器中使用的 API 令牌,
因为 kubelet 使用**投射卷**对此进行了设置。

如果你想要从 `kubectl` 使用 TokenRequest API,
请参阅[为 ServiceAccount 手动创建 API 令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)。
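例如,在 Kubernetes v1.24 及更高版本中,可以用 `kubectl create token`
基于 TokenRequest API 为某个 ServiceAccount 申请短期令牌
(下面的 ServiceAccount 名称仅为示例):

```shell
# 为名为 myserviceaccount 的 ServiceAccount 申请一个 1 小时有效的令牌
kubectl create token myserviceaccount --duration=1h
```

输出是一个有过期时间的 JWT,过期后需要重新申请,而不是长期保存。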
@ -441,8 +455,9 @@ updates that Secret with that generated token data.
Here is a sample manifest for such a Secret:
-->
要为 ServiceAccount 创建一个不过期、持久化的 API 令牌,
请创建一个类型为 `kubernetes.io/service-account-token` 的 Secret,
附带引用 ServiceAccount 的注解。控制平面随后生成一个长久的令牌,
并使用生成的令牌数据更新该 Secret。

以下是此类 Secret 的示例清单:
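一个最小的示意清单如下(Secret 与 ServiceAccount 的名称仅为示例;
注解 `kubernetes.io/service-account.name` 必须指向一个已存在的 ServiceAccount):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecretname  # 示例名称
  annotations:
    kubernetes.io/service-account.name: myserviceaccount  # 必须已存在
type: kubernetes.io/service-account-token
```

创建之后,控制平面会将 `token`、`ca.crt` 等数据填充到该 Secret 的 `data` 字段中。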
@ -491,7 +506,7 @@ token: ...
If you launch a new Pod into the `examplens` namespace, it can use the `myserviceaccount`
service-account-token Secret that you just created.
-->
如果你在 `examplens` 名字空间中启动一个新的 Pod,它可以使用你刚刚创建的
`myserviceaccount` service-account-token Secret。

<!--
@ -588,60 +603,6 @@ If you created a namespace `examplens` to experiment with, you can remove it:
kubectl delete namespace examplens
```

<!--
## Control plane details

### ServiceAccount controller

A ServiceAccount controller manages the ServiceAccounts inside namespaces, and
ensures a ServiceAccount named "default" exists in every active namespace.
-->
## 控制平面细节 {#control-plane-details}

### ServiceAccount 控制器 {#serviceaccount-controller}

ServiceAccount 控制器管理名字空间内的 ServiceAccount,并确保每个活跃的名字空间中都存在名为
“default” 的 ServiceAccount。

<!--
### Token controller

The service account token controller runs as part of `kube-controller-manager`.
This controller acts asynchronously. It:

- watches for ServiceAccount creation and creates a corresponding
  ServiceAccount token Secret to allow API access.
- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount
  token Secrets.
- watches for ServiceAccount token Secret addition, and ensures the referenced
  ServiceAccount exists, and adds a token to the Secret if needed.
- watches for Secret deletion and removes a reference from the corresponding
  ServiceAccount if needed.
-->
### 令牌控制器

服务账号令牌控制器作为 `kube-controller-manager` 的一部分运行,以异步的形式工作。
其职责包括:

- 监测 ServiceAccount 的创建并创建相应的服务账号令牌 Secret 以允许 API 访问。
- 监测 ServiceAccount 的删除并删除所有相应的服务账号令牌 Secret。
- 监测服务账号令牌 Secret 的添加,保证相应的 ServiceAccount 存在,如有需要,
  向 Secret 中添加令牌。
- 监测 Secret 的删除,如有需要,从相应的 ServiceAccount 中移除引用。

<!--
You must pass a service account private key file to the token controller in
the `kube-controller-manager` using the `--service-account-private-key-file`
flag. The private key is used to sign generated service account tokens.
Similarly, you must pass the corresponding public key to the `kube-apiserver`
using the `--service-account-key-file` flag. The public key will be used to
verify the tokens during authentication.
-->
你必须通过 `--service-account-private-key-file` 标志为 `kube-controller-manager`
的令牌控制器传入一个服务账号私钥文件。该私钥用于为所生成的服务账号令牌签名。
同样地,你需要通过 `--service-account-key-file` 标志将对应的公钥通知给
kube-apiserver。公钥用于在身份认证过程中校验令牌。

## {{% heading "whatsnext" %}}

<!--
@ -45,6 +45,6 @@ kubelet 监控集群节点上的 CPU、内存、磁盘空间和文件系统 inod
kubelet 可以主动使节点上的一个或多个 Pod 失效,以回收资源并防止饥饿。

<!--
Node-pressure eviction is not the same as [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/).
-->
节点压力驱逐不同于 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)。
@ -162,7 +162,7 @@ If you misspelled `command` as `commnd` then will give an error like this:
例如,运行 `kubectl apply --validate -f mypod.yaml`。
如果 `command` 被误拼成 `commnd`,你将会看到下面的错误信息:

```shell
I0805 10:43:25.129850   46757 schema.go:126] unknown field: commnd
I0805 10:43:25.129973   46757 schema.go:129] this may be a false alarm, see https://github.com/kubernetes/kubernetes/issues/6842
pods/mypod
@ -279,17 +279,17 @@ Verify that the pod's `containerPort` matches up with the Service's `targetPort`
<!--
#### Network traffic is not forwarded

Please see [debugging service](/docs/tasks/debug/debug-application/debug-service/) for more information.
-->
#### 网络流量未被转发 {#network-traffic-is-not-forwarded}

请参阅[调试 Service](/zh-cn/docs/tasks/debug/debug-application/debug-service/) 了解更多信息。

## {{% heading "whatsnext" %}}

<!--
If none of the above solves your problem, follow the instructions in
[Debugging Service document](/docs/tasks/debug/debug-application/debug-service/)
to make sure that your `Service` is running, has `Endpoints`, and your `Pods` are
actually serving; you have DNS working, iptables rules installed, and kube-proxy
does not seem to be misbehaving.
@ -297,7 +297,7 @@ does not seem to be misbehaving.
You may also visit [troubleshooting document](/docs/tasks/debug/) for more information.
-->
如果上述方法都不能解决你的问题,
请按照[调试 Service 文档](/zh-cn/docs/tasks/debug/debug-application/debug-service/)中的介绍,
确保你的 `Service` 处于 Running 状态,有 `Endpoints` 被创建,`Pod` 真的在提供服务;
DNS 服务已配置并正常工作,iptables 规则也已安装,并且 `kube-proxy` 也没有异常行为。
@ -0,0 +1,302 @@
---
title: 探索 Pod 及其端点的终止行为
content_type: tutorial
weight: 60
---
<!--
title: Explore Termination Behavior for Pods And Their Endpoints
content_type: tutorial
weight: 60
-->

<!-- overview -->

<!--
Once you connected your Application with Service following steps
like those outlined in [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/),
you have a continuously running, replicated application, that is exposed on a network.
This tutorial helps you look at the termination flow for Pods and to explore ways to implement
graceful connection draining.
-->
一旦你参照[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)中概述的那些步骤使用
Service 连接到了你的应用,你就有了一个持续运行的多副本应用暴露在了网络上。
本教程帮助你了解 Pod 的终止流程,探索实现连接排空的几种方式。
<!-- body -->

<!--
## Termination process for Pods and their endpoints

There are often cases when you need to terminate a Pod - be it for upgrade or scale down.
In order to improve application availability, it may be important to implement
a proper active connections draining.

This tutorial explains the flow of Pod termination in connection with the
corresponding endpoint state and removal by using
a simple nginx web server to demonstrate the concept.
-->
## Pod 及其端点的终止过程 {#termination-process-for-pods-and-endpoints}

你经常会遇到需要终止 Pod 的场景,例如为了升级或缩容。
为了提高应用的可用性,实现一种合适的活跃连接排空机制变得重要。

本教程将通过使用一个简单的 nginx Web 服务器演示此概念,
解释 Pod 终止的流程及其与相应端点状态和移除的联系。
<!--
## Example flow with endpoint termination

The following is the example of the flow described in the
[Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
document.

Let's say you have a Deployment containing a single `nginx` replica
(just for demonstration purposes) and a Service:
-->
## 端点终止的示例流程 {#example-flow-with-endpoint-termination}

以下是 [Pod 终止](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)文档中所述的流程示例。

假设你有包含单个 nginx 副本(仅用于演示目的)的一个 Deployment 和一个 Service:

{{< codenew file="service/pod-with-graceful-termination.yaml" >}}
<!--
# extra long grace period
# Real life termination may take any time up to terminationGracePeriodSeconds.
# In this example - just hang around for at least the duration of terminationGracePeriodSeconds,
# at 120 seconds container will be forcibly terminated.
# Note, all this time nginx will keep processing requests.
-->
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 120 # 超长优雅期
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # 实际生产环境中的 Pod 终止可能需要执行任何时长,但不会超过 terminationGracePeriodSeconds。
              # 在本例中,只需挂起至少 terminationGracePeriodSeconds 所指定的持续时间,
              # 在 120 秒时容器将被强制终止。
              # 请注意,在这整段时间内 nginx 都将继续处理请求。
              command: [
                "/bin/sh", "-c", "sleep 180"
              ]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

<!--
Once the Pod and Service are running, you can get the name of any associated EndpointSlices:
-->
一旦 Pod 和 Service 开始运行,你就可以获取对应的所有 EndpointSlices 的名称:

```shell
kubectl get endpointslice
```
<!--
The output is similar to this:
-->
输出类似于:

```none
NAME                  ADDRESSTYPE   PORTS   ENDPOINTS                 AGE
nginx-service-6tjbr   IPv4          80      10.12.1.199,10.12.1.201   22m
```
<!--
You can see its status, and validate that there is one endpoint registered:
-->
你可以查看其 status 并验证已经有一个端点被注册:

```shell
kubectl get endpointslices -o json -l kubernetes.io/service-name=nginx-service
```
<!--
The output is similar to this:
-->
输出类似于:

```none
{
  "addressType": "IPv4",
  "apiVersion": "discovery.k8s.io/v1",
  "endpoints": [
    {
      "addresses": [
        "10.12.1.201"
      ],
      "conditions": {
        "ready": true,
        "serving": true,
        "terminating": false
      }
    }
  ]
}
```
<!--
Now let's terminate the Pod and validate that the Pod is being terminated
respecting the graceful termination period configuration:
-->
现在让我们终止这个 Pod 并验证该 Pod 正在遵从体面终止期限的配置进行终止:

```shell
kubectl delete pod nginx-deployment-7768647bf9-b4b9s
```
<!--
All pods:
-->
查看所有 Pod:

```shell
kubectl get pods
```

<!--
The output is similar to this:
-->
输出类似于:

```none
NAME                                READY   STATUS        RESTARTS   AGE
nginx-deployment-7768647bf9-b4b9s   1/1     Terminating   0          4m1s
nginx-deployment-7768647bf9-rkxlw   1/1     Running       0          8s
```
<!--
You can see that the new pod got scheduled.

While the new endpoint is being created for the new Pod, the old endpoint is
still around in the terminating state:
-->
你可以看到新的 Pod 已被调度。

当系统在为新的 Pod 创建新的端点时,旧的端点仍处于 Terminating 状态:

```shell
kubectl get endpointslice -o json nginx-service-6tjbr
```
<!--
The output is similar to this:
-->
输出类似于:

```none
{
  "addressType": "IPv4",
  "apiVersion": "discovery.k8s.io/v1",
  "endpoints": [
    {
      "addresses": [
        "10.12.1.201"
      ],
      "conditions": {
        "ready": false,
        "serving": true,
        "terminating": true
      },
      "nodeName": "gke-main-default-pool-dca1511c-d17b",
      "targetRef": {
        "kind": "Pod",
        "name": "nginx-deployment-7768647bf9-b4b9s",
        "namespace": "default",
        "uid": "66fa831c-7eb2-407f-bd2c-f96dfe841478"
      },
      "zone": "us-central1-c"
    },
    {
      "addresses": [
        "10.12.1.202"
      ],
      "conditions": {
        "ready": true,
        "serving": true,
        "terminating": false
      },
      "nodeName": "gke-main-default-pool-dca1511c-d17b",
      "targetRef": {
        "kind": "Pod",
        "name": "nginx-deployment-7768647bf9-rkxlw",
        "namespace": "default",
        "uid": "722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35"
      },
      "zone": "us-central1-c"
    }
  ]
}
```
<!--
This allows applications to communicate their state during termination
and clients (such as load balancers) to implement a connections draining functionality.
These clients may detect terminating endpoints and implement a special logic for them.
-->
这种设计使得应用可以在终止期间公布自己的状态,而客户端(如负载均衡器)则可以实现连接排空功能。
这些客户端可以检测到正在终止的端点,并为这些端点实现特殊的逻辑。
<!--
In Kubernetes, endpoints that are terminating always have their `ready` status set as `false`.
This needs to happen for backward
compatibility, so existing load balancers will not use it for regular traffic.
If traffic draining on terminating pod is needed, the actual readiness can be
checked as a condition `serving`.

When Pod is deleted, the old endpoint will also be deleted.
-->
在 Kubernetes 中,正在终止的端点始终将其 `ready` 状态设置为 `false`。
这是为了满足向后兼容的需求,确保现有的负载均衡器不会将这类 Pod 用于常规流量。
如果需要排空正被终止的 Pod 上的流量,可以将 `serving` 状况作为实际的就绪状态。

当 Pod 被删除时,旧的端点也会被删除。
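举例来说,下面这样的查询可以简要示意地列出每个端点的地址及其 `serving` 与
`terminating` 状况(Service 名称沿用上文示例,输出格式取决于你的集群状态):

```shell
# 列出 nginx-service 各端点的地址、serving 与 terminating 状况
kubectl get endpointslices -l kubernetes.io/service-name=nginx-service \
  -o jsonpath='{range .items[*].endpoints[*]}{.addresses[0]}{"\t"}{.conditions.serving}{"\t"}{.conditions.terminating}{"\n"}{end}'
```

`serving` 为 `true` 而 `terminating` 也为 `true` 的端点,正处于仍可服务但即将退出的排空阶段。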
## {{% heading "whatsnext" %}}

<!--
* Learn how to [Connect Applications with Services](/docs/tutorials/services/connect-applications-service/)
* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/)
* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
-->
* 了解如何[使用 Service 连接到应用](/zh-cn/docs/tutorials/services/connect-applications-service/)
* 进一步了解[使用 Service 访问集群中的应用](/zh-cn/docs/tasks/access-application-cluster/service-access-application-cluster/)
* 进一步了解[使用 Service 把前端连接到后端](/zh-cn/docs/tasks/access-application-cluster/connecting-frontend-backend/)
* 进一步了解[创建外部负载均衡器](/zh-cn/docs/tasks/access-application-cluster/create-external-load-balancer/)
@ -0,0 +1,32 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 120 # 超长优雅期
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # 实际生产环境中的 Pod 终止可能需要执行任何时长,但不会超过 terminationGracePeriodSeconds。
              # 在本例中,只需挂起至少 terminationGracePeriodSeconds 所指定的持续时间,
              # 在 120 秒时容器将被强制终止。
              # 请注意,在这整段时间内 nginx 都将继续处理请求。
              command: [
                "/bin/sh", "-c", "sleep 180"
              ]
@ -143,7 +143,7 @@ announcements:
      Please read our [announcement](/blog/2023/02/06/k8s-gcr-io-freeze-announcement/) for more details.

  - name: Redirecting k8s.gcr.io - Before
    startTime: 2023-03-10T00:00:00 # This should run before the redirect
    endTime: 2023-03-26T00:00:00
    style: "background: #c70202"
    title: Legacy k8s.gcr.io container image registry will be redirected to registry.k8s.io
@ -153,11 +153,11 @@ announcements:
      Please read our [announcement](/blog/2023/03/10/image-registry-redirect/) for more details.

  - name: Redirecting k8s.gcr.io - After
    startTime: 2023-03-27T00:00:00 # This should run after the redirect begins
    endTime: 2023-05-31T00:00:00
    style: "background: #c70202"
    title: Legacy k8s.gcr.io container image registry is being redirected to registry.k8s.io
    message: |
      k8s.gcr.io image registry is gradually being redirected to registry.k8s.io (since Monday March 20th).<br>
      All images available in k8s.gcr.io are available at registry.k8s.io.<br>
      Please read our [announcement](/blog/2023/03/10/image-registry-redirect/) for more details.