Merge pull request #50366 from hacktivist123/merged-main-dev-1.33

Merged main branch into dev-1.33
Rey Lejano 2025-04-07 11:26:00 -07:00 committed by GitHub
commit 930294a0f4
126 changed files with 6105 additions and 2183 deletions


@@ -163,16 +163,6 @@ div.feature-state-notice {
background-color: rgba(255, 255, 255, 0.25);
}
/* Sidebar menu */
#td-sidebar-menu {
#m-docs span, small {
visibility: hidden;
}
#m-docs small {
visibility: collapse; // if supported
}
}
/* Styles for CVE table */
table tr.cve-status-open, table tr.cve-status-unknown {
> td.cve-item-summary {


@@ -6,8 +6,6 @@ sitemap:
priority: 1.0
---
{{< site-searchbar >}}
{{< blocks/section class="k8s-overview" >}}
{{% blocks/feature image="flower" id="feature-primary" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.


@@ -0,0 +1,54 @@
---
layout: blog
title: "Ingress-nginx CVE-2025-1974: What You Need to Know"
date: 2025-03-24T12:00:00-08:00
slug: ingress-nginx-CVE-2025-1974
author: >
Tabitha Sable (Kubernetes Security Response Committee)
---
Today, the ingress-nginx maintainers have [released patches for a batch of critical vulnerabilities](https://github.com/kubernetes/ingress-nginx/releases) that could make it easy for attackers to take over your Kubernetes cluster. If you are among the over 40% of Kubernetes administrators using [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), you should take action immediately to protect your users and data.
## Background
[Ingress](/docs/concepts/services-networking/ingress/) is the traditional Kubernetes feature for exposing your workload Pods to the world so that they can be useful. In an implementation-agnostic way, Kubernetes users can define how their applications should be made available on the network. Then, an [ingress controller](/docs/concepts/services-networking/ingress-controllers/) uses that definition to set up local or cloud resources as required for the user's particular situation and needs.
Many different ingress controllers are available, to suit users of different cloud providers or brands of load balancers. Ingress-nginx is a software-only ingress controller provided by the Kubernetes project. Because of its versatility and ease of use, ingress-nginx is quite popular: it is deployed in over 40% of Kubernetes clusters!
Ingress-nginx translates the requirements from Ingress objects into configuration for nginx, a powerful open source webserver daemon. Then, nginx uses that configuration to accept and route requests to the various applications running within a Kubernetes cluster. Proper handling of these nginx configuration parameters is crucial, because ingress-nginx needs to allow users significant flexibility while preventing them from accidentally or intentionally tricking nginx into doing things it shouldn't.
## Vulnerabilities Patched Today
Four of today's ingress-nginx vulnerabilities are improvements to how ingress-nginx handles particular bits of nginx config. Without these fixes, a specially-crafted Ingress object can cause nginx to misbehave in various ways, including revealing the values of [Secrets](/docs/concepts/configuration/secret/) that are accessible to ingress-nginx. By default, ingress-nginx has access to all Secrets cluster-wide, so this can often lead to complete cluster takeover by any user or entity that has permission to create an Ingress.
The most serious of today's vulnerabilities, [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009), rated [9.8 CVSS](https://www.first.org/cvss/calculator/3-1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), allows anything on the Pod network to exploit configuration injection vulnerabilities via the Validating Admission Controller feature of ingress-nginx. This makes such vulnerabilities far more dangerous: ordinarily one would need to be able to create an Ingress object in the cluster, which is a fairly privileged action. When combined with today's other vulnerabilities, **CVE-2025-1974 means that anything on the Pod network has a good chance of taking over your Kubernetes cluster, with no credentials or administrative access required**. In many common scenarios, the Pod network is accessible to all workloads in your cloud VPC, or even anyone connected to your corporate network! This is a very serious situation.
Today, we have [released ingress-nginx v1.12.1 and v1.11.5](https://github.com/kubernetes/ingress-nginx/releases), which have fixes for all five of these vulnerabilities.
## Your next steps
First, determine if your clusters are using ingress-nginx. In most cases, you can check this by running `kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx` with cluster administrator permissions.
**If you are using ingress-nginx, make a plan to remediate these vulnerabilities immediately.**
**The best and easiest remedy is to [upgrade to the new patch release of ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/upgrade/).** All five of today's vulnerabilities are fixed by installing today's patches.
If you can't upgrade right away, you can significantly reduce your risk by turning off the Validating Admission Controller feature of ingress-nginx.
* If you have installed ingress-nginx using Helm
  * Reinstall, setting the Helm value `controller.admissionWebhooks.enabled=false` (a values sketch follows this list)
* If you have installed ingress-nginx manually
  * Delete the ValidatingWebhookConfiguration called `ingress-nginx-admission`
  * Edit the `ingress-nginx-controller` Deployment or DaemonSet, removing `--validating-webhook` from the controller container's argument list
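If you manage ingress-nginx with the community Helm chart, a minimal values override for this mitigation could look like the following sketch; re-enable the webhook once you have upgraded.

```yaml
# values-mitigation.yaml (sketch): temporarily disable the ingress-nginx
# Validating Admission Controller as a mitigation for CVE-2025-1974.
# Remove this override once you have upgraded to ingress-nginx v1.12.1 or v1.11.5.
controller:
  admissionWebhooks:
    enabled: false
```

Apply it with `helm upgrade` against your existing release, then confirm that the `ingress-nginx-admission` ValidatingWebhookConfiguration is no longer present in the cluster.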
If you turn off the Validating Admission Controller feature as a mitigation for CVE-2025-1974, remember to turn it back on after you upgrade. This feature provides important quality of life improvements for your users, warning them about incorrect Ingress configurations before they can take effect.
## Conclusion, thanks, and further reading
The ingress-nginx vulnerabilities announced today, including CVE-2025-1974, present a serious risk to many Kubernetes users and their data. If you use ingress-nginx, you should take action immediately to keep yourself safe.
Thanks go out to Nir Ohfeld, Sagi Tzadik, Ronen Shustin, and Hillai Ben-Sasson from Wiz for responsibly disclosing these vulnerabilities, and for working with the Kubernetes SRC members and ingress-nginx maintainers (Marco Ebert and James Strong) to ensure we fixed them effectively.
For further information about the maintenance and future of ingress-nginx, please see this [GitHub issue](https://github.com/kubernetes/ingress-nginx/issues/13002) and/or attend [James and Marco's KubeCon/CloudNativeCon EU 2025 presentation](https://kccnceu2025.sched.com/event/1tcyc/).
For further information about the specific vulnerabilities discussed in this article, please see the appropriate GitHub issue: [CVE-2025-24513](https://github.com/kubernetes/kubernetes/issues/131005), [CVE-2025-24514](https://github.com/kubernetes/kubernetes/issues/131006), [CVE-2025-1097](https://github.com/kubernetes/kubernetes/issues/131007), [CVE-2025-1098](https://github.com/kubernetes/kubernetes/issues/131008), or [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009).


@@ -1,7 +1,7 @@
---
layout: blog
title: "Fresh Swap Features for Linux Users in Kubernetes 1.32"
date: 2025-03-24T10:00:00-08:00
date: 2025-03-25T10:00:00-08:00
slug: swap-linux-improvements
author: >
Itamar Holder (Red Hat)


@@ -1,7 +1,7 @@
---
layout: blog
title: 'Kubernetes v1.33 sneak peek'
date: 2025-03-24
date: 2025-03-26T10:30:00-08:00
slug: kubernetes-v1-33-upcoming-changes
author: >
Agustina Barbetta,


@@ -0,0 +1,219 @@
---
layout: blog
title: "Introducing kube-scheduler-simulator"
date: 2025-04-07
draft: false
slug: introducing-kube-scheduler-simulator
author: Kensei Nakada (Tetrate)
---
The Kubernetes Scheduler is a crucial control plane component that determines which node a Pod will run on.
Thus, anyone utilizing Kubernetes relies on a scheduler.
[kube-scheduler-simulator](https://github.com/kubernetes-sigs/kube-scheduler-simulator) is a _simulator_ for the Kubernetes scheduler that started as a [Google Summer of Code 2021](https://summerofcode.withgoogle.com/) project developed by me (Kensei Nakada) and has since received many contributions.
This tool allows users to closely examine the scheduler's behavior and decisions.
It is useful for casual users who employ scheduling constraints (for example, [inter-Pod affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity))
and experts who extend the scheduler with custom plugins.
## Motivation
The scheduler often appears as a black box,
composed of many plugins that each contribute to the scheduling decision-making process from their unique perspectives.
Understanding its behavior can be challenging due to the multitude of factors it considers.
Even if a Pod appears to be scheduled correctly in a simple test cluster, it might have been scheduled based on different calculations than expected. This discrepancy could lead to unexpected scheduling outcomes when deployed in a large production environment.
Also, testing a scheduler is a complex challenge.
There are countless patterns of operations executed within a real cluster, making it unfeasible to anticipate every scenario with a finite number of tests.
More often than not, bugs are discovered only when the scheduler is deployed in an actual cluster.
In fact, even in the upstream kube-scheduler, many bugs are found by users only after a release ships.
Having a development or sandbox environment for testing the scheduler — or, indeed, any Kubernetes controllers — is a common practice.
However, this approach falls short of capturing all the potential scenarios that might arise in a production cluster
because a development cluster is often much smaller with notable differences in workload sizes and scaling dynamics.
It never sees the exact same use or exhibits the same behavior as its production counterpart.
The kube-scheduler-simulator aims to solve those problems.
It enables users to test their scheduling constraints, scheduler configurations,
and custom plugins while checking every detailed part of scheduling decisions.
It also allows users to create a simulated cluster environment, where they can test their scheduler
with the same resources as their production cluster without affecting actual workloads.
## Features of the kube-scheduler-simulator
The kube-scheduler-simulator's core feature is its ability to expose the scheduler's internal decisions.
The scheduler operates based on the [scheduling framework](/docs/concepts/scheduling-eviction/scheduling-framework/),
using various plugins at different extension points to
filter nodes (Filter phase), score nodes (Score phase), and ultimately determine the best node for the Pod.
The simulator allows users to create Kubernetes resources and observe how each plugin influences the scheduling decisions for Pods.
This visibility helps users understand the scheduler's workings and define appropriate scheduling constraints.
{{< figure src="/images/blog/2025-04-07-kube-scheduler-simulator/simulator.png" alt="Screenshot of the simulator web frontend that shows the detailed scheduling results per node and per extension point" title="The simulator web frontend" >}}
Inside the simulator, a debuggable scheduler runs instead of the vanilla scheduler.
This debuggable scheduler outputs the results of each scheduler plugin at every extension point to the Pod's annotations, as the following manifest shows,
and the web front end formats/visualizes the scheduling results based on these annotations.
```yaml
kind: Pod
apiVersion: v1
metadata:
# The JSONs within these annotations are manually formatted for clarity in the blog post.
annotations:
kube-scheduler-simulator.sigs.k8s.io/bind-result: '{"DefaultBinder":"success"}'
kube-scheduler-simulator.sigs.k8s.io/filter-result: >-
{
"node-jjfg5":{
"NodeName":"passed",
"NodeResourcesFit":"passed",
"NodeUnschedulable":"passed",
"TaintToleration":"passed"
},
"node-mtb5x":{
"NodeName":"passed",
"NodeResourcesFit":"passed",
"NodeUnschedulable":"passed",
"TaintToleration":"passed"
}
}
kube-scheduler-simulator.sigs.k8s.io/finalscore-result: >-
{
"node-jjfg5":{
"ImageLocality":"0",
"NodeAffinity":"0",
"NodeResourcesBalancedAllocation":"52",
"NodeResourcesFit":"47",
"TaintToleration":"300",
"VolumeBinding":"0"
},
"node-mtb5x":{
"ImageLocality":"0",
"NodeAffinity":"0",
"NodeResourcesBalancedAllocation":"76",
"NodeResourcesFit":"73",
"TaintToleration":"300",
"VolumeBinding":"0"
}
}
kube-scheduler-simulator.sigs.k8s.io/permit-result: '{}'
kube-scheduler-simulator.sigs.k8s.io/permit-result-timeout: '{}'
kube-scheduler-simulator.sigs.k8s.io/postfilter-result: '{}'
kube-scheduler-simulator.sigs.k8s.io/prebind-result: '{"VolumeBinding":"success"}'
kube-scheduler-simulator.sigs.k8s.io/prefilter-result: '{}'
kube-scheduler-simulator.sigs.k8s.io/prefilter-result-status: >-
{
"AzureDiskLimits":"",
"EBSLimits":"",
"GCEPDLimits":"",
"InterPodAffinity":"",
"NodeAffinity":"",
"NodePorts":"",
"NodeResourcesFit":"success",
"NodeVolumeLimits":"",
"PodTopologySpread":"",
"VolumeBinding":"",
"VolumeRestrictions":"",
"VolumeZone":""
}
kube-scheduler-simulator.sigs.k8s.io/prescore-result: >-
{
"InterPodAffinity":"",
"NodeAffinity":"success",
"NodeResourcesBalancedAllocation":"success",
"NodeResourcesFit":"success",
"PodTopologySpread":"",
"TaintToleration":"success"
}
kube-scheduler-simulator.sigs.k8s.io/reserve-result: '{"VolumeBinding":"success"}'
kube-scheduler-simulator.sigs.k8s.io/result-history: >-
[
{
"kube-scheduler-simulator.sigs.k8s.io/bind-result":"{\"DefaultBinder\":\"success\"}",
"kube-scheduler-simulator.sigs.k8s.io/filter-result":"{\"node-jjfg5\":{\"NodeName\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"TaintToleration\":\"passed\"},\"node-mtb5x\":{\"NodeName\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"TaintToleration\":\"passed\"}}",
"kube-scheduler-simulator.sigs.k8s.io/finalscore-result":"{\"node-jjfg5\":{\"ImageLocality\":\"0\",\"NodeAffinity\":\"0\",\"NodeResourcesBalancedAllocation\":\"52\",\"NodeResourcesFit\":\"47\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"},\"node-mtb5x\":{\"ImageLocality\":\"0\",\"NodeAffinity\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"}}",
"kube-scheduler-simulator.sigs.k8s.io/permit-result":"{}",
"kube-scheduler-simulator.sigs.k8s.io/permit-result-timeout":"{}",
"kube-scheduler-simulator.sigs.k8s.io/postfilter-result":"{}",
"kube-scheduler-simulator.sigs.k8s.io/prebind-result":"{\"VolumeBinding\":\"success\"}",
"kube-scheduler-simulator.sigs.k8s.io/prefilter-result":"{}",
"kube-scheduler-simulator.sigs.k8s.io/prefilter-result-status":"{\"AzureDiskLimits\":\"\",\"EBSLimits\":\"\",\"GCEPDLimits\":\"\",\"InterPodAffinity\":\"\",\"NodeAffinity\":\"\",\"NodePorts\":\"\",\"NodeResourcesFit\":\"success\",\"NodeVolumeLimits\":\"\",\"PodTopologySpread\":\"\",\"VolumeBinding\":\"\",\"VolumeRestrictions\":\"\",\"VolumeZone\":\"\"}",
"kube-scheduler-simulator.sigs.k8s.io/prescore-result":"{\"InterPodAffinity\":\"\",\"NodeAffinity\":\"success\",\"NodeResourcesBalancedAllocation\":\"success\",\"NodeResourcesFit\":\"success\",\"PodTopologySpread\":\"\",\"TaintToleration\":\"success\"}",
"kube-scheduler-simulator.sigs.k8s.io/reserve-result":"{\"VolumeBinding\":\"success\"}",
"kube-scheduler-simulator.sigs.k8s.io/score-result":"{\"node-jjfg5\":{\"ImageLocality\":\"0\",\"NodeAffinity\":\"0\",\"NodeResourcesBalancedAllocation\":\"52\",\"NodeResourcesFit\":\"47\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"},\"node-mtb5x\":{\"ImageLocality\":\"0\",\"NodeAffinity\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"TaintToleration\":\"0\",\"VolumeBinding\":\"0\"}}",
"kube-scheduler-simulator.sigs.k8s.io/selected-node":"node-mtb5x"
}
]
kube-scheduler-simulator.sigs.k8s.io/score-result: >-
{
"node-jjfg5":{
"ImageLocality":"0",
"NodeAffinity":"0",
"NodeResourcesBalancedAllocation":"52",
"NodeResourcesFit":"47",
"TaintToleration":"0",
"VolumeBinding":"0"
},
"node-mtb5x":{
"ImageLocality":"0",
"NodeAffinity":"0",
"NodeResourcesBalancedAllocation":"76",
"NodeResourcesFit":"73",
"TaintToleration":"0",
"VolumeBinding":"0"
}
}
kube-scheduler-simulator.sigs.k8s.io/selected-node: node-mtb5x
```
Users can also integrate [their custom plugins](/docs/concepts/scheduling-eviction/scheduling-framework/) or [extenders](https://github.com/kubernetes/design-proposals-archive/blob/main/scheduling/scheduler_extender.md) into the debuggable scheduler and visualize their results.
This debuggable scheduler can also run standalone, for example, on any Kubernetes cluster or in integration tests.
This is useful for custom plugin developers who want to test their plugins or examine their custom scheduler in a real cluster with better debuggability.
## The simulator as a better dev cluster
As mentioned earlier, with a limited set of tests, it is impossible to predict every possible scenario in a real-world cluster.
Typically, users will test the scheduler in a small development cluster before deploying it to production, hoping that no issues arise.
[The simulator's importing feature](https://github.com/kubernetes-sigs/kube-scheduler-simulator/blob/master/simulator/docs/import-cluster-resources.md)
provides a solution by allowing users to simulate deploying a new scheduler version in a production-like environment without impacting their live workloads.
By continuously syncing between a production cluster and the simulator, users can safely test a new scheduler version with the same resources their production cluster handles.
Once confident in its performance, they can proceed with the production deployment, reducing the risk of unexpected issues.
## What are the use cases?
1. **Cluster users**: Examine whether scheduling constraints (for example, PodAffinity, PodTopologySpread) work as intended (see the example manifest after this list).
1. **Cluster admins**: Assess how a cluster would behave with changes to the scheduler configuration.
1. **Scheduler plugin developers**: Test custom scheduler plugins or extenders, use the debuggable scheduler in integration tests or development clusters, or use the [syncing](https://github.com/kubernetes-sigs/kube-scheduler-simulator/blob/simulator/v0.3.0/simulator/docs/import-cluster-resources.md) feature for testing within a production-like environment.
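For example, a cluster user might load a manifest like this sketch (the names and image are illustrative) into the simulator to see how each plugin evaluates a topology spread constraint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraint-check
  labels:
    app: constraint-check
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname   # spread replicas across nodes
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: constraint-check
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```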
## Getting started
The simulator only requires Docker to be installed on a machine; a Kubernetes cluster is not necessary.
```shell
git clone git@github.com:kubernetes-sigs/kube-scheduler-simulator.git
cd kube-scheduler-simulator
make docker_up
```
You can then access the simulator's web UI at `http://localhost:3000`.
Visit the [kube-scheduler-simulator repository](https://sigs.k8s.io/kube-scheduler-simulator) for more details!
## Getting involved
The scheduler simulator is developed by [Kubernetes SIG Scheduling](https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md#kube-scheduler-simulator). Your feedback and contributions are welcome!
Open issues or PRs at the [kube-scheduler-simulator repository](https://sigs.k8s.io/kube-scheduler-simulator).
Join the conversation on the [#sig-scheduling](https://kubernetes.slack.com/messages/sig-scheduling) Slack channel.
## Acknowledgments
The simulator has been maintained by dedicated volunteer engineers, overcoming many challenges to reach its current form.
A big shout out to all [the awesome contributors](https://github.com/kubernetes-sigs/kube-scheduler-simulator/graphs/contributors)!


@@ -16,21 +16,21 @@ case_study_details:
<h2>Challenge</h2>
<p>A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, <a href="http://www.huawei.com/">Huawei</a> has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."</p>
<p>A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, <a href="https://www.huawei.com/">Huawei</a> has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."</p>
<h2>Solution</h2>
<p>After deciding to use container technology, Huawei began moving the internal I.T. department's applications to run on <a href="http://kubernetes.io/">Kubernetes</a>. So far, about 30 percent of these applications have been transferred to cloud native.</p>
<p>After deciding to use container technology, Huawei began moving the internal I.T. department's applications to run on <a href="https://kubernetes.io/">Kubernetes</a>. So far, about 30 percent of these applications have been transferred to cloud native.</p>
<h2>Impact</h2>
<p>"By the end of 2016, Huawei's internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally and the demand it is seeing externally the company has also built the technologies into <a href="http://developer.huawei.com/ict/en/site-paas">FusionStage™</a>, the PaaS solution it offers its customers.</p>
<p>"By the end of 2016, Huawei's internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally and the demand it is seeing externally the company has also built the technologies into <a href="https://support.huawei.com/enterprise/en/cloud-computing/fusionstage-pid-21733180">FusionStage™</a>, the PaaS solution it offers its customers.</p>
{{< case-studies/quote author="Peixin Hou, chief software architect and community director for open source" >}}
"If you're a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology."
{{< /case-studies/quote >}}
<p>Huawei's Kubernetes journey began with one developer. Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested in <a href="http://kubernetes.io/">Kubernetes</a>, the technology for managing application containers across clusters of hosts, and started contributing to its open source community. As the technology developed and the community grew, he kept telling his managers about it.</p>
<p>Huawei's Kubernetes journey began with one developer. Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested in <a href="https://kubernetes.io/">Kubernetes</a>, the technology for managing application containers across clusters of hosts, and started contributing to its open source community. As the technology developed and the community grew, he kept telling his managers about it.</p>
<p>And as fate would have it, at the same time, Huawei was looking for a better orchestration system for its internal enterprise I.T. department, which supports every business flow processing. "We have more than 180,000 employees worldwide, and a complicated internal procedure, so probably every week this department needs to develop some new applications," says Peixin Hou, Huawei's Chief Software Architect and Community Director for Open Source. "Very often our I.T. departments need to launch tens of thousands of containers, with tasks running across thousands of nodes across the world. It's very much a distributed system, so we found that managing all of the tasks in a more consistent way is always a challenge."</p>
@@ -46,7 +46,7 @@ case_study_details:
<p>Pleased with those initial results, and seeing a demand for cloud native technologies from its customers, Huawei doubled down on Kubernetes. In the spring of 2016, the company became not only a user but also a vendor.</p>
<p>"We built the Kubernetes technologies into our solutions," says Hou, referring to Huawei's <a href="http://developer.huawei.com/ict/en/site-paas">FusionStage™</a> PaaS offering. "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them. We've started to work with some Chinese banks, and we see a lot of interest from our customers like <a href="http://www.chinamobileltd.com/">China Mobile</a> and <a href="https://www.telekom.com/en">Deutsche Telekom</a>."</p>
<p>"We built the Kubernetes technologies into our solutions," says Hou, referring to Huawei's <a href="https://support.huawei.com/enterprise/en/cloud-computing/fusionstage-pid-21733180">FusionStage™</a> PaaS offering. "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them. We've started to work with some Chinese banks, and we see a lot of interest from our customers like <a href="https://www.chinamobileltd.com/">China Mobile</a> and <a href="https://www.telekom.com/en">Deutsche Telekom</a>."</p>
<p>"If you're just a user, you're just a user," adds Hou. "But if you're a vendor, in order to even convince your customers, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology. We provide customer wisdom." While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huawei's solutions. It's a big selling point that most of the public cloud providers now support Kubernetes. "This makes the cross-cloud transition much easier than with other solutions," says Hou.</p>
@@ -66,7 +66,7 @@ case_study_details:
"In the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There's still 20 percent that's not, but it's fine. If we can make 80 percent of our workload really be cloud native, to have agility, it's a much better world at the end of the day."
{{< /case-studies/quote >}}
<p>In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes, not least of all the ones that Huawei is contributing to. Huawei engineers have worked on the federation feature (which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling, container networking and storage, and a just-announced technology called <a href="http://containerops.org/">Container Ops</a>, which is a DevOps pipeline engine. "This will put every DevOps job into a container," he explains. "And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes. With that mechanism, we can make the containerized DevOps jobs be created, shared and managed much more easily than before."</p>
<p>In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes, not least of all the ones that Huawei is contributing to. Huawei engineers have worked on the federation feature (which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling, container networking and storage, and a just-announced technology called <a href="https://containerops.org/">Container Ops</a>, which is a DevOps pipeline engine. "This will put every DevOps job into a container," he explains. "And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes. With that mechanism, we can make the containerized DevOps jobs be created, shared and managed much more easily than before."</p>
<p>Still, Hou sees this technology as only halfway to its full potential. First and foremost, he'd like to expand the scale it can orchestrate, which is important for supersized companies like Huawei as well as some of its customers.</p>


@@ -0,0 +1,45 @@
---
title: Kubernetes Self-Healing
content_type: concept
weight: 50
---
<!-- overview -->
Kubernetes is designed with self-healing capabilities that help maintain the health and availability of workloads.
It automatically replaces failed containers, reschedules workloads when nodes become unavailable, and ensures that the desired state of the system is maintained.
<!-- body -->
## Self-Healing capabilities {#self-healing-capabilities}
- **Container-level restarts:** If a container inside a Pod fails, Kubernetes restarts it based on the [`restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy).
- **Replica replacement:** If a Pod in a [Deployment](/docs/concepts/workloads/controllers/deployment/) or [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) fails, Kubernetes creates a replacement Pod to maintain the specified number of replicas.
If a Pod that is part of a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) fails, the control plane
creates a replacement Pod to run on the same node.
- **Persistent storage recovery:** If a node is running a Pod with a PersistentVolume (PV) attached, and the node fails, Kubernetes can reattach the volume to a new Pod on a different node.
- **Load balancing for Services:** If a Pod behind a [Service](/docs/concepts/services-networking/service/) fails, Kubernetes automatically removes it from the Service's endpoints to route traffic only to healthy Pods.
Here are some of the key components that provide Kubernetes self-healing:
- **[kubelet](/docs/concepts/architecture/#kubelet):** Ensures that containers are running, and restarts those that fail.
- **ReplicaSet, StatefulSet and DaemonSet controllers:** Maintain the desired number of Pod replicas.
- **PersistentVolume controller:** Manages volume attachment and detachment for stateful workloads.
## Considerations {#considerations}
- **Storage Failures:** If a persistent volume becomes unavailable, recovery steps may be required.
- **Application Errors:** Kubernetes can restart containers, but underlying application issues must be addressed separately.
## {{% heading "whatsnext" %}}
- Read more about [Pods](/docs/concepts/workloads/pods/)
- Learn about [Kubernetes Controllers](/docs/concepts/architecture/controller/)
- Explore [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
- Read about [node autoscaling](/docs/concepts/cluster-administration/node-autoscaling/). Node autoscaling
also provides automatic healing if or when nodes fail in your cluster.


@@ -77,6 +77,10 @@ Before choosing a guide, here are some considerations:
explains plug-ins which intercept requests to the Kubernetes API server after authentication
and authorization.
* [Admission Webhook Good Practices](/docs/concepts/cluster-administration/admission-webhooks-good-practices/)
provides good practices and considerations when designing mutating admission
webhooks and validating admission webhooks.
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/)
describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.


@@ -0,0 +1,648 @@
---
title: Admission Webhook Good Practices
description: >
Recommendations for designing and deploying admission webhooks in Kubernetes.
content_type: concept
weight: 60
---
<!-- overview -->
This page provides good practices and considerations when designing
_admission webhooks_ in Kubernetes. This information is intended for
cluster operators who run admission webhook servers or third-party applications
that modify or validate your API requests.
Before reading this page, ensure that you're familiar with the following
concepts:
* [Admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
* [Admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)
<!-- body -->
## Importance of good webhook design {#why-good-webhook-design-matters}
Admission control occurs when any create, update, or delete request
is sent to the Kubernetes API. Admission controllers intercept requests that
match specific criteria that you define. These requests are then sent to
mutating admission webhooks or validating admission webhooks. These webhooks are
often written to ensure that specific fields in object specifications exist or
have specific allowed values.
Webhooks are a powerful mechanism to extend the Kubernetes API. Badly-designed
webhooks often result in workload disruptions because of how much control
the webhooks have over objects in the cluster. Like other API extension
mechanisms, webhooks are challenging to test at scale for compatibility with
all of your workloads, other webhooks, add-ons, and plugins.
Additionally, with every release, Kubernetes adds or modifies the API with new
features, feature promotions to beta or stable status, and deprecations. Even
stable Kubernetes APIs are likely to change. For example, the `Pod` API changed
in v1.29 to add the
[Sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/) feature.
While it's rare for a Kubernetes object to enter a broken state because of a new
Kubernetes API, webhooks that worked as expected with earlier versions of an API
might not be able to reconcile more recent changes to that API. This can result
in unexpected behavior after you upgrade your clusters to newer versions.
This page describes common webhook failure scenarios and how to avoid them by
cautiously and thoughtfully designing and implementing your webhooks.
## Identify whether you use admission webhooks {#identify-admission-webhooks}
Even if you don't run your own admission webhooks, some third-party applications
that you run in your clusters might use mutating or validating admission
webhooks.
To check whether your cluster has any mutating admission webhooks, run the
following command:
```shell
kubectl get mutatingwebhookconfigurations
```
The output lists any mutating admission controllers in the cluster.
To check whether your cluster has any validating admission webhooks, run the
following command:
```shell
kubectl get validatingwebhookconfigurations
```
The output lists any validating admission controllers in the cluster.
## Choose an admission control mechanism {#choose-admission-mechanism}
Kubernetes includes multiple admission control and policy enforcement options.
Knowing when to use a specific option can help you to improve latency and
performance, reduce management overhead, and avoid issues during version
upgrades. The following table describes the mechanisms that let you mutate or
validate resources during admission:
<!-- This table is HTML because it uses unordered lists for readability. -->
<table>
<caption>Mutating and validating admission control in Kubernetes</caption>
<thead>
<tr>
<th>Mechanism</th>
<th>Description</th>
<th>Use cases</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="/docs/reference/access-authn-authz/extensible-admission-controllers/">Mutating admission webhook</a></td>
<td>Intercept API requests before admission and modify as needed using
custom logic.</td>
<td><ul>
<li>Make critical modifications that must happen before resource
admission.</li>
<li>Make complex modifications that require advanced logic, like calling
external APIs.</li>
</ul></td>
</tr>
<tr>
<td><a href="/docs/reference/access-authn-authz/mutating-admission-policy/">Mutating admission policy</a></td>
<td>Intercept API requests before admission and modify as needed using
Common Expression Language (CEL) expressions.</td>
<td><ul>
<li>Make critical modifications that must happen before resource
admission.</li>
<li>Make simple modifications, such as adjusting labels or replica
counts.</li>
</ul></td>
</tr>
<tr>
<td><a href="/docs/reference/access-authn-authz/extensible-admission-controllers/">Validating admission webhook</a></td>
<td>Intercept API requests before admission and validate against complex
policy declarations.</td>
<td><ul>
<li>Validate critical configurations before resource admission.</li>
<li>Enforce complex policy logic before admission.</li>
</ul></td>
</tr>
<tr>
<td><a href="/docs/reference/access-authn-authz/validating-admission-policy/">Validating admission policy</a></td>
<td>Intercept API requests before admission and validate against CEL
expressions.</td>
<td><ul>
<li>Validate critical configurations before resource admission.</li>
<li>Enforce policy logic using CEL expressions.</li>
</ul></td>
</tr>
</tbody>
</table>
In general, use _webhook_ admission control when you want an extensible way to
declare or configure the logic. Use built-in CEL-based admission control when
you want to declare simpler logic without the overhead of running a webhook
server. The Kubernetes project recommends that you use CEL-based admission
control when possible.
### Use built-in validation and defaulting for CustomResourceDefinitions {#no-crd-validation-defaulting}
If you use
{{< glossary_tooltip text="CustomResourceDefinitions" term_id="customresourcedefinition" >}},
don't use admission webhooks to validate values in CustomResource specifications
or to set default values for fields. Kubernetes lets you define validation rules
and default field values when you create CustomResourceDefinitions.
To learn more, see the following resources:
* [Validation rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
* [Defaulting](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting)
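As a rough illustration, a hypothetical CRD can express both defaulting and validation (including a CEL rule) directly in its schema, with no webhook involved:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # hypothetical resource
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  minimum: 1         # built-in OpenAPI validation
                  default: 3         # built-in defaulting
              x-kubernetes-validations:          # CEL validation rule
                - rule: "!has(self.replicas) || self.replicas <= 100"
                  message: "replicas must not exceed 100"
```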
## Performance and latency {#performance-latency}
This section describes recommendations for improving performance and reducing
latency. In summary, these are as follows:
* Consolidate webhooks and limit the number of API calls per webhook.
* Use audit logs to check for webhooks that repeatedly do the same action.
* Use load balancing for webhook availability.
* Set a small timeout value for each webhook.
* Consider cluster availability needs during webhook design.
### Design admission webhooks for low latency {#design-admission-webhooks-low-latency}
Mutating admission webhooks are called in sequence. Depending on the mutating
webhook setup, some webhooks might be called multiple times. Every mutating
webhook call adds latency to the admission process. This is unlike validating
webhooks, which get called in parallel.
When designing your mutating webhooks, consider your latency requirements and
tolerance. The more mutating webhooks there are in your cluster, the greater the
chance of latency increases.
Consider the following to reduce latency:
* Consolidate webhooks that perform a similar mutation on different objects.
* Reduce the number of API calls made in the mutating webhook server logic.
* Limit the match conditions of each mutating webhook to reduce how many
webhooks are triggered by a specific API request.
* Consolidate small webhooks into one server and configuration to help with
ordering and organization.
### Prevent loops caused by competing controllers {#prevent-loops-competing-controllers}
Consider any other components that run in your cluster that might conflict with
the mutations that your webhook makes. For example, if your webhook adds a label
that a different controller removes, your webhook gets called again. This leads
to a loop.
To detect these loops, try the following:
1. Update your cluster audit policy to log audit events, using the following
   parameters (an example policy sketch follows this list):
* `level`: `RequestResponse`
* `verbs`: `["patch"]`
* `omitStages`: `RequestReceived`
Set the audit rule to create events for the specific resources that your
webhook mutates.
1. Check your audit events for webhooks being reinvoked multiple times with the
same patch being applied to the same object, or for an object having
a field updated and reverted multiple times.
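A minimal audit policy rule along those lines might look like the following sketch; the `apps/deployments` resource is only an example, so match whatever resources your webhook mutates:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  - level: RequestResponse
    verbs: ["patch"]
    resources:
      - group: "apps"
        resources: ["deployments"]
```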
### Set a small timeout value {#small-timeout}
Admission webhooks should evaluate as quickly as possible (typically in
milliseconds), since they add to API request latency. Use a small timeout for
webhooks.
For details, see
[Timeouts](/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts).
### Use a load balancer to ensure webhook availability {#load-balancer-webhook}
Admission webhooks should leverage some form of load-balancing to provide high
availability and performance benefits. If a webhook is running within the
cluster, you can run multiple webhook backends behind a Service of type
`ClusterIP`.
### Use a high-availability deployment model {#ha-deployment}
Consider your cluster's availability requirements when designing your webhook.
For example, during node downtime or zonal outages, Kubernetes marks Pods as
`NotReady` to allow load balancers to reroute traffic to available zones and
nodes. These updates to Pods might trigger your mutating webhooks. Depending on
the number of affected Pods, the mutating webhook server has a risk of timing
out or causing delays in Pod processing. As a result, traffic won't get
rerouted as quickly as you need.
Consider situations like the preceding example when writing your webhooks.
Exclude operations that are a result of Kubernetes responding to unavoidable
incidents.
## Request filtering {#request-filtering}
This section provides recommendations for filtering which requests trigger
specific webhooks. In summary, these are as follows:
* Limit the webhook scope to avoid system components and read-only requests.
* Limit webhooks to specific namespaces.
* Use match conditions to perform fine-grained request filtering.
* Match all versions of an object.
### Limit the scope of each webhook {#webhook-limit-scope}
Admission webhooks are only called when an API request matches the corresponding
webhook configuration. Limit the scope of each webhook to reduce unnecessary
calls to the webhook server. Consider the following scope limitations:
* Avoid matching objects in the `kube-system` namespace. If you run your own
Pods in the `kube-system` namespace, use an
[`objectSelector`](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-objectselector)
to avoid mutating a critical workload.
* Don't mutate node leases, which exist as Lease objects in the
`kube-node-lease` system namespace. Mutating node leases might result in
failed node upgrades. Only apply validation controls to Lease objects in this
namespace if you're confident that the controls won't put your cluster at
risk.
* Don't mutate TokenReview or SubjectAccessReview objects. These are always
read-only requests. Modifying these objects might break your cluster.
* Limit each webhook to a specific namespace by using a
[`namespaceSelector`](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector).
### Filter for specific requests by using match conditions {#filter-match-conditions}
Admission controllers support multiple fields that you can use to match requests
that meet specific criteria. For example, you can use a `namespaceSelector` to
filter for requests that target a specific namespace.
For more fine-grained request filtering, use the `matchConditions` field in your
webhook configuration. This field lets you write multiple CEL expressions that
must evaluate to `true` for a request to trigger your admission webhook. Using
`matchConditions` might significantly reduce the number of calls to your webhook
server.
For details, see
[Matching requests: `matchConditions`](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchconditions).
### Match all versions of an API {#match-all-versions}
By default, admission webhooks run on any API versions that affect a specified
resource. The `matchPolicy` field in the webhook configuration controls this
behavior. Specify a value of `Equivalent` in the `matchPolicy` field or omit
the field to allow the webhook to run on any API version.
For details, see
[Matching requests: `matchPolicy`](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy).
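Putting the filtering options above together, a webhook configuration sketch might look like this; the webhook name, label, and backing Service are hypothetical:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy
webhooks:
  - name: pods.policy.example.com
    matchPolicy: Equivalent            # match requests made via any equivalent API version
    namespaceSelector:                 # only namespaces opted in to enforcement
      matchLabels:
        policy.example.com/enforce: "true"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    matchConditions:                   # fine-grained CEL filtering
      - name: exclude-node-requests
        expression: '!("system:nodes" in request.userInfo.groups)'
    clientConfig:
      service:
        name: example-webhook
        namespace: example-webhook-system
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5                  # keep the timeout small
```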
## Mutation scope and field considerations {#mutation-scope-considerations}
This section provides recommendations for the scope of mutations and any special
considerations for object fields. In summary, these are as follows:
* Patch only the fields that you need to patch.
* Don't overwrite array values.
* Avoid side effects in mutations when possible.
* Avoid self-mutations.
* Fail open and validate the final state.
* Plan for future field updates in later versions.
* Prevent webhooks from self-triggering.
* Don't change immutable objects.
### Patch only required fields {#patch-required-fields}
Admission webhook servers send HTTP responses to indicate what to do with a
specific Kubernetes API request. This response is an AdmissionReview object.
A mutating webhook can add specific fields to mutate before allowing admission
by using the `patchType` field and the `patch` field in the response. Ensure
that you only modify the fields that require a change.
For example, consider a mutating webhook that's configured to ensure that
`web-server` Deployments have at least three replicas. When a request to
create a Deployment object matches your webhook configuration, the webhook
should only update the value in the `spec.replicas` field.
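For that example, the webhook's AdmissionReview response would carry a JSON Patch that touches only `spec.replicas`, roughly like this sketch (shown as YAML for readability; the `patch` field is base64-encoded in the real response):

```yaml
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<uid copied from the incoming AdmissionReview request>"
  allowed: true
  patchType: JSONPatch
  # patch is the base64 encoding of this JSON Patch document:
  #   [{"op": "replace", "path": "/spec/replicas", "value": 3}]
  patch: "<base64-encoded JSON Patch>"
```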
### Don't overwrite array values {#dont-overwrite-arrays}
Fields in Kubernetes object specifications might include arrays. Some arrays
contain key:value pairs (like the `env` field in a container specification),
while other arrays are unkeyed (like the `readinessGates` field in a Pod
specification). The order of values in an array field might matter in some
situations. For example, the order of arguments in the `args` field of a
container specification might affect the container.
Consider the following when modifying arrays:
* Whenever possible, use the `add` JSONPatch operation instead of `replace` to
  avoid accidentally replacing a required value (see the patch sketch after this list).
* Treat arrays that don't use key:value pairs as sets.
* Ensure that the values in the field that you modify aren't required to be
in a specific order.
* Don't overwrite existing key:value pairs unless absolutely necessary.
* Use caution when modifying label fields. An accidental modification might
cause label selectors to break, resulting in unintended behavior.
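For instance, to append an environment variable to the first container of a Deployment's Pod template without touching the rest of the array, an `add` operation like this sketch is safer than replacing the whole array (the variable name is hypothetical, and the patch assumes the `env` array already exists):

```yaml
# JSON Patch shown in YAML form
- op: add
  path: /spec/template/spec/containers/0/env/-   # "-" appends to the array
  value:
    name: EXAMPLE_FLAG     # hypothetical variable
    value: "true"
```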
### Avoid side effects {#avoid-side-effects}
Ensure that your webhooks operate only on the content of the AdmissionReview
that's sent to them, and do not make out-of-band changes. These additional
changes, called _side effects_, might cause conflicts during admission if they
aren't reconciled properly. The `.webhooks[].sideEffects` field should
be set to `None` if a webhook doesn't have any side effect.
If side effects are required during the admission evaluation, they must be
suppressed when processing an AdmissionReview object with `dryRun` set to
`true`, and the `.webhooks[].sideEffects` field should be set to `NoneOnDryRun`.
For details, see
[Side effects](/docs/reference/access-authn-authz/extensible-admission-controllers/#side-effects).
### Avoid self-mutations {#avoid-self-mutation}
A webhook running inside the cluster might cause deadlocks for its own
deployment if it is configured to intercept resources required to start its own
Pods.
For example, a mutating admission webhook is configured to admit **create** Pod
requests only if a certain label is set in the Pod (such as `env: prod`).
The webhook server runs in a Deployment that doesn't set the `env` label.
When a node that runs the webhook server Pods becomes unhealthy, the webhook
Deployment tries to reschedule the Pods to another node. However, the existing
webhook server rejects the requests since the `env` label is unset. As a
result, the migration cannot happen.
Exclude the namespace where your webhook is running with a
[`namespaceSelector`](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector).
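For example, assuming the webhook server runs in a hypothetical `example-webhook-system` namespace, a selector fragment like the following excludes that namespace using the `kubernetes.io/metadata.name` label that Kubernetes sets on every namespace:

```yaml
# Fragment of a MutatingWebhookConfiguration entry under .webhooks[]
namespaceSelector:
  matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: ["example-webhook-system"]   # the webhook's own namespace
```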
### Avoid dependency loops {#avoid-dependency-loops}
Dependency loops can occur in scenarios like the following:
* Two webhooks check each other's Pods. If both webhooks become unavailable
at the same time, neither webhook can start.
* Your webhook intercepts cluster add-on components, such as networking plugins
or storage plugins, that your webhook depends on. If both the webhook and the
dependent add-on become unavailable, neither component can function.
To avoid these dependency loops, try the following:
* Use
[ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
to avoid introducing dependencies.
* Prevent webhooks from validating or mutating other webhooks. Consider
[excluding specific namespaces](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector)
from triggering your webhook.
* Prevent your webhooks from acting on dependent add-ons by using an
[`objectSelector`](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-objectselector).
### Fail open and validate the final state {#fail-open-validate-final-state}
Mutating admission webhooks support the `failurePolicy` configuration field.
This field indicates whether the API server should admit or reject the request
if the webhook fails. Webhook failures might occur because of timeouts or errors
in the server logic.
By default, admission webhooks set the `failurePolicy` field to `Fail`. The API
server rejects a request if the webhook fails. However, rejecting requests by
default might result in compliant requests being rejected during webhook
downtime.
Let your mutating webhooks "fail open" by setting the `failurePolicy` field to
`Ignore`. Use a validating controller to check the state of requests to ensure
that they comply with your policies.
This approach has the following benefits:
* Mutating webhook downtime doesn't prevent compliant resources from being deployed.
* Policy enforcement occurs during validating admission control.
* Mutating webhooks don't interfere with other controllers in the cluster.
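In configuration terms, this "fail open" approach corresponds to a fragment like the following in the mutating webhook's entry, paired with a separate validating webhook or ValidatingAdmissionPolicy that enforces the intended final state:

```yaml
# Fragment of a MutatingWebhookConfiguration entry under .webhooks[]
failurePolicy: Ignore   # fail open: admit the request if the webhook is unavailable
timeoutSeconds: 5
sideEffects: None
```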
### Plan for future updates to fields {#plan-future-field-updates}
In general, design your webhooks under the assumption that Kubernetes APIs might
change in a later version. Don't write a server that takes the stability of an
API for granted. For example, the release of sidecar containers in Kubernetes
added a `restartPolicy` field to the Pod API.
### Prevent your webhook from triggering itself {#prevent-webhook-self-trigger}
Mutating webhooks that respond to a broad range of API requests might
unintentionally trigger themselves. For example, consider a webhook that
responds to all requests in the cluster. If you configure the webhook to create
Event objects for every mutation, it'll respond to its own Event object
creation requests.
To avoid this, consider setting a unique label in any resources that your
webhook creates. Exclude this label from your webhook match conditions.
### Don't change immutable objects {#dont-change-immutable-objects}
Some Kubernetes objects in the API server can't change. For example, when you
deploy a {{< glossary_tooltip text="static Pod" term_id="static-pod" >}}, the
kubelet on the node creates a
{{< glossary_tooltip text="mirror Pod" term_id="mirror-pod" >}} in the API
server to track the static Pod. However, changes to the mirror Pod don't
propagate to the static Pod.
Don't attempt to mutate these objects during admission. All mirror Pods have the
`kubernetes.io/config.mirror` annotation. To exclude mirror Pods while reducing
the security risk of ignoring an annotation, allow static Pods to only run in
specific namespaces.
## Mutating webhook ordering and idempotence {#ordering-idempotence}
This section provides recommendations for webhook order and designing idempotent
webhooks. In summary, these are as follows:
* Don't rely on a specific order of execution.
* Validate mutations before admission.
* Check for mutations being overwritten by other controllers.
* Ensure that the set of mutating webhooks is idempotent, not just the
individual webhooks.
### Don't rely on mutating webhook invocation order {#dont-rely-webhook-order}
Mutating admission webhooks don't run in a consistent order. Various factors
might change when a specific webhook is called. Don't rely on your webhook
running at a specific point in the admission process. Other webhooks could still
mutate your modified object.
The following recommendations might help to minimize the risk of unintended
changes:
* [Validate mutations before admission](#validate-mutations)
* Use a reinvocation policy to observe changes to an object by other plugins
and re-run the webhook as needed. For details, see
[Reinvocation policy](/docs/reference/access-authn-authz/extensible-admission-controllers/#reinvocation-policy).
### Ensure that the mutating webhooks in your cluster are idempotent {#ensure-mutating-webhook-idempotent}
Every mutating admission webhook should be _idempotent_. The webhook should be
able to run on an object that it already modified without making additional
changes beyond the original change.
Additionally, all of the mutating webhooks in your cluster should, as a
collection, be idempotent. After the mutation phase of admission control ends,
every individual mutating webhook should be able to run on an object without
making additional changes to the object.
Depending on your environment, ensuring idempotence at scale might be
challenging. The following recommendations might help:
* Use validating admission controllers to verify the final state of
critical workloads.
* Test your deployments in a staging cluster to see if any objects get modified
multiple times by the same webhook.
* Ensure that the scope of each mutating webhook is specific and limited.
The following examples show idempotent mutation logic:
1. For a **create** Pod request, set the field
`.spec.securityContext.runAsNonRoot` of the Pod to true.
1. For a **create** Pod request, if the field
`.spec.containers[].resources.limits` of a container is not set, set default
resource limits.
1. For a **create** Pod request, inject a sidecar container with name
`foo-sidecar` if no container with the name `foo-sidecar` already exists.
In these cases, the webhook can be safely reinvoked, or admit an object that
already has the fields set.
The following examples show non-idempotent mutation logic:
1. For a **create** Pod request, inject a sidecar container with name
`foo-sidecar` suffixed with the current timestamp (such as
`foo-sidecar-19700101-000000`).
Reinvoking the webhook can result in the same sidecar being injected multiple
times into a Pod, each time with a different container name. Similarly, the
webhook can inject duplicate containers if the sidecar already exists in
a user-provided Pod.
1. For a **create**/**update** Pod request, reject if the Pod has label `env`
set, otherwise add an `env: prod` label to the Pod.
Reinvoking the webhook will result in the webhook failing on its own output.
1. For a **create** Pod request, append a sidecar container named `foo-sidecar`
without checking whether a `foo-sidecar` container exists.
Reinvoking the webhook will result in duplicated containers in the Pod, which
makes the request invalid, so the API server rejects it.
## Mutation testing and validation {#mutation-testing-validation}
This section provides recommendations for testing your mutating webhooks and
validating mutated objects. In summary, these are as follows:
* Test webhooks in staging environments.
* Avoid mutations that violate validations.
* Test minor version upgrades for regressions and conflicts.
* Validate mutated objects before admission.
### Test webhooks in staging environments {#test-in-staging-environments}
Robust testing should be a core part of your release cycle for new or updated
webhooks. If possible, test any changes to your cluster webhooks in a staging
environment that closely resembles your production clusters. At the very least,
consider using a tool like [minikube](https://minikube.sigs.k8s.io/docs/) or
[kind](https://kind.sigs.k8s.io/) to create a small test cluster for webhook
changes.
### Ensure that mutations don't violate validations {#ensure-mutations-dont-violate-validations}
Your mutating webhooks shouldn't break any of the validations that apply to an
object before admission. For example, consider a mutating webhook that sets the
default CPU request of a Pod to a specific value. If the CPU limit of that Pod
is set to a lower value than the mutated request, the Pod fails admission.
Test every mutating webhook against the validations that run in your cluster.
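As an illustration of the scenario above, suppose a hypothetical webhook defaults the CPU request of every container to `500m`. A Pod like the following would then fail admission, because the mutated request would exceed its limit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-example          # illustrative name
spec:
  containers:
    - name: app
      image: busybox               # illustrative image
      command: ["sleep", "3600"]
      resources:
        limits:
          cpu: 250m                # lower than the webhook's defaulted request of 500m
        # No CPU request is set here, so the hypothetical webhook would set it
        # to 500m; a request higher than the limit fails validation.
```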
### Test minor version upgrades to ensure consistent behavior {#test-minor-version-upgrades}
Before upgrading your production clusters to a new minor version, test your
webhooks and workloads in a staging environment. Compare the results to ensure
that your webhooks continue to function as expected after the upgrade.
Additionally, use the following resources to stay informed about API changes:
* [Kubernetes release notes](/releases/)
* [Kubernetes blog](/blog/)
### Validate mutations before admission {#validate-mutations}
Mutating webhooks run to completion before any validating webhooks run. There is
no stable order in which mutations are applied to objects. As a result, your
mutations could get overwritten by a mutating webhook that runs at a later time.
Add a validating admission controller like a ValidatingAdmissionWebhook or a
ValidatingAdmissionPolicy to your cluster to ensure that your mutations
are still present. For example, consider a mutating webhook that adds the
`restartPolicy: Always` field to specific init containers to make them run as
sidecar containers. You could run a validating webhook to ensure that those
init containers retained the `restartPolicy: Always` configuration after all
mutations were completed.
For details, see the following resources:
* [Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/)
* [ValidatingAdmissionWebhooks](/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook)
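Continuing the sidecar example, a minimal sketch of such a check as a ValidatingAdmissionPolicy might look like the following. The policy and binding names are hypothetical, and the expression assumes the injected init container is named `foo-sidecar`:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: check-foo-sidecar-restart-policy            # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
  validations:
    - expression: >-
        !has(object.spec.initContainers) ||
        object.spec.initContainers.filter(c, c.name == 'foo-sidecar').all(c,
        has(c.restartPolicy) && c.restartPolicy == 'Always')
      message: "init container foo-sidecar must keep restartPolicy: Always"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: check-foo-sidecar-restart-policy-binding    # hypothetical name
spec:
  policyName: check-foo-sidecar-restart-policy
  validationActions: [Deny]
```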
## Mutating webhook deployment {#mutating-webhook-deployment}
This section provides recommendations for deploying your mutating admission
webhooks. In summary, these are as follows:
* Gradually roll out the webhook configuration and monitor for issues by
namespace.
* Limit access to edit the webhook configuration resources.
* Limit access to the namespace that runs the webhook server, if the server is
in-cluster.
### Install and enable a mutating webhook {#install-enable-mutating-webhook}
When you're ready to deploy your mutating webhook to a cluster, use the
following order of operations:
1. Install the webhook server and start it.
1. Set the `failurePolicy` field in the MutatingWebhookConfiguration manifest
   to `Ignore`. This lets you avoid disruptions caused by misconfigured webhooks.
1. Set the `namespaceSelector` field in the MutatingWebhookConfiguration
manifest to a test namespace.
1. Deploy the MutatingWebhookConfiguration to your cluster.
Monitor the webhook in the test namespace to check for any issues, then roll the
webhook out to other namespaces. If the webhook intercepts an API request that
it wasn't meant to intercept, pause the rollout and adjust the scope of the
webhook configuration.
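The fields from steps 2 and 3 of the rollout might look like the following fragment. It assumes a hypothetical `webhook-test: "true"` label on the test namespace; the webhook name is also hypothetical and the other required fields are omitted:

```yaml
webhooks:
  - name: pod-defaulter.example.com     # hypothetical webhook name
    # Don't block API requests if the webhook server is unreachable or broken.
    failurePolicy: Ignore
    # Only intercept requests in namespaces labelled for testing.
    namespaceSelector:
      matchLabels:
        webhook-test: "true"
    # ... other required fields (rules, clientConfig, and so on) omitted
```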
### Limit edit access to mutating webhooks {#limit-edit-access}
Mutating webhooks are powerful Kubernetes controllers. Use RBAC or another
authorization mechanism to limit access to your webhook configurations and
servers. For RBAC, ensure that the following access is only available to trusted
entities:
* Verbs: **create**, **update**, **patch**, **delete**, **deletecollection**
* API group: `admissionregistration.k8s.io/v1`
* API kind: MutatingWebhookConfigurations
If your mutating webhook server runs in the cluster, limit access to create or
modify any resources in that namespace.
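For example, a ClusterRole that grants exactly this access might look like the following sketch; bind it only to trusted administrators and avoid granting these verbs anywhere else (the role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mutating-webhook-config-admin   # hypothetical name
rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["create", "update", "patch", "delete", "deletecollection"]
```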
## Examples of good implementations {#example-good-implementations}
{{% thirdparty-content %}}
The following projects are examples of "good" custom webhook server
implementations. You can use them as a starting point when designing your own
webhooks. Don't use these examples as-is; adapt them so that your webhooks run
well in your specific environment.
* [`cert-manager`](https://github.com/cert-manager/cert-manager/tree/master/internal/webhook)
* [Gatekeeper Open Policy Agent (OPA)](https://open-policy-agent.github.io/gatekeeper/website/docs/mutation)
## {{% heading "whatsnext" %}}
* [Use webhooks for authentication and authorization](/docs/reference/access-authn-authz/webhook/)
* [Learn about MutatingAdmissionPolicies](/docs/reference/access-authn-authz/mutating-admission-policy/)
* [Learn about ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)

View File

@ -196,7 +196,7 @@ To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plug
in the [scheduler configuration](/docs/reference/scheduling/config/). For example:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:

View File

@ -517,7 +517,7 @@ ReplicaSets, StatefulSets or ReplicationControllers that the Pod belongs to.
An example configuration might look like follows:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
@ -567,7 +567,7 @@ you can disable those defaults by setting `defaultingType` to `List` and leaving
empty `defaultConstraints` in the `PodTopologySpread` plugin configuration:
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:

View File

@ -53,6 +53,15 @@ network traffic between Pods, or between Pods and the network outside your clust
You can deploy security controls from the wider ecosystem to implement preventative
or detective controls around Pods, their containers, and the images that run in them.
### Admission control {#admission-control}
[Admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
are plugins that intercept Kubernetes API requests and can validate or mutate
the requests based on specific fields in the request. Thoughtfully designing
these controllers helps to avoid unintended disruptions as Kubernetes APIs
change across version updates. For design considerations, see
[Admission Webhook Good Practices](/docs/concepts/cluster-administration/admission-webhooks-good-practices/).
### Auditing
Kubernetes [audit logging](/docs/tasks/debug/debug-cluster/audit/) provides a

View File

@ -830,17 +830,26 @@ the request is for storage. The same
[resource model](https://git.k8s.io/design-proposals-archive/scheduling/resources.md)
applies to both volumes and claims.
{{< note >}}
For `Filesystem` volumes, the storage request refers to the "outer" volume size
(i.e. the allocated size from the storage backend).
This means that the writeable size may be slightly lower for providers that
build a filesystem on top of a block device, due to filesystem overhead.
This is especially visible with XFS, where many metadata features are enabled by default.
{{< /note >}}
### Selector
Claims can specify a
[label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
to further filter the set of volumes. Only the volumes whose labels match the selector
can be bound to the claim. The selector can consist of two fields:
to further filter the set of volumes.
Only the volumes whose labels match the selector can be bound to the claim.
The selector can consist of two fields:
* `matchLabels` - the volume must have a label with this value
* `matchExpressions` - a list of requirements made by specifying key, list of values,
and operator that relates the key and values. Valid operators include In, NotIn,
Exists, and DoesNotExist.
and operator that relates the key and values.
Valid operators include `In`, `NotIn`, `Exists`, and `DoesNotExist`.
All of the requirements, from both `matchLabels` and `matchExpressions`, are
ANDed together; they must all be satisfied in order to match.
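For example, a claim that filters volumes with both kinds of selector might look like the following sketch (the claim name, labels, and requested size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  selector:
    matchLabels:
      release: "stable"          # illustrative label
    matchExpressions:
      - key: environment
        operator: In
        values: ["dev"]
```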
@ -850,31 +859,30 @@ ANDed together they must all be satisfied in order to match.
A claim can request a particular class by specifying the name of a
[StorageClass](/docs/concepts/storage/storage-classes/)
using the attribute `storageClassName`.
Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can
be bound to the PVC.
Only PVs of the requested class, ones with the same `storageClassName` as the PVC,
can be bound to the PVC.
PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set
equal to `""` is always interpreted to be requesting a PV with no class, so it
can only be bound to PVs with no class (no annotation or one set equal to
`""`). A PVC with no `storageClassName` is not quite the same and is treated differently
can only be bound to PVs with no class (no annotation or one set equal to `""`).
A PVC with no `storageClassName` is not quite the same and is treated differently
by the cluster, depending on whether the
[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
is turned on.
* If the admission plugin is turned on, the administrator may specify a
default StorageClass. All PVCs that have no `storageClassName` can be bound only to
PVs of that default. Specifying a default StorageClass is done by setting the
annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in
a StorageClass object. If the administrator does not specify a default, the
cluster responds to PVC creation as if the admission plugin were turned off. If more than one
default StorageClass is specified, the newest default is used when the
PVC is dynamically provisioned.
* If the admission plugin is turned off, there is no notion of a default
StorageClass. All PVCs that have `storageClassName` set to `""` can be
bound only to PVs that have `storageClassName` also set to `""`.
However, PVCs with missing `storageClassName` can be updated later once
default StorageClass becomes available. If the PVC gets updated it will no
longer bind to PVs that have `storageClassName` also set to `""`.
* If the admission plugin is turned on, the administrator may specify a default StorageClass.
  All PVCs that have no `storageClassName` can be bound only to PVs of that default.
  Specifying a default StorageClass is done by setting the annotation
  `storageclass.kubernetes.io/is-default-class` equal to `true` in a StorageClass object.
  If the administrator does not specify a default, the cluster responds to PVC creation
  as if the admission plugin were turned off.
  If more than one default StorageClass is specified, the newest default is used when
  the PVC is dynamically provisioned.
* If the admission plugin is turned off, there is no notion of a default StorageClass.
  All PVCs that have `storageClassName` set to `""` can be bound only to PVs
  that have `storageClassName` also set to `""`.
  However, PVCs with missing `storageClassName` can be updated later once a default StorageClass becomes available.
  If the PVC gets updated, it will no longer bind to PVs that have `storageClassName` also set to `""`.
See [retroactive default StorageClass assignment](#retroactive-default-storageclass-assignment) for more details.
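As an illustration, the difference between requesting a specific class and requesting no class at all comes down to the `storageClassName` value (the claim and class names below are illustrative):

```yaml
# Request PVs of a particular class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-with-class         # illustrative name
spec:
  storageClassName: fast-ssd     # illustrative StorageClass name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
---
# Request only PVs that have no class. Leaving storageClassName unset instead
# would let the DefaultStorageClass admission plugin assign the default class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-without-class      # illustrative name
spec:
  storageClassName: ""
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
```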

View File

@ -84,7 +84,18 @@ The kubelet will pick host UIDs/GIDs a pod is mapped to, and will do so in a way
to guarantee that no two pods on the same node use the same mapping.
The `runAsUser`, `runAsGroup`, `fsGroup`, etc. fields in the `pod.spec` always
refer to the user inside the container.
refer to the user inside the container. These users will be used for volume
mounts (specified in `pod.spec.volumes`) and therefore the host UID/GID will not
have any effect on writes/reads from volumes the pod can mount. In other words,
the inodes created/read in volumes mounted by the pod will be the same as if the
pod wasn't using user namespaces.
This way, a pod can easily enable and disable user namespaces (without affecting
its volumes' file ownership) and can also share volumes with pods that don't use
user namespaces, by setting the appropriate users inside the container
(`runAsUser`, `runAsGroup`, `fsGroup`, etc.). This applies to any volume the pod
can mount, including `hostPath` (if the pod is allowed to mount `hostPath`
volumes).
By default, the valid UIDs/GIDs when this feature is enabled are in the range 0-65535.
This applies to files and processes (`runAsUser`, `runAsGroup`, etc.).
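A minimal sketch of a Pod that opts into a user namespace while setting the in-container users might look like the following; the Pod name and image are illustrative, and the user namespaces feature must be available on the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-example           # illustrative name
spec:
  hostUsers: false               # run this Pod in a user namespace
  securityContext:
    # These IDs refer to users inside the container and are also used for
    # volume ownership, regardless of the host UID/GID mapping.
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
    - name: app
      image: busybox             # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
```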

View File

@ -450,7 +450,7 @@ translate the value of each string. For example, this is the German-language
placeholder text for the search form:
```toml
[ui_search_placeholder]
[ui_search]
other = "Suchen"
```

View File

@ -1176,140 +1176,6 @@ apiserver_admission_webhook_rejection_count{error_type="no_error",name="deny-unw
## Best practices and warnings
### Idempotence
An idempotent mutating admission webhook is able to successfully process an object it has already admitted
and potentially modified. The admission can be applied multiple times without changing the result beyond
the initial application.
#### Example of idempotent mutating admission webhooks:
1. For a `CREATE` pod request, set the field `.spec.securityContext.runAsNonRoot` of the
pod to true, to enforce security best practices.
2. For a `CREATE` pod request, if the field `.spec.containers[].resources.limits`
of a container is not set, set default resource limits.
3. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar` if no container
with the name `foo-sidecar` already exists.
In the cases above, the webhook can be safely reinvoked, or admit an object that already has the fields set.
#### Example of non-idempotent mutating admission webhooks:
1. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar`
suffixed with the current timestamp (e.g. `foo-sidecar-19700101-000000`).
2. For a `CREATE`/`UPDATE` pod request, reject if the pod has label `"env"` set,
otherwise add an `"env": "prod"` label to the pod.
3. For a `CREATE` pod request, blindly append a sidecar container named
`foo-sidecar` without looking to see if there is already a `foo-sidecar`
container in the pod.
In the first case above, reinvoking the webhook can result in the same sidecar being injected multiple times to a pod, each time
with a different container name. Similarly the webhook can inject duplicated containers if the sidecar already exists in
a user-provided pod.
In the second case above, reinvoking the webhook will result in the webhook failing on its own output.
In the third case above, reinvoking the webhook will result in duplicated containers in the pod spec, which makes
the request invalid and rejected by the API server.
### Intercepting all versions of an object
It is recommended that admission webhooks should always intercept all versions of an object by setting `.webhooks[].matchPolicy`
to `Equivalent`. It is also recommended that admission webhooks should prefer registering for stable versions of resources.
Failure to intercept all versions of an object can result in admission policies not being enforced for requests in certain
versions. See [Matching requests: matchPolicy](#matching-requests-matchpolicy) for examples.
### Availability
It is recommended that admission webhooks should evaluate as quickly as possible (typically in
milliseconds), since they add to API request latency.
It is encouraged to use a small timeout for webhooks. See [Timeouts](#timeouts) for more detail.
It is recommended that admission webhooks should leverage some format of load-balancing, to
provide high availability and performance benefits. If a webhook is running within the cluster,
you can run multiple webhook backends behind a service to leverage the load-balancing that service
supports.
### Guaranteeing the final state of the object is seen
Admission webhooks that need to guarantee they see the final state of the object in order to enforce policy
should use a validating admission webhook, since objects can be modified after being seen by mutating webhooks.
For example, a mutating admission webhook is configured to inject a sidecar container with name
"foo-sidecar" on every `CREATE` pod request. If the sidecar *must* be present, a validating
admisson webhook should also be configured to intercept `CREATE` pod requests, and validate that a
container with name "foo-sidecar" with the expected configuration exists in the to-be-created
object.
### Avoiding deadlocks in self-hosted webhooks
There are several ways that webhooks can cause deadlocks, where the cluster cannot make progress in
scheduling pods:
* A webhook running inside the cluster might cause deadlocks for its own deployment if it is configured
to intercept resources required to start its own pods.
For example, a mutating admission webhook is configured to admit **create** Pod requests only if a certain label is set in the
pod (such as `env: "prod"`). However, the webhook server runs as a Deployment that doesn't set the `env` label.
When a node that runs the webhook server pods
becomes unhealthy, the webhook deployment will try to reschedule the pods to another node. However the requests will
get rejected by the existing webhook server since the `env` label is unset, and the replacement Pod
cannot be created. Eventually, the entire Deployment for the webhook server may become unhealthy.
If you use admission webhooks to check Pods, consider excluding the namespace where your webhook
listener is running, by specifying a
[namespaceSelector](#matching-requests-namespaceselector).
* If the cluster has multiple webhooks configured (possibly from independent applications deployed on
the cluster), they can form a cycle. Webhook A must be called to process startup of webhook B's
pods and vice versa. If both webhook A and webhook B ever become unavailable at the same time (for
example, due to a cluster-wide outage or a node failure where both pods run on the same node)
deadlock occurs because neither webhook pod can be recreated without the other already running.
One way to prevent this is to exclude webhook A's pods from being acted on by webhook B. This
allows webhook A's pods to start, which in turn allows webhook B's pods to start. If you had a
third webhook, webhook C, you'd need to exclude both webhook A and webhook B's pods from
webhook C. This ensures that webhook A can _always_ start, which then allows webhook B's pods
to start, which in turn allows webhook C's pods to start.
If you want to ensure protection that avoids these risks, [ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
can
provide many protection capabilities without introducing dependency cycles.
* Admission webhooks can intercept resources used by critical cluster add-ons, such as CoreDNS,
network plugins, or storage plugins. These add-ons may be required to schedule or successfully run the
pods for a particular admission webhook on the cluster. This can cause a deadlock if both the
webhook and critical add-on is unavailable at the same time.
You may wish to exclude cluster infrastructure namespaces from webhooks, or make sure that
the webhook does not depend on the particular add-on that it acts on. For example, running
a webhook as a host-networked pod ensures that it does not depend on a networking plugin.
If you want to ensure protection for a core add-on / or its namespace,
[ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/)
can
provide many protection capabilities without any dependency on worker nodes and Pods.
### Side effects
It is recommended that admission webhooks should avoid side effects if possible, which means the webhooks operate only on the
content of the `AdmissionReview` sent to them, and do not make out-of-band changes. The `.webhooks[].sideEffects` field should
be set to `None` if a webhook doesn't have any side effect.
If side effects are required during the admission evaluation, they must be suppressed when processing an
`AdmissionReview` object with `dryRun` set to `true`, and the `.webhooks[].sideEffects` field should be
set to `NoneOnDryRun`. See [Side effects](#side-effects) for more detail.
### Avoiding operating on the kube-system namespace
The `kube-system` namespace contains objects created by the Kubernetes system,
e.g. service accounts for the control plane components, pods like `kube-dns`.
Accidentally mutating or rejecting requests in the `kube-system` namespace may
cause the control plane components to stop functioning or introduce unknown behavior.
If your admission webhooks don't intend to modify the behavior of the Kubernetes control
plane, exclude the `kube-system` namespace from being intercepted using a
[`namespaceSelector`](#matching-requests-namespaceselector).
For recommendations and considerations when writing mutating admission webhooks,
see
[Admission Webhooks Good Practices](/docs/concepts/cluster-administration/admission-webhooks-good-practices).

View File

@ -100,7 +100,6 @@ Kubelet API | resource | subresource
/stats/\* | nodes | stats
/metrics/\* | nodes | metrics
/logs/\* | nodes | log
/spec/\* | nodes | spec
/pods | nodes | pods, proxy
/runningPods/ | nodes | pods, proxy
/healthz | nodes | healthz, proxy
@ -115,8 +114,12 @@ flags passed to the API server is authorized for the following attributes:
* verb=\*, resource=nodes, subresource=proxy
* verb=\*, resource=nodes, subresource=stats
* verb=\*, resource=nodes, subresource=log
* verb=\*, resource=nodes, subresource=spec
* verb=\*, resource=nodes, subresource=metrics
* verb=\*, resource=nodes, subresource=configz
* verb=\*, resource=nodes, subresource=healthz
* verb=\*, resource=nodes, subresource=pods
If [RBAC authorization](/docs/reference/access-authn-authz/rbac/) is used,
enabling this gate also ensures that the built-in `system:kubelet-api-admin` ClusterRole
is updated with permissions to access all of the above-mentioned subresources.

View File

@ -578,8 +578,8 @@ Apply can send partially specified objects as YAML as the body of a `PATCH` requ
to the URI of a resource. When applying a configuration, you should always include all the
fields that are important to the outcome (such as a desired state) that you want to define.
All JSON messages are valid YAML. Some clients specify Server-Side Apply requests using YAML
request bodies that are also valid JSON.
All JSON messages are valid YAML. Therefore, in addition to using YAML request bodies for Server-Side Apply requests, you can also use JSON request bodies, as they are also valid YAML.
In either case, use the media type `application/apply-patch+yaml` for the HTTP request.
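For instance, the request body can be as small as the fields you want to define; the following sketch could be sent as an apply request body with that media type (the ConfigMap name and data are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config           # illustrative name
data:
  # Only the fields listed here are asserted and become owned by the applying
  # field manager; fields owned by other managers are left untouched.
  log-level: debug
```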
### Access control and permissions {#rbac-and-permissions}

View File

@ -1466,8 +1466,9 @@ Defaulting happens on the object
Defaults applied when reading data from etcd are not automatically written back to etcd.
An update request via the API is required to persist those defaults back into etcd.
Default values must be pruned (with the exception of defaults for `metadata` fields) and must
validate against a provided schema.
Default values for non-leaf fields must be pruned (with the exception of defaults for `metadata` fields) and must
validate against a provided schema. In the example above, a default of `{"replicas": "foo", "badger": 1}`
for the `spec` field would be invalid, because `badger` is an unknown field and `replicas` is not a string.
Default values for `metadata` fields of `x-kubernetes-embedded-resources: true` nodes (or parts of
a default value covering `metadata`) are not pruned during CustomResourceDefinition creation, but

View File

@ -4,8 +4,6 @@ abstract: "Automatización del despliegue, escalado y administración de contene
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### Kubernetes (K8s) es una plataforma de código abierto para automatizar la implementación, el escalado y la administración de aplicaciones en contenedores.

View File

@ -6,8 +6,6 @@ sitemap:
priority: 1.0
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/" >}}) est un système open-source permettant d'automatiser le déploiement, la mise à l'échelle et la gestion des applications conteneurisées.

View File

@ -4,8 +4,6 @@ abstract: "Otomatisasi Kontainer deployment, scaling, dan management"
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### Kubernetes (K8s)

content/id/blog/OWNERS Normal file
View File

@ -0,0 +1,12 @@
# See the OWNERS docs at https://go.k8s.io/owners
# Owned by Kubernetes Blog reviewers
approvers:
- sig-docs-blog-owners # Defined in OWNERS_ALIASES
reviewers:
- sig-docs-blog-reviewers # Defined in OWNERS_ALIASES
labels:
- area/blog

content/id/blog/_index.md Normal file
View File

@ -0,0 +1,14 @@
---
title: Kubernetes Blog
linkTitle: Blog
menu:
main:
title: "Blog"
weight: 20
---
{{< comment >}}
Untuk informasi lebih lanjut tentang berkontribusi ke blog, lihat
https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post
{{< /comment >}}

View File

@ -0,0 +1,54 @@
---
layout: blog
title: "Ingress-nginx CVE-2025-1974: Yang Perlu Kamu Ketahui"
date: 2025-03-24T12:00:00-08:00
slug: ingress-nginx-CVE-2025-1974
author: >
Tabitha Sable (Komite Respon Keamanan Kubernetes)
---
Hari ini, pengelola ingress-nginx telah [merilis patch untuk sejumlah kerentanan kritis](https://github.com/kubernetes/ingress-nginx/releases) yang dapat mempermudah penyerang untuk mengambil alih kluster Kubernetes kamu. Jika kamu termasuk di antara lebih dari 40% administrator Kubernetes yang menggunakan [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), kamu harus segera mengambil tindakan untuk melindungi pengguna dan data kamu.
## Latar Belakang
[Ingress](/docs/concepts/services-networking/ingress/) adalah fitur tradisional Kubernetes untuk mengekspos Pod workload kamu ke dunia luar agar dapat digunakan. Dengan cara yang tidak bergantung pada implementasi tertentu, pengguna Kubernetes dapat mendefinisikan bagaimana aplikasi mereka harus tersedia di jaringan. Kemudian, sebuah [ingress controller](/docs/concepts/services-networking/ingress-controllers/) menggunakan definisi tersebut untuk mengatur sumber daya lokal atau cloud sesuai dengan situasi dan kebutuhan pengguna.
Tersedia berbagai ingress controller untuk memenuhi kebutuhan pengguna dari penyedia cloud atau merek load balancer yang berbeda. Ingress-nginx adalah ingress controller berbasis perangkat lunak yang disediakan oleh proyek Kubernetes. Karena fleksibilitas dan kemudahan penggunaannya, ingress-nginx cukup populer: digunakan di lebih dari 40% kluster Kubernetes!
Ingress-nginx menerjemahkan kebutuhan dari objek Ingress menjadi konfigurasi untuk nginx, sebuah daemon web server open source yang kuat. Kemudian, nginx menggunakan konfigurasi tersebut untuk menerima dan merutekan permintaan ke berbagai aplikasi yang berjalan di dalam kluster Kubernetes. Penanganan parameter konfigurasi nginx yang tepat sangat penting, karena ingress-nginx perlu memberikan fleksibilitas yang signifikan kepada pengguna sambil mencegah mereka secara tidak sengaja atau sengaja memanipulasi nginx untuk melakukan hal-hal yang tidak seharusnya.
## Kerentanan yang Ditambal Hari Ini
Empat dari kerentanan ingress-nginx yang diumumkan hari ini adalah perbaikan terhadap cara ingress-nginx menangani bagian tertentu dari konfigurasi nginx. Tanpa perbaikan ini, sebuah objek Ingress yang dirancang khusus dapat menyebabkan nginx berperilaku tidak semestinya, termasuk mengungkapkan nilai [Secrets](/docs/concepts/configuration/secret/) yang dapat diakses oleh ingress-nginx. Secara default, ingress-nginx memiliki akses ke semua Secrets di seluruh kluster, sehingga hal ini sering kali dapat menyebabkan pengambilalihan kluster secara penuh oleh pengguna atau entitas yang memiliki izin untuk membuat Ingress.
Kerentanan paling serius hari ini, [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009), yang diberi nilai [9.8 CVSS](https://www.first.org/cvss/calculator/3-1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), memungkinkan apa pun di jaringan Pod untuk mengeksploitasi kerentanan injeksi konfigurasi melalui fitur Validating Admission Controller dari ingress-nginx. Hal ini membuat kerentanan tersebut jauh lebih berbahaya: biasanya seseorang perlu dapat membuat objek Ingress di kluster, yang merupakan tindakan yang cukup berhak. Ketika digabungkan dengan kerentanan lainnya hari ini, **CVE-2025-1974 berarti bahwa apa pun di jaringan Pod memiliki peluang besar untuk mengambil alih kluster Kubernetes kamu, tanpa kredensial atau akses administratif yang diperlukan**. Dalam banyak skenario umum, jaringan Pod dapat diakses oleh semua workload di VPC cloud kamu, atau bahkan siapa pun yang terhubung ke jaringan perusahaan kamu! Ini adalah situasi yang sangat serius.
Hari ini, kami telah [merilis ingress-nginx v1.12.1 dan v1.11.5](https://github.com/kubernetes/ingress-nginx/releases), yang memiliki perbaikan untuk semua lima kerentanan ini.
## Langkah kamu Selanjutnya
Pertama, tentukan apakah kluster kamu menggunakan ingress-nginx. Dalam banyak kasus, kamu dapat memeriksanya dengan menjalankan `kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx` dengan izin administrator kluster.
**Jika kamu menggunakan ingress-nginx, buat rencana untuk memperbaiki kerentanan ini segera.**
**Solusi terbaik dan termudah adalah [memperbarui ke rilis patch baru dari ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/upgrade/).** Semua lima kerentanan hari ini diperbaiki dengan menginstal patch yang dirilis hari ini.
Jika kamu tidak dapat memperbarui segera, kamu dapat secara signifikan mengurangi risiko dengan mematikan fitur Validating Admission Controller dari ingress-nginx.
* Jika kamu menginstal ingress-nginx menggunakan Helm
* Instal ulang, dengan mengatur nilai Helm `controller.admissionWebhooks.enabled=false`
* Jika kamu menginstal ingress-nginx secara manual
* hapus ValidatingWebhookConfiguration bernama `ingress-nginx-admission`
* edit Deployment atau Daemonset `ingress-nginx-controller`, hapus `--validating-webhook` dari daftar argumen kontainer controller
Jika kamu mematikan fitur Validating Admission Controller sebagai mitigasi untuk CVE-2025-1974, ingatlah untuk mengaktifkannya kembali setelah kamu memperbarui. Fitur ini memberikan peningkatan kualitas hidup yang penting bagi pengguna kamu, dengan memperingatkan mereka tentang konfigurasi Ingress yang salah sebelum dapat diterapkan.
## Kesimpulan, Terima Kasih, dan Bacaan Lebih Lanjut
Kerentanan ingress-nginx yang diumumkan hari ini, termasuk CVE-2025-1974, menghadirkan risiko serius bagi banyak pengguna Kubernetes dan data mereka. Jika kamu menggunakan ingress-nginx, kamu harus segera mengambil tindakan untuk menjaga keamanan kamu.
Terima kasih kepada Nir Ohfeld, Sagi Tzadik, Ronen Shustin, dan Hillai Ben-Sasson dari Wiz atas pengungkapan kerentanan ini secara bertanggung jawab, serta atas kerja sama mereka dengan anggota SRC Kubernetes dan pengelola ingress-nginx (Marco Ebert dan James Strong) untuk memastikan kami memperbaikinya secara efektif.
Untuk informasi lebih lanjut tentang pemeliharaan dan masa depan ingress-nginx, silakan lihat [isu GitHub ini](https://github.com/kubernetes/ingress-nginx/issues/13002) dan/atau hadiri [presentasi James dan Marco di KubeCon/CloudNativeCon EU 2025](https://kccnceu2025.sched.com/event/1tcyc/).
Untuk informasi lebih lanjut tentang kerentanan spesifik yang dibahas dalam artikel ini, silakan lihat isu GitHub yang sesuai: [CVE-2025-24513](https://github.com/kubernetes/kubernetes/issues/131005), [CVE-2025-24514](https://github.com/kubernetes/kubernetes/issues/131006), [CVE-2025-1097](https://github.com/kubernetes/kubernetes/issues/131007), [CVE-2025-1098](https://github.com/kubernetes/kubernetes/issues/131008), atau [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009)

View File

@ -12,14 +12,14 @@ lingkaran tertutup yang mengatur keadaan suatu sistem.
Berikut adalah salah satu contoh kontrol tertutup: termostat di sebuah ruangan.
Ketika kamu mengatur suhunya, itu mengisyaratkan ke termostat
tentang *keadaan yang kamu inginkan*. Sedangkan suhu kamar yang sebenarnya
tentang *keadaan yang kamu inginkan*. Sedangkan suhu kamar yang sebenarnya
adalah *keadaan saat ini*. Termostat berfungsi untuk membawa keadaan saat ini
mendekati ke keadaan yang diinginkan, dengan menghidupkan atau mematikan
mendekati ke keadaan yang diinginkan, dengan menghidupkan atau mematikan
perangkat.
Di Kubernetes, _controller_ adalah kontrol tertutup yang mengawasi keadaan klaster
{{< glossary_tooltip term_id="cluster" text="klaster" >}} kamu, lalu membuat atau meminta
perubahan jika diperlukan. Setiap _controller_ mencoba untuk memindahkan status
{{< glossary_tooltip term_id="cluster" text="klaster" >}} kamu, lalu membuat atau meminta
perubahan jika diperlukan. Setiap _controller_ mencoba untuk memindahkan status
klaster saat ini mendekati keadaan yang diinginkan.
{{< glossary_definition term_id="controller" length="short">}}
@ -29,24 +29,24 @@ klaster saat ini mendekati keadaan yang diinginkan.
<!-- body -->
## Pola _controller_
## Pola _controller_
Sebuah _controller_ melacak sekurang-kurangnya satu jenis sumber daya dari
Sebuah _controller_ melacak sekurang-kurangnya satu jenis sumber daya dari
Kubernetes.
[objek-objek](/id/docs/concepts/overview/working-with-objects/kubernetes-objects/) ini
memiliki *spec field* yang merepresentasikan keadaan yang diinginkan. Satu atau
lebih _controller_ untuk *resource* tersebut bertanggung jawab untuk membuat
memiliki *spec field* yang merepresentasikan keadaan yang diinginkan. Satu atau
lebih _controller_ untuk *resource* tersebut bertanggung jawab untuk membuat
keadaan sekarang mendekati keadaan yang diinginkan.
_Controller_ mungkin saja melakukan tindakan itu sendiri; namun secara umum, di
_Controller_ mungkin saja melakukan tindakan itu sendiri; namun secara umum, di
Kubernetes, _controller_ akan mengirim pesan ke
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} yang
mempunyai efek samping yang bermanfaat. Kamu bisa melihat contoh-contoh
{{< glossary_tooltip text="API server" term_id="kube-apiserver" >}} yang
mempunyai efek samping yang bermanfaat. Kamu bisa melihat contoh-contoh
di bawah ini.
{{< comment >}}
Beberapa _controller_ bawaan, seperti _controller namespace_, bekerja pada objek
yang tidak memiliki *spec*. Agar lebih sederhana, halaman ini tidak
yang tidak memiliki *spec*. Agar lebih sederhana, halaman ini tidak
menjelaskannya secara detail.
{{< /comment >}}
@ -57,34 +57,34 @@ bawaan dari Kubernetes. _Controller_ bawaan tersebut mengelola status melalui
interaksi dengan server API dari suatu klaster.
Job adalah sumber daya dalam Kubernetes yang menjalankan a
{{< glossary_tooltip term_id="pod" >}}, atau mungkin beberapa Pod sekaligus,
{{< glossary_tooltip term_id="pod" >}}, atau mungkin beberapa Pod sekaligus,
untuk melakukan sebuah pekerjaan dan kemudian berhenti.
(Setelah [dijadwalkan](/id/docs/concepts/scheduling-eviction/), objek Pod
(Setelah [dijadwalkan](/id/docs/concepts/scheduling-eviction/), objek Pod
akan menjadi bagian dari keadaan yang diinginkan oleh kubelet).
Ketika _controller job_ melihat tugas baru, maka _controller_ itu memastikan bahwa,
di suatu tempat pada klaster kamu, kubelet dalam sekumpulan Node menjalankan
Pod-Pod dengan jumlah yang benar untuk menyelesaikan pekerjaan. _Controller job_
tidak menjalankan sejumlah Pod atau kontainer apa pun untuk dirinya sendiri.
Namun, _controller job_ mengisyaratkan kepada server API untuk membuat atau
Ketika _controller job_ melihat tugas baru, maka _controller_ itu memastikan bahwa,
di suatu tempat pada klaster kamu, kubelet dalam sekumpulan Node menjalankan
Pod-Pod dengan jumlah yang benar untuk menyelesaikan pekerjaan. _Controller job_
tidak menjalankan sejumlah Pod atau kontainer apa pun untuk dirinya sendiri.
Namun, _controller job_ mengisyaratkan kepada server API untuk membuat atau
menghapus Pod. Komponen-komponen lain dalam
{{< glossary_tooltip text="control plane" term_id="control-plane" >}}
bekerja berdasarkan informasi baru (adakah Pod-Pod baru untuk menjadwalkan dan
bekerja berdasarkan informasi baru (adakah Pod-Pod baru untuk menjadwalkan dan
menjalankan pekerjan), dan pada akhirnya pekerjaan itu selesai.
Setelah kamu membuat Job baru, status yang diharapkan adalah bagaimana
pekerjaan itu bisa selesai. _Controller job_ membuat status pekerjaan saat ini
agar mendekati dengan keadaan yang kamu inginkan: membuat Pod yang melakukan
pekerjaan yang kamu inginkan untuk Job tersebut, sehingga Job hampir
Setelah kamu membuat Job baru, status yang diharapkan adalah bagaimana
pekerjaan itu bisa selesai. _Controller job_ membuat status pekerjaan saat ini
agar mendekati dengan keadaan yang kamu inginkan: membuat Pod yang melakukan
pekerjaan yang kamu inginkan untuk Job tersebut, sehingga Job hampir
terselesaikan.
_Controller_ juga memperbarui objek yang mengkonfigurasinya. Misalnya: setelah
pekerjaan dilakukan untuk Job tersebut, _controller job_ memperbarui objek Job
_Controller_ juga memperbarui objek yang mengkonfigurasinya. Misalnya: setelah
pekerjaan dilakukan untuk Job tersebut, _controller job_ memperbarui objek Job
dengan menandainya `Finished`.
(Ini hampir sama dengan bagaimana beberapa termostat mematikan lampu untuk
mengindikasikan bahwa kamar kamu sekarang sudah berada pada suhu yang kamu
(Ini hampir sama dengan bagaimana beberapa termostat mematikan lampu untuk
mengindikasikan bahwa kamar kamu sekarang sudah berada pada suhu yang kamu
inginkan).
### Kontrol Langsung
@ -92,17 +92,17 @@ inginkan).
Berbeda dengan sebuah Job, beberapa dari _controller_ perlu melakukan perubahan
sesuatu di luar dari klaster kamu.
Sebagai contoh, jika kamu menggunakan kontrol tertutup untuk memastikan apakah
Sebagai contoh, jika kamu menggunakan kontrol tertutup untuk memastikan apakah
cukup {{< glossary_tooltip text="Node" term_id="node" >}}
dalam kluster kamu, maka _controller_ memerlukan sesuatu di luar klaster saat ini
dalam klaster kamu, maka _controller_ memerlukan sesuatu di luar klaster saat ini
untuk mengatur Node-Node baru apabila dibutuhkan.
_controller_ yang berinteraksi dengan keadaan eksternal dapat menemukan keadaan
yang diinginkannya melalui server API, dan kemudian berkomunikasi langsung
dengan sistem eksternal untuk membawa keadaan saat ini mendekat keadaan yang
_controller_ yang berinteraksi dengan keadaan eksternal dapat menemukan keadaan
yang diinginkannya melalui server API, dan kemudian berkomunikasi langsung
dengan sistem eksternal untuk membawa keadaan saat ini mendekat keadaan yang
diinginkan.
(Sebenarnya ada sebuah [_controller_](https://github.com/kubernetes/autoscaler/) yang melakukan penskalaan node secara
(Sebenarnya ada sebuah [_controller_](https://github.com/kubernetes/autoscaler/) yang melakukan penskalaan node secara
horizontal dalam klaster kamu.
## Status sekarang berbanding status yang diinginkan {#sekarang-banding-diinginkan}
@ -110,39 +110,39 @@ horizontal dalam klaster kamu.
Kubernetes mengambil pandangan sistem secara _cloud-native_, dan mampu menangani
perubahan yang konstan.
Klaster kamu dapat mengalami perubahan kapan saja pada saat pekerjaan sedang
Klaster kamu dapat mengalami perubahan kapan saja pada saat pekerjaan sedang
berlangsung dan kontrol tertutup secara otomatis memperbaiki setiap kegagalan.
Hal ini berarti bahwa, secara potensi, klaster kamu tidak akan pernah mencapai
Hal ini berarti bahwa, secara potensi, klaster kamu tidak akan pernah mencapai
kondisi stabil.
Selama _controller_ dari klaster kamu berjalan dan mampu membuat perubahan yang
Selama _controller_ dari klaster kamu berjalan dan mampu membuat perubahan yang
bermanfaat, tidak masalah apabila keadaan keseluruhan stabil atau tidak.
## Perancangan
Sebagai prinsip dasar perancangan, Kubernetes menggunakan banyak _controller_ yang
masing-masing mengelola aspek tertentu dari keadaan klaster. Yang paling umum,
kontrol tertutup tertentu menggunakan salah satu jenis sumber daya
sebagai suatu keadaan yang diinginkan, dan memiliki jenis sumber daya yang
Sebagai prinsip dasar perancangan, Kubernetes menggunakan banyak _controller_ yang
masing-masing mengelola aspek tertentu dari keadaan klaster. Yang paling umum,
kontrol tertutup tertentu menggunakan salah satu jenis sumber daya
sebagai suatu keadaan yang diinginkan, dan memiliki jenis sumber daya yang
berbeda untuk dikelola dalam rangka membuat keadaan yang diinginkan terjadi.
Sangat penting untuk memiliki beberapa _controller_ sederhana daripada hanya satu
_controller_ saja, dimana satu kumpulan monolitik kontrol tertutup saling
Sangat penting untuk memiliki beberapa _controller_ sederhana daripada hanya satu
_controller_ saja, dimana satu kumpulan monolitik kontrol tertutup saling
berkaitan satu sama lain. Karena _controller_ bisa saja gagal, sehingga Kubernetes
dirancang untuk memungkinkan hal tersebut.
Misalnya: _controller_ pekerjaan melacak objek pekerjaan (untuk menemukan
adanya pekerjaan baru) dan objek Pod (untuk menjalankan pekerjaan tersebut dan
kemudian melihat lagi ketika pekerjaan itu sudah selesai). Dalam hal ini yang
adanya pekerjaan baru) dan objek Pod (untuk menjalankan pekerjaan tersebut dan
kemudian melihat lagi ketika pekerjaan itu sudah selesai). Dalam hal ini yang
lain membuat pekerjaan, sedangkan _controller_ pekerjaan membuat Pod-Pod.
{{< note >}}
Ada kemungkinan beberapa _controller_ membuat atau memperbarui jenis objek yang
sama. Namun di belakang layar, _controller_ Kubernetes memastikan bahwa mereka
hanya memperhatikan sumbr daya yang terkait dengan sumber daya yang mereka
Ada kemungkinan beberapa _controller_ membuat atau memperbarui jenis objek yang
sama. Namun di belakang layar, _controller_ Kubernetes memastikan bahwa mereka
hanya memperhatikan sumbr daya yang terkait dengan sumber daya yang mereka
kendalikan.
Misalnya, kamu dapat memiliki Deployment dan Job; dimana keduanya akan membuat
Misalnya, kamu dapat memiliki Deployment dan Job; dimana keduanya akan membuat
Pod. _Controller Job_ tidak akan menghapus Pod yang dibuat oleh Deployment kamu,
karena ada informasi ({{< glossary_tooltip term_id="label" text="labels" >}})
yang dapat oleh _controller_ untuk membedakan Pod-Pod tersebut.
@ -156,14 +156,14 @@ bawaan memberikan perilaku inti yang sangat penting.
_Controller Deployment_ dan _controller Job_ adalah contoh dari _controller_ yang
hadir sebagai bagian dari Kubernetes itu sendiri (_controller_ "bawaan").
Kubernetes memungkinkan kamu menjalankan _control plane_ yang tangguh, sehingga
jika ada _controller_ bawaan yang gagal, maka bagian lain dari _control plane_ akan
Kubernetes memungkinkan kamu menjalankan _control plane_ yang tangguh, sehingga
jika ada _controller_ bawaan yang gagal, maka bagian lain dari _control plane_ akan
mengambil alih pekerjaan.
Kamu juga dapat menemukan pengontrol yang berjalan di luar _control plane_, untuk
mengembangkan lebih jauh Kubernetes. Atau, jika mau, kamu bisa membuat
Kamu juga dapat menemukan pengontrol yang berjalan di luar _control plane_, untuk
mengembangkan lebih jauh Kubernetes. Atau, jika mau, kamu bisa membuat
_controller_ baru sendiri. Kamu dapat menjalankan _controller_ kamu sendiri sebagai
satu kumpulan dari beberapa Pod, atau bisa juga sebagai bagian eksternal dari
satu kumpulan dari beberapa Pod, atau bisa juga sebagai bagian eksternal dari
Kubernetes. Manakah yang paling sesuai akan tergantung pada apa yang _controller_
khusus itu lakukan.

View File

@ -1,5 +1,5 @@
---
title: Jaringan Kluster
title: Jaringan Klaster
content_type: concept
weight: 50
---
@ -24,7 +24,7 @@ Kubernetes adalah tentang berbagi mesin antar aplikasi. Pada dasarnya,
saat berbagi mesin harus memastikan bahwa dua aplikasi tidak mencoba menggunakan
_port_ yang sama. Mengkoordinasikan _port_ di banyak pengembang sangat sulit
dilakukan pada skala yang berbeda dan memaparkan pengguna ke masalah
tingkat kluster yang di luar kendali mereka.
tingkat klaster yang di luar kendali mereka.
Alokasi _port_ yang dinamis membawa banyak komplikasi ke sistem - setiap aplikasi
harus menganggap _port_ sebagai _flag_, _server_ API harus tahu cara memasukkan
@ -73,9 +73,9 @@ Detail tentang cara kerja sistem AOS dapat diakses di sini: http://www.apstra.co
### AWS VPC CNI untuk Kubernetes
[AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) menawarkan jaringan AWS _Virtual Private Cloud_ (VPC) terintegrasi untuk kluster Kubernetes. Plugin CNI ini menawarkan _throughput_ dan ketersediaan tinggi, latensi rendah, dan _jitter_ jaringan minimal. Selain itu, pengguna dapat menerapkan jaringan AWS VPC dan praktik keamanan terbaik untuk membangun kluster Kubernetes. Ini termasuk kemampuan untuk menggunakan catatan aliran VPC, kebijakan perutean VPC, dan grup keamanan untuk isolasi lalu lintas jaringan.
[AWS VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) menawarkan jaringan AWS _Virtual Private Cloud_ (VPC) terintegrasi untuk klaster Kubernetes. Plugin CNI ini menawarkan _throughput_ dan ketersediaan tinggi, latensi rendah, dan _jitter_ jaringan minimal. Selain itu, pengguna dapat menerapkan jaringan AWS VPC dan praktik keamanan terbaik untuk membangun klaster Kubernetes. Ini termasuk kemampuan untuk menggunakan catatan aliran VPC, kebijakan perutean VPC, dan grup keamanan untuk isolasi lalu lintas jaringan.
Menggunakan _plugin_ CNI ini memungkinkan Pod Kubernetes memiliki alamat IP yang sama di dalam Pod seperti yang mereka lakukan di jaringan VPC. CNI mengalokasikan AWS _Elastic Networking Interfaces_ (ENIs) ke setiap node Kubernetes dan menggunakan rentang IP sekunder dari setiap ENI untuk Pod pada Node. CNI mencakup kontrol untuk pra-alokasi ENI dan alamat IP untuk waktu mulai Pod yang cepat dan memungkinkan kluster besar hingga 2.000 Node.
Menggunakan _plugin_ CNI ini memungkinkan Pod Kubernetes memiliki alamat IP yang sama di dalam Pod seperti yang mereka lakukan di jaringan VPC. CNI mengalokasikan AWS _Elastic Networking Interfaces_ (ENIs) ke setiap node Kubernetes dan menggunakan rentang IP sekunder dari setiap ENI untuk Pod pada Node. CNI mencakup kontrol untuk pra-alokasi ENI dan alamat IP untuk waktu mulai Pod yang cepat dan memungkinkan klaster besar hingga 2.000 Node.
Selain itu, CNI dapat dijalankan bersama [Calico untuk penegakan kebijakan jaringan](https://docs.aws.amazon.com/eks/latest/userguide/calico.html). Proyek AWS VPC CNI adalah _open source_ dengan [dokumentasi di GitHub](https://github.com/aws/amazon-vpc-cni-k8s).
@ -83,7 +83,7 @@ Selain itu, CNI dapat dijalankan bersama [Calico untuk penegakan kebijakan jarin
[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) adalah arsitektur jaringan asli layanan cloud, yang dirancang untuk menjalankan Kubernetes di lingkungan cloud pribadi / lokal. Dengan menggunakan SDN fisik & _virtual_ terpadu, Big Cloud Fabric menangani masalah yang sering melekat pada jaringan kontainer seperti penyeimbangan muatan, visibilitas, pemecahan masalah, kebijakan keamanan & pemantauan lalu lintas kontainer.
Dengan bantuan arsitektur multi-penyewa Pod virtual pada Big Cloud Fabric, sistem orkestrasi kontainer seperti Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm akan terintegrasi secara alami bersama dengan sistem orkestrasi VM seperti VMware, OpenStack & Nutanix. Pelanggan akan dapat terhubung dengan aman berapa pun jumlah klusternya dan memungkinkan komunikasi antar penyewa di antara mereka jika diperlukan.
Dengan bantuan arsitektur multi-penyewa Pod virtual pada Big Cloud Fabric, sistem orkestrasi kontainer seperti Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm akan terintegrasi secara alami bersama dengan sistem orkestrasi VM seperti VMware, OpenStack & Nutanix. Pelanggan akan dapat terhubung dengan aman berapa pun jumlah klasternya dan memungkinkan komunikasi antar penyewa di antara mereka jika diperlukan.
Terbaru ini BCF diakui oleh Gartner sebagai visioner dalam [_Magic Quadrant_](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). Salah satu penyebaran BCF Kubernetes di tempat (yang mencakup Kubernetes, DC/OS & VMware yang berjalan di beberapa DC di berbagai wilayah geografis) juga dirujuk [di sini](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
@ -113,7 +113,7 @@ Plugin ini dirancang untuk secara langsung mengkonfigurasi dan _deploy_ dalam VP
### DANM
[DANM] (https://github.com/nokia/danm) adalah solusi jaringan untuk beban kerja telco yang berjalan di kluster Kubernetes. Dibangun dari komponen-komponen berikut:
[DANM] (https://github.com/nokia/danm) adalah solusi jaringan untuk beban kerja telco yang berjalan di klaster Kubernetes. Dibangun dari komponen-komponen berikut:
* Plugin CNI yang mampu menyediakan antarmuka IPVLAN dengan fitur-fitur canggih
* Modul IPAM built-in dengan kemampuan mengelola dengan jumlah banyak, _cluster-wide_, _discontinous_ jaringan L3 dan menyediakan skema dinamis, statis, atau tidak ada permintaan skema IP
@ -129,7 +129,7 @@ Dengan _toolset_ ini, DANM dapat memberikan beberapa antarmuka jaringan yang ter
### Google Compute Engine (GCE)
Untuk skrip konfigurasi kluster Google Compute Engine, [perutean lanjutan](https://cloud.google.com/vpc/docs/routes) digunakan untuk menetapkan setiap VM _subnet_ (standarnya adalah `/24` - 254 IP). Setiap lalu lintas yang terikat untuk _subnet_ itu akan dialihkan langsung ke VM oleh _fabric_ jaringan GCE. Ini adalah tambahan untuk alamat IP "utama" yang ditugaskan untuk VM, yang NAT'ed untuk akses internet keluar. Sebuah linux _bridge_ (disebut `cbr0`) dikonfigurasikan untuk ada pada subnet itu, dan diteruskan ke _flag_ `-bridge` milik docker.
Untuk skrip konfigurasi klaster Google Compute Engine, [perutean lanjutan](https://cloud.google.com/vpc/docs/routes) digunakan untuk menetapkan setiap VM _subnet_ (standarnya adalah `/24` - 254 IP). Setiap lalu lintas yang terikat untuk _subnet_ itu akan dialihkan langsung ke VM oleh _fabric_ jaringan GCE. Ini adalah tambahan untuk alamat IP "utama" yang ditugaskan untuk VM, yang NAT'ed untuk akses internet keluar. Sebuah linux _bridge_ (disebut `cbr0`) dikonfigurasikan untuk ada pada subnet itu, dan diteruskan ke _flag_ `-bridge` milik docker.
Docker dimulai dengan:

View File

@ -265,8 +265,8 @@ melakukan mekanisme _pipeline_ `base64 | tr -d '\n'` jika tidak terdapat opsi `-
#### Membuat Secret dengan Menggunakan _Generator_
Kubectl mendukung [mekanisme manajemen objek dengan menggunakan Kustomize](/docs/tasks/manage-kubernetes-objects/kustomization/)
sejak versi 1.14. Dengan fitur baru ini, kamu juga dapat membuat sebuah Secret dari sebuah _generator_
dan kemudian mengaplikasikannya untuk membuat sebuah objek pada Apiserver. _Generator_ yang digunakan haruslah
sejak versi 1.14. Dengan fitur baru ini, kamu juga dapat membuat sebuah Secret dari sebuah _generator_
dan kemudian mengaplikasikannya untuk membuat sebuah objek pada Apiserver. _Generator_ yang digunakan haruslah
dispesifikasikan di dalam sebuah _file_ `kustomization.yaml` di dalam sebuah direktori.
Sebagai contoh, untuk menghasilan sebuah Secret dari _file-file_ `./username.txt` dan `./password.txt`
@ -325,14 +325,14 @@ $ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
```
{{< note >}}
Secret yang dihasilkan nantinya akan memiliki tambahan sufix dengan cara melakukan teknik _hashing_
pada isi Secret tersebut. Hal ini dilakukan untuk menjamin dibuatnya sebuah Secret baru setiap kali terjadi
Secret yang dihasilkan nantinya akan memiliki tambahan sufix dengan cara melakukan teknik _hashing_
pada isi Secret tersebut. Hal ini dilakukan untuk menjamin dibuatnya sebuah Secret baru setiap kali terjadi
perubahan isi dari Secret tersebut.
{{< /note >}}
#### Melakukan Proses _Decode_ pada Secret
Secret dapat dibaca dengan menggunakan perintah `kubectl get secret`.
Secret dapat dibaca dengan menggunakan perintah `kubectl get secret`.
Misalnya saja, untuk membaca Secret yang dibuat pada bagian sebelumya:
```shell
@ -366,9 +366,9 @@ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
## Menggunakan Secret
Secret dapat di-_mount_ sebagai _volume_ data atau dapat diekspos sebagai {{< glossary_tooltip text="variabel-variabel environment" term_id="container-env-variables" >}}
dapat digunakan di dalam Pod. Secret ini juga dapat digunakan secara langsug
oleh bagian lain dari sistem, tanpa secara langsung berkaitan dengan Pod.
Sebagai contoh, Secret dapat berisikan kredensial bagian suatu sistem lain yang digunakan
dapat digunakan di dalam Pod. Secret ini juga dapat digunakan secara langsug
oleh bagian lain dari sistem, tanpa secara langsung berkaitan dengan Pod.
Sebagai contoh, Secret dapat berisikan kredensial bagian suatu sistem lain yang digunakan
untuk berinteraksi dengan sistem eksternal yang kamu butuhkan.
### Menggunakan Secret sebagai _File_ melalui Pod
@ -403,17 +403,17 @@ spec:
Setiap Secret yang ingin kamu gunakan harus dirujuk pada _field_ `.spec.volumes`.
Jika terdapat lebih dari satu container di dalam Pod,
maka setiap container akan membutuhkan blok `volumeMounts`-nya masing-masing,
Jika terdapat lebih dari satu container di dalam Pod,
maka setiap container akan membutuhkan blok `volumeMounts`-nya masing-masing,
meskipun demikian hanya sebuah _field_ `.spec.volumes` yang dibutuhkan untuk setiap Secret.
Kamu dapat menyimpan banyak _file_ ke dalam satu Secret,
Kamu dapat menyimpan banyak _file_ ke dalam satu Secret,
atau menggunakan banyak Secret, hal ini tentunya bergantung pada preferensi pengguna.
**Proyeksi _key_ Secret pada Suatu _Path_ Spesifik**
Kita juga dapat mengontrol _path_ di dalam _volume_ di mana sebuah Secret diproyeksikan.
Kamu dapat menggunakan _field_ `.spec.volumes[].secret.items` untuk mengubah
Kita juga dapat mengontrol _path_ di dalam _volume_ di mana sebuah Secret diproyeksikan.
Kamu dapat menggunakan _field_ `.spec.volumes[].secret.items` untuk mengubah
_path_ target dari setiap _key_:
```yaml
@ -443,17 +443,17 @@ Apa yang akan terjadi jika kita menggunakan definisi di atas:
* Secret `username` akan disimpan pada _file_ `/etc/foo/my-group/my-username` dan bukan `/etc/foo/username`.
* Secret `password` tidak akan diproyeksikan.
Jika _field_ `.spec.volumes[].secret.items` digunakan, hanya _key-key_ yang dispesifikan di dalam
`items` yang diproyeksikan. Untuk mengonsumsi semua _key-key_ yang ada dari Secret,
Jika _field_ `.spec.volumes[].secret.items` digunakan, hanya _key-key_ yang dispesifikan di dalam
`items` yang diproyeksikan. Untuk mengonsumsi semua _key-key_ yang ada dari Secret,
semua _key_ yang ada harus didaftarkan pada _field_ `items`.
Semua _key_ yang didaftarkan juga harus ada di dalam Secret tadi.
Semua _key_ yang didaftarkan juga harus ada di dalam Secret tadi.
Jika tidak, _volume_ yang didefinisikan tidak akan dibuat.
**Secret file permissions**

You can also specify the permission mode bits for the Secret files.
If you do not specify any, `0644` is used by default.
You can set a default mode for the whole Secret volume and override the
permissions per key if needed.
For example, you can specify a default mode like this:
@ -477,15 +477,15 @@ spec:
defaultMode: 256
```
Then, the Secret is mounted on `/etc/foo`, and all the files
created in that secret volume have permission `0400`.

Note that the JSON spec does not support octal notation, so use the
value 256 for `0400` permissions. If you use YAML instead of JSON for the Pod spec,
you can use octal notation to specify permissions in a more
natural way.

You can also use mapping, as in the previous example,
and specify different permissions for different files.
```yaml
@ -510,19 +510,19 @@ spec:
mode: 511
```
In this case, the file resulting in `/etc/foo/my-group/my-username` has
permission `0777`. Because of JSON limitations, you
must specify the mode in decimal notation.

Note that this permission value might be displayed in decimal notation
if you read it back later.
**Consuming Secret values from volumes**

Inside the container that mounts a secret volume,
the Secret keys appear as files and the Secret values, stored in base64,
are decoded and written into those files.
This is the result of commands executed inside the container from the example
described above:
```shell
@ -548,34 +548,34 @@ cat /etc/foo/password
1f2d1e2e67df
```
The program in the container is responsible for reading the Secret
from the files.
**Mounted Secrets are updated automatically**

When a Secret currently consumed in a volume is updated,
the projected keys are eventually updated as well. The kubelet checks periodically
whether the mounted Secret has changed. However,
it uses its local cache to get the current value of the Secret.
The type of the cache is configurable with the
`ConfigMapAndSecretChangeDetectionStrategy` field in the
[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/).
It can be propagated via watch (the default), ttl-based, or by redirecting all
requests directly to the kube-apiserver.
As a result, the total delay from the moment the Secret is updated to the moment
new keys are projected to the Pod can be as long as the kubelet sync period +
the cache propagation delay, where the cache propagation delay depends on the chosen cache type
(it equals the watch propagation delay, the ttl of the cache, or zero, respectively).
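As an illustration, a sketch of the relevant KubeletConfiguration field (other required kubelet settings omitted):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Watch (default), Cache (ttl-based), or Get (direct apiserver requests)
configMapAndSecretChangeDetectionStrategy: "Watch"
```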
{{< note >}}
A container using a Secret as a
[subPath](/id/docs/concepts/storage/volumes#using-subpath) volume mount
will not receive Secret updates.
{{< /note >}}
### Using Secrets as environment variables

To use a Secret as an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
in a Pod, follow these steps:
@ -610,9 +610,9 @@ spec:
**Consuming Secret values from environment variables**

Inside a container that consumes a Secret via environment variables, the Secret keys
appear as normal environment variables whose values are the base64-decoded Secret data.
This is the result of commands executed inside the container
defined above:
```shell
@ -630,8 +630,8 @@ echo $SECRET_PASSWORD
### Using imagePullSecrets

An `imagePullSecret` is a way to pass a secret that contains a Docker
(or other image registry) password
to the kubelet, so it can pull a private image on behalf of your Pod.
**Manually specifying an imagePullSecret**
@ -640,17 +640,17 @@ Penggunaan imagePullSecrets dideskripsikan di dalam [dokumentasi _image_](/id/do
### Arranging for imagePullSecrets to be automatically attached

You can manually create an imagePullSecret and reference it from
a serviceAccount. Any Pods created with that serviceAccount,
or created with the default serviceAccount, will get the imagePullSecrets field of
the serviceAccount they use.

See [Add ImagePullSecrets to a service account](/id/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)
for a detailed explanation of that process.
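A minimal sketch of a ServiceAccount that references an existing imagePullSecret (the Secret name is a placeholder):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
  - name: myregistrykey   # an existing kubernetes.io/dockerconfigjson Secret
```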
### Automatic mounting of manually created Secrets

Manually created Secrets (for example, one containing a token for accessing a GitHub
account) can be automatically attached to Pods based on the service account
those Pods use.
See [Injecting information into Pods using a PodPreset](/docs/tasks/inject-data-application/podpreset/) for more information.
@ -658,41 +658,41 @@ Baca [Bagaimana Penggunaan PodPreset untuk Memasukkan Informasi ke Dalam Pod](/d
### Restrictions

Secret volume sources are validated to ensure that the specified object
reference actually points to an object of type `Secret`.
Therefore, a Secret needs to be created before any Pods that depend on it.
Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
They can only be referenced by Pods in that same namespace.
Individual Secrets are limited to 1MiB in size.
This is to discourage the creation of very large Secrets,
which would exhaust apiserver and kubelet memory. However,
creating many smaller Secrets can also exhaust memory. More comprehensive
limits on memory usage due to Secrets is a planned feature.
The kubelet only supports the use of Secrets for Pods it gets
from the API server. This includes any Pods created using
kubectl, or indirectly via a replication controller. It does not
include Pods created via the kubelet's `--manifest-url` flag
or its REST API (these are not common ways to
create Pods).
Secrets must be created before they are consumed in Pods as environment variables,
unless those environment variables are marked as optional. References to Secrets
that do not exist will prevent the Pod from starting.
References via `secretKeyRef` to a key that does not exist in a named Secret
will also prevent the Pod from starting.
Secrets used to populate environment variables via `envFrom` that have keys
considered invalid environment variable names will have those keys skipped.
The Pod is still allowed to start. An event is then recorded
with reason `InvalidVariableNames` and a message listing the
invalid keys that were skipped. The example below shows
a Pod that refers to the Secret `default/mysecret`, which contains two invalid keys,
1badkey and 2alsobad.
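A sketch of what such a Secret could look like (the values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: default
type: Opaque
stringData:
  1badkey: value1    # valid Secret key, but not a valid environment variable name
  2alsobad: value2   # valid Secret key, but not a valid environment variable name
```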
```shell
@ -705,15 +705,15 @@ LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT
### Secret and Pod lifetime interaction
When a Pod is created via the API, there is no check
that a referenced Secret exists. Once a Pod is scheduled, the kubelet
tries to fetch the Secret value. If the Secret cannot be fetched
because it does not exist or because of a temporary lack of connection to the API server,
the kubelet retries periodically.
It also reports an event about the Pod explaining
why it has not started yet. Once the Secret is fetched, the kubelet
creates and mounts the volume containing it. None of the
Pod's containers start until all the Pod's volumes are mounted.
## Use cases
@ -731,12 +731,12 @@ secret "ssh-key-secret" created
```
{{< caution >}}
Think carefully before using your own SSH keys: other users of the cluster may have access to the Secret.
Use a service account to share the information you want within the cluster, so that the service account can be revoked if the Secret is misused.
{{< /caution >}}
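For reference, a sketch of what the `ssh-key-secret` could look like if written as a manifest; the key names and contents here are placeholders (never commit real keys):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key-secret
type: Opaque
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    (placeholder)
    -----END OPENSSH PRIVATE KEY-----
  ssh-publickey: ssh-rsa AAAA... user@example.com
```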
Now we can create a Pod that references the Secret with the SSH keys created
above and consumes it in a volume mount:
```yaml
@ -760,7 +760,7 @@ spec:
mountPath: "/etc/secret-volume"
```
When the container's command runs, the pieces of the key will be
available in:
```shell
@ -768,12 +768,12 @@ terdapat pada:
/etc/secret-volume/ssh-privatekey
```
The container is then free to use the Secret data to
establish an SSH connection.
### Use case: Pods with prod / test credentials

This example illustrates Pods that consume Secrets containing
credentials for either the production or the test environment.

Create a kustomization.yaml containing a secretGenerator.
@ -793,8 +793,8 @@ secret "test-db-secret" created
```
{{< note >}}
Special characters such as `$`, `\*`, and `!` require escaping.
If your password contains special characters, escape them with the `\\`
character. For example, if your actual password is
`S!B\*d$zDsb`, you would run the command as follows:
```shell
@ -864,7 +864,7 @@ Terapkan semua perubahan pada objek-objek tadi ke Apiserver dengan menggunakan
kubectl apply -k .
```
Both containers will have the following files present in their
filesystems, with values for each container's environment:
```shell
@ -872,12 +872,12 @@ _filesystem_ keduanya dengan _value_ sebagai berikut untuk masing-masing _enviro
/etc/secret-volume/password
```
Note that the specs for the two Pods differ in only one field;
this facilitates creating Pods with different capabilities from a common
configuration template.
You could further simplify the base Pod specification by using two different service accounts:
for example, one called `prod-user` with the `prod-db-secret`, and one called
`test-user` with the `test-db-secret`. The Pod specification can then be shortened to:
```yaml
@ -896,9 +896,9 @@ spec:
### Use case: dotfiles in a secret volume

To make data 'hidden' (that is, in a file whose name begins
with a dot character), simply make the key start
with a dot. For example, when the following Secret is mounted
into a volume:
```yaml
@ -932,8 +932,8 @@ spec:
```
The `secret-volume` volume will contain a single file, called `.secret-file`, and
the `dotfile-test-container` container will have this file present at the path
`/etc/secret-volume/.secret-file`.
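For illustration, a sketch of such a Secret, using the file name mentioned above; the Secret name and contents are assumed:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
stringData:
  .secret-file: dotfile-data   # placeholder contents
```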
{{< note >}}
@ -943,20 +943,20 @@ kamu harus menggunakan perintah `ls -la` untuk melihat _file-file_ tadi dari seb
### Use case: Secret visible to one container in a Pod {#use-case-secret-visible-to-one-container}

Consider a program that needs to handle HTTP requests,
do some complex business logic, and then sign some messages
with an HMAC. Because it has complex application logic,
there might be an unnoticed remote file reading exploit in the server,
which could expose the private key to an attacker.
This could be divided into two processes in two containers:
a frontend container that handles user interaction and business logic but
cannot see the private key, and a signer container that can see the
private key and signs requests coming
from the frontend (over the localhost network).
With this partitioned approach, an attacker now has to trick the application
server into doing something rather arbitrary, which may be harder than getting
it to read a file.
<!-- TODO: explain how to do this in terms of automated mechanisms. -->
@ -965,34 +965,34 @@ mekanisme pembacaan _file_ menjadi lebih susah.
### Clients that use the Secret API

When deploying applications that interact with the Secret API, access should be
limited using [authorization policies](
/docs/reference/access-authn-authz/authorization/) such as [RBAC](
/docs/reference/access-authn-authz/rbac/).
Secrets often hold values that span a spectrum of importance, many of which can
cause escalations within Kubernetes (e.g. service account tokens) and to
external systems. Even if an individual app can reason about the power of the
Secrets it expects to interact with,
other apps within the same namespace can render those assumptions invalid.
For these reasons, `watch` and `list` requests for Secrets within a namespace are
extremely powerful capabilities and should be avoided, since listing Secrets allows
clients to inspect the values of every Secret in that namespace. The ability to
`watch` and `list` all Secrets in a cluster should be reserved for only the most
privileged, system-level components.
Applications that need to access the Secret API should perform `get` requests on
the Secrets they need. This lets administrators restrict access to all Secrets
while [granting access to the individual instances](/id/docs/reference/access-authn-authz/rbac/#referring-to-resources)
that the app needs.
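A sketch of an RBAC Role that grants `get` on a single, named Secret only (the namespace, Role name, and Secret name are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: secret-reader-for-app
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-credentials"]   # only this Secret instance
    verbs: ["get"]
```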
For improved performance over a looping `get`, clients can design
resources that reference a Secret and then `watch` the resource,
re-requesting the Secret when the reference changes. Additionally, a ["bulk watch" API](
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/bulk_watch.md)
to let clients `watch` individual resources has also been proposed,
and will likely be available in a future Kubernetes release.
## Security properties
@ -1000,59 +1000,59 @@ dan kemungkinan akan diimplementasikan dirilis Kubernetes selanjutnya.
### Protections
Because `secret` objects can be created independently of the `pods` that use them,
there is less risk of the Secret being exposed during the workflow of creating,
viewing, and editing Pods. The system can also take additional precautions with
`secret` objects, such as avoiding writing them to disk where
possible.
A Secret is only sent to a node if a Pod on that node
requires it. The kubelet stores the Secret in a `tmpfs`
so that the Secret is not written to disk storage. Once the Pod that depends on the Secret is deleted,
the kubelet deletes its local copy of the Secret data as well.
There may be Secrets for several Pods on the same node. However,
only the Secrets that a Pod requests are potentially visible
within its containers. Therefore, one Pod does not have access to the Secrets
of another Pod.
There may be several containers in a Pod.
However, each container in a Pod has to request the secret volume in its
`volumeMounts` for it to be visible within that container.
This can be used to construct useful [security partitions
at the Pod level](#use-case-secret-visible-to-one-container).
In most Kubernetes-project-maintained distributions,
communication between users and the apiserver, and between the apiserver and the kubelets, is protected by SSL/TLS.
Secrets are therefore protected when transmitted over these channels.
{{< feature-state for_k8s_version="v1.13" state="beta" >}}
You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
for Secret data, so that Secrets are not stored in {{< glossary_tooltip term_id="etcd" >}}
in plain text.
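A sketch of an `EncryptionConfiguration` that encrypts Secret resources at rest; the key value is a placeholder, and the file must be referenced from the API server via `--encryption-provider-config` (see the linked task page):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_32_BYTE_KEY>   # placeholder
      - identity: {}   # fallback for reading pre-existing, unencrypted data
```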
### Risks
- In the API server, Secret data is stored in {{< glossary_tooltip term_id="etcd" >}};
  therefore:
  - Administrators should enable encryption at rest for cluster data (requires v1.13 or later)
  - Administrators should limit access to etcd to admin users
  - Administrators may want to wipe disks previously used by etcd when no longer in use
  - If running etcd in a cluster, administrators should make sure to use SSL/TLS
    for etcd peer-to-peer communication.
- If you configure a Secret through a manifest file (JSON or YAML)
  that has the Secret data encoded as base64, sharing this file or checking it into a
  source repository means the Secret is compromised.
  Base64 encoding is not an encryption method and is considered the same as plain text.
- Applications still need to protect the value of a Secret after reading it from a volume,
  so that the risk of accidentally logging the Secret is avoided.
- A user who can create a Pod that uses a Secret can also see the value of that Secret.
  Even if the apiserver policy does not allow that user to read the Secret object, the user
  could run a Pod that exposes the Secret.
- Currently, anyone with root access on any node can read _any_ Secret from the apiserver,
  by impersonating the kubelet. It is a planned feature to only send Secrets to
  nodes that actually require them, to restrict the impact of a root exploit on a single node.
## {{% heading "whatsnext" %}}

View File

@ -13,11 +13,11 @@ Lapisan agregasi memungkinkan Kubernetes untuk diperluas dengan API tambahan, se
<!-- body -->
## Overview
The aggregation layer allows you to install additional Kubernetes-style APIs in your cluster. These can either be pre-built, existing 3rd party solutions, such as [service-catalog](https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md), or user-created APIs like [apiserver-builder](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/README.md), which can get you started.

The aggregation layer runs in-process with the kube-apiserver. Until an extension resource is registered, the aggregation layer will do nothing. To register an API, a user must add an _APIService_ object, which "claims" a URL path in the Kubernetes API. At that point, the aggregation layer will proxy anything sent to that API path (e.g. /apis/myextension.mycompany.io/v1/…) to the registered _APIService_.

Ordinarily, the _APIService_ will be implemented by an extension-apiserver in a Pod running in the cluster. This extension-apiserver will normally need to be paired with one or more controllers if active management of the added resources is needed. As a result, apiserver-builder actually provides a skeleton for both. As another example, when service-catalog is installed, it provides both the extension-apiserver and controller for the services it offers.
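As an illustration, a sketch of an APIService object that claims such a URL path; the group, version, Service name, and namespace are placeholders matching the hypothetical path above:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.myextension.mycompany.io
spec:
  group: myextension.mycompany.io
  version: v1
  groupPriorityMinimum: 1000
  versionPriority: 15
  insecureSkipTLSVerify: true      # for illustration only; set caBundle in real deployments
  service:
    name: my-extension-apiserver   # Service fronting the extension apiserver Pods
    namespace: my-extension
```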
Extension-apiservers should have low-latency connections to and from the kube-apiserver.
In particular, discovery requests must complete a round trip to and from the kube-apiserver in five seconds or less.

View File

@ -21,7 +21,7 @@ _Plugin_ jaringan di Kubernetes hadir dalam beberapa varian:
## Installation

The kubelet has a single default network plugin, and a default network common to the entire cluster. It probes for plugins when it starts up, remembers what it finds, and executes the selected plugin at appropriate times in the pod lifecycle (this only applies to Docker, since rkt manages its own CNI plugins). There are two kubelet command-line parameters to keep in mind when using plugins:

* `cni-bin-dir`: The kubelet probes this directory for plugins on startup
* `network-plugin`: The network plugin to use from `cni-bin-dir`. It must match the name reported by a plugin probed from the plugin directory. For CNI plugins, this value is simply "cni".

View File

@ -8,19 +8,19 @@ weight: 90
<!-- overview -->
Selecting the appropriate authentication mechanism is a crucial aspect of securing your cluster.
Kubernetes provides several built-in mechanisms, each with its own strengths and weaknesses
that should be carefully considered when choosing the best authentication mechanism for your cluster.

In general, it is recommended to enable as few authentication mechanisms as possible to simplify
user management and prevent cases where users retain access to a cluster that is no longer required.

It is important to note that Kubernetes does not have an in-built user database within the cluster.
Instead, it takes user information from the configured authentication system and uses that to make
authorization decisions. As a result, to audit user access you need to review credentials from every
configured authentication source.

For production clusters with multiple users directly accessing the Kubernetes API, it is recommended
to use external authentication sources such as OIDC. The internal authentication mechanisms, such as
client certificates and service account tokens, described below, are not suitable for this use case.
@ -36,8 +36,8 @@ untuk autentikasi pengguna, mekanisme ini mungkin tidak cocok untuk penggunaan p
until it expires. To mitigate this risk, it is recommended to configure short lifetimes for
user authentication credentials created using client certificates.
- If a certificate needs to be invalidated, the certificate authority must be re-keyed, which can introduce
availability risks to the cluster.
- There is no permanent record of client certificates created in the cluster. As a result, all issued
certificates must be recorded if you need to keep track of them.
- Private keys used for client certificate authentication cannot be password-protected. Anyone who can
read the file containing the key will be able to make use of it.
@ -55,13 +55,13 @@ di disk node control plane, pendekatan ini tidak disarankan untuk server produks
- Credentials are stored in plain text on control plane node disks, which can be a security risk.
- Changing any credential requires a restart of the API server process to take effect, which can impact availability.
- There is no mechanism available to allow users to rotate their credentials. To rotate a credential,
a cluster administrator must modify the token on disk and distribute it to the users.
- There is no lockout mechanism available to prevent brute-force attacks.
## Bootstrap tokens {#bootstrap-tokens}

[Bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) are used for joining
nodes to clusters and are not recommended for user authentication, for several reasons:
- They have hard-coded group memberships that are not suitable for general use, making them unsuitable
for authentication purposes.
@ -73,12 +73,12 @@ node ke kluster dan tidak disarankan untuk autentikasi pengguna karena beberapa
## ServiceAccount secret tokens {#serviceaccount-secret-tokens}

[Service account secrets](/docs/reference/access-authn-authz/service-accounts-admin/#manual-secret-management-for-serviceaccounts)
are available as an option to allow workloads running in the cluster to authenticate to the API server.
In Kubernetes < 1.23, these were the default option; however, they are being replaced with TokenRequest API tokens.
While these secrets could be used for user authentication, they are generally unsuitable for a number of reasons:

- They cannot be set with an expiry and will remain valid until the associated service account is deleted.
- The authentication tokens are visible to any cluster user who can read secrets in the namespace that they
are defined in.
- Service accounts cannot be added to arbitrary groups, complicating RBAC management where they are used.
@ -99,7 +99,7 @@ Kubernetes mendukung integrasi layanan autentikasi eksternal dengan API Kubernet
There is a wide variety of software that can be used to integrate Kubernetes with an identity provider.
However, when using OIDC authentication in Kubernetes, it is important to consider the following hardening measures:

- The software installed in the cluster to support OIDC authentication should be isolated from general workloads
as it will run with high privileges.
- Some managed Kubernetes services are limited in the OIDC providers that can be used.
- As with TokenRequest tokens, OIDC tokens should have a short lifetime to reduce the impact of
@ -109,13 +109,13 @@ Namun, saat menggunakan autentikasi OIDC di Kubernetes, penting untuk mempertimb
[Webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication)
is another option for integrating external authentication providers into Kubernetes. This mechanism allows an
authentication service, either running inside the cluster or externally, to be contacted for an authentication
decision over a webhook. It is important to note that the suitability of this mechanism will likely depend on
the software used for the authentication service, and there are some Kubernetes-specific considerations to
take into account.

To configure webhook authentication, access to control plane server filesystems is required. This means that
it will not be possible with managed Kubernetes unless the provider specifically makes it available.
Additionally, any software installed in the cluster to support this access should be isolated from general
workloads, as it will run with high privileges.
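For illustration, a sketch of the kubeconfig-format file that the API server's `--authentication-token-webhook-config-file` flag points to; all names, paths, and URLs are placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem        # CA for the webhook's serving certificate
      server: https://authn.example.com/authenticate
users:
  - name: kube-apiserver
    user:
      client-certificate: /path/to/client-cert.pem  # certificate the API server presents
      client-key: /path/to/client-key.pem
contexts:
  - name: webhook
    context:
      cluster: remote-authn-service
      user: kube-apiserver
current-context: webhook
```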
## Authenticating proxy {#authenticating-proxy}

View File

@ -0,0 +1,132 @@
---
title: Gateway API
content_type: concept
description: >-
  Gateway API is a family of API kinds that provide dynamic infrastructure provisioning and advanced traffic routing.
weight: 55
---
<!-- overview -->
Gateway API provides networking services through an extensible, role-oriented, protocol-aware configuration mechanism. [Gateway API](https://gateway-api.sigs.k8s.io/) is an {{<glossary_tooltip text="add-on" term_id="addons">}} containing API [kinds](https://gateway-api.sigs.k8s.io/references/spec/) that provide dynamic infrastructure provisioning and advanced traffic routing.
<!-- body -->
## Design principles

The following principles shaped the design and architecture of Gateway API:

* __Role-oriented:__ Gateway API kinds are modeled after organizational roles that are responsible for managing Kubernetes service networking:
  * __Infrastructure Provider:__ Manages infrastructure that allows multiple isolated clusters to serve multiple tenants, e.g. a cloud provider.
  * __Cluster Operator:__ Manages clusters and is typically concerned with policies, network access, application permissions, etc.
  * __Application Developer:__ Manages an application running in a cluster and is typically concerned with application-level configuration and [Service](/docs/concepts/services-networking/service/) composition.
* __Portable:__ Gateway API specifications are defined as [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources) and are supported by many [implementations](https://gateway-api.sigs.k8s.io/implementations/).
* __Expressive:__ Gateway API kinds support functionality for common traffic routing use cases such as header-based matching, traffic weighting, and others that were only possible in [Ingress](/docs/concepts/services-networking/ingress/) by using custom annotations.
* __Extensible:__ Gateway allows for custom resources to be linked at various layers of the API. This makes granular customization possible at the appropriate places within the API structure.
## Resource model

Gateway API has three stable API kinds:

* __GatewayClass:__ Defines a set of gateways with common configuration, managed by a controller that implements the class.
* __Gateway:__ Defines an instance of traffic handling infrastructure, such as a cloud load balancer.
* __HTTPRoute:__ Defines HTTP-specific rules for mapping traffic from a Gateway listener to a representation of backend network endpoints. These endpoints are often represented as a {{<glossary_tooltip text="Service" term_id="service">}}.

Gateway API is organized into different API kinds that have interdependent relationships to support the role-oriented nature of organizations. A Gateway object is associated with exactly one GatewayClass; the GatewayClass describes the gateway controller responsible for managing Gateways of this class. One or more route kinds, such as HTTPRoute, are then associated with Gateways. A Gateway can filter the routes that may be attached to its `listeners`, forming a bidirectional trust model with routes.
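For completeness, a minimal GatewayClass sketch that the Gateway example later on this page could reference via `gatewayClassName: example-class`; the controller name is a placeholder:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller
```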
The following figure illustrates the relationships of the three stable Gateway API kinds:

{{< figure src="/docs/images/gateway-kind-relationships.svg" alt="A figure illustrating the relationships of the three stable Gateway API kinds" class="diagram-medium" >}}
### Gateway {#api-kind-gateway}
A Gateway describes an instance of traffic handling infrastructure. It defines a network endpoint that can be used for processing traffic, i.e. filtering, balancing, splitting, etc. for backends such as a Service. For example, a Gateway may represent a cloud load balancer or an in-cluster proxy server that is configured to accept HTTP traffic.

A minimal Gateway resource example:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: example-gateway
spec:
gatewayClassName: example-class
listeners:
- name: http
protocol: HTTP
port: 80
```
In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the `addresses` field is unspecified, an address or hostname is assigned to the Gateway by the implementation's controller. This address is used as a network endpoint for processing traffic of backend network endpoints defined in routes.

See the [Gateway](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Gateway) reference for a full definition of this API kind.
### HTTPRoute {#api-kind-httproute}
The HTTPRoute kind specifies routing behavior of HTTP requests from a Gateway listener to backend network endpoints. For a Service backend, an implementation may represent the backend network endpoint as a Service IP or the backing Endpoints of the Service. An HTTPRoute represents configuration that is applied to the underlying Gateway implementation. For example, defining a new HTTPRoute may result in configuring additional traffic routes in a cloud load balancer or in-cluster proxy server.

A minimal HTTPRoute example:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example-httproute
spec:
parentRefs:
- name: example-gateway
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /login
backendRefs:
- name: example-svc
port: 8080
```
In this example, HTTP traffic from Gateway `example-gateway` with the Host: header set to `www.example.com` and the request path specified as `/login` will be routed to Service `example-svc` on port `8080`.

See the [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.HTTPRoute) reference for a full definition of this API kind.
## Request flow

Here is a simple example of HTTP traffic being routed to a Service by using a Gateway and an HTTPRoute:

{{< figure src="/docs/images/gateway-request-flow.svg" alt="A diagram that provides an example of HTTP traffic being routed to a Service by using a Gateway and an HTTPRoute" class="diagram-medium" >}}

In this example, the request flow for a Gateway implemented as a reverse proxy is:
1. The client starts to prepare an HTTP request for the URL `http://www.example.com`
2. The client's DNS resolver queries for the destination name and learns a mapping to one or more IP addresses associated with the Gateway.
3. The client sends a request to the Gateway IP address; the reverse proxy receives the HTTP request and uses the Host: header to match a configuration that was derived from the Gateway and attached HTTPRoute.
4. Optionally, the reverse proxy can perform request header and/or path matching based on match rules of the HTTPRoute.
5. Optionally, the reverse proxy can modify the request; for example, to add or remove headers, based on filter rules of the HTTPRoute.
6. Lastly, the reverse proxy forwards the request to one or more backends.
## Conformance

Gateway API covers a broad set of features and is widely implemented. This combination requires clear conformance definitions and tests to ensure that the API provides a consistent experience wherever it is used.

See the [conformance](https://gateway-api.sigs.k8s.io/concepts/conformance/) documentation to understand details such as release channels, support levels, and running conformance tests.
## Migrating from Ingress

Gateway API is the successor to the [Ingress](/docs/concepts/services-networking/ingress/) API, but it does not include the Ingress kind. As a result, a one-time conversion from your existing Ingress resources to Gateway API resources is necessary.

Refer to the [ingress migration](https://gateway-api.sigs.k8s.io/guides/migrating-from-ingress/#migrating-from-ingress) guide for details on migrating Ingress resources to Gateway API resources.
## {{% heading "whatsnext" %}}
Instead of Gateway API resources being natively implemented by Kubernetes, the specifications are defined as [Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) supported by a wide range of [implementations](https://gateway-api.sigs.k8s.io/implementations/).

[Install](https://gateway-api.sigs.k8s.io/guides/#installing-gateway-api) the Gateway API CRDs or follow the installation instructions of your selected implementation. After installing an implementation, use the [Getting Started](https://gateway-api.sigs.k8s.io/guides/) guide to help you quickly start working with Gateway API.
{{< note >}}
Make sure to review the documentation of your chosen implementation to understand any caveats.
{{< /note >}}
Refer to the [API specification](https://gateway-api.sigs.k8s.io/reference/spec/) for additional details of all Gateway API kinds.

View File

@ -67,7 +67,7 @@ Perilaku tertentu independen dari kelas QoS yang ditetapkan oleh Kubernetes. Mis
* The Pod's resource request is the sum of its component Containers' resource requests, and the Pod's resource limit is the sum of its component Containers' resource limits.
* The kube-scheduler does not consider QoS class when selecting which Pods to [preempt](/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption). Preemption can occur when a cluster does not have enough resources to run all the Pods you defined.
## {{% heading "whatsnext" %}}

View File

@ -17,48 +17,48 @@ Pod adalah unit komputasi terkecil yang bisa di-_deploy_ dan dibuat serta dikelo
## What is a Pod?

A Pod (as in a pod of whales or a pea pod) is a group of one or more
{{< glossary_tooltip text="containers" term_id="container" >}}
(such as Docker containers), with shared storage and network,
and a specification for how to run the containers. A Pod's contents are
always co-located and co-scheduled, and run in a shared context.

A Pod models an application-specific _"logical host"_: it contains
one or more application containers that are relatively tightly coupled. Before
the era of containers, running applications on the same physical or virtual machine
meant running them on the same logical host.
While Kubernetes supports more container runtimes than just Docker,
Docker is the most commonly known runtime, and it helps to describe
Pods in Docker terms.

The shared context of a Pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation, the same things that isolate a Docker container.
Within a Pod's context, the individual applications may have further sub-isolations applied.
Containers within a Pod share an IP address and port space,
and can find each other via `localhost`. They can also communicate using
standard inter-process communications (IPC) like SystemV semaphores
or POSIX shared memory. Containers in different Pods have distinct IP addresses
and cannot communicate by IPC without
[special configuration](/id/docs/concepts/policy/pod-security-policy/). These containers
usually communicate with each other via Pod IP addresses.
Aplikasi dalam suatu Pod juga memiliki akses ke {{< glossary_tooltip text="ruang penyimpanan" term_id="volume" >}} bersama,
yang didefinisikan sebagai bagian dari Pod dan dibuat bisa diikatkan ke masing-masing
_filesystem_ pada aplikasi.
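To make that shared context concrete, here is a minimal sketch (the Pod name, images, and paths are illustrative and not taken from this page) of a Pod whose two containers can reach each other over `localhost` and mount the same volume:

```shell
# Illustrative two-container Pod: both containers share the Pod's network
# namespace (so they can reach each other via localhost) and mount the same
# emptyDir volume into different paths.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shared-context-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper
    image: busybox
    command: ["sh", "-c", "echo hello from the helper > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
EOF
```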
Dalam istilah konsep [Docker](https://www.docker.com/), sebuah Pod dimodelkan sebagai
gabungan dari kontainer Docker yang berbagi _namespace_ dan ruang penyimpanan _filesystem_.
Layaknya aplikasi dengan kontainer, Pod dianggap sebagai entitas yang relatif tidak kekal
(tidak bertahan lama). Seperti yang didiskusikan dalam
[siklus hidup Pod](/id/docs/concepts/workloads/pods/pod-lifecycle/), Pod dibuat, diberikan
ID unik (UID), dan dijadwalkan pada suatu mesin dan akan tetap disana hingga dihentikan
(bergantung pada aturan _restart_) atau dihapus. Jika {{< glossary_tooltip text="mesin" term_id="node" >}}
mati, maka semua Pod pada mesin tersebut akan dijadwalkan untuk dihapus, namun setelah
suatu batas waktu. Suatu Pod tertentu (sesuai dengan ID unik) tidak akan dijadwalkan ulang
ke mesin baru, namun akan digantikan oleh Pod yang identik, bahkan jika dibutuhkan bisa
dengan nama yang sama, tapi dengan ID unik yang baru
(baca [_replication controller_](/id/docs/concepts/workloads/controllers/replicationcontroller/)
untuk info lebih lanjut)
Ketika sesuatu dikatakan memiliki umur yang sama dengan Pod, misalnya saja ruang penyimpanan,
@ -78,9 +78,9 @@ ruang penyimpanan persisten untuk berbagi ruang penyimpanan bersama antara konta
Pod adalah suatu model dari pola beberapa proses yang bekerja sama dan membentuk
suatu unit layanan yang kohesif. Menyederhanakan proses melakukan _deploy_ dan
pengelolaan aplikasi dengan menyediakan abstraksi tingkat yang lebih tinggi
daripada konstituen aplikasinya. Pod melayani sebagai unit dari _deployment_,
penskalaan horizontal, dan replikasi. _Colocation_ (_co-scheduling_), berbagi nasib
(misalnya dimatikan), replikasi terkoordinasi, berbagi sumber daya dan
pengelolaan ketergantungan akan ditangani otomatis untuk kontainer dalam suatu Pod.
### Berbagi sumber daya dan komunikasi
@ -88,8 +88,8 @@ pengelolaan ketergantungan akan ditangani otomatis untuk kontainer dalam suatu P
Pod memungkinkan berbagi data dan komunikasi diantara konstituennya.
Semua aplikasi dalam suatu Pod menggunakan _namespace_ jaringan yang sama
(alamat IP dan _port_ yang sama), dan menjadikan bisa saling mencari dan berkomunikasi
dengan menggunakan `localhost`. Oleh karena itu, aplikasi dalam Pod harus
berkoordinasi mengenai penggunaan _port_. Setiap Pod memiliki alamat IP
dalam satu jaringan bersama yang bisa berkomunikasi dengan komputer lain
dan Pod lain dalam jaringan yang sama.
@ -116,7 +116,7 @@ penerbit peristiwa, dll.
* proksi, jembatan dan adaptor.
* pengontrol, manajer, konfigurasi dan pembaharu.
Secara umum, masing-masing Pod tidak dimaksudkan untuk menjalankan beberapa
aplikasi yang sama.
Penjelasan lebih lengkap bisa melihat [The Distributed System ToolKit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns).
@ -128,9 +128,9 @@ Kenapa tidak menjalankan banyak program dalam satu kontainer (Docker)?
1. Transparansi. Membuat kontainer dalam suatu Pod menjadi terlihat dari infrastruktur,
memungkinkan infrastruktur menyediakan servis ke kontainer tersebut, misalnya saja
pengelolaan proses dan pemantauan sumber daya. Ini memfasilitasi sejumlah
kenyamanan untuk pengguna.
1. Pemisahan ketergantungan perangkat lunak. Setiap kontainer mungkin memiliki
versi, dibuat dan dijalankan ulang secara independen. Kubernetes mungkin mendukung
pembaharuan secara langsung terhadap suatu kontainer, suatu saat nanti.
1. Mudah digunakan. Pengguna tidak diharuskan menjalankan manajer prosesnya sendiri,
@ -140,30 +140,30 @@ Kenapa tidak menjalankan banyak program dalam satu kontainer (Docker)?
Kenapa tidak mendukung penjadwalan kontainer berdasarkan _affinity_?
Cara itu bisa menyediakan lokasi yang sama, namun tidak memberikan banyak
keuntungan dari Pod, misalnya saja berbagi sumber daya, IPC, jaminan berbagi nasib
dan kemudahan manajemen.
## Ketahanan suatu Pod (atau kekurangan)
Pod tidak dimaksudkan untuk diperlakukan sebagai entitas yang tahan lama.
Mereka tidak akan bertahan dengan kegagalan penjadwalan, kegagalan mesin,
atau _eviction_ (pengusiran), misalnya karena kurangnya sumber daya atau dalam suatu
kasus mesin sedang dalam pemeliharaan.
Secara umum, pengguna tidak seharusnya butuh membuat Pod secara langsung. Mereka
seharusnya selalu menggunakan pengontrol, sekalipun untuk yang tunggal, misalnya,
[_Deployment_](/id/docs/concepts/workloads/controllers/deployment/). Pengontrol
menyediakan penyembuhan diri dengan ruang lingkup kelompok, begitu juga dengan
pengelolaan replikasi dan peluncuran.
Pengontrol seperti [_StatefulSet_](/id/docs/concepts/workloads/controllers/statefulset.md)
bisa memberikan dukungan terhadap Pod yang _stateful_.
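As a hedged illustration of "use a controller rather than a bare Pod" (the name and image below are placeholders), a minimal Deployment keeps the desired number of replicas running even when individual Pods or nodes fail:

```shell
# Minimal Deployment sketch: the controller replaces Pods that are evicted
# or lost together with a node, which a directly created Pod would not survive.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF
```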
Penggunaan API kolektif sebagai _user-facing primitive_ utama adalah hal yang
relatif umum diantara sistem penjadwalan kluster, seperti
relatif umum diantara sistem penjadwalan klaster, seperti
[Borg](https://research.google/pubs/large-scale-cluster-management-at-google-with-borg/),
[Marathon](https://github.com/d2iq-archive/marathon),
[Aurora](http://aurora.apache.org/documentation/latest/reference/configuration/#job-schema), dan
[Tupperware](https://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).
@ -173,23 +173,23 @@ Pod diekspose sebagai _primitive_ untuk memfasilitasi hal berikut:
* mendukung operasi pada level Pod tanpa perlu melakukan proksi melalui API pengontrol
* pemisahan antara umur suatu Pod dan pengontrol, seperti misalnya _bootstrapping_.
* pemisahan antara pengontrol dan servis, pengontrol _endpoint_ hanya memperhatikan Pod
* komposisi yang bersih antara fungsionalitas di level Kubelet dan klaster. Kubelet
secara efektif adalah pengontrol Pod.
* aplikasi dengan ketersediaan tinggi, yang akan mengharapkan Pod akan digantikan
sebelum dihentikan dan tentu saja sebelum dihapus, seperti dalam kasus penggusuran
yang direncanakan atau pengambilan gambar.
## Penghentian Pod
Karena Pod merepresentasikan proses yang berjalan pada mesin didalam klaster, sangat
penting untuk memperbolehkan proses ini berhenti secara normal ketika sudah tidak
dibutuhkan (dibandingkan dengan dihentikan paksa dengan sinyal KILL dan tidak memiliki
waktu untuk dibersihkan). Pengguna seharusnya dapat meminta untuk menghapus dan tahu
proses penghentiannya, serta dapat memastikan penghentian berjalan sempurna. Ketika
pengguna meminta menghapus Pod, sistem akan mencatat masa tenggang untuk penghentian
secara normal sebelum Pod dipaksa untuk dihentikan, dan sinyal TERM akan dikirim ke
proses utama dalam setiap kontainer. Setelah masa tenggang terlewati, sinyal KILL
akan dikirim ke setiap proses dan Pod akan dihapus dari API server. Jika Kubelet
atau kontainer manajer dijalankan ulang ketika menunggu suatu proses dihentikan,
penghentian tersebut akan diulang dengan mengembalikan masa tenggang senilai semula.
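A minimal sketch of the two knobs involved in this flow (the Pod name and the sleep duration are illustrative): `terminationGracePeriodSeconds` bounds the time between TERM and KILL, and a `preStop` hook runs before the container receives TERM:

```shell
# Illustrative Pod: on deletion, the preStop hook runs first, then the main
# process receives SIGTERM, and SIGKILL follows only after the 60-second
# grace period has elapsed.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]
EOF
```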
@ -199,19 +199,19 @@ Contohnya sebagai berikut:
1. Pod dalam API server akan diperbarui dengan waktu dimana Pod dianggap "mati"
bersama dengan masa tenggang.
1. Pod ditampilkan dalam status "Terminating" ketika tercantum dalam perintah klien
1. (bersamaan dengan poin 3) Ketika Kubelet melihat Pod sudah ditandai sebagai
"Terminating" karena waktu pada poin 2 sudah diatur, ini memulai proses penghentian Pod
1. Jika salah satu kontainer pada Pod memiliki
[preStop _hook_](/id/docs/concepts/containers/container-lifecycle-hooks/#hook-details),
maka akan dipanggil di dalam kontainer. Jika `preStop` _hook_ masih berjalan
setelah masa tenggang habis, langkah 2 akan dipanggil dengan tambahan masa tenggang
yang sedikit, 2 detik.
1. Semua kontainer akan diberikan sinyal TERM. Sebagai catatan, tidak semua kontainer
akan menerima sinyal TERM dalam waktu yang sama dan mungkin butuh waktu untuk
menjalankan `preStop` _hook_ jika bergantung pada urutan penghentiannya.
1. (bersamaan dengan poin 3) Pod akan dihapus dari daftar _endpoint_ untuk servis dan
tidak lagi dianggap sebagai bagian dari Pod yang berjalan dalam _replication controllers_.
Pod yang dihentikan, secara perlahan tidak akan melayani permintaan karena load balancer
(seperti servis proksi) menghapus mereka dari daftar rotasi.
1. Ketika masa tenggang sudah lewat, semua proses yang masih berjalan dalam Pod
akan dihentikan dengan sinyal SIGKILL.
@ -229,25 +229,25 @@ untuk melakukan penghapusan paksa.
### Penghapusan paksa sebuah Pod
Penghapusan paksa dari sebuah Pod didefinisikan sebagai penghapusan Pod dari _state_
klaster dan etcd secara langsung. Ketika penghapusan paksa dilakukan, API server tidak
akan menunggu konfirmasi dari kubelet bahwa Pod sudah dihentikan pada mesin ia berjalan.
Ini menghapus Pod secara langsung dari API, sehingga Pod baru bisa dibuat dengan nama
yang sama. Dalam mesin, Pod yang dihentikan paksa akan tetap diberikan sedikit masa
tenggang sebelum dihentikan paksa.
Penghentian paksa dapat menyebabkan hal berbahaya pada beberapa Pod dan seharusnya
dilakukan dengan perhatian lebih. Dalam kasus StatefulSet Pods, silakan melihat
dokumentasi untuk [penghentian Pod dari StatefulSet](/docs/tasks/run-application/force-delete-stateful-set-pod/).
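For reference, a force deletion is typically issued like this (the Pod name is a placeholder); use it only once you understand the risks described above:

```shell
# Remove the Pod object from the API server immediately, without waiting
# for confirmation from the kubelet on the node where it was running.
kubectl delete pod my-pod --grace-period=0 --force
```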
## Hak istimewa untuk kontainer pada Pod
Setiap kontainer dalam Pod dapat mengaktifkan hak istimewa (mode _privileged_), dengan menggunakan tanda
`privileged` pada [konteks keamanan](/id/docs/tasks/configure-pod-container/security-context/)
pada spesifikasi kontainer. Ini akan berguna untuk kontainer yang ingin menggunakan
kapabilitas Linux seperti memanipulasi jaringan dan mengakses perangkat. Proses dalam
kontainer mendapatkan hak istimewa yang hampir sama dengan proses di luar kontainer.
Dengan hak istimewa, seharusnya lebih mudah untuk menulis pada jaringan dan _plugin_
ruang penyimpanan sebagai Pod berbeda yang tidak perlu dikompilasi ke dalam kubelet.
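A minimal sketch of a privileged container (the name and image are illustrative); the `privileged` flag lives in the container's security context, as described above:

```shell
# Illustrative Pod whose single container runs in privileged mode and can
# therefore use host-level Linux capabilities such as network manipulation
# and device access.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo
spec:
  containers:
  - name: tools
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
EOF
```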
{{< note >}}

View File

@ -4,14 +4,14 @@ id: pod
date: 2019-06-24
full_link: /docs/concepts/workloads/pods/pod-overview/
short_description: >
Unit Kubernetes yang paling sederhana dan kecil. Sebuah Pod merepresentasikan sebuah set kontainer yang dijalankan pada kluster kamu.
Unit Kubernetes yang paling sederhana dan kecil. Sebuah Pod merepresentasikan sebuah set kontainer yang dijalankan pada klaster kamu.
aka:
tags:
- core-object
- fundamental
---
Unit Kubernetes yang paling sederhana dan kecil. Sebuah Pod merepresentasikan sebuah set kontainer yang dijalankan {{< glossary_tooltip text="kontainer" term_id="container" >}} pada kluster kamu.
Unit Kubernetes yang paling sederhana dan kecil. Sebuah Pod merepresentasikan sebuah set {{< glossary_tooltip text="kontainer" term_id="container" >}} yang dijalankan pada klaster kamu.
<!--more-->
Sebuah Pod biasanya digunakan untuk menjalankan sebuah kontainer. Pod juga dapat digunakan untuk menjalankan beberapa sidecar container dan beberapa fitur tambahan. Pod biasanya diatur oleh sebuah {{< glossary_tooltip term_id="deployment" >}}.

View File

@ -207,7 +207,7 @@ Pada Kubernetes v1.13.0, etcd2 tidak lagi didukung sebagai _backend_ penyimpanan
dan `kube-apiserver` standarnya ke etcd3
- Kubernetes v1.9.0: pengumuman penghentian _backend_ penyimpanan etcd2 diumumkan
- Kubernetes v1.13.0: _backend_ penyimpanan etcd2 dihapus, `kube-apiserver` akan
menolak untuk start dengan `--storage-backend=etcd2`, dengan pesan
`etcd2 is no longer a supported storage backend`
Sebelum memutakhirkan v1.12.x kube-apiserver menggunakan `--storage-backend=etcd2` ke
@ -215,7 +215,7 @@ v1.13.x, data etcd v2 harus dimigrasikan ke _backend_ penyimpanan v3 dan
permintaan kube-apiserver harus diubah untuk menggunakan `--storage-backend=etcd3`.
Proses untuk bermigrasi dari etcd2 ke etcd3 sangat tergantung pada bagaimana
klaster etcd diluncurkan dan dikonfigurasi, serta bagaimana klaster Kubernetes diluncurkan dan dikonfigurasi. Kami menyarankan kamu berkonsultasi dengan dokumentasi penyedia kluster kamu untuk melihat apakah ada solusi yang telah ditentukan.
klaster etcd diluncurkan dan dikonfigurasi, serta bagaimana klaster Kubernetes diluncurkan dan dikonfigurasi. Kami menyarankan kamu berkonsultasi dengan dokumentasi penyedia klaster kamu untuk melihat apakah ada solusi yang telah ditentukan.
Jika klaster kamu dibuat melalui `kube-up.sh` dan masih menggunakan etcd2 sebagai penyimpanan _backend_, silakan baca [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters)
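As a rough sketch only (the exact procedure depends on how your control plane and etcd were provisioned, and most of the other required kube-apiserver flags are omitted here), the end state after migrating the data is an API server pointed at the v3 backend:

```shell
# After the etcd data itself has been migrated to the v3 store, restart the
# API server with the etcd3 storage backend; remaining flags are omitted.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --storage-backend=etcd3
```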

View File

@ -11,7 +11,7 @@ ServiceAccount menyediakan identitas untuk proses yang sedang berjalan dalam seb
Dokumen ini digunakan sebagai pengenalan untuk pengguna terhadap ServiceAccount dan menjelaskan bagaimana perilaku ServiceAccount dalam konfigurasi klaster seperti yang direkomendasikan Kubernetes. Pengubahan perilaku yang bisa saja dilakukan administrator klaster terhadap klaster tidak menjadi bagian pembahasan dokumentasi ini.
{{< /note >}}
Ketika kamu mengakses klaster (contohnya menggunakan `kubectl`), kamu terautentikasi oleh apiserver sebagai sebuah akun pengguna (untuk sekarang umumnya sebagai `admin`, kecuali jika administrator klustermu telah melakukan pengubahan). Berbagai proses yang ada di dalam kontainer dalam Pod juga dapat mengontak apiserver. Ketika itu terjadi, mereka akan diautentikasi sebagai sebuah ServiceAccount (contohnya sebagai `default`).
Ketika kamu mengakses klaster (contohnya menggunakan `kubectl`), kamu terautentikasi oleh apiserver sebagai sebuah akun pengguna (untuk sekarang umumnya sebagai `admin`, kecuali jika administrator klastermu telah melakukan pengubahan). Berbagai proses yang ada di dalam kontainer dalam Pod juga dapat mengontak apiserver. Ketika itu terjadi, mereka akan diautentikasi sebagai sebuah ServiceAccount (contohnya sebagai `default`).
@ -292,7 +292,7 @@ kubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml
_Token_ yang mewakili Pod akan diminta dan disimpan kubelet, lalu kubelet akan membuat _token_ yang dapat diakses oleh Pod pada _file path_ yang ditentukan, dan melakukan _refresh_ _token_ ketika telah mendekati waktu berakhir. _Token_ akan diganti oleh kubelet jika _token_ telah melewati 80% dari total TTL, atau jika _token_ telah melebihi waktu 24 jam.
Aplikasi bertanggung jawab untuk memuat ulang _token_ ketika terjadi penggantian. Pemuatan ulang teratur (misalnya sekali setiap 5 menit) cukup untuk mencakup kebanyakan kasus.
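A minimal sketch of the projected token volume this section describes (the `vault` audience, path, and expiry roughly mirror the `pod-projected-svc-token.yaml` example linked above); the application then re-reads the file under the mount path whenever it needs a fresh token:

```shell
# Illustrative Pod with a projected service account token: the kubelet
# requests the token, mounts it at the given path, and rotates it before
# it expires.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: default
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: vault-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200
          audience: vault
EOF
```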
## ServiceAccountIssuerDiscovery
@ -326,7 +326,7 @@ Pada banyak kasus, server API Kubernetes tidak tersedia di internet publik, namu
Lihat juga:
- [Panduan Admin Kluster mengenai ServiceAccount](/docs/reference/access-authn-authz/service-accounts-admin/)
- [Panduan Admin klaster mengenai ServiceAccount](/docs/reference/access-authn-authz/service-accounts-admin/)
- [ServiceAccount Signing Key Retrieval KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/20190730-oidc-discovery.md)
- [OIDC Discovery Spec](https://openid.net/specs/openid-connect-discovery-1_0.html)

View File

@ -0,0 +1,159 @@
---
title: Menerapkan Standar Keamanan Pod di Tingkat Namespace
content_type: tutorial
weight: 20
---
{{% alert title="Catatan" %}}
Tutorial ini hanya berlaku untuk klaster baru.
{{% /alert %}}
Pod Security Admission adalah pengendali penerimaan (admission controller) yang menerapkan
[Standar Keamanan Pod](/docs/concepts/security/pod-security-standards/)
saat pod dibuat. Fitur ini telah mencapai status GA di v1.25.
Dalam tutorial ini, Anda akan menerapkan Standar Keamanan Pod `baseline`,
satu namespace pada satu waktu.
Anda juga dapat menerapkan Standar Keamanan Pod ke beberapa namespace sekaligus di tingkat klaster. Untuk instruksi, lihat
[Menerapkan Standar Keamanan Pod di tingkat klaster](/docs/tutorials/security/cluster-level-pss/).
## {{% heading "prerequisites" %}}
Pasang alat berikut di workstation Anda:
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
- [kubectl](/docs/tasks/tools/)
## Membuat klaster
1. Buat klaster `kind` sebagai berikut:
```shell
kind create cluster --name psa-ns-level
```
Outputnya mirip dengan ini:
```
Membuat klaster "psa-ns-level" ...
✓ Memastikan gambar node (kindest/node:v{{< skew currentPatchVersion >}}) 🖼
✓ Menyiapkan node 📦
✓ Menulis konfigurasi 📜
✓ Memulai control-plane 🕹️
✓ Memasang CNI 🔌
✓ Memasang StorageClass 💾
Atur konteks kubectl ke "kind-psa-ns-level"
Anda sekarang dapat menggunakan klaster Anda dengan:
kubectl cluster-info --context kind-psa-ns-level
Tidak yakin apa yang harus dilakukan selanjutnya? 😅 Lihat https://kind.sigs.k8s.io/docs/user/quick-start/
```
1. Atur konteks kubectl ke klaster baru:
```shell
kubectl cluster-info --context kind-psa-ns-level
```
Outputnya mirip dengan ini:
```
Control plane Kubernetes berjalan di https://127.0.0.1:50996
CoreDNS berjalan di https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Untuk debug dan diagnosis masalah klaster lebih lanjut, gunakan 'kubectl cluster-info dump'.
```
## Membuat namespace
Buat namespace baru bernama `example`:
```shell
kubectl create ns example
```
Outputnya mirip dengan ini:
```
namespace/example created
```
## Mengaktifkan pemeriksaan Standar Keamanan Pod untuk namespace tersebut
1. Aktifkan Standar Keamanan Pod pada namespace ini menggunakan label yang didukung oleh
Pod Security Admission bawaan. Dalam langkah ini Anda akan mengkonfigurasi pemeriksaan untuk
memberikan peringatan pada Pod yang tidak memenuhi versi terbaru dari standar keamanan pod _baseline_.
```shell
kubectl label --overwrite ns example \
pod-security.kubernetes.io/warn=baseline \
pod-security.kubernetes.io/warn-version=latest
```
2. Anda dapat mengonfigurasi beberapa pemeriksaan standar keamanan pod pada namespace mana pun, menggunakan label.
Perintah berikut akan `enforce` Standar Keamanan Pod `baseline`, tetapi
`warn` dan `audit` untuk Standar Keamanan Pod `restricted` sesuai dengan versi terbaru
(nilai default)
```shell
kubectl label --overwrite ns example \
pod-security.kubernetes.io/enforce=baseline \
pod-security.kubernetes.io/enforce-version=latest \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/warn-version=latest \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/audit-version=latest
```
## Memverifikasi penerapan Standar Keamanan Pod
1. Buat Pod baseline di namespace `example`:
```shell
kubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
Pod berhasil dibuat; outputnya termasuk peringatan. Sebagai contoh:
```
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
```
1. Buat Pod baseline di namespace `default`:
```shell
kubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml
```
Outputnya mirip dengan ini:
```
pod/nginx created
```
Pengaturan penerapan dan peringatan Standar Keamanan Pod hanya diterapkan
ke namespace `example`. Anda dapat membuat Pod yang sama di namespace `default`
tanpa peringatan.
## Menghapus
Sekarang hapus klaster yang Anda buat di atas dengan menjalankan perintah berikut:
```shell
kind delete cluster --name psa-ns-level
```
## {{% heading "whatsnext" %}}
- Jalankan
[skrip shell](/examples/security/kind-with-namespace-level-baseline-pod-security.sh)
untuk melakukan semua langkah sebelumnya sekaligus.
1. Membuat klaster kind
2. Membuat namespace baru
3. Menerapkan Standar Keamanan Pod `baseline` dalam mode `enforce` sambil menerapkan
Standar Keamanan Pod `restricted` juga dalam mode `warn` dan `audit`.
4. Membuat pod baru dengan standar keamanan pod berikut diterapkan
- [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
- [Standar Keamanan Pod](/docs/concepts/security/pod-security-standards/)
- [Menerapkan Standar Keamanan Pod di tingkat klaster](/docs/tutorials/security/cluster-level-pss/)

View File

@ -0,0 +1,230 @@
---
title: "Contoh: Men-deploy WordPress dan MySQL dengan Persistent Volumes"
content_type: tutorial
weight: 20
card:
name: tutorials
weight: 40
title: "Contoh Stateful: WordPress dengan Persistent Volumes"
---
<!-- overview -->
Tutorial ini menunjukkan cara untuk men-deploy situs WordPress dan database MySQL menggunakan Minikube. Kedua aplikasi ini menggunakan PersistentVolumes dan PersistentVolumeClaims untuk menyimpan data.
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) (PV) adalah bagian dari penyimpanan di dalam klaster yang telah disediakan secara manual oleh administrator, atau secara dinamis disediakan oleh Kubernetes menggunakan [StorageClass](/docs/concepts/storage/storage-classes).
[PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC) adalah permintaan penyimpanan oleh pengguna yang dapat dipenuhi oleh PV. PersistentVolumes dan PersistentVolumeClaims bersifat independen dari siklus hidup Pod dan mempertahankan data meskipun Pod di-restart, dijadwalkan ulang, atau bahkan dihapus.
{{< warning >}}
Deployment ini tidak cocok untuk kasus penggunaan produksi, karena menggunakan Pod WordPress dan MySQL instance tunggal. Pertimbangkan untuk menggunakan [WordPress Helm Chart](https://github.com/bitnami/charts/tree/master/bitnami/wordpress) untuk mendeploy WordPress di lingkungan produksi.
{{< /warning >}}
{{< note >}}
File yang disediakan dalam tutorial ini menggunakan API Deployment GA dan spesifik untuk Kubernetes versi 1.9 dan yang lebih baru. Jika kamu ingin menggunakan tutorial ini dengan versi Kubernetes yang lebih lama, harap perbarui versi API sesuai kebutuhan, atau rujuk ke versi tutorial sebelumnya.
{{< /note >}}
## {{% heading "objectives" %}}
* Membuat PersistentVolumeClaims dan PersistentVolumes
* Membuat `kustomization.yaml` dengan
* generator Secret
* konfigurasi sumber daya MySQL
* konfigurasi sumber daya WordPress
* Terapkan direktori kustomisasi dengan `kubectl apply -k ./`
* Bersihkan sumber daya
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Contoh yang ditunjukkan di halaman ini bekerja dengan `kubectl` versi 1.27 dan yang lebih baru.
Unduh file konfigurasi berikut:
1. [mysql-deployment.yaml](/examples/application/wordpress/mysql-deployment.yaml)
1. [wordpress-deployment.yaml](/examples/application/wordpress/wordpress-deployment.yaml)
<!-- lessoncontent -->
## Membuat PersistentVolumeClaims dan PersistentVolumes
MySQL dan WordPress masing-masing memerlukan PersistentVolume untuk menyimpan data. PersistentVolumeClaims mereka akan dibuat pada langkah deployment.
Banyak lingkungan klaster memiliki StorageClass default yang sudah di-instal. Ketika StorageClass tidak ditentukan dalam PersistentVolumeClaim, StorageClass default klaster akan digunakan.
Ketika PersistentVolumeClaim dibuat, PersistentVolume akan disediakan secara dinamis berdasarkan konfigurasi StorageClass.
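To see which StorageClass will be used for claims that do not name one, you can list the classes and look for the one marked `(default)`:

```shell
# The class marked as default is used by PersistentVolumeClaims that do not
# set spec.storageClassName explicitly.
kubectl get storageclass
```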
{{< warning >}}
Di klaster lokal, StorageClass default menggunakan provisioner `hostPath`. Volume `hostPath` hanya cocok untuk pengembangan dan pengujian. Dengan volume `hostPath`, data kamu akan disimpan di `/tmp` pada node tempat Pod dijadwalkan dan tidak akan berpindah antar node. Jika sebuah Pod mati dan dijadwalkan ke node lain di klaster, atau node di-reboot, data akan hilang.
{{< /warning >}}
{{< note >}}
Jika kamu menjalankan klaster yang memerlukan provisioner `hostPath`, flag `--enable-hostpath-provisioner` harus diatur pada komponen `controller-manager`.
{{< /note >}}
{{< note >}}
Jika kamu memiliki klaster Kubernetes yang berjalan di Google Kubernetes Engine, silakan ikuti [panduan ini](https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk).
{{< /note >}}
## Membuat kustomization.yaml
### Menambahkan Generator Secret
[Secret](/docs/concepts/configuration/secret/) adalah objek yang menyimpan data sensitif seperti kata sandi atau kunci. Sejak versi 1.14, `kubectl` mendukung pengelolaan objek Kubernetes menggunakan file kustomisasi. kamu dapat membuat Secret menggunakan generator di `kustomization.yaml`.
Tambahkan generator Secret di `kustomization.yaml` dengan perintah berikut. kamu perlu mengganti `KATA_SANDI` dengan kata sandi yang ingin kamu gunakan.
```shell
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: mysql-pass
literals:
- password=KATA_SANDI
EOF
```
## Menambahkan Konfigurasi Sumber Daya untuk MySQL dan WordPress
Manifest berikut menjelaskan Deployment MySQL instance tunggal. Kontainer MySQL memasang PersistentVolume di /var/lib/mysql. Variabel lingkungan `MYSQL_ROOT_PASSWORD` mengatur kata sandi database dari Secret.
{{% code_sample file="application/wordpress/mysql-deployment.yaml" %}}
Manifest berikut menjelaskan Deployment WordPress instance tunggal. Kontainer WordPress memasang PersistentVolume di `/var/www/html` untuk file data situs web. Variabel lingkungan `WORDPRESS_DB_HOST` mengatur nama Layanan MySQL yang didefinisikan di atas, dan WordPress akan mengakses database melalui Layanan. Variabel lingkungan `WORDPRESS_DB_PASSWORD` mengatur kata sandi database dari Secret yang dihasilkan oleh kustomize.
{{% code_sample file="application/wordpress/wordpress-deployment.yaml" %}}
1. Unduh file konfigurasi deployment MySQL.
```shell
curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
```
2. Unduh file konfigurasi WordPress.
```shell
curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
```
3. Tambahkan mereka ke file `kustomization.yaml`.
```shell
cat <<EOF >>./kustomization.yaml
resources:
- mysql-deployment.yaml
- wordpress-deployment.yaml
EOF
```
## Terapkan dan Verifikasi
`kustomization.yaml` berisi semua sumber daya untuk mendeploy situs WordPress dan database MySQL. kamu dapat menerapkan direktori dengan
```shell
kubectl apply -k ./
```
Sekarang kamu dapat memverifikasi bahwa semua objek ada.
1. Verifikasi bahwa Secret ada dengan menjalankan perintah berikut:
```shell
kubectl get secrets
```
Responsnya akan seperti ini:
```
NAME TYPE DATA AGE
mysql-pass-c57bb4t7mf Opaque 1 9s
```
2. Verifikasi bahwa PersistentVolume telah disediakan secara dinamis.
```shell
kubectl get pvc
```
{{< note >}}
Mungkin memerlukan waktu beberapa menit untuk PV disediakan dan terikat.
{{< /note >}}
Responsnya akan seperti ini:
```
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s
wp-pv-claim Bound pvc-8cd0df54-4044-11e9-b2bb-42010a800002 20Gi RWO standard 77s
```
3. Verifikasi bahwa Pod sedang berjalan dengan menjalankan perintah berikut:
```shell
kubectl get pods
```
{{< note >}}
Mungkin memerlukan waktu beberapa menit untuk Status Pod menjadi `RUNNING`.
{{< /note >}}
Responsnya akan seperti ini:
```
NAME READY STATUS RESTARTS AGE
wordpress-mysql-1894417608-x5dzt 1/1 Running 0 40s
```
4. Verifikasi bahwa Layanan sedang berjalan dengan menjalankan perintah berikut:
```shell
kubectl get services wordpress
```
Responsnya akan seperti ini:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wordpress LoadBalancer 10.0.0.89 <pending> 80:32406/TCP 4m
```
{{< note >}}
Minikube hanya dapat mengekspos Layanan melalui `NodePort`. EXTERNAL-IP selalu pending.
{{< /note >}}
5. Jalankan perintah berikut untuk mendapatkan Alamat IP untuk Layanan WordPress:
```shell
minikube service wordpress --url
```
Responsnya akan seperti ini:
```
http://1.2.3.4:32406
```
6. Salin alamat IP, dan muat halaman di browser kamu untuk melihat situs kamu.
kamu akan melihat halaman pengaturan WordPress yang mirip dengan tangkapan layar berikut.
![wordpress-init](https://raw.githubusercontent.com/kubernetes/examples/master/mysql-wordpress-pd/WordPress.png)
{{< warning >}}
Jangan biarkan instalasi WordPress kamu di halaman ini. Jika pengguna lain menemukannya, mereka dapat mengatur situs web di instance kamu dan menggunakannya untuk menyajikan konten berbahaya.<br/><br/>
Instal WordPress dengan membuat nama pengguna dan kata sandi atau hapus instance kamu.
{{< /warning >}}
## {{% heading "cleanup" %}}
1. Jalankan perintah berikut untuk menghapus Secret, Deployment, Service, dan PersistentVolumeClaim kamu:
```shell
kubectl delete -k ./
```
## {{% heading "whatsnext" %}}
* Pelajari lebih lanjut tentang [Introspeksi dan Debugging](/docs/tasks/debug/debug-application/debug-running-pod/)
* Pelajari lebih lanjut tentang [Jobs](/docs/concepts/workloads/controllers/job/)
* Pelajari lebih lanjut tentang [Port Forwarding](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
* Pelajari cara [Mendapatkan Shell ke Kontainer](/docs/tasks/debug/debug-application/get-shell-running-container/)

View File

@ -0,0 +1,74 @@
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:8.0
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim

View File

@ -0,0 +1,69 @@
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:6.2.1-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
- name: WORDPRESS_DB_USER
value: wordpress
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim

View File

@ -4,8 +4,6 @@ abstract: Deployment, scalabilità, e gestione di container automatizzata
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) è un software open-source per l'automazione del deployment, scalabilità, e gestione di applicativi in containers.

View File

@ -4,8 +4,6 @@ abstract: "自動化されたコンテナのデプロイ・スケール・管理
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/" >}})は、デプロイやスケーリングを自動化したり、コンテナ化されたアプリケーションを管理したりするための、オープンソースのシステムです。

View File

@ -1,19 +1,15 @@
---
title: エフェメラルコンテナ
content_type: concept
weight: 80
weight: 60
---
<!-- overview -->
{{< feature-state state="alpha" for_k8s_version="v1.16" >}}
{{< feature-state state="stable" for_k8s_version="v1.25" >}}
このページでは、特別な種類のコンテナであるエフェメラルコンテナの概要を説明します。エフェメラルコンテナは、トラブルシューティングなどのユーザーが開始するアクションを実行するために、すでに存在する{{< glossary_tooltip term_id="pod" >}}内で一時的に実行するコンテナです。エフェメラルコンテナは、アプリケーションの構築ではなく、serviceの調査のために利用します。
{{< warning >}}
エフェメラルコンテナは初期のアルファ状態であり、本番クラスターには適しません。[Kubernetesの非推奨ポリシー](/docs/reference/using-api/deprecation-policy/)に従って、このアルファ機能は、将来大きく変更されたり、完全に削除される可能性があります。
{{< /warning >}}
<!-- body -->
## エフェメラルコンテナを理解する
@ -32,7 +28,11 @@ weight: 80
エフェメラルコンテナは、直接`pod.spec`に追加するのではなく、API内の特別な`ephemeralcontainers`ハンドラを使用して作成します。そのため、エフェメラルコンテナを`kubectl edit`を使用して追加することはできません。
エフェメラルコンテナをPodに追加した後は、通常のコンテナのようにエフェメラルコンテナを変更または削除することはできません。
{{< note >}}
エフェメラルコンテナは、[static Pod](/ja/docs/tasks/configure-pod-container/static-pod/)ではサポートされていません。
{{< /note >}}
## エフェメラルコンテナの用途
@ -42,106 +42,6 @@ weight: 80
エフェメラルコンテナを利用する場合には、他のコンテナ内のプロセスにアクセスできるように、[プロセス名前空間の共有](/ja/docs/tasks/configure-pod-container/share-process-namespace/)を有効にすると便利です。
エフェメラルコンテナを利用してトラブルシューティングを行う例については、[デバッグ用のエフェメラルコンテナを使用してデバッグする](/ja/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container)を参照してください。
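As a brief sketch (the Pod name `example-pod` and container name `app` are placeholders), `kubectl debug` is the usual way to attach such an ephemeral container to a running Pod:

```shell
# Add an interactive ephemeral debug container to the Pod "example-pod",
# targeting the existing container "app" so that its processes are visible.
kubectl debug -it example-pod --image=busybox:1.28 --target=app
```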
## {{% heading "whatsnext" %}}
## Ephemeral containers API
{{< note >}}
このセクションの例を実行するには、`EphemeralContainers`[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)を有効にして、Kubernetesクライアントとサーバーのバージョンをv1.16以上にする必要があります。
{{< /note >}}
このセクションの例では、API内でエフェメラルコンテナを表示する方法を示します。通常は、APIを直接呼び出すのではなく、`kubectl alpha debug`やその他の`kubectl`[プラグイン](/docs/tasks/extend-kubectl/kubectl-plugins/)を使用して、これらのステップを自動化します。
エフェメラルコンテナは、Podの`ephemeralcontainers`サブリソースを使用して作成されます。このサブリソースは、`kubectl --raw`を使用して確認できます。まずはじめに、以下に`EphemeralContainers`リストとして追加するためのエフェメラルコンテナを示します。
```json
{
"apiVersion": "v1",
"kind": "EphemeralContainers",
"metadata": {
"name": "example-pod"
},
"ephemeralContainers": [{
"command": [
"sh"
],
"image": "busybox",
"imagePullPolicy": "IfNotPresent",
"name": "debugger",
"stdin": true,
"tty": true,
"terminationMessagePolicy": "File"
}]
}
```
すでに実行中の`example-pod`のエフェメラルコンテナを更新するには、次のコマンドを実行します。
```shell
kubectl replace --raw /api/v1/namespaces/default/pods/example-pod/ephemeralcontainers -f ec.json
```
このコマンドを実行すると、新しいエフェメラルコンテナのリストが返されます。
```json
{
"kind":"EphemeralContainers",
"apiVersion":"v1",
"metadata":{
"name":"example-pod",
"namespace":"default",
"selfLink":"/api/v1/namespaces/default/pods/example-pod/ephemeralcontainers",
"uid":"a14a6d9b-62f2-4119-9d8e-e2ed6bc3a47c",
"resourceVersion":"15886",
"creationTimestamp":"2019-08-29T06:41:42Z"
},
"ephemeralContainers":[
{
"name":"debugger",
"image":"busybox",
"command":[
"sh"
],
"resources":{
},
"terminationMessagePolicy":"File",
"imagePullPolicy":"IfNotPresent",
"stdin":true,
"tty":true
}
]
}
```
新しく作成されたエフェメラルコンテナの状態を確認するには、`kubectl describe`を使用します。
```shell
kubectl describe pod example-pod
```
```
...
Ephemeral Containers:
debugger:
Container ID: docker://cf81908f149e7e9213d3c3644eda55c72efaff67652a2685c1146f0ce151e80f
Image: busybox
Image ID: docker-pullable://busybox@sha256:9f1003c480699be56815db0f8146ad2e22efea85129b5b5983d0e0fb52d9ab70
Port: <none>
Host Port: <none>
Command:
sh
State: Running
Started: Thu, 29 Aug 2019 06:42:21 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts: <none>
...
```
新しいエフェメラルコンテナとやりとりをするには、他のコンテナと同じように、`kubectl attach`、`kubectl exec`、`kubectl logs`などのコマンドが利用できます。例えば、次のようなコマンドが実行できます。
```shell
kubectl attach -it example-pod -c debugger
```
* [デバッグ用のエフェメラルコンテナを使用してデバッグする](/ja/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container)方法について学ぶ。

View File

@ -0,0 +1,18 @@
---
title: Minikube
id: minikube
date: 2018-04-12
full_link: /ja/docs/tasks/tools/#minikube
short_description: >
ローカルでKubernetesを実行するためのツールです。
aka:
tags:
- fundamental
- tool
---
ローカルでKubernetesを実行するためのツールです。
<!--more-->
Minikubeは、ローカルのVM内で、単一または複数ノードのローカルKubernetesクラスターを実行します。
Minikubeを使って[学習環境でKubernetesを試す](/ja/docs/tasks/tools/#minikube)ことができます。

View File

@ -0,0 +1,166 @@
---
title: DaemonSet上でローリングアップデートを実施する
content_type: task
weight: 10
---
<!-- overview -->
このページでは、DaemonSet上でローリングアップデートを行う方法について説明します。
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## DaemonSetのアップデート戦略
DaemonSetには2種類のアップデート戦略があります:
* `OnDelete`: `OnDelete`アップデート戦略では、DaemonSetのテンプレートを更新した後、古いDaemonSetのPodを手動で削除した時*のみ*、新しいDaemonSetのPodが作成されます。
これはKubernetesのバージョン1.5またはそれ以前のDaemonSetと同じ挙動です。
* `RollingUpdate`: これは既定のアップデート戦略です。
`RollingUpdate`アップデート戦略では、DaemonSetのテンプレートを更新した後、古いDaemonSetのPodが削除され、制御された方法で自動的に新しいDaemonSetのPodが作成されます。
アップデートのプロセス全体を通して、各ノード上で稼働するDaemonSetのPodは最大で1つだけです。
## ローリングアップデートの実施
DaemonSetに対してローリングアップデートの機能を有効にするには、`.spec.updateStrategy.type`を`RollingUpdate`に設定する必要があります。
[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)(既定値は1)、[`.spec.minReadySeconds`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)(既定値は0)、そして[`.spec.updateStrategy.rollingUpdate.maxSurge`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)(既定値は0)についても設定したほうがよいでしょう。
### `RollingUpdate`アップデート戦略によるDaemonSetの作成
このYAMLファイルでは、アップデート戦略として`RollingUpdate`が指定されたDaemonSetを定義しています。
{{% code_sample file="controllers/fluentd-daemonset.yaml" %}}
DaemonSetのマニフェストのアップデート戦略を検証した後、DaemonSetを作成します:
```shell
kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
```
あるいは、`kubectl apply`を使用してDaemonSetを更新する予定がある場合は、`kubectl apply`を使用して同じDaemonSetを作成してください。
```shell
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
```
### DaemonSetの`RollingUpdate`アップデート戦略の確認
DaemonSetのアップデート戦略を確認し、`RollingUpdate`が設定されているようにします:
```shell
kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
```
システムにDaemonSetが作成されていない場合は、代わりに次のコマンドによってDaemonSetのマニフェストを確認します:
```shell
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
```
どちらのコマンドも、出力は次のようになります:
```
RollingUpdate
```
出力が`RollingUpdate`以外の場合は、DaemonSetオブジェクトまたはマニフェストを見直して、修正してください。
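One way to correct it in place (shown only as a sketch; you could equally edit the manifest and re-apply it) is to patch the live object:

```shell
# Patch the DaemonSet so that .spec.updateStrategy.type is RollingUpdate.
kubectl patch ds/fluentd-elasticsearch -n kube-system \
  --type=merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
```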
### DaemonSetテンプレートの更新
`RollingUpdate`のDaemonSetの`.spec.template`に対して任意の更新が行われると、ローリングアップデートがトリガーされます。
新しいYAMLファイルを適用してDaemonSetを更新しましょう。
これにはいくつかの異なる`kubectl`コマンドを使用することができます。
{{% code_sample file="controllers/fluentd-daemonset-update.yaml" %}}
#### 宣言型コマンド
[設定ファイル](/docs/tasks/manage-kubernetes-objects/declarative-config/)を使用してDaemonSetを更新する場合は、`kubectl apply`を使用します:
```shell
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
```
#### 命令型コマンド
[命令型コマンド](/docs/tasks/manage-kubernetes-objects/imperative-command/)を使用してDaemonSetを更新する場合は、`kubectl edit`を使用します:
```shell
kubectl edit ds/fluentd-elasticsearch -n kube-system
```
##### コンテナイメージのみのアップデート
DaemonSetのテンプレート内のコンテナイメージ、つまり`.spec.template.spec.containers[*].image`のみを更新したい場合、`kubectl set image`を使用します:
```shell
kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
```
### ローリングアップデートのステータスの監視
最後に、最新のDaemonSetの、ローリングアップデートのロールアウトステータスを監視します:
```shell
kubectl rollout status ds/fluentd-elasticsearch -n kube-system
```
ロールアウトが完了すると、次のような出力となります:
```shell
daemonset "fluentd-elasticsearch" successfully rolled out
```
## トラブルシューティング
### DaemonSetのローリングアップデートがスタックする
時々、DaemonSetのローリングアップデートがスタックする場合があります。
これにはいくつかの原因が考えられます:
#### いくつかのノードのリソース不足
1つ以上のノードで新しいDaemonSetのPodをスケジュールすることができず、ロールアウトがスタックしています。
これはノードの[リソース不足](/ja/docs/concepts/scheduling-eviction/node-pressure-eviction/)の可能性があります。
この事象が起きた場合は、`kubectl get nodes`の出力と次の出力を比較して、DaemonSetのPodがスケジュールされていないノードを見つけます:
```shell
kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
```
そのようなノードを見つけたら、新しいDaemonSetのPodのための空きを作るために、ノードからDaemonSet以外のいくつかのPodを削除します。
{{< note >}}
コントローラーによって制御されていないPodや、レプリケートされていないPodを削除すると、これはサービスの中断が発生する原因となります。
これはまた、[PodDisruptionBudget](/ja/docs/tasks/run-application/configure-pdb/)についても考慮しません。
{{< /note >}}
#### 壊れたロールアウト
例えばコンテナがクラッシュを繰り返したり、(しばしばtypoによって)コンテナイメージが存在しないといった理由で最新のDaemonSetのテンプレートの更新が壊れた場合、DaemonSetのロールアウトは進みません。
これを修正するためには、DaemonSetのテンプレートを再度更新します。
新しいロールアウトは、前の不健全なロールアウトによってブロックされません。
#### クロックスキュー
DaemonSet内で`.spec.minReadySeconds`が指定されると、マスターとノードの間のクロックスキューによって、DaemonSetがロールアウトの進捗を正しく認識できなくなる場合があります。
## クリーンアップ
NamespaceからDaemonSetを削除します:
```shell
kubectl delete ds fluentd-elasticsearch -n kube-system
```
## {{% heading "whatsnext" %}}
* [DaemonSet上でロールバックを実施する](/docs/tasks/manage-daemon/rollback-daemon-set/)を参照
* [既存のDaemonSetのPodを再利用するためにDaemonSetを作成する](/ja/docs/concepts/workloads/controllers/daemonset/)を参照

View File

@ -0,0 +1,52 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
tolerations:
# これらのTolerationはコントロールプレーンノード上でDaemonSetを実行できるようにするためのものです
# コントロールプレーンノードでPodを実行すべきではない場合は、これらを削除してください
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers

View File

@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
tolerations:
# これらのTolerationはコントロールプレーンノード上でDaemonSetを実行できるようにするためのものです
# コントロールプレーンノードでPodを実行すべきではない場合は、これらを削除してください
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers

View File

@ -6,8 +6,6 @@ sitemap:
priority: 1.0
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), znany też jako K8s, to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach.

View File

@ -11,7 +11,8 @@ tags:
- fundamental
- workload
---
*Container runtime* to oprogramowanie zajmujące się uruchamianiem kontenerów.
Podstawowy komponent umożliwiający efektywne uruchamianie kontenerów w Kubernetesie.
Odpowiada za zarządzanie uruchamianiem i cyklem życia kontenerów w środowisku Kubernetes.
<!--more-->

View File

@ -14,4 +14,8 @@ tags:
<!--more-->
Kubelet korzysta z dostarczanych (różnymi metodami) _PodSpecs_ i gwarantuje, że kontenery opisane przez te PodSpecs są uruchomione i działają poprawnie. Kubelet nie zarządza kontenerami, które nie zostały utworzone przez Kubernetesa.
[kubelet](/docs/reference/command-line-tools-reference/kubelet/)
korzysta z dostarczanych (różnymi metodami) _PodSpecs_ i gwarantuje, że
kontenery opisane przez te PodSpecs są uruchomione i działają poprawnie.
Kubelet nie zarządza kontenerami, które nie zostały utworzone przez Kubernetesa.

View File

@ -24,7 +24,7 @@ card:
<div class="row">
<div class="col-md-9">
<h2>Podstawy Kubernetesa</h2>
<p>Ten samouczek poprowadzi Cię przez podstawy systemu zarządzania zadaniami na klastrze Kubernetes. W każdym module znajdziesz najważniejsze informacje o głównych pojęciach i funkcjonalnościach Kubernetes oraz interaktywny samouczek online. Dzięki samouczkom nauczysz się zarządzać prostym klasterem i skonteneryzowanymi aplikacjami uruchamianymi na tym klastrze.</p>
<p>Ten samouczek poprowadzi Cię przez podstawy systemu zarządzania zadaniami na klastrze Kubernetes. W każdym module znajdziesz najważniejsze informacje o głównych pojęciach i funkcjonalnościach Kubernetes. Dzięki samouczkom nauczysz się zarządzać prostym klasterem i skonteneryzowanymi aplikacjami uruchamianymi na tym klastrze.</p>
<p>Nauczysz się, jak:</p>
<ul>
<li>Zainstalować skonteneryzowaną aplikację na klastrze.</li>
@ -32,7 +32,6 @@ card:
<li>Zaktualizować aplikację do nowej wersji.</li>
<li>Rozwiązywać problemy z aplikacją.</li>
</ul>
<p>Ten samouczek korzysta z Katacoda do uruchomienia wirtualnego terminalu w przeglądarce. W terminalu dostępny jest Minikube, niewielka lokalna instalacja Kubernetes, która może być uruchamiana z dowolnego miejsca. Nie ma konieczności instalowania ani konfigurowania żadnego oprogramowania. Każdy z interaktywnych samouczków jest wykonywany bezpośrednio w przeglądarce.</p>
</div>
</div>
@ -46,7 +45,7 @@ card:
</div>
<br>
<div id="basics-modules" class="content__modules">
<h2>Podstawy Kubernetes — Moduły</h2>
<div class="row">

View File

@ -1,38 +0,0 @@
---
title: Interaktywny samouczek - Tworzenie klastra
weight: 20
---
<!DOCTYPE html>
<html lang="pl">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
{{< katacoda-tutorial >}}
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__alert">
Ekran jest za wąski do pracy z terminalem. Użyj wersji na desktop/tablet.
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/" role="button">Początek<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">Przejdź do modułu 2 &gt;<span class=""></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -22,7 +22,6 @@ weight: 10
<ul>
<li>Nauczyć się, czym jest klaster Kubernetes.</li>
<li>Nauczyć się, czym jest Minikube.</li>
<li>Uruchomić klaster Kubernetes przy pomocy terminala online.</li>
</ul>
</div>
@ -86,17 +85,8 @@ weight: 10
<div class="col-md-8">
<p>Kiedy instalujesz aplikację na Kubernetesie, zlecasz warstwie sterowania uruchomienie kontenera z aplikacją. Warstwa sterowania zleca uruchomienie kontenera na węzłach klastra. <b>Węzły komunikują się z warstwą sterowania przy użyciu <a href="/docs/concepts/overview/kubernetes-api/">API Kubernetesa</a></b>, udostępnianego poprzez warstwę sterowania. Użytkownicy końcowi mogą korzystać bezpośrednio z API Kubernetesa do komunikacji z klastrem.</p>
<p>Klaster Kubernetes może być zainstalowany zarówno na fizycznych, jak i na maszynach wirtualnych. Aby wypróbować Kubernetesa, można też wykorzystać Minikube. Minikube to "lekka" implementacja Kubernetesa, która tworzy VM na maszynie lokalnej i instaluje prosty klaster składający się tylko z jednego węzła. Minikube jest dostępny na systemy Linux, macOS i Windows. Narzędzie linii poleceń Minikube obsługuje podstawowe operacje na klastrze, takie jak start, stop, prezentacja informacji jego stanie i usunięcie klastra. Na potrzeby tego samouczka wykorzystamy jednak terminal online z zainstalowanym już wcześniej Minikube.</p>
<p>Klaster Kubernetes może być zainstalowany zarówno na fizycznych, jak i na maszynach wirtualnych. Aby wypróbować Kubernetesa, można też wykorzystać Minikube. Minikube to "lekka" implementacja Kubernetesa, która tworzy VM na maszynie lokalnej i instaluje prosty klaster składający się tylko z jednego węzła. Minikube jest dostępny na systemy Linux, macOS i Windows. Narzędzie linii poleceń Minikube obsługuje podstawowe operacje na klastrze, takie jak start, stop, prezentacja informacji jego stanie i usunięcie klastra.</p>
<p>Teraz, kiedy już wiesz, co to jest Kubernetes, przejdźmy do samouczka online i stwórzmy nasz pierwszy klaster!</p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" role="button">Uruchom interaktywny samouczek <span class="btn__next"></span></a>
</div>
</div>

View File

@ -1,52 +0,0 @@
---
title: Interaktywny samouczek - Instalacja aplikacji
weight: 20
---
<!DOCTYPE html>
<html lang="pl">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
{{< katacoda-tutorial >}}
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="row">
<div class="col-md-12">
<p>
Pod to podstawowy element odpowiedzialny za uruchomienie aplikacji na Kubernetesie. Każdy pod to część składowa całościowego obciążenia Twojego klastra. <a href="/docs/concepts/workloads/pods/">Dowiedz się więcej na temat Podów</a>.
</p>
</div>
</div>
<br>
<div class="katacoda">
<div class="katacoda__alert">
Do pracy z terminalem użyj wersji na desktop/tablet
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/7" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button"> &lt; Powrót do modułu 1<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/" role="button">Początek<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">Przejdź do modułu 3 &gt;<span class=""></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -93,15 +93,6 @@ weight: 10
<p>
Na potrzeby pierwszej instalacji użyjesz aplikacji hello-node zapakowaną w kontener Docker-a, która korzysta z NGINXa i powtarza wszystkie wysłane do niej zapytania. (Jeśli jeszcze nie próbowałeś stworzyć aplikacji hello-node i uruchomić za pomocą kontenerów, możesz spróbować teraz, kierując się instrukcjami samouczka <a href="/pl/docs/tutorials/hello-minikube/">Hello Minikube</a>).
<p>
<p>Teraz, kiedy wiesz, czym są Deploymenty, przejdźmy do samouczka online, żeby zainstalować naszą pierwszą aplikację!</p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" role="button">Uruchom interaktywny samouczek <span class="btn__next"></span></a>
</div>
</div>

View File

@ -1,43 +0,0 @@
---
title: Interaktywny samouczek - Poznaj swoją aplikację
weight: 20
---
<!DOCTYPE html>
<html lang="pl">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
{{< katacoda-tutorial >}}
<div class="layout" id="top">
<main class="content katacoda-content">
<br>
<div class="katacoda">
<div class="katacoda__alert">
Do pracy z terminalem użyj wersji na desktop/tablet
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/4" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">&lt; Powrót do modułu 2<span class="btn"></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/" role="button">Początek<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/expose/expose-intro/" role="button">Przejdź do modułu 4 &gt;<span class="btn"></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -127,13 +127,6 @@ weight: 10
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/explore/explore-interactive/" role="button">Rozpocznij interaktywny samouczek <span class="btn__next"></span></a>
</div>
</div>
</main>

View File

@ -1,35 +0,0 @@
---
title: Interaktywny samouczek - Udostępnianie aplikacji
weight: 20
---
<!DOCTYPE html>
<html lang="pl">
<body>
{{< katacoda-tutorial >}}
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/8" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/explore/explore-intro/" role="button">&lt; Powrót do modułu 3<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/" role="button">Początek<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/scale/scale-intro/" role="button">Przejdź do modułu 5 &gt;<span class=""></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -89,11 +89,6 @@ weight: 10
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/expose/expose-interactive/" role="button">Rozpocznij interaktywny samouczek<span class="btn__next"></span></a>
</div>
</div>
</main>
</div>

View File

@ -1,37 +0,0 @@
---
title: Interaktywny samouczek - Skalowanie aplikacji
weight: 20
---
<!DOCTYPE html>
<html lang="pl">
<body>
{{< katacoda-tutorial >}}
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/5" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/expose/expose-interactive/" role="button">&lt; Powrót do modułu 4<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/" role="button">Początek<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/update/update-intro/" role="button">Przejdź do modułu 6 &gt;<span class=""></span></a>
</div>
</div>
</main>
<a class="scrolltop" href="#top"></a>
</div>
</body>
</html>

View File

@ -100,14 +100,7 @@ weight: 10
<div class="row">
<div class="col-md-8">
<p>Kiedy aplikacja ma uruchomioną więcej niż jedną instancję, można prowadzić ciągłe aktualizacje <em>(Rolling updates)</em> bez przerw w działaniu aplikacji. O tym będzie mowa w następnym module. Na razie przejdźmy do terminala online, aby przeprowadzić skalowanie aplikacji.</p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/scale/scale-interactive/" role="button">Uruchom interaktywny samouczek<span class="btn__next"></span></a>
<p>Kiedy aplikacja ma uruchomioną więcej niż jedną instancję, można prowadzić ciągłe aktualizacje <em>(Rolling updates)</em> bez przerw w działaniu aplikacji. O tym będzie mowa w następnym module.</p>
</div>
</div>

View File

@ -1,38 +0,0 @@
---
title: Interaktywny samouczek - Aktualizowanie aplikacji
weight: 20
---
<!DOCTYPE html>
<html lang="pl">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
{{< katacoda-tutorial >}}
<div class="layout" id="top">
<main class="content katacoda-content">
<div class="katacoda">
<div class="katacoda__alert">
Do pracy z terminalem użyj wersji na desktop/tablet
</div>
<div class="katacoda__box" id="inline-terminal-1" data-katacoda-id="kubernetes-bootcamp/6" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;">
</div>
</div>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/scale/scale-interactive/" role="button">&lt; Powrót do modułu 5<span class=""></span></a>
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/" role="button">Z powrotem do Podstaw Kubernetesa<span class=""></span></a>
</div>
</div>
</main>
</div>
</body>
</html>

View File

@ -30,7 +30,7 @@ weight: 10
<p>Użytkownicy oczekują, że aplikacje są dostępne non-stop, a deweloperzy chcieliby móc wprowadzać nowe wersje nawet kilka razy dziennie. W Kubernetes jest to możliwe dzięki mechanizmowi płynnych aktualizacji <em>(rolling updates)</em>. <b>Rolling updates</b> pozwala prowadzić aktualizację w ramach Deploymentu bez przerw w jego działaniu dzięki krokowemu aktualizowaniu kolejnych Podów. Nowe Pody uruchamiane są na Węzłach, które posiadają wystarczające zasoby.</p>
<p>W poprzednim module wyskalowaliśmy aplikację aby była uruchomiona na wielu instancjach. To niezbędny wymóg, aby móc prowadzić aktualizacje bez wpływu na dostępność aplikacji. Domyślnie, maksymalna liczba Podów, które mogą być niedostępne w trakcie aktualizacji oraz Podów, które mogą być tworzone, wynosi jeden. Obydwie opcje mogą być zdefiniowane w wartościach bezwzględnych lub procentowych (ogólnej liczby Podów).
<p>W poprzednim module wyskalowaliśmy aplikację aby była uruchomiona na wielu instancjach. To niezbędny wymóg, aby móc prowadzić aktualizacje bez wpływu na dostępność aplikacji. Domyślnie, maksymalna liczba Podów, które mogą być niedostępne w trakcie aktualizacji oraz Podów, które mogą być tworzone, wynosi jeden. Obydwie opcje mogą być zdefiniowane w wartościach bezwzględnych lub procentowych (ogólnej liczby Podów).
W Kubernetes, każda aktualizacja ma nadany numer wersji i każdy Deployment może być wycofany do wersji poprzedniej (stabilnej).</p>
</div>
@ -114,21 +114,6 @@ weight: 10
</div>
</div>
<br>
<div class="row">
<div class="col-md-8">
<p>W ramach tego interaktywnego samouczka zaktualizujemy aplikację do nowej wersji oraz wycofamy tę aktualizację.</p>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/pl/docs/tutorials/kubernetes-basics/update/update-interactive/" role="button">Rozpocznij interaktywny samouczek <span class="btn__next"></span></a>
</div>
</div>
</main>
</div>

View File

@ -13,6 +13,7 @@ Projekt Kubernetes zapewnia wsparcie dla trzech ostatnich wydań _minor_
Poprawki do wydania 1.19 i nowszych [będą publikowane przez około rok](/releases/patch-releases/#support-period).
Kubernetes w wersji 1.18 i wcześniejszych otrzymywał poprawki przez 9 miesięcy.
Wersje Kubernetesa oznaczane są jako **x.y.z**,
gdzie **x** jest oznaczeniem wersji głównej (_major_), **y** — podwersji (_minor_), a **z** — numer poprawki (_patch_),
zgodnie z terminologią [Semantic Versioning](https://semver.org/).
@ -21,13 +22,16 @@ Więcej informacji można z znaleźć w dokumencie [version skew policy](/releas
<!-- body -->
## Historia wydań
## Historia wydań {#release-history}
{{< release-data >}}
## Nadchodzące wydania
## Nadchodzące wydania {#upcoming-release}
Zajrzyj na [harmonogram](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}})
nadchodzącego wydania Kubernetesa numer **{{< skew nextMinorVersion >}}**!
## Przydatne zasoby
## Przydatne zasoby {#helpful-resources}
Zajrzyj do zasobów zespołu [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)
w celu uzyskania kluczowych informacji na temat ról i procesu wydawania wersji.

View File

@ -4,8 +4,6 @@ abstract: "Implantação, dimensionamento e gerenciamento automatizado de contê
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### Kubernetes (K8s) é um produto Open Source utilizado para automatizar a implantação, o dimensionamento e o gerenciamento de aplicativos em contêiner.

View File

@ -0,0 +1,132 @@
---
title: Determine a razão para a falha do Pod
content_type: task
weight: 30
---
<!-- overview -->
Esta página mostra como escrever e ler uma mensagem de término do contêiner.
Mensagens de término fornecem uma maneira para os contêineres registrarem informações sobre eventos fatais em um local onde possam ser facilmente recuperadas e exibidas por ferramentas como painéis e softwares de monitoramento. Na maioria dos casos, as informações incluídas em uma mensagem de término também devem ser registradas nos
[logs do Kubernetes](/docs/concepts/cluster-administration/logging/).
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## Escrevendo e lendo uma mensagem de término
Neste exercício, você cria um Pod que executa um único contêiner.
O manifesto para esse Pod especifica um comando que é executado quando o contêiner é iniciado:
{{% code_sample file="debug/termination.yaml" %}}
1. Crie um Pod com base no arquivo de configuração YAML:
```shell
kubectl apply -f https://k8s.io/examples/debug/termination.yaml
```
No arquivo YAML, nos campos `command` e `args`, é possível ver que o
contêiner dorme por 10 segundos e, em seguida, escreve "Sleep expired"
no arquivo `/dev/termination-log`. Após escrever a mensagem "Sleep expired",
o contêiner é encerrado.
1. Exiba informações sobre o Pod:
```shell
kubectl get pod termination-demo
```
Repita o comando anterior até que o Pod não esteja mais em execução.
1. Exiba informações detalhadas sobre o Pod:
```shell
kubectl get pod termination-demo --output=yaml
```
A saída inclui a mensagem "Sleep expired":
```yaml
apiVersion: v1
kind: Pod
...
lastState:
terminated:
containerID: ...
exitCode: 0
finishedAt: ...
message: |
Sleep expired
...
```
1. Use um template Go para filtrar a saída, de modo que inclua apenas a mensagem de término:
```shell
kubectl get pod termination-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
```
Se você estiver executando um Pod com vários contêineres, pode usar um template Go
para incluir o nome do contêiner.
Dessa forma, você pode descobrir qual dos contêineres está falhando:
```shell
kubectl get pod multi-container-pod -o go-template='{{range .status.containerStatuses}}{{printf "%s:\n%s\n\n" .name .lastState.terminated.message}}{{end}}'
```
## Personalizando a mensagem de término
O Kubernetes recupera mensagens de término do arquivo especificado no campo
`terminationMessagePath` de um contêiner, que tem o valor padrão de `/dev/termination-log`.
Ao personalizar esse campo, você pode instruir o Kubernetes a usar um arquivo diferente.
O Kubernetes usa o conteúdo do arquivo especificado para preencher a mensagem de status
do contêiner, tanto em casos de sucesso quanto de falha.
A mensagem de término deve ser um breve status final, como uma mensagem de falha de asserção.
O kubelet trunca mensagens que excedam 4096 bytes.
O tamanho total da mensagem entre todos os contêineres é limitado a 12KiB,
sendo dividido igualmente entre cada contêiner.
Por exemplo, se houver 12 contêineres (`initContainers` ou `containers`),
cada um terá 1024 bytes disponíveis para a mensagem de término.
O caminho padrão para a mensagem de término é `/dev/termination-log`.
Não é possível definir o caminho da mensagem de término após o lançamento de um Pod.
No exemplo a seguir, o contêiner grava mensagens de término em
`/tmp/my-log` para que o Kubernetes possa recuperá-las:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: msg-path-demo
spec:
containers:
- name: msg-path-demo-container
image: debian
terminationMessagePath: "/tmp/my-log"
```
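Depois que esse contêiner terminar (supondo que ele escreva algo em `/tmp/my-log` antes de encerrar), a mensagem pode ser recuperada com o mesmo filtro de template Go mostrado acima, por exemplo:

```shell
kubectl get pod msg-path-demo -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
```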
Além disso, os usuários podem definir o campo `terminationMessagePolicy` de um contêiner
para uma personalização adicional. Esse campo tem como valor padrão "`File`",
o que significa que as mensagens de término são recuperadas apenas do arquivo
de mensagem de término.
Ao definir `terminationMessagePolicy` como "`FallbackToLogsOnError`", você instrui
o Kubernetes a usar o último trecho do log de saída do contêiner caso o arquivo
de mensagem de término esteja vazio e o contêiner tenha encerrado com erro.
A saída do log é limitada a 2048 bytes ou 80 linhas, o que for menor.
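A título de ilustração, um manifesto mínimo usando essa política poderia se parecer com o seguinte (o nome do Pod e do contêiner são apenas exemplos):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fallback-demo
spec:
  containers:
  - name: fallback-demo-container
    image: debian
    # Se o arquivo de mensagem de término estiver vazio e o contêiner
    # terminar com erro, o último trecho do log é usado como mensagem.
    terminationMessagePolicy: FallbackToLogsOnError
```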
## {{% heading "whatsnext" %}}
* Veja o campo `terminationMessagePath` em [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core).
* Consulte [ImagePullBackOff](/docs/concepts/containers/images/#imagepullbackoff) em [Imagens](/docs/concepts/containers/images/).
* Saiba mais sobre [recuperação de logs](/docs/concepts/cluster-administration/logging/).
* Aprenda sobre [templates Go](https://pkg.go.dev/text/template).
* Conheça mais sobre [status do Pod](/docs/tasks/debug/debug-application/debug-init-containers/#understanding-pod-status) e [fase do Pod](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase).
* Entenda os [estados do contêiner](/docs/concepts/workloads/pods/pod-lifecycle/#container-states).

View File

@ -0,0 +1,150 @@
---
title: Obter um Shell em um Contêiner em Execução
content_type: task
---
<!-- overview -->
Esta página mostra como usar `kubectl exec` para obter um shell em um contêiner em execução.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
<!-- steps -->
## Obtendo um Shell em um Contêiner
Neste exercício, você cria um Pod que possui um contêiner. O contêiner
executa a imagem do nginx. Aqui está o arquivo de configuração para o Pod:
{{% code_sample file="application/shell-demo.yaml" %}}
Crie o Pod:
```shell
kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml
```
Verifique se o contêiner está em execução:
```shell
kubectl get pod shell-demo
```
Obtenha um shell no contêiner em execução:
```shell
kubectl exec --stdin --tty shell-demo -- /bin/bash
```
{{< note >}}
O duplo traço (`--`) separa os argumentos que você deseja passar para o comando dos argumentos do `kubectl`.
{{< /note >}}
No seu shell, liste o diretório raiz:
```shell
# Execute isso dentro do contêiner
ls /
```
No seu shell, experimente outros comandos. Aqui estão alguns exemplos:
```shell
# Você pode executar esses comandos de exemplo dentro do contêiner
ls /
cat /proc/mounts
cat /proc/1/maps
apt-get update
apt-get install -y tcpdump
tcpdump
apt-get install -y lsof
lsof
apt-get install -y procps
ps aux
ps aux | grep nginx
```
## Escrevendo a página raiz para o nginx
Veja novamente o arquivo de configuração do seu Pod. O Pod
possui um volume `emptyDir`, e o contêiner monta esse volume
em `/usr/share/nginx/html`.
No seu shell, crie um arquivo `index.html` no diretório `/usr/share/nginx/html`:
```shell
# Execute isso dentro do contêiner
echo 'Hello shell demo' > /usr/share/nginx/html/index.html
```
No seu shell, envie uma solicitação GET para o servidor nginx:
```shell
# Execute isso no shell dentro do seu contêiner
apt-get update
apt-get install curl
curl http://localhost/
```
A saída exibe o texto que você escreveu no arquivo `index.html`:
```
Hello shell demo
```
Quando terminar de usar o shell, digite `exit`.
```shell
exit # Para sair do shell no contêiner
```
## Executando comandos individuais em um contêiner
Em uma janela de comando comum, fora do seu shell, liste as variáveis de ambiente no contêiner em execução:
```shell
kubectl exec shell-demo -- env
```
Experimente executar outros comandos. Aqui estão alguns exemplos:
```shell
kubectl exec shell-demo -- ps aux
kubectl exec shell-demo -- ls /
kubectl exec shell-demo -- cat /proc/1/mounts
```
<!-- discussion -->
## Abrindo um shell quando um Pod tem mais de um contêiner
Se um Pod tiver mais de um contêiner, use `--container` ou `-c` para
especificar um contêiner no comando `kubectl exec`. Por exemplo,
suponha que você tenha um Pod chamado `my-pod`, e esse Pod tenha dois contêineres
chamados _main-app_ e _helper-app_. O seguinte comando abriria um
shell no contêiner _main-app_.
```shell
kubectl exec -i -t my-pod --container main-app -- /bin/bash
```
{{< note >}}
As opções curtas `-i` e `-t` são equivalentes às opções longas `--stdin` e `--tty`
{{< /note >}}
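Se a imagem do contêiner não incluir o `bash`, você pode tentar um shell mais simples, por exemplo:

```shell
kubectl exec --stdin --tty shell-demo -- /bin/sh
```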
## {{% heading "whatsnext" %}}
* Leia mais sobre [`kubectl exec`](/docs/reference/generated/kubectl/kubectl-commands/#exec)

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: shell-demo
spec:
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: nginx
image: nginx
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
hostNetwork: true
dnsPolicy: Default

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
name: termination-demo
spec:
containers:
- name: termination-demo-container
image: debian
command: ["/bin/sh"]
args: ["-c", "sleep 10 && echo Sleep expired > /dev/termination-log"]

View File

@ -17,7 +17,7 @@ card:
## Начало работы
Из-за того, что участники не могут одобрять собственные пулреквесты, нужно как минимум два участника для инициализации локализацию.
Из-за того, что участники не могут одобрять собственные пулреквесты, нужно как минимум два участника для инициализации локализации.
Все команды по локализации должны быть самодостаточными. Это означает, что мы с радостью разместим вашу работу, но мы не можем сделать перевод за вас.

View File

@ -40,8 +40,8 @@ description: |-
<li><i>LoadBalancer</i> — создает внешний балансировщик нагрузки в текущем облаке (если это поддерживается) и назначает фиксированный внешний IP-адрес для сервиса. Является надмножеством NodePort.</li>
<li><i>ExternalName</i> — открывает доступ к сервису по содержимому поля <code>externalName</code> (например, <code>foo.bar.example.com</code>), возвращая запись <code>CNAME</code> с его значением. При этом прокси не используется. Для этого типа требуется версия <code>kube-dns</code> 1.7+ или CoreDNS 0.0.8+.</li>
</ul>
<p>Более подробно узнать о различных типах сервисах можно в руководстве <a href="/docs/tutorials/services/source-ip/">Использование IP-порта источника</a>. Также изучите <a href="/docs/concepts/services-networking/connect-applications-service">Подключение приложений к сервисам</a>.</p>
<p>Кроме этого, обратите внимание, что в некоторых случаях в сервисах не определяется <code>selector</code> в спецификации. Сервис без <code>selector</code> не будет создавать соответствующий эндпоинт (Endpoint). Таким образом, пользователь может вручную определить эндпоинты для сервиса. Ещё один возможный сценарий создания сервиса без селектора — это строгое использование <code>type: ExternalName</code>.</p>
<p>Более подробно узнать о различных типах сервисов можно в руководстве <a href="/docs/tutorials/services/source-ip/">Использование IP-порта источника</a>. Также изучите <a href="/docs/concepts/services-networking/connect-applications-service">Подключение приложений к сервисам</a>.</p>
<p>Кроме этого, обратите внимание, что в некоторых случаях в сервисах не определяется <code>selector</code> в спецификации. Сервис без <code>selector</code> не будет создавать соответствующий эндпойнт (Endpoint). Таким образом, пользователь может вручную определить эндпойнты для сервиса. Ещё один возможный сценарий создания сервиса без селектора — это строгое использование <code>type: ExternalName</code>.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">

View File

@ -4,8 +4,6 @@ abstract: "Автоматичне розгортання, масштабуван
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
<!--

View File

@ -4,8 +4,6 @@ abstract: "Triển khai tự động, nhân rộng và quản lý container"
cid: home
---
{{< site-searchbar >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) là một hệ thống mã nguồn mở giúp tự động hóa việc triển khai, nhân rộng và quản lý các ứng dụng container.

View File

@ -0,0 +1,14 @@
---
title: Các khái niệm
main_menu: true
content_type: concept
weight: 40
---
<!-- overview -->
Phần Khái niệm giúp bạn tìm hiểu về các bộ phận của hệ thống Kubernetes và các khái niệm mà Kubernetes sử dụng để biểu diễn {{< glossary_tooltip text="cụm cluster" term_id="cluster" length="all" >}} của bạn, đồng thời giúp bạn hiểu sâu hơn về cách thức hoạt động của Kubernetes.
<!-- body -->

View File

@ -0,0 +1,14 @@
---
title: Tài liệu tham khảo
linkTitle: "Tài liệu tham khảo"
main_menu: true
weight: 70
content_type: concept
no_list: true
---
<!-- overview -->
Phần này chứa các tài liệu tham khảo của Kubernetes.
<!-- body -->

View File

@ -0,0 +1,17 @@
---
title: Tasks
main_menu: true
weight: 50
content_type: concept
---
<!-- overview -->
Phần này của tài liệu chứa các hướng dẫn thực hiện các tác vụ. Mỗi tài liệu hướng dẫn tác vụ chỉ dẫn cách thực hiện một việc duy nhất, thường bằng cách đưa ra một chuỗi các bước ngắn.
Các tác vụ bao gồm: cài đặt công cụ, chạy các job, quản lý GPU, v.v.
Bạn có thể tạo và đóng góp tài liệu về một tác vụ mới thông qua
[Hướng dẫn tạo tài liệu mới](/docs/contribute/new-content/open-a-pr/).
<!-- body -->

View File

@ -0,0 +1,10 @@
---
title: "Tools Included"
description: "Snippets to be included in the main kubectl-installs-*.md pages."
headless: true
toc_hide: true
_build:
list: never
render: never
publishResources: false
---

View File

@ -0,0 +1,45 @@
---
title: "xác minh cài đặt lệnh kubectl"
description: "Cách kiểm tra lệnh kubectl đã được cài thành công"
headless: true
_build:
list: never
render: never
publishResources: false
---
<!-- TODO: update kubeconfig link when it's translated -->
Để kubectl có thể tìm kiếm và truy cập vào Kubernetes cluster, nó cần một [tệp kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/), được tạo tự động khi chúng ta tạo một cluster bằng [kube-up.sh](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh) hoặc khi triển khai thành công cluster Minikube.
Mặc định, thông tin cấu hình của kubectl được định nghĩa trong `~/.kube/config`.
Chúng ta có thể kiểm tra xem kubectl đã được cấu hình đúng chưa bằng cách kiểm tra thông tin của cluster:
```shell
kubectl cluster-info
```
Nếu bạn thấy kết quả trả về là một đường dẫn, thì kubectl đã được cấu hình đúng để truy cập cluster của chúng ta.
Nếu bạn thấy thông báo tương tự như dưới đây, điều đó có nghĩa kubectl chưa được cấu hình đúng hoặc không thể kết nối tới Kubernetes cluster.
```plaintext
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```
Thông báo trên, được kubectl trả về, mong bạn kiểm tra lại đường dẫn (bao gồm host và port) tới cluster đã đúng hay chưa.
Ví dụ, nếu bạn đang dự định tạo một Kubernetes cluster trên máy tính cá nhân, bạn sẽ cần cài đặt một công cụ như [Minikube](https://minikube.sigs.k8s.io/docs/start/) trước, sau đó chạy lại các lệnh đã nêu ở trên.
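Ví dụ, với Minikube (giả định bạn đã cài đặt Minikube), bạn có thể tạo một cluster cục bộ và kiểm tra lại cấu hình như sau:

```shell
# Tạo một cluster cục bộ bằng Minikube
minikube start

# Kiểm tra lại cấu hình của kubectl
kubectl cluster-info
```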
Nếu lệnh `kubectl cluster-info` trả về đường dẫn nhưng bạn vẫn không thể truy cập vào cluster, hãy kiểm tra cấu hình kỹ hơn bằng lệnh sau:
```shell
kubectl cluster-info dump
```
### Xử lý lỗi 'No Auth Provider Found' {#no-auth-provider-found}
Ở phiên bản 1.26 của Kubernetes, kubectl đã loại bỏ tính năng xác thực tích hợp sẵn cho các dịch vụ Kubernetes được quản lý bởi các nhà cung cấp đám mây dưới đây. Các nhà cung cấp này đã phát hành plugin kubectl để hỗ trợ xác thực dành riêng cho nền tảng của họ. Tham khảo tài liệu hướng dẫn của nhà cung cấp để biết thêm thông tin:
* Azure AKS: [kubelogin plugin](https://azure.github.io/kubelogin/)
* Google Kubernetes Engine: [gke-gcloud-auth-plugin](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#install_plugin)
(Lưu ý: cùng một thông báo lỗi cũng có thể xuất hiện vì các lý do khác không liên quan đến sự thay đổi này.)
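Ví dụ, nếu bạn dùng Google Kubernetes Engine và đã cài đặt gcloud CLI, bạn có thể cài plugin xác thực bằng lệnh sau (chỉ mang tính tham khảo; hãy xem tài liệu của nhà cung cấp để biết hướng dẫn đầy đủ):

```shell
gcloud components install gke-gcloud-auth-plugin
```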

View File

@ -0,0 +1,15 @@
---
title: Tutorials
main_menu: true
no_list: true
weight: 60
content_type: concept
---
<!-- overview -->
Phần này của tài liệu có chứa các hướng dẫn. Phần hướng dẫn sẽ chỉ cho bạn cách thực hiện một mục tiêu lớn hơn một [tác vụ đơn lẻ](/docs/tasks/).
Thông thường, một hướng dẫn có nhiều phần, mỗi phần có một trình tự các bước. Trước khi thực hiện từng hướng dẫn, bạn có thể muốn đánh dấu trang [Thuật ngữ chuẩn hóa](/docs/reference/glossary/) để tham khảo.
<!-- body -->

View File

@ -1,60 +1,74 @@
---
title: "生产级别的容器编排系统"
abstract: "自动化的容器部署、扩和管理"
abstract: "自动化的容器部署、扩缩和管理"
cid: home
sitemap:
priority: 1.0
---
<!--
title: "Production-Grade Container Orchestration"
abstract: "Automated container deployment, scaling, and management"
cid: home
sitemap:
priority: 1.0
-->
{{< site-searchbar >}}
{{< blocks/section class="k8s-overview" >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
{{% blocks/feature image="flower" id="feature-primary" %}}
<!-- [Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. -->
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}) 也称为 K8s是用于自动部署、扩缩和管理容器化应用程序的开源系统。
<!--
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open source system for automating deployment, scaling, and management of containerized applications.
<!-- It groups containers that make up an application into logical units for easy management and discovery.
Kubernetes builds upon [15 years of experience of running production workloads at Google](http://queue.acm.org/detail.cfm?id=2898444),
combined with best-of-breed ideas and practices from the community. -->
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon [15 years of experience of running production workloads at Google](https://queue.acm.org/detail.cfm?id=2898444), combined with best-of-breed ideas and practices from the community.
-->
[Kubernetes](/zh-cn/docs/concepts/overview/) 也称为
K8s是用于自动部署、扩缩和管理容器化应用程序的开源系统。
它将组成应用程序的容器组合成逻辑单元以便于管理和服务发现。Kubernetes 源自[Google 15 年生产环境的运维经验](http://queue.acm.org/detail.cfm?id=2898444),同时凝聚了社区的最佳创意和实践。
它将组成应用程序的容器组合成逻辑单元以便于管理和服务发现。Kubernetes 源自
[Google 15 年生产环境的运维经验](http://queue.acm.org/detail.cfm?id=2898444),同时凝聚了社区的最佳创意和实践。
{{% /blocks/feature %}}
{{% blocks/feature image="scalable" %}}
<!--
#### Planet Scale
<!-- #### Planet Scale -->
Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team.
-->
#### 星际尺度
<!-- Designed on the same principles that allow Google to run billions of containers a week, Kubernetes can scale without increasing your operations team. -->
Google 每周运行数十亿个容器Kubernetes 基于与之相同的原则来设计,能够在不扩张运维团队的情况下进行规模扩展。
{{% /blocks/feature %}}
{{% blocks/feature image="blocks" %}}
<!--
#### Never Outgrow
<!-- #### Never Outgrow -->
Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications consistently and easily no matter how complex your need is.
-->
#### 永不过时
<!-- Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications
consistently and easily no matter how complex your need is. -->
无论是本地测试还是跨国公司Kubernetes 的灵活性都能让你在应对复杂系统时得心应手。
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
<!-- #### Run K8s Anywhere -->
<!--
#### Run K8s Anywhere
-->
#### 处处适用
<!-- Kubernetes is open source giving you the freedom to take advantage of on-premises,
hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.
To download Kubernetes, visit the [download](/releases/download/) section. -->
<!--
Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you.
To download Kubernetes, visit the [download](/releases/download/) section.
-->
Kubernetes 是开源系统,可以自由地部署在企业内部,私有云、混合云或公有云,让您轻松地做出合适的选择。
请访问[下载](/releases/download/)部分下载 Kubernetes。
请访问[下载](/zh-cn/releases/download/)部分下载 Kubernetes。
{{% /blocks/feature %}}
@ -63,28 +77,42 @@ Kubernetes 是开源系统,可以自由地部署在企业内部,私有云、
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
<div class="light-text">
<!-- <h2>The Challenges of Migrating 150+ Microservices to Kubernetes</h2> -->
<h2>将 150+ 微服务迁移到 Kubernetes 上的挑战</h2>
<!-- <p>By Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p> -->
<p>Sarah Wells, 运营和可靠性技术总监, 金融时报</p>
<!--
<h2>The Challenges of Migrating 150+ Microservices to Kubernetes</h2>
-->
<h2>将 150+ 微服务迁移到 Kubernetes 上的挑战</h2>
<!--
<p>By Sarah Wells, Technical Director for Operations and Reliability, Financial Times</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
-->
<p>Sarah Wells运营和可靠性技术总监金融时报</p>
<button id="desktopShowVideoButton" onclick="kub.showVideo()">观看视频</button>
<br>
<br>
<!-- <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a> -->
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">参加 2024 年 3 月 19-22 日的欧洲 KubeCon + CloudNativeCon</a>
<br>
<br>
<br>
<br>
<!-- <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 12-15, 2024</a> -->
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">参加 2024 年 11 月 12-15 日的北美 KubeCon + CloudNativeCon</a>
<!--
<h3>Attend upcoming KubeCon + CloudNativeCon events</h3>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" class="desktopKCButton"><strong>Europe</strong> (London, Apr 1-4)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-china/" class="desktopKCButton"><strong>China</strong> (Hong Kong, Jun 10-11)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-japan/" class="desktopKCButton"><strong>Japan</strong> (Tokyo, Jun 16-17)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-india/" class="desktopKCButton"><strong>India</strong> (Hyderabad, Aug 6-7)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2025/" class="desktopKCButton"><strong>North America</strong> (Atlanta, Nov 10-13)</a>
-->
<h3>参加即将举行的 KubeCon + CloudNativeCon 大会</h3>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" class="desktopKCButton"><strong>欧洲</strong>伦敦4 月 1-4 日)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-china/" class="desktopKCButton"><strong>中国</strong>香港6 月 10-11 日)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-japan/" class="desktopKCButton"><strong>日本</strong>东京6 月 16-17 日)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-india/" class="desktopKCButton"><strong>印度</strong>海得拉巴8 月 6-7 日)</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2025/" class="desktopKCButton"><strong>北美</strong>亚特兰大11 月 10-13 日)</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
<button id="closeButton"></button>
</div>
{{< /blocks/section >}}
{{< blocks/kubernetes-features >}}
{{< blocks/case-studies >}}
{{< kubeweekly id="kubeweekly" >}}

View File

@ -4,6 +4,14 @@ title: "Kubernetes 旧版软件包仓库将于 2023 年 9 月 13 日被冻结"
date: 2023-08-31T15:30:00-07:00
slug: legacy-package-repository-deprecation
evergreen: true
author: >
Bob Killen (Google),
Chris Short (AWS),
Jeremy Rickard (Microsoft),
Marko Mudrinić (Kubermatic),
Tim Bannister (The Scale Factory)
translator: >
[Mengjiao Liu](https://github.com/mengjiao-liu) (DaoCloud)
---
<!--
@ -12,13 +20,13 @@ title: "Kubernetes Legacy Package Repositories Will Be Frozen On September 13, 2
date: 2023-08-31T15:30:00-07:00
slug: legacy-package-repository-deprecation
evergreen: true
author: >
Bob Killen (Google),
Chris Short (AWS),
Jeremy Rickard (Microsoft),
Marko Mudrinić (Kubermatic),
Tim Bannister (The Scale Factory)
-->
<!--
**Authors**: Bob Killen (Google), Chris Short (AWS), Jeremy Rickard (Microsoft), Marko Mudrinić (Kubermatic), Tim Bannister (The Scale Factory)
-->
**作者**Bob Killen (Google), Chris Short (AWS), Jeremy Rickard (Microsoft), Marko Mudrinić (Kubermatic), Tim Bannister (The Scale Factory)
**译者**[Mengjiao Liu](https://github.com/mengjiao-liu) (DaoCloud)
<!--
On August 15, 2023, the Kubernetes project announced the general availability of
@ -51,6 +59,14 @@ distributor, and what steps you may need to take.
请继续阅读以了解这对于作为用户或分发商的你意味着什么,
以及你可能需要采取哪些步骤。
<!--
** Update (March 26, 2024): _the legacy Google-hosted repositories went
away on March 4, 2024. It's not possible to install Kubernetes packages from
the legacy Google-hosted package repositories any longer._**
-->
**更新(2024 年 3 月 26 日):旧 Google 托管仓库已于 2024 年 3 月 4 日下线。
现在无法再从旧 Google 托管软件包仓库安装 Kubernetes 软件包。**
<!--
## How does this affect me as a Kubernetes end user?
@ -90,11 +106,11 @@ managing Kubernetes for you, then they would usually take responsibility for tha
那么他们通常会负责该检查。
<!--
If you have a managed [control plane](/docs/concepts/overview/components/#control-plane-components)
If you have a managed [control plane](/docs/concepts/architecture/#control-plane-components)
but you are responsible for **managing the nodes yourself**, and any of those nodes run Linux,
you should [check](#check-if-affected) whether you are affected.
-->
如果你使用的是托管的[控制平面](/zh-cn/docs/concepts/overview/components/#control-plane-components)
如果你使用的是托管的[控制平面](/zh-cn/docs/concepts/architecture/#control-plane-components)
但你负责**自行管理节点**,并且每个节点都运行 Linux
你应该[检查](#check-if-affected)你是否会受到影响。
@ -141,6 +157,8 @@ possible and inform your users about this change and what steps they need to tak
<!--
## Timeline of changes
_(updated on March 26, 2024)_
- **15th August 2023:**
Kubernetes announces a new, community-managed source for Linux software packages of Kubernetes components
- **31st August 2023:**
@ -150,10 +168,16 @@ possible and inform your users about this change and what steps they need to tak
Kubernetes will freeze the legacy package repositories,
(`apt.kubernetes.io` and `yum.kubernetes.io`).
The freeze will happen immediately following the patch releases that are scheduled for September, 2023.
- **12th January 2024:**
Kubernetes announced intentions to remove the legacy package repositories in January 2024
- **4th March 2024:**
The legacy package repositories have been removed. It's not possible to install Kubernetes packages from
the legacy package repositories any longer
-->
## 变更时间表 {#timeline-of-changes}
<!-- note to maintainers - the trailing whitespace is significant -->
**(更新于 2024 年 3 月 26 日)**
- **2023 年 8 月 15 日:**
Kubernetes 宣布推出一个新的社区管理的 Kubernetes 组件 Linux 软件包源
@ -162,6 +186,10 @@ possible and inform your users about this change and what steps they need to tak
- **2023 年 9 月 13 日**(左右):
Kubernetes 将冻结旧软件包仓库(`apt.kubernetes.io` 和 `yum.kubernetes.io`)。
冻结将计划于 2023 年 9 月发布补丁版本后立即进行。
- **2024 年 1 月 12 日:**
Kubernetes 宣布计划在 2024 年 1 月移除旧软件包仓库。
- **2024 年 3 月 4 日:**
旧软件包仓库已被移除,现在无法再从旧软件包仓库安装 Kubernetes 软件包。
<!--
The Kubernetes patch releases scheduled for September 2023 (v1.28.2, v1.27.6,
@ -195,29 +223,51 @@ community-owned repositories (`pkgs.k8s.io`).
Kubernetes 1.29 及以后的版本将**仅**发布软件包到社区拥有的仓库(`pkgs.k8s.io`)。
<!--
### What releases are available in the new community-owned package repositories?
Linux packages for releases starting from Kubernetes v1.24.0 are available in the
Kubernetes package repositories (`pkgs.k8s.io`). Kubernetes does not have official
Linux packages available for earlier releases of Kubernetes; however, your Linux
distribution may provide its own packages.
-->
### 新的社区拥有的软件包仓库提供哪些可用的软件包版本? {#what-releases-are-available-in-the-new-community-owned-package-repositories}
Kubernetes 软件包仓库(`pkgs.k8s.io`)提供从 Kubernetes v1.24.0 版本开始的 Linux 软件包。
Kubernetes 官方没有为早期的 Kubernetes 版本提供可用的 Linux 软件包,但你的 Linux 发行版可能会提供其自有的软件包。
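作为参考,下面是添加新的社区软件包仓库的一个 APT 配置示例(以 v1.29 为例,仅作示意;
完整步骤(包括事先下载软件包签名密钥)请以官方安装文档为准):

```shell
# 假设签名密钥已下载到 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```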
<!--
## Can I continue to use the legacy package repositories?
_(updated on March 26, 2024)_
**The legacy Google-hosted repositories went away on March 4, 2024. It's not possible
to install Kubernetes packages from the legacy Google-hosted package repositories any
longer.**
The existing packages in the legacy repositories will be available for the foreseeable
future. However, the Kubernetes project can't provide _any_ guarantees on how long
is that going to be. The deprecated legacy repositories, and their contents, might
be removed at any time in the future and without a further notice period.
**UPDATE**: The legacy packages are expected to go away in January 2024.
-->
## 我可以继续使用旧软件包仓库吗? {#can-i-continue-to-use-the-legacy-package-repositories}
**(更新于 2024 年 3 月 26 日)**
**旧 Google 托管软件包仓库已于 2024 年 3 月 4 日下线。
现在无法再从旧 Google 托管软件包仓库安装 Kubernetes 软件包。**
~~旧仓库中的现有软件包将在可预见的未来内保持可用。然而,
Kubernetes 项目无法对这会持续多久提供**任何**保证。
已弃用的旧仓库及其内容可能会在未来随时删除,恕不另行通知。~~
**更新**: 旧版软件包预计将于 2024 年 1 月被删除。
<!--
The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.
~~The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.~~ Migrating to the new package repositories is
required to consume the official Kubernetes packages.
-->
Kubernetes 项目**强烈建议尽快**迁移到新的社区拥有的仓库。
~~Kubernetes 项目**强烈建议尽快**迁移到新的社区拥有的仓库。~~
要使用 Kubernetes 官方软件包,需要迁移到新的软件包仓库。
<!--
Given that no new releases will be published to the legacy repositories **after the September 13, 2023**

View File

@ -0,0 +1,180 @@
---
layout: blog
title: 'Kubernetes v1.32 增加了新的 CPU Manager 静态策略选项用于严格 CPU 预留'
date: 2024-12-16
slug: cpumanager-strict-cpu-reservation
author: >
[Jing Zhang](https://github.com/jingczhang) (Nokia)
translator: >
[Xin Li](https://github.com/my-git9) (DaoCloud)
---
<!--
layout: blog
title: 'Kubernetes v1.32 Adds A New CPU Manager Static Policy Option For Strict CPU Reservation'
date: 2024-12-16
slug: cpumanager-strict-cpu-reservation
author: >
[Jing Zhang](https://github.com/jingczhang) (Nokia)
-->
<!--
In Kubernetes v1.32, after years of community discussion, we are excited to introduce a
`strict-cpu-reservation` option for the [CPU Manager static policy](/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options).
This feature is currently in alpha, with the associated policy hidden by default. You can only use the
policy if you explicitly enable the alpha behavior in your cluster.
-->
在 Kubernetes v1.32 中,经过社区多年的讨论,我们很高兴地引入了
[CPU Manager 静态策略](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/#static-policy-options)的
`strict-cpu-reservation` 选项。此特性当前处于 Alpha 阶段,默认情况下关联的策略是隐藏的。
只有在你的集群中明确启用了此 Alpha 行为后,才能使用此策略。
<!--
## Understanding the feature
The CPU Manager static policy is used to reduce latency or improve performance. The `reservedSystemCPUs` defines an explicit CPU set for OS system daemons and kubernetes system daemons. This option is designed for Telco/NFV type use cases where uncontrolled interrupts/timers may impact the workload performance. you can use this option to define the explicit cpuset for the system/kubernetes daemons as well as the interrupts/timers, so the rest CPUs on the system can be used exclusively for workloads, with less impact from uncontrolled interrupts/timers. More details of this parameter can be found on the [Explicitly Reserved CPU List](/docs/tasks/administer-cluster/reserve-compute-resources/#explicitly-reserved-cpu-list) page.
If you want to protect your system daemons and interrupt processing, the obvious way is to use the `reservedSystemCPUs` option.
-->
## 理解此特性
CPU Manager 静态策略用于减少延迟或提高性能。`reservedSystemCPUs`
定义了一个明确的 CPU 集合,供操作系统系统守护进程和 Kubernetes 系统守护进程使用。
此选项专为 Telco/NFV 类型的使用场景设计,在这些场景中,不受控制的中断/计时器可能会影响工作负载的性能。
你可以使用此选项为系统/Kubernetes 守护进程以及中断/计时器定义明确的 CPU 集合,
从而使系统上的其余 CPU 可以专用于工作负载,并减少不受控制的中断/计时器带来的影响。
有关此参数的更多详细信息,请参阅
[显式预留的 CPU 列表](/zh-cn/docs/tasks/administer-cluster/reserve-compute-resources/#explicitly-reserved-cpu-list)
页面。
如果你希望保护系统守护进程和中断处理,显而易见的方法是使用 `reservedSystemCPUs` 选项。
<!--
However, until the Kubernetes v1.32 release, this isolation was only implemented for guaranteed
pods that made requests for a whole number of CPUs. At pod admission time, the kubelet only
compares the CPU _requests_ against the allocatable CPUs. In Kubernetes, limits can be higher than
the requests; the previous implementation allowed burstable and best-effort pods to use up
the capacity of `reservedSystemCPUs`, which could then starve host OS services of CPU - and we
know that people saw this in real life deployments.
The existing behavior also made benchmarking (for both infrastructure and workloads) results inaccurate.
When this new `strict-cpu-reservation` policy option is enabled, the CPU Manager static policy will not allow any workload to use the reserved system CPU cores.
-->
然而,在 Kubernetes v1.32 发布之前,这种隔离仅针对请求整数个 CPU
的 Guaranteed 类型 Pod 实现。在 Pod 准入时kubelet 仅将 CPU
**请求量**与可分配的 CPU 进行比较。在 Kubernetes 中,限制值可以高于请求值;
之前的实现允许 Burstable 和 BestEffort 类型的 Pod 使用 `reservedSystemCPUs` 的容量,
这可能导致主机操作系统服务缺乏足够的 CPU 资源 —— 并且我们已经知道在实际部署中确实发生过这种情况。
现有的行为还导致基础设施和工作负载的基准测试结果不准确。
当启用这个新的 `strict-cpu-reservation` 策略选项后CPU Manager
静态策略将不允许任何工作负载使用预留的系统 CPU 核心。
<!--
## Enabling the feature
To enable this feature, you need to turn on both the `CPUManagerPolicyAlphaOptions` feature gate and the `strict-cpu-reservation` policy option. And you need to remove the `/var/lib/kubelet/cpu_manager_state` file if it exists and restart kubelet.
With the following kubelet configuration:
-->
## 启用此特性
要启用此特性,你需要同时开启 `CPUManagerPolicyAlphaOptions` 特性门控和
`strict-cpu-reservation` 策略选项。并且如果存在 `/var/lib/kubelet/cpu_manager_state`
文件,则需要删除该文件并重启 kubelet。
使用以下 kubelet 配置:
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
...
CPUManagerPolicyOptions: true
CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
strict-cpu-reservation: "true"
reservedSystemCPUs: "0,32,1,33,16,48"
...
```
<!--
When `strict-cpu-reservation` is not set or set to false:
-->
当未设置 `strict-cpu-reservation` 或将其设置为 false 时:
```console
# cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"0-63","checksum":1058907510}
```
<!--
When `strict-cpu-reservation` is set to true:
-->
`strict-cpu-reservation` 设置为 true 时:
```console
# cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"2-15,17-31,34-47,49-63","checksum":4141502832}
```
<!--
## Monitoring the feature
You can monitor the feature impact by checking the following CPU Manager counters:
- `cpu_manager_shared_pool_size_millicores`: report shared pool size, in millicores (e.g. 13500m)
- `cpu_manager_exclusive_cpu_allocation_count`: report exclusively allocated cores, counting full cores (e.g. 16)
-->
## 监控此特性
你可以通过检查以下 CPU Manager 计数器来监控该特性的影响:
- `cpu_manager_shared_pool_size_millicores`:报告共享池大小,以毫核为单位(例如 13500m
- `cpu_manager_exclusive_cpu_allocation_count`:报告独占分配的核心数,按完整核心计数(例如 16
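例如,可以通过 kubelet 的 metrics 端点查看这些计数器。下面的命令仅作示意,
需要将 `<节点名>` 替换为实际的节点名称:

```shell
# 通过 API 服务器代理读取节点上 kubelet 暴露的指标
kubectl get --raw "/api/v1/nodes/<节点名>/proxy/metrics" | grep cpu_manager_
```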
<!--
Your best-effort workloads may starve if the `cpu_manager_shared_pool_size_millicores` count is zero for prolonged time.
We believe any pod that is required for operational purpose like a log forwarder should not run as best-effort, but you can review and adjust the amount of CPU cores reserved as needed.
-->
如果 `cpu_manager_shared_pool_size_millicores` 计数在长时间内为零,
你的 BestEffort 类型工作负载可能会因资源匮乏而受到影响。
我们建议,任何用于操作目的的 Pod如日志转发器都不应以 BestEffort 方式运行,
但你可以根据需要审查并调整预留的 CPU 核心数量。
<!--
## Conclusion
Strict CPU reservation is critical for Telco/NFV use cases. It is also a prerequisite for enabling the all-in-one type of deployments where workloads are placed on nodes serving combined control+worker+storage roles.
We want you to start using the feature and looking forward to your feedback.
-->
## 总结
严格的 CPU 预留对于 Telco/NFV 使用场景至关重要。
它也是启用一体化部署类型(其中工作负载被放置在同时担任控制面节点、工作节点和存储角色的节点上)的前提条件。
我们希望你开始使用该特性,并期待你的反馈。
<!--
## Further reading
Please check out the [Control CPU Management Policies on the Node](/docs/tasks/administer-cluster/cpu-management-policies/)
task page to learn more about the CPU Manager, and how it fits in relation to the other node-level resource managers.
-->
## 进一步阅读
请查看[节点上的控制 CPU 管理策略](/zh-cn/docs/tasks/administer-cluster/cpu-management-policies/)任务页面,
以了解更多关于 CPU Manager 的信息,以及它如何与其他节点级资源管理器相关联。
<!--
## Getting involved
This feature is driven by the [SIG Node](https://github.com/Kubernetes/community/blob/master/sig-node/README.md). If you are interested in helping develop this feature, sharing feedback, or participating in any other ongoing SIG Node projects, please attend the SIG Node meeting for more details.
-->
## 参与其中
此特性由 [SIG Node](https://github.com/kubernetes/community/blob/master/sig-node/README.md)
推动。如果你有兴趣帮助开发此特性、分享反馈或参与任何其他正在进行的 SIG Node 项目,
请参加 SIG Node 会议以获取更多详情。

File diff suppressed because one or more lines are too long

Binary image changed (before: 24 KiB, after: 30 KiB)

View File

@ -58,7 +58,7 @@ kubelet.
>}}
-->
{{< figure
src="/images/docs/components-of-kubernetes.svg"
src="/zh-cn/docs/images/components-of-kubernetes.svg"
alt="Kubernetes 组件"
caption="Kubernetes 组件"
>}}
@ -126,8 +126,8 @@ will schedule properly.
如上所述,在引导过程中,云控制器管理器可能无法被调度,
因此集群将无法正确初始化。以下几个具体示例说明此问题的可能表现形式及其根本原因。
这些示例假设你使用 Kubernetes 资源(例如 Deployment、DaemonSet 或类似资源)来控制
云控制器管理器的生命周期。由于这些方法依赖于 Kubernetes 来调度云控制器管理器,
这些示例假设你使用 Kubernetes 资源(例如 Deployment、DaemonSet
或类似资源)来控制云控制器管理器的生命周期。由于这些方法依赖于 Kubernetes 来调度云控制器管理器,
因此必须确保其能够正确调度。
<!--

View File

@ -0,0 +1,323 @@
---
layout: blog
title: "聚焦 SIG Apps"
slug: sig-apps-spotlight-2025
canonicalUrl: https://www.kubernetes.dev/blog/2025/03/12/sig-apps-spotlight-2025
date: 2025-03-12
author: "Sandipan Panda (DevZero)"
translator: >
[Xin Li](https://github.com/my-git9) (DaoCloud)
---
<!--
layout: blog
title: "Spotlight on SIG Apps"
slug: sig-apps-spotlight-2025
canonicalUrl: https://www.kubernetes.dev/blog/2025/03/12/sig-apps-spotlight-2025
date: 2025-03-12
author: "Sandipan Panda (DevZero)"
-->
<!--
In our ongoing SIG Spotlight series, we dive into the heart of the Kubernetes project by talking to
the leaders of its various Special Interest Groups (SIGs). This time, we focus on
**[SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps#apps-special-interest-group)**,
the group responsible for everything related to developing, deploying, and operating applications on
Kubernetes. [Sandipan Panda](https://www.linkedin.com/in/sandipanpanda)
([DevZero](https://www.devzero.io/)) had the opportunity to interview [Maciej
Szulik](https://github.com/soltysh) ([Defense Unicorns](https://defenseunicorns.com/)) and [Janet
Kuo](https://github.com/janetkuo) ([Google](https://about.google/)), the chairs and tech leads of
SIG Apps. They shared their experiences, challenges, and visions for the future of application
management within the Kubernetes ecosystem.
-->
在我们正在进行的 SIG 聚焦系列中,我们通过与 Kubernetes 项目各个特别兴趣小组SIG的领导者对话
深入探讨 Kubernetes 项目的核心。这一次,我们聚焦于
**[SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps#apps-special-interest-group)**
这个小组负责 Kubernetes 上与应用程序开发、部署和操作相关的所有内容。
[Sandipan Panda](https://www.linkedin.com/in/sandipanpanda)([DevZero](https://www.devzero.io/))
有机会采访了 SIG Apps 的主席和技术负责人
[Maciej Szulik](https://github.com/soltysh)[Defense Unicorns](https://defenseunicorns.com/)
以及 [Janet Kuo](https://github.com/janetkuo)[Google](https://about.google/))。
他们分享了在 Kubernetes 生态系统中关于应用管理的经验、挑战以及未来愿景。
<!--
## Introductions
**Sandipan: Hello, could you start by telling us a bit about yourself, your role, and your journey
within the Kubernetes community that led to your current roles in SIG Apps?**
**Maciej**: Hey, my name is Maciej, and Im one of the leads for SIG Apps. Aside from this role, you
can also find me helping
[SIG CLI](https://github.com/kubernetes/community/tree/master/sig-cli#readme) and also being one of
the Steering Committee members. Ive been contributing to Kubernetes since late 2014 in various
areas, including controllers, apiserver, and kubectl.
-->
## 自我介绍
**Sandipan**:你好,能否先简单介绍一下你自己、你的角色,以及你在
Kubernetes 社区中的经历,这些经历是如何引导你担任 SIG Apps 的当前角色的?
**Maciej**:嗨,我叫 Maciej是 SIG Apps 的负责人之一。除了这个角色,
你还可以看到我在协助 [SIG CLI](https://github.com/kubernetes/community/tree/master/sig-cli#readme)
的工作,同时我也是指导委员会的成员之一。自 2014 年底以来,我一直为
Kubernetes 做出贡献涉及的领域包括控制器、API 服务器以及 kubectl。
<!--
**Janet**: Certainly! I'm Janet, a Staff Software Engineer at Google, and I've been deeply involved
with the Kubernetes project since its early days, even before the 1.0 launch in 2015. It's been an
amazing journey!
My current role within the Kubernetes community is one of the chairs and tech leads of SIG Apps. My
journey with SIG Apps started organically. I started with building the Deployment API and adding
rolling update functionalities. I naturally gravitated towards SIG Apps and became increasingly
involved. Over time, I took on more responsibilities, culminating in my current leadership roles.
-->
**Janet**:当然可以!我是 Janet在 Google 担任资深软件工程师,
并且从 Kubernetes 项目早期(甚至在 2015 年 1.0 版本发布之前)就深度参与其中。
这是一段非常精彩的旅程!
我在 Kubernetes 社区中的当前角色是 SIG Apps 的主席之一和技术负责人之一。
我与 SIG Apps 的结缘始于自然而然的过程。最初,我从构建 Deployment API
并添加滚动更新功能开始,逐渐对 SIG Apps 产生了浓厚的兴趣,并且参与度越来越高。
随着时间推移,我承担了更多的责任,最终走到了目前的领导岗位。
<!--
## About SIG Apps
*All following answers were jointly provided by Maciej and Janet.*
**Sandipan: For those unfamiliar, could you provide an overview of SIG Apps' mission and objectives?
What key problems does it aim to solve within the Kubernetes ecosystem?**
-->
## 关于 SIG Apps
**以下所有回答均由 Maciej 和 Janet 共同提供。**
**Sandipan**:对于那些不熟悉的人,能否简要介绍一下 SIG Apps 的使命和目标?
它在 Kubernetes 生态系统中旨在解决哪些关键问题?
<!--
As described in our
[charter](https://github.com/kubernetes/community/blob/master/sig-apps/charter.md#scope), we cover a
broad area related to developing, deploying, and operating applications on Kubernetes. That, in
short, means were open to each and everyone showing up at our bi-weekly meetings and discussing the
ups and downs of writing and deploying various applications on Kubernetes.
**Sandipan: What are some of the most significant projects or initiatives currently being undertaken
by SIG Apps?**
-->
正如我们在[章程](https://github.com/kubernetes/community/blob/master/sig-apps/charter.md#scope)中所描述的那样,
我们涵盖了与在 Kubernetes 上开发、部署和操作应用程序相关的广泛领域。
简而言之,这意味着我们欢迎每个人参加我们的双周会议,讨论在 Kubernetes
上编写和部署各种应用程序的经验和挑战。
**Sandipan**SIG Apps 目前正在进行的一些最重要项目或倡议有哪些?
<!--
At this point in time, the main factors driving the development of our controllers are the
challenges coming from running various AI-related workloads. Its worth giving credit here to two
working groups weve sponsored over the past years:
-->
在当前阶段,推动我们控制器开发的主要因素是运行各种 AI 相关工作负载所带来的挑战。
在此值得一提的是,过去几年我们支持的两个工作组:
<!--
1. [The Batch Working Group](https://github.com/kubernetes/community/tree/master/wg-batch), which is
looking at running HPC, AI/ML, and data analytics jobs on top of Kubernetes.
2. [The Serving Working Group](https://github.com/kubernetes/community/tree/master/wg-serving), which
is focusing on hardware-accelerated AI/ML inference.
-->
1. [Batch 工作组](https://github.com/kubernetes/community/tree/master/wg-batch)
该工作组致力于在 Kubernetes 上运行 HPC、AI/ML 和数据分析作业。
2. [Serving 工作组](https://github.com/kubernetes/community/tree/master/wg-serving)
该工作组专注于硬件加速的 AI/ML 推理。
<!---
## Best practices and challenges
**Sandipan: SIG Apps plays a crucial role in developing application management best practices for
Kubernetes. Can you share some of these best practices and how they help improve application
lifecycle management?**
-->
## 最佳实践与挑战
**Sandipan**SIG Apps 在为 Kubernetes 开发应用程序管理最佳实践方面发挥着关键作用。
你能分享一些这些最佳实践吗?以及它们如何帮助改进应用程序生命周期管理?
<!--
1. Implementing [health checks and readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
ensures that your applications are healthy and ready to serve traffic, leading to improved
reliability and uptime. The above, combined with comprehensive logging, monitoring, and tracing
solutions, will provide insights into your application's behavior, enabling you to identify and
resolve issues quickly.
-->
1. 实施[健康检查和就绪探针](/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
确保你的应用程序处于健康状态并准备好处理流量,从而提高可靠性和正常运行时间。
   结合全面的日志记录、监控和跟踪解决方案,上述措施将为你提供应用程序行为的洞察,
   使你能够快速识别并解决问题(列表后附有一个最小的探针配置示例)。
<!--
2. [Auto-scale your application](/docs/concepts/workloads/autoscaling/) based
on resource utilization or custom metrics, optimizing resource usage and ensuring your
application can handle varying loads.
-->
2. 根据资源利用率或自定义指标[自动扩缩你的应用](/zh-cn/docs/concepts/workloads/autoscaling/)
优化资源使用并确保您的应用程序能够处理不同的负载。
<!--
3. Use Deployment for stateless applications, StatefulSet for stateful applications, Job
and CronJob for batch workloads, and DaemonSet for running a daemon on each node. Use
Operators and CRDs to extend the Kubernetes API to automate the deployment, management, and
lifecycle of complex applications, making them easier to operate and reducing manual
intervention.
-->
3. 对于无状态应用程序使用 Deployment对于有状态应用程序使用 StatefulSet
对于批处理工作负载使用 Job 和 CronJob在每个节点上运行守护进程时使用
DaemonSet。使用 Operator 和 CRD 扩展 Kubernetes API 以自动化复杂应用程序的部署、
管理和生命周期,使其更易于操作并减少手动干预。
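作为上面第 1 条的一个最小示例(Pod 名称和探针路径仅作示意,实际应用通常会暴露专门的健康检查端点):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx
    # 存活探针:探测失败时重启容器
    livenessProbe:
      httpGet:
        path: /
        port: 80
    # 就绪探针:探测失败时暂停向该 Pod 发送流量
    readinessProbe:
      httpGet:
        path: /
        port: 80
```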
<!--
**Sandipan: What are some of the common challenges SIG Apps faces, and how do you address them?**
The biggest challenge were facing all the time is the need to reject a lot of features, ideas, and
improvements. This requires a lot of discipline and patience to be able to explain the reasons
behind those decisions.
-->
**Sandipan**SIG Apps 面临的一些常见挑战是什么?你们是如何解决这些问题的?
我们一直面临的最大挑战是需要拒绝许多功能、想法和改进。这需要大量的纪律性和耐心,
以便能够解释做出这些决定背后的原因。
<!--
**Sandipan: How has the evolution of Kubernetes influenced the work of SIG Apps? Are there any
recent changes or upcoming features in Kubernetes that you find particularly relevant or beneficial
for SIG Apps?**
The main benefit for both us and the whole community around SIG Apps is the ability to extend
kubernetes with [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
and the fact that users can build their own custom controllers leveraging the built-in ones to
achieve whatever sophisticated use cases they might have and we, as the core maintainers, havent
considered or werent able to efficiently resolve inside Kubernetes.
-->
**Sandipan**Kubernetes 的演进如何影响了 SIG Apps 的工作?
Kubernetes 最近是否有任何变化或即将推出的功能,你认为对
SIG Apps 特别相关或有益?
对我们以及围绕 SIG Apps 的整个社区而言,
最大的好处是能够通过[自定义资源定义Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)扩展
Kubernetes。用户可以利用内置控制器构建自己的自定义控制器
以实现他们可能面对的各种复杂用例,而我们作为核心维护者,
可能没有考虑过这些用例,或者无法在 Kubernetes 内部高效解决。
<!--
## Contributing to SIG Apps
**Sandipan: What opportunities are available for new contributors who want to get involved with SIG
Apps, and what advice would you give them?**
-->
## 贡献于 SIG Apps
**Sandipan**:对于想要参与 SIG Apps 的新贡献者,有哪些机会?
你会给他们什么建议?
<!--
We get the question, "What good first issue might you recommend we start with?" a lot :-) But
unfortunately, theres no easy answer to it. We always tell everyone that the best option to start
contributing to core controllers is to find one you are willing to spend some time with. Read
through the code, then try running unit tests and integration tests focusing on that
controller. Once you grasp the general idea, try breaking it and the tests again to verify your
breakage. Once you start feeling confident you understand that particular controller, you may want
to search through open issues affecting that controller and either provide suggestions, explaining
the problem users have, or maybe attempt your first fix.
-->
我们经常被问道:“你们建议我们从哪个好的初始问题开始?” :-)
但遗憾的是,这个问题没有简单的答案。我们总是告诉大家,
为核心控制器做贡献的最佳方式是找到一个你愿意花时间研究的控制器。
阅读代码,然后尝试运行针对该控制器的单元测试和集成测试。一旦你掌握了大致的概念,
试着破坏它并再次运行测试以验证你的改动。当你开始有信心理解了这个特定的控制器后,
你可以搜索影响该控制器的待处理问题,提供一些建议,解释用户遇到的问题,
或者尝试提交你的第一个修复。
<!--
Like we said, there are no shortcuts on that road; you need to spend the time with the codebase to
understand all the edge cases weve slowly built up to get to the point where we are. Once youre
successful with one controller, youll need to repeat that same process with others all over again.
**Sandipan: How does SIG Apps gather feedback from the community, and how is this feedback
integrated into your work?**
-->
正如我们所说,在这条道路上没有捷径可走;你需要花时间研究代码库,
以理解我们逐步积累的所有边缘情况,从而达到我们现在的位置。
一旦你在一个控制器上取得了成功,你就需要在其他控制器上重复同样的过程。
**Sandipan**SIG Apps 如何从社区收集反馈,以及这些反馈是如何整合到你们的工作中的?
<!--
We always encourage everyone to show up and present their problems and solutions during our
bi-weekly [meetings](https://github.com/kubernetes/community/tree/master/sig-apps#meetings). As long
as youre solving an interesting problem on top of Kubernetes and you can provide valuable feedback
about any of the core controllers, were always happy to hear from everyone.
-->
我们总是鼓励每个人参加我们的双周[会议](https://github.com/kubernetes/community/tree/master/sig-apps#meetings)
并在会上提出他们的问题和解决方案。只要你是在 Kubernetes 上解决一个有趣的问题,
并且能够对任何核心控制器提供有价值的反馈,我们都非常乐意听取每个人的意见。
<!--
## Looking ahead
**Sandipan: Looking ahead, what are the key focus areas or upcoming trends in application management
within Kubernetes that SIG Apps is excited about? How is the SIG adapting to these trends?**
Definitely the current AI hype is the major driving factor; as mentioned above, we have two working
groups, each covering a different aspect of it.
-->
## 展望未来
**Sandipan**展望未来Kubernetes 中应用程序管理的关键关注领域或即将到来的趋势有哪些是
SIG Apps 感到兴奋的SIG 是如何适应这些趋势的?
当前的 AI 热潮无疑是主要的驱动因素;如上所述,我们有两个工作组,
每个工作组都涵盖了它的一个不同方面。
<!--
**Sandipan: What are some of your favorite things about this SIG?**
Without a doubt, the people that participate in our meetings and on
[Slack](https://kubernetes.slack.com/messages/sig-apps), who tirelessly help triage issues, pull
requests and invest a lot of their time (very frequently their private time) into making kubernetes
great!
-->
**Sandipan**:关于这个 SIG你们最喜欢的事情有哪些
毫无疑问,参与我们会议和
[Slack](https://kubernetes.slack.com/messages/sig-apps) 频道的人们是最让我们感到欣慰的。
他们不知疲倦地帮助处理问题、拉取请求,并投入大量的时间(很多时候是他们的私人时间)来让
Kubernetes 变得更好!
---
<!--
SIG Apps is an essential part of the Kubernetes community, helping to shape how applications are
deployed and managed at scale. From its work on improving Kubernetes' workload APIs to driving
innovation in AI/ML application management, SIG Apps is continually adapting to meet the needs of
modern application developers and operators. Whether youre a new contributor or an experienced
developer, theres always an opportunity to get involved and make an impact.
-->
SIG Apps 是 Kubernetes 社区的重要组成部分,
帮助塑造了应用程序如何在大规模下部署和管理的方式。从改进 Kubernetes
的工作负载 API 到推动 AI/ML 应用程序管理的创新SIG Apps
不断适应以满足现代应用程序开发者和操作人员的需求。无论你是新贡献者还是有经验的开发者,
都有机会参与其中并产生影响。
<!--
If youre interested in learning more or contributing to SIG Apps, be sure to check out their [SIG
README](https://github.com/kubernetes/community/tree/master/sig-apps) and join their bi-weekly [meetings](https://github.com/kubernetes/community/tree/master/sig-apps#meetings).
- [SIG Apps Mailing List](https://groups.google.com/a/kubernetes.io/g/sig-apps)
- [SIG Apps on Slack](https://kubernetes.slack.com/messages/sig-apps)
-->
如果你有兴趣了解更多关于 SIG Apps 的信息或为其做出贡献,务必查看他们的
[SIG README](https://github.com/kubernetes/community/tree/master/sig-apps)
并加入他们的双周[会议](https://github.com/kubernetes/community/tree/master/sig-apps#meetings)。
- [SIG Apps 邮件列表](https://groups.google.com/a/kubernetes.io/g/sig-apps)
- [SIG Apps 在 Slack 上](https://kubernetes.slack.com/messages/sig-apps)


@ -0,0 +1,163 @@
---
layout: blog
title: "ingress-nginx CVE-2025-1974 须知"
date: 2025-03-24T12:00:00-08:00
slug: ingress-nginx-CVE-2025-1974
author: >
Tabitha Sable (Kubernetes 安全响应委员会)
translator: >
[Michael Yao](https://github.com/windsonsea) (DaoCloud)
---
<!--
layout: blog
title: "Ingress-nginx CVE-2025-1974: What You Need to Know"
date: 2025-03-24T12:00:00-08:00
slug: ingress-nginx-CVE-2025-1974
author: >
Tabitha Sable (Kubernetes Security Response Committee)
-->
<!--
Today, the ingress-nginx maintainers have [released patches for a batch of critical vulnerabilities](https://github.com/kubernetes/ingress-nginx/releases) that could make it easy for attackers to take over your Kubernetes cluster. If you are among the over 40% of Kubernetes administrators using [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), you should take action immediately to protect your users and data.
-->
今天ingress-nginx 项目的维护者们[发布了一批关键漏洞的修复补丁](https://github.com/kubernetes/ingress-nginx/releases)
这些漏洞可能让攻击者轻易接管你的 Kubernetes 集群。目前有 40% 以上的 Kubernetes 管理员正在使用
[ingress-nginx](https://github.com/kubernetes/ingress-nginx/)
如果你是其中之一,请立即采取行动,保护你的用户和数据。
<!--
## Background
[Ingress](/docs/concepts/services-networking/ingress/) is the traditional Kubernetes feature for exposing your workload Pods to the world so that they can be useful. In an implementation-agnostic way, Kubernetes users can define how their applications should be made available on the network. Then, an [ingress controller](/docs/concepts/services-networking/ingress-controllers/) uses that definition to set up local or cloud resources as required for the users particular situation and needs.
-->
## 背景 {#background}
[Ingress](/zh-cn/docs/concepts/services-networking/ingress/)
是 Kubernetes 提供的一种传统特性,可以将你的工作负载 Pod 暴露给外部世界,方便外部用户使用。
Kubernetes 用户可以用与实现无关的方式来定义应用如何在网络上可用。
[Ingress 控制器](/zh-cn/docs/concepts/services-networking/ingress-controllers/)会根据定义,
配置所需的本地资源或云端资源,以满足用户的特定场景和需求。
<!--
Many different ingress controllers are available, to suit users of different cloud providers or brands of load balancers. Ingress-nginx is a software-only ingress controller provided by the Kubernetes project. Because of its versatility and ease of use, ingress-nginx is quite popular: it is deployed in over 40% of Kubernetes clusters\!
Ingress-nginx translates the requirements from Ingress objects into configuration for nginx, a powerful open source webserver daemon. Then, nginx uses that configuration to accept and route requests to the various applications running within a Kubernetes cluster. Proper handling of these nginx configuration parameters is crucial, because ingress-nginx needs to allow users significant flexibility while preventing them from accidentally or intentionally tricking nginx into doing things it shouldnt.
-->
为了满足不同云厂商用户或负载均衡器产品的需求,目前有许多不同类型的 Ingress 控制器。
ingress-nginx 是 Kubernetes 项目提供的纯软件的 Ingress 控制器。
ingress-nginx 由于灵活易用,非常受用户欢迎。它已经被部署在超过 40% 的 Kubernetes 集群中!
ingress-nginx 会将 Ingress 对象中的要求转换为 Nginx一个强大的开源 Web 服务器守护进程)的配置。
Nginx 使用这些配置接受请求并将其路由到 Kubernetes 集群中运行的不同应用。
正确处理这些 Nginx 配置参数至关重要,因为 ingress-nginx 既要给予用户足够的灵活性,
又要防止用户无意或有意诱使 Nginx 执行其不应执行的操作。
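
As background for the discussion that follows, this is the kind of Ingress object that ingress-nginx turns into nginx configuration; the hostname, Service name, and annotation below are illustrative placeholders, not part of the advisory.

```shell
# A minimal Ingress that ingress-nginx would translate into an nginx server/location block
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
EOF
```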
<!--
## Vulnerabilities Patched Today
Four of todays ingress-nginx vulnerabilities are improvements to how ingress-nginx handles particular bits of nginx config. Without these fixes, a specially-crafted Ingress object can cause nginx to misbehave in various ways, including revealing the values of [Secrets](/docs/concepts/configuration/secret/) that are accessible to ingress-nginx. By default, ingress-nginx has access to all Secrets cluster-wide, so this can often lead to complete cluster takeover by any user or entity that has permission to create an Ingress.
-->
## 今日修复的漏洞 {#vulnerabilities-patched-today}
今天修复的四个 ingress-nginx 漏洞都是对 ingress-nginx 如何处理特定 Nginx 配置细节的改进。
如果不打这些修复补丁,一个精心构造的 Ingress 资源对象就可以让 Nginx 出现异常行为,
包括泄露 ingress-nginx 可访问的 [Secret](/zh-cn/docs/concepts/configuration/secret/)
的值。默认情况下ingress-nginx 可以访问集群范围内的所有 Secret因此这往往会导致任一有权限创建
Ingress 的用户或实体接管整个集群。
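
If you want to confirm whether that default applies to your own install, a quick impersonation check can help; the namespace and ServiceAccount names below assume a standard ingress-nginx deployment and may differ in yours.

```shell
# Ask the API server whether the ingress-nginx ServiceAccount may read Secrets cluster-wide
kubectl auth can-i get secrets --all-namespaces \
  --as=system:serviceaccount:ingress-nginx:ingress-nginx
```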
<!--
The most serious of todays vulnerabilities, [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009), rated [9.8 CVSS](https://www.first.org/cvss/calculator/3-1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), allows anything on the Pod network to exploit configuration injection vulnerabilities via the Validating Admission Controller feature of ingress-nginx. This makes such vulnerabilities far more dangerous: ordinarily one would need to be able to create an Ingress object in the cluster, which is a fairly privileged action. When combined with todays other vulnerabilities, **CVE-2025-1974 means that anything on the Pod network has a good chance of taking over your Kubernetes cluster, with no credentials or administrative access required**. In many common scenarios, the Pod network is accessible to all workloads in your cloud VPC, or even anyone connected to your corporate network\! This is a very serious situation.
-->
本次最严重的漏洞是 [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009)
CVSS 评分高达 [9.8](https://www.first.org/cvss/calculator/3-1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
它允许 Pod 网络中的任意实体通过 ingress-nginx 的验证性准入控制器特性来利用配置注入漏洞。
这使得这些漏洞的危险性大大增加:在通常情况下,攻击者需要能够在集群中创建 Ingress 对象,
而这本身是一项权限相当高的操作。当与今天修复的其他漏洞结合利用时,
**CVE-2025-1974 意味着 Pod 网络中的任何实体都极有可能接管你的 Kubernetes 集群,而无需任何凭证或管理权限**。
在许多常见场景下,云端 VPC 中的所有工作负载都可以访问 Pod 网络,甚至任何连接到你公司内网的人也能访问!
这是一个非常严重的安全风险。
<!--
Today, we have [released ingress-nginx v1.12.1 and v1.11.5](https://github.com/kubernetes/ingress-nginx/releases), which have fixes for all five of these vulnerabilities.
## Your next steps
First, determine if your clusters are using ingress-nginx. In most cases, you can check this by running `kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx` with cluster administrator permissions.
-->
我们今天已经[发布了 ingress-nginx v1.12.1 和 v1.11.5](https://github.com/kubernetes/ingress-nginx/releases)
这两个版本修复了所有这 5 个漏洞。
## 你需要做什么 {#your-next-steps}
首先,确定你的集群是否在使用 ingress-nginx。大多数情况下你可以使用集群管理员权限运行以下命令进行检查
```shell
kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
```
<!--
**If you are using ingress-nginx, make a plan to remediate these vulnerabilities immediately.**
**The best and easiest remedy is to [upgrade to the new patch release of ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/upgrade/).** All five of todays vulnerabilities are fixed by installing todays patches.
If you cant upgrade right away, you can significantly reduce your risk by turning off the Validating Admission Controller feature of ingress-nginx.
-->
**如果你在使用 ingress-nginx请立即针对这些漏洞制定补救计划。**
**最简单且推荐的补救方案是[立即升级到最新补丁版本](https://kubernetes.github.io/ingress-nginx/deploy/upgrade/)。**
安装今天的补丁,就能修复所有这 5 个漏洞。
如果你暂时无法升级,可以通过关闭 ingress-nginx 的验证性准入控制器特性来显著降低风险,具体操作可参考下面的命令示例。
<!--
* If you have installed ingress-nginx using Helm
* Reinstall, setting the Helm value `controller.admissionWebhooks.enabled=false`
* If you have installed ingress-nginx manually
* delete the ValidatingWebhookconfiguration called `ingress-nginx-admission`
* edit the `ingress-nginx-controller` Deployment or Daemonset, removing `--validating-webhook` from the controller containers argument list
-->
* 如果你使用 Helm 安装了 ingress-nginx
* 重新安装,设置 Helm 参数 `controller.admissionWebhooks.enabled=false`
* 如果你是手动安装的
* 删除名为 `ingress-nginx-admission` 的 ValidatingWebhookConfiguration
* 编辑 `ingress-nginx-controller` Deployment 或 DaemonSet从控制器容器的参数列表中移除 `--validating-webhook`
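
A hedged sketch of those two mitigation paths; the release name, namespace, and chart reference below are the common defaults and may need adjusting for your install.

```shell
# Helm-managed installs: disable the validating webhook
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --reuse-values \
  --set controller.admissionWebhooks.enabled=false

# Manifest-based installs: remove the webhook configuration...
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
# ...then edit the controller workload and drop --validating-webhook from the container arguments
kubectl -n ingress-nginx edit deployment ingress-nginx-controller
```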
<!--
If you turn off the Validating Admission Controller feature as a mitigation for CVE-2025-1974, remember to turn it back on after you upgrade. This feature provides important quality of life improvements for your users, warning them about incorrect Ingress configurations before they can take effect.
-->
如果你为了缓解 CVE-2025-1974 造成的风险而关闭了验证性准入控制器特性,
请记得在升级完成后重新开启它。这个特性能显著改善用户的使用体验,
在错误的 Ingress 配置生效之前就向用户发出警告。
<!--
## Conclusion, thanks, and further reading
The ingress-nginx vulnerabilities announced today, including CVE-2025-1974, present a serious risk to many Kubernetes users and their data. If you use ingress-nginx, you should take action immediately to keep yourself safe.
Thanks go out to Nir Ohfeld, Sagi Tzadik, Ronen Shustin, and Hillai Ben-Sasson from Wiz for responsibly disclosing these vulnerabilities, and for working with the Kubernetes SRC members and ingress-nginx maintainers (Marco Ebert and James Strong) to ensure we fixed them effectively.
-->
## 总结、致谢与更多参考 {#conclusion-thanks-and-further-reading}
今天公布的包括 CVE-2025-1974 在内的 ingress-nginx 漏洞对许多 Kubernetes 用户及其数据构成了严重风险。
如果你正在使用 ingress-nginx请立即采取行动确保自身安全。
我们要感谢来自 Wiz 的 Nir Ohfeld、Sagi Tzadik、Ronen Shustin 和 Hillai Ben-Sasson
他们负责任地披露了这些漏洞,并与 Kubernetes 安全响应委员会成员以及 ingress-nginx
维护者Marco Ebert 和 James Strong协同合作确保这些漏洞被有效修复。
<!--
For further information about the maintenance and future of ingress-nginx, please see this [GitHub issue](https://github.com/kubernetes/ingress-nginx/issues/13002) and/or attend [James and Marcos KubeCon/CloudNativeCon EU 2025 presentation](https://kccnceu2025.sched.com/event/1tcyc/).
For further information about the specific vulnerabilities discussed in this article, please see the appropriate GitHub issue: [CVE-2025-24513](https://github.com/kubernetes/kubernetes/issues/131005), [CVE-2025-24514](https://github.com/kubernetes/kubernetes/issues/131006), [CVE-2025-1097](https://github.com/kubernetes/kubernetes/issues/131007), [CVE-2025-1098](https://github.com/kubernetes/kubernetes/issues/131008), or [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009)
-->
有关 ingress-nginx 的维护和未来的更多信息,
请参阅[这个 GitHub Issue](https://github.com/kubernetes/ingress-nginx/issues/13002)
或参与 [James 和 Marco 在 KubeCon/CloudNativeCon EU 2025 的演讲](https://kccnceu2025.sched.com/event/1tcyc/)。
关于本文中提到的具体漏洞的信息,请参阅以下 GitHub Issue
- [CVE-2025-24513](https://github.com/kubernetes/kubernetes/issues/131005)
- [CVE-2025-24514](https://github.com/kubernetes/kubernetes/issues/131006)
- [CVE-2025-1097](https://github.com/kubernetes/kubernetes/issues/131007)
- [CVE-2025-1098](https://github.com/kubernetes/kubernetes/issues/131008)
- [CVE-2025-1974](https://github.com/kubernetes/kubernetes/issues/131009)


@ -0,0 +1,329 @@
---
layout: blog
title: 'Kubernetes v1.33 预览'
date: 2025-03-26T10:30:00-08:00
slug: kubernetes-v1-33-upcoming-changes
author: >
Agustina Barbetta,
Aakanksha Bhende,
Udi Hofesh,
Ryota Sawada,
Sneha Yadav
translator: >
[Xin Li](https://github.com/my-git9) (DaoCloud)
---
<!--
layout: blog
title: 'Kubernetes v1.33 sneak peek'
date: 2025-03-26T10:30:00-08:00
slug: kubernetes-v1-33-upcoming-changes
author: >
Agustina Barbetta,
Aakanksha Bhende,
Udi Hofesh,
Ryota Sawada,
Sneha Yadav
-->
<!--
As the release of Kubernetes v1.33 approaches, the Kubernetes project continues to evolve. Features may be deprecated, removed, or replaced to improve the overall health of the project. This blog post outlines some planned changes for the v1.33 release, which the release team believes you should be aware of to ensure the continued smooth operation of your Kubernetes environment and to keep you up-to-date with the latest developments. The information below is based on the current status of the v1.33 release and is subject to change before the final release date.
-->
随着 Kubernetes v1.33 版本的发布临近Kubernetes 项目仍在不断发展。
为了提升项目的整体健康状况,某些特性可能会被弃用、移除或替换。
这篇博客文章概述了 v1.33 版本的一些计划变更,发布团队认为你有必要了解这些内容,
以确保 Kubernetes 环境的持续平稳运行,并让你掌握最新的发展动态。
以下信息基于 v1.33 版本的当前状态,在最终发布日期之前可能会有所变化。
<!--
## The Kubernetes API removal and deprecation process
The Kubernetes project has a well-documented [deprecation policy](/docs/reference/using-api/deprecation-policy/) for features. This policy states that stable APIs may only be deprecated when a newer, stable version of that same API is available and that APIs have a minimum lifetime for each stability level. A deprecated API has been marked for removal in a future Kubernetes release. It will continue to function until removal (at least one year from the deprecation), but usage will result in a warning being displayed. Removed APIs are no longer available in the current version, at which point you must migrate to using the replacement.
-->
## Kubernetes API 的移除与弃用流程
Kubernetes 项目针对特性的弃用有一套完善的[弃用政策](/zh-cn/docs/reference/using-api/deprecation-policy/)。
该政策规定,只有在有更新的、稳定的同名 API 可用时,才能弃用稳定的 API
并且每个稳定性级别的 API 都有最低的生命周期要求。被弃用的 API 已被标记为将在未来的
Kubernetes 版本中移除。在移除之前(自弃用起至少一年内),它仍然可以继续使用,
但使用时会显示警告信息。已被移除的 API 在当前版本中不再可用,届时你必须迁移到使用替代方案。
<!--
* Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
* Beta or pre-release API versions must be supported for 3 releases after the deprecation.
* Alpha or experimental API versions may be removed in any release without prior deprecation notice; this process can become a withdrawal in cases where a different implementation for the same feature is already in place.
-->
* 一般可用GA或稳定 API 版本可以被标记为已弃用,但在 Kubernetes
的一个主要版本内不得移除。
* 测试版或预发布 API 版本在弃用后必须支持至少三个发行版本。
* Alpha 或实验性 API 版本可以在任何版本中被移除,且无需事先发出弃用通知;
如果同一特性已经有了不同的实现,这个过程可能会变为撤回。
<!--
Whether an API is removed as a result of a feature graduating from beta to stable, or because that API simply did not succeed, all removals comply with this deprecation policy. Whenever an API is removed, migration options are communicated in the [deprecation guide](/docs/reference/using-api/deprecation-guide/).
-->
无论是由于某个特性从测试阶段升级为稳定阶段而导致 API 被移除,还是因为该
API 未能成功,所有的移除操作都遵循此弃用政策。每当一个 API 被移除时,
迁移选项都会在[弃用指南](/zh-cn/docs/reference/using-api/deprecation-guide/)中进行说明。
<!--
## Deprecations and removals for Kubernetes v1.33
### Deprecation of the stable Endpoints API
The [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) API has been stable since v1.21, which effectively replaced the original Endpoints API. While the original Endpoints API was simple and straightforward, it also posed some challenges when scaling to large numbers of network endpoints. The EndpointSlices API has introduced new features such as dual-stack networking, making the original Endpoints API ready for deprecation.
-->
## Kubernetes v1.33 的弃用与移除
### 稳定版 Endpoints API 的弃用
[EndpointSlices](/zh-cn/docs/concepts/services-networking/endpoint-slices/) API
自 v1.21 起已稳定,实际上取代了原有的 Endpoints API。虽然原有的 Endpoints API 简单直接,
但在扩展到大量网络端点时也带来了一些挑战。EndpointSlices API 引入了诸如双栈网络等新特性,
使得原有的 Endpoints API 已准备好被弃用。
<!--
This deprecation only impacts those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. There will be a dedicated blog post with more details on the deprecation implications and migration plans in the coming weeks.
You can find more in [KEP-4974: Deprecate v1.Endpoints](https://kep.k8s.io/4974).
-->
此弃用仅影响那些直接在工作负载或脚本中使用 Endpoints API 的用户;
这些用户应迁移到使用 EndpointSlices。未来几周内将发布一篇专门的博客文章
详细介绍弃用的影响和迁移计划。
你可以在 [KEP-4974: Deprecate v1.Endpoints](https://kep.k8s.io/4974)
中找到更多信息。
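
For anyone auditing their own usage ahead of the migration, a quick way to compare the two APIs for one Service (the Service name and namespace are placeholders):

```shell
# Legacy API (being deprecated)
kubectl get endpoints my-service -n default
# Replacement API, selected via the standard service-name label
kubectl get endpointslices -n default -l kubernetes.io/service-name=my-service
```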
<!--
### Removal of kube-proxy version information in node status
Following its deprecation in v1.31, as highlighted in the [release announcement](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004), the `status.nodeInfo.kubeProxyVersion` field will be removed in v1.33. This field was set by kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, the v1.33 release will remove this field entirely.
-->
### 节点状态中 kube-proxy 版本信息的移除
继在 v1.31 中被弃用,并在[发布说明](/blog/2024/07/19/kubernetes-1-31-upcoming-changes/#deprecation-of-status-nodeinfo-kubeproxyversion-field-for-nodes-kep-4004-https-github-com-kubernetes-enhancements-issues-4004)中强调后,
`status.nodeInfo.kubeProxyVersion` 字段将在 v1.33 中被移除。
此字段由 kubelet 设置,但其值并不总是准确的。由于自 v1.31
起该字段默认已被禁用v1.33 发行版将完全移除此字段。
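
If you want to check whether any of your tooling still reads this field before upgrading, a jsonpath query like the one below can help; on clusters at v1.31 or later the values are typically already empty.

```shell
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeProxyVersion}{"\n"}{end}'
```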
<!--
You can find more in [KEP-4004: Deprecate status.nodeInfo.kubeProxyVersion field](https://kep.k8s.io/4004).
### Removal of host network support for Windows pods
-->
你可以在 [KEP-4004: Deprecate status.nodeInfo.kubeProxyVersion field](https://kep.k8s.io/4004)
中找到更多信息。
### 移除对 Windows Pod 的主机网络支持
<!--
Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use the Nodes networking namespace.
The original implementation landed as alpha with v1.26, but as it faced unexpected containerd behaviours,
and alternative solutions were available, the Kubernetes project has decided to withdraw the associated
KEP. We're expecting to see support fully removed in v1.33.
-->
Windows Pod 网络旨在通过允许容器使用节点的网络命名空间来实现与 Linux 的特性对等,
并提供更高的集群密度。最初的实现作为 Alpha 版本在 v1.26 中引入,但由于遇到了未预期的
containerd 行为且存在替代方案Kubernetes 项目决定撤回相关的 KEP。
我们预计在 v1.33 中完全移除对该特性的支持。
<!--
You can find more in [KEP-3503: Host network support for Windows pods](https://kep.k8s.io/3503).
## Featured improvement of Kubernetes v1.33
As authors of this article, we picked one improvement as the most significant change to call out!
-->
你可以在 [KEP-3503: Host network support for Windows pods](https://kep.k8s.io/3503)
中找到更多信息。
## Kubernetes v1.33 的特色改进
作为本文的作者,我们挑选了一项改进作为最重要的变更来特别提及!
<!--
### Support for user namespaces within Linux Pods
One of the oldest open KEPs today is [KEP-127](https://kep.k8s.io/127), Pod security improvement by using Linux [User namespaces](/docs/concepts/workloads/pods/user-namespaces/) for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and now is set to be a part of v1.33, where the feature is available by default.
-->
### Linux Pods 中用户命名空间的支持
当前最古老的开放 KEP 之一是 [KEP-127](https://kep.k8s.io/127)
通过使用 Linux [用户命名空间](/zh-cn/docs/concepts/workloads/pods/user-namespaces/)为
Pod 提供安全性改进。该 KEP 最初在 2016 年末提出,经过多次迭代,在 v1.25 中发布了 Alpha 版本,
在 v1.30 中首次进入 Beta 阶段(在此版本中默认禁用),现在它将成为 v1.33 的一部分,
默认情况下即可使用该特性。
<!--
This support will not impact existing Pods unless you manually specify `pod.spec.hostUsers` to opt in. As highlighted in the [v1.30 sneak peek blog](/blog/2024/03/12/kubernetes-1-30-upcoming-changes/), this is an important milestone for mitigating vulnerabilities.
You can find more in [KEP-127: Support User Namespaces in pods](https://kep.k8s.io/127).
-->
除非你手动指定 `pod.spec.hostUsers` 以选择使用此特性,否则此支持不会影响现有的 Pod。
正如在 [v1.30 预览博客](/blog/2024/03/12/kubernetes-1-30-upcoming-changes/)中强调的那样,
就缓解漏洞的影响而言,这是一个重要里程碑。
你可以在 [KEP-127: Support User Namespaces in pods](https://kep.k8s.io/127)
中找到更多信息。
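
A minimal, hedged sketch of opting a single Pod into a user namespace; the Pod name and image are placeholders, and the node's container runtime must support user namespaces.

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false          # run the Pod in its own user namespace
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
EOF
```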
<!--
## Selected other Kubernetes v1.33 improvements
The following list of enhancements is likely to be included in the upcoming v1.33 release. This is not a commitment and the release content is subject to change.
-->
## 精选的其他 Kubernetes v1.33 改进
以下列出的增强特性很可能会包含在即将发布的 v1.33 版本中。
这并不是一项承诺,发行内容仍有可能发生变化。
<!--
### In-place resource resize for vertical scaling of Pods
When provisioning a Pod, you can use various resources such as Deployment, StatefulSet, etc. Scalability requirements may need horizontal scaling by updating the Pod replica count, or vertical scaling by updating resources allocated to Pods container(s). Before this enhancement, container resources defined in a Pod's `spec` were immutable, and updating any of these details within a Pod template would trigger Pod replacement.
-->
### Pod 垂直扩展的就地资源调整
在制备某个 Pod 时,你可以使用诸如 Deployment、StatefulSet 等多种资源。
为了满足可扩缩性需求,可能需要通过更新 Pod 副本数量进行水平扩缩,或通过更新分配给
Pod 容器的资源进行垂直扩缩。在此增强特性之前Pod 的 `spec`
中定义的容器资源是不可变的,更新 Pod 模板中的这类细节会触发 Pod 的替换。
<!--
But what if you could dynamically update the resource configuration for your existing Pods without restarting them?
The [KEP-1287](https://kep.k8s.io/1287) is precisely to allow such in-place Pod updates. It opens up various possibilities of vertical scale-up for stateful processes without any downtime, seamless scale-down when the traffic is low, and even allocating larger resources during startup that is eventually reduced once the initial setup is complete. This was released as alpha in v1.27, and is expected to land as beta in v1.33.
-->
但是如果可以在不重启的情况下动态更新现有 Pod 的资源配置,那会怎样呢?
[KEP-1287](https://kep.k8s.io/1287) 正是为了实现这种就地 Pod 更新而设计的。
它为有状态进程开辟了多种可能性:在不停机的情况下进行垂直扩容、
在流量较低时无缝缩容,甚至在启动时分配更多资源,待初始设置完成后再减少资源分配。
该特性在 v1.27 中以 Alpha 版本发布,并预计在 v1.33 中进入 beta 阶段。
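
As a hedged sketch of what such an update could look like once the feature is available to you — the pod and container names and the resource values are placeholders, and both the cluster and your kubectl must support the resize subresource:

```shell
kubectl patch pod web-0 --subresource resize --type merge -p \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"},"limits":{"cpu":"800m"}}}]}}'

# Inspect the resources actually allocated to the running container, without a restart
kubectl get pod web-0 -o jsonpath='{.status.containerStatuses[0].resources}'
```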
<!--
You can find more in [KEP-1287: In-Place Update of Pod Resources](https://kep.k8s.io/1287).
### DRAs ResourceClaim Device Status graduates to beta
-->
你可以在 [KEP-1287Pod 资源的就地更新](https://kep.k8s.io/1287)中找到更多信息。
### DRA 的 ResourceClaim 设备状态升级为 Beta
<!--
The `devices` field in ResourceClaim `status`, originally introduced in the v1.32 release, is likely to graduate to beta in v1.33. This field allows drivers to report device status data, improving both observability and troubleshooting capabilities.
-->
在 v1.32 版本中首次引入的 ResourceClaim `status` 中的 `devices` 字段,
预计将在 v1.33 中升级为 beta 阶段。此字段允许驱动程序报告设备状态数据,
从而提升可观测性和故障排查能力。
<!--
For example, reporting the interface name, MAC address, and IP addresses of network interfaces in the status of a ResourceClaim can significantly help in configuring and managing network services, as well as in debugging network related issues. You can read more about ResourceClaim Device Status in [Dynamic Resource Allocation: ResourceClaim Device Status](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaim-device-status) document.
-->
例如,在 ResourceClaim 的状态中报告网络接口的接口名称、MAC 地址和 IP 地址,
可以显著帮助配置和管理网络服务,并且在调试网络相关问题时也非常有用。
你可以在[动态资源分配ResourceClaim 设备状态](/zh-cn/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#resourceclaim-device-status)
文档中阅读关于 ResourceClaim 设备状态的更多信息。
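
If the feature is enabled in your cluster, the reported data can be read straight from the claim's status; the claim name and namespace below are placeholders.

```shell
kubectl get resourceclaim my-claim -n default -o jsonpath='{.status.devices}'
```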
<!--
Also, you can find more about the planned enhancement in [KEP-4817: DRA: Resource Claim Status with possible standardized network interface data](https://kep.k8s.io/4817).
-->
此外,你可以在
[KEP-4817: DRA: Resource Claim Status with possible standardized network interface data](https://kep.k8s.io/4817)
中找到更多关于此计划增强特性的信息。
<!--
### Ordered namespace deletion
This KEP introduces a more structured deletion process for Kubernetes namespaces to ensure secure and deterministic resource removal. The current semi-random deletion order can create security gaps or unintended behaviour, such as Pods persisting after their associated NetworkPolicies are deleted. By enforcing a structured deletion sequence that respects logical and security dependencies, this approach ensures Pods are removed before other resources. The design improves Kubernetess security and reliability by mitigating risks associated with non-deterministic deletions.
-->
### 有序的命名空间删除
此 KEP 为 Kubernetes 命名空间引入了一种更为结构化的删除流程,
以确保更为安全且更为确定的资源移除。当前半随机的删除顺序可能会导致安全漏洞或意外行为,
例如在相关的 NetworkPolicy 被删除后Pod 仍然存在。
通过强制执行尊重逻辑和安全依赖关系的结构化删除顺序,此方法确保在删除其他资源之前先删除 Pod。
这种设计通过减少与非确定性删除相关的风险,提升了 Kubernetes 的安全性和可靠性。
<!--
You can find more in [KEP-5080: Ordered namespace deletion](https://kep.k8s.io/5080).
-->
你可以在 [KEP-5080: Ordered namespace deletion](https://kep.k8s.io/5080)
中找到更多信息。
<!--
### Enhancements for indexed job management
These two KEPs are both set to graduate to GA to provide better reliability for job handling, specifically for indexed jobs. [KEP-3850](https://kep.k8s.io/3850) provides per-index backoff limits for indexed jobs, which allows each index to be fully independent of other indexes. Also, [KEP-3998](https://kep.k8s.io/3998) extends Job API to define conditions for making an indexed job as successfully completed when not all indexes are succeeded.
-->
### 针对带索引作业Indexed Job管理的增强
这两个 KEP 都计划升级为 GA,以提升作业处理的可靠性,特别是针对带索引作业。
[KEP-3850](https://kep.k8s.io/3850) 为带索引作业提供按索引计算的回退限制,
使每个索引可以完全独立于其他索引。此外,[KEP-3998](https://kep.k8s.io/3998)
扩展了 Job API,定义了在并非所有索引都成功的情况下仍可将带索引作业标记为成功完成的条件。
<!--
You can find more in [KEP-3850: Backoff Limit Per Index For Indexed Jobs](https://kep.k8s.io/3850) and [KEP-3998: Job success/completion policy](https://kep.k8s.io/3998).
-->
你可以在 [KEP-3850: Backoff Limit Per Index For Indexed Jobs](https://kep.k8s.io/3850) 和
[KEP-3998: Job success/completion policy](https://kep.k8s.io/3998) 中找到更多信息。
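
A minimal, hedged sketch of an Indexed Job that uses both features; the name, image, and index ranges are placeholders chosen only for illustration.

```shell
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  backoffLimitPerIndex: 1    # KEP-3850: failures are counted per index
  maxFailedIndexes: 2
  successPolicy:             # KEP-3998: declare success once these indexes complete
    rules:
    - succeededIndexes: "0-2"
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo running index $JOB_COMPLETION_INDEX"]
EOF
```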
<!--
## Want to know more?
New features and deprecations are also announced in the Kubernetes release notes. We will formally announce what's new in [Kubernetes v1.33](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md) as part of the CHANGELOG for that release.
-->
## 想了解更多?
新特性和弃用也会在 Kubernetes 发行说明中宣布。我们将在该版本的
CHANGELOG 中正式宣布 [Kubernetes v1.33](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md)
的新内容。
<!--
Kubernetes v1.33 release is planned for **Wednesday, 23rd April, 2025**. Stay tuned for updates!
You can also see the announcements of changes in the release notes for:
-->
Kubernetes v1.33 版本计划于 **2025年4月23日星期三**发布。请持续关注以获取更新!
你也可以在以下版本的发行说明中查看变更公告:
* [Kubernetes v1.32](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.32.md)
* [Kubernetes v1.31](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md)
* [Kubernetes v1.30](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md)
<!--
## Get involved
The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests. Have something youd like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below. Thank you for your continued feedback and support.
-->
## 参与进来
参与 Kubernetes 最简单的方式是加入与你兴趣相符的众多[特别兴趣小组](https://github.com/kubernetes/community/blob/master/sig-list.md)SIG
之一。你有什么想向 Kubernetes 社区广播的内容吗?
通过我们每周的[社区会议](https://github.com/kubernetes/community/tree/master/communication)和以下渠道分享你的声音。
感谢你持续的反馈和支持。
<!--
- Follow us on Bluesky [@kubernetes.io](https://bsky.app/profile/kubernetes.io) for the latest updates
- Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
- Join the community on [Slack](http://slack.k8s.io/)
- Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes) or [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
- Share your Kubernetes [story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
- Read more about whats happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
- Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)
-->
- 在 Bluesky 上关注我们 [@kubernetes.io](https://bsky.app/profile/kubernetes.io) 以获取最新更新
- 在 [Discuss](https://discuss.kubernetes.io/) 上参与社区讨论
- 在 [Slack](http://slack.k8s.io/) 上加入社区
- 在 [Server Fault](https://serverfault.com/questions/tagged/kubernetes) 或
[Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes) 上提问(或回答问题)
- 分享你的 Kubernetes [故事](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
- 在[博客](https://kubernetes.io/zh-cn/blog/)上阅读更多关于 Kubernetes 最新动态的内容
- 了解更多关于 [Kubernetes 发布团队](https://github.com/kubernetes/sig-release/tree/master/release-team)的信息


@ -1,433 +1,260 @@
---
title: 华为案例分析
case_study_styles: true
cid: caseStudies
css: /css/style_huawei.css
---
<!--
new_case_study_styles: true
heading_background: /images/case-studies/huawei/banner1.jpg
heading_title_logo: /images/huawei_logo.png
subheading: >
以用户和供应商身份拥抱云原生
case_study_details:
- Company: 华为
- Location: 中国深圳
- Industry: 电信设备
---
<!--
title: Huawei Case Study
case_study_styles: true
cid: caseStudies
css: /css/style_huawei.css
---
new_case_study_styles: true
heading_background: /images/case-studies/huawei/banner1.jpg
heading_title_logo: /images/huawei_logo.png
subheading: >
Embracing Cloud Native as a User and a Vendor
case_study_details:
- Company: Huawei
- Location: Shenzhen, China
- Industry: Telecommunications Equipment
-->
<div class="banner1">
<h1> 案例分析:<img src="/images/huawei_logo.png" class="header_logo"><br> <div class="subhead">以用户和供应商身份拥抱云原生</div></h1>
<!--
<h1> CASE STUDY:<img src="/images/huawei_logo.png" class="header_logo"><br> <div class="subhead">Embracing Cloud Native as a User and a Vendor</div></h1>
-->
</div>
<div class="details">
公司 &nbsp;<b>华为</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;地点 &nbsp;<b>中国深圳</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;产业 &nbsp;<b>通信设备</b>
<!--
Company &nbsp;<b>Huawei</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Location &nbsp;<b>Shenzhen, China</b>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Industry &nbsp;<b>Telecommunications Equipment</b>
-->
</div>
<hr>
<section class="section1">
<div class="cols">
<div class="col1">
<h2>挑战</h2>
<!--
<h2>Challenge</h2>
-->
华为是世界上最大的电信设备制造商,拥有超过 18 万名员工。
<!--
A multinational company thats the largest telecommunications equipment manufacturer in the world,
Huawei has more than 180,000 employees.
-->
为了支持华为在全球的快速业务发展,<a href="http://www.huawei.com/">华为</a>内部 IT 部门有 8 个数据中心,
这些数据中心在 100K+ VMs 上运行了 800 多个应用程序,服务于这 18 万用户。
<!--
In order to support its fast business development around the globe,
<a href="http://www.huawei.com/">Huawei</a> has eight data centers for its internal I.T. department,
which have been running 800+ applications in 100K+ VMs to serve these 180,000 users.
-->
随着新应用程序的快速增长,基于 VM 的应用程序的管理和部署的成本和效率都成为业务敏捷性的关键挑战。
<!--
With the rapid increase of new applications, the cost and efficiency of management and
deployment of VM-based apps all became critical challenges for business agility.
-->
该公司首席软件架构师、开源社区总监侯培新表示:
“这是一个超大的分布式系统,因此我们发现,以更一致的方式管理所有任务始终是一个挑战。
我们希望进入一种更敏捷、更得体的实践”。
<!--
"Its very much a distributed system so we found that managing all of the tasks
in a more consistent way is always a challenge," says Peixin Hou,
the companys Chief Software Architect and Community Director for Open Source.
"We wanted to move into a more agile and decent practice."
-->
</div>
<div class="col2">
<h2>解决方案</h2>
<!--
<h2>Solution</h2>
-->
在决定使用容器技术后,华为开始将内部 IT 部门的应用程序迁移到<a href="http://kubernetes.io/"> Kubernetes </a>上运行。
到目前为止,大约 30% 的应用程序已经转移为云原生程序。
<!--
After deciding to use container technology, Huawei began moving the internal I.T. departments applications
to run on <a href="http://kubernetes.io/">Kubernetes</a>.
So far, about 30 percent of these applications have been transferred to cloud native.
-->
<br>
<br>
<h2>影响</h2>
<!--
<h2>Impact</h2>
-->
“到 2016 年底,华为的内部 IT 部门使用基于 Kubernetes 的平台即服务PaaS解决方案管理了 4000 多个节点和数万个容器。
全局部署周期从一周缩短到几分钟,应用程序交付效率提高了 10 倍”。
<!--
"By the end of 2016, Huaweis internal I.T. department managed more than 4,000 nodes with tens of thousands containers
using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou.
"The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold."
-->
对于底线,侯培新表示,“我们还看到运营开支大幅削减,在某些情况下可削减 20% 到 30%,我们认为这对我们的业务非常有帮助”。
<!--
For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent,
which we think is very helpful for our business."
-->
鉴于华为在内部取得的成果以及来自外部客户的需求,公司还将这些技术打造成了<a href="http://developer.huawei.com/ict/en/site-paas"> FusionStage™ </a>,
并将其作为一套 PaaS 解决方案提供给客户。
<!--
Given the results Huawei has had internally and the demand it is seeing externally the company has also built the technologies
into <a href="http://developer.huawei.com/ict/en/site-paas">FusionStage™</a>, the PaaS solution it offers its customers.
-->
</div>
<p>
<!--
A multinational company that's the largest telecommunications equipment manufacturer in the world, Huawei has more than 180,000 employees. In order to support its fast business development around the globe, <a href="https://www.huawei.com/">Huawei</a> has eight data centers for its internal I.T. department, which have been running 800+ applications in 100K+ VMs to serve these 180,000 users. With the rapid increase of new applications, the cost and efficiency of management and deployment of VM-based apps all became critical challenges for business agility. "It's very much a distributed system so we found that managing all of the tasks in a more consistent way is always a challenge," says Peixin Hou, the company's Chief Software Architect and Community Director for Open Source. "We wanted to move into a more agile and decent practice."
-->
华为作为一个跨国企业,是世界上最大的电信设备制造商,拥有超过 18 万名员工。
为了支持华为在全球的快速业务发展,<a href="https://www.huawei.com/">华为</a>内部 IT 部门有 8 个数据中心,
这些数据中心在 10 万多台虚拟机上运行了 800 多个应用程序,为内部 18 万用户提供服务。
随着新应用程序的快速增长,基于虚拟机的应用程序管理和部署的成本和效率都成为业务敏捷性的关键挑战。
该公司首席软件架构师、开源社区总监侯培新表示:
“这是一个超大的分布式系统,因此我们发现,以更一致的方式管理所有任务始终是一个挑战。
我们希望进入一种更敏捷、更得体的实践”。
</p>
</div>
<!--
<h2>Solution</h2>
-->
<h2>解决方案</h2>
</section>
<p>
<!--
After deciding to use container technology, Huawei began moving the internal I.T. department's applications to run on <a href="https://kubernetes.io/">Kubernetes</a>. So far, about 30 percent of these applications have been transferred to cloud native.
-->
在决定使用容器技术后,华为开始将内部 IT 部门的应用程序迁移到 <a href="https://kubernetes.io/">Kubernetes</a> 上运行。
到目前为止,大约 30% 的应用程序已经转移为云原生程序。
</p>
<div class="banner2">
<div class="banner2text">
“如果你是一个供应商,为了说服你的客户,你应该自己使用它。
幸运的是,因为华为有很多员工,我们可以利用这种技术来展示我们所能构建的云的规模。”
<!--
"If youre a vendor, in order to convince your customer, you should use it yourself.
Luckily because Huawei has a lot of employees,
we can demonstrate the scale of cloud we can build using this technology."
-->
<!--
<h2>Impact</h2>
-->
<h2>影响</h2>
<br style="height:25px">
<span style="font-size:14px;letter-spacing:2px;text-transform:uppercase;margin-top:5% !important;">
<br>- 侯培新,首席软件架构师、开源社区总监
<!--
<br>- Peixin Hou, chief software architect and community director for open source
-->
</span>
</div>
</div>
<p>
<!--
"By the end of 2016, Huawei's internal I.T. department managed more than 4,000 nodes with tens of thousands containers using a Kubernetes-based Platform as a Service (PaaS) solution," says Hou. "The global deployment cycles decreased from a week to minutes, and the efficiency of application delivery has been improved 10 fold." For the bottom line, he says, "We also see significant operating expense spending cut, in some circumstances 20-30 percent, which we think is very helpful for our business." Given the results Huawei has had internally and the demand it is seeing externally the company has also built the technologies into <a href="https://support.huawei.com/enterprise/en/cloud-computing/fusionstage-pid-21733180">FusionStage™</a>, the PaaS solution it offers its customers.
-->
“到 2016 年底,华为的内部 IT 部门使用基于 Kubernetes 的平台即服务PaaS解决方案管理了 4000 多个节点和数万个容器。
全局部署周期从一周缩短到几分钟,应用程序交付效率提高了 10 倍”。
对于底线,侯培新表示,“我们还看到运营开支大幅削减,在某些情况下可削减 20% 到 30%,我们认为这对我们的业务非常有帮助”。
鉴于华为在内部取得的成果以及来自外部客户的需求,公司还将这些技术打造成了
<a href="https://support.huawei.com/enterprise/zh/cloud-computing/fusionstage-pid-21733180">FusionStage™</a>,
并将其作为一套 PaaS 解决方案提供给客户。
</p>
<section class="section2">
<!--
Peixin Hou, chief software architect and community director for open source
"If you're a vendor, in order to convince your customer, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology."
-->
{{< case-studies/quote author="侯培新,首席软件架构师、开源社区总监" >}}
“如果你是一个供应商,为了说服你的客户,你应该自己使用它。
幸运的是,因为华为有很多员工,我们可以利用这种技术来展示我们所能构建的云的规模。”
{{< /case-studies/quote >}}
<div class="fullcol">
华为的 Kubernetes 之旅始于一位开发者。
<!--
Huaweis Kubernetes journey began with one developer.
-->
<p>
<!--
Huawei's Kubernetes journey began with one developer. Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested in <a href="https://kubernetes.io/">Kubernetes</a>, the technology for managing application containers across clusters of hosts, and started contributing to its open source community. As the technology developed and the community grew, he kept telling his managers about it.
-->
华为的 Kubernetes 之旅始于一位开发者。
两年前,这家网络和电信巨头雇佣的一名工程师对 <a href="https://kubernetes.io/">Kubernetes</a>
这一跨主机集群的管理应用程序容器的技术产生了兴趣,并开始为其开源社区作出贡献。
随着技术和社区的发展,他不断地将这门技术告诉他的经理们。
</p>
两年前,这家网络和电信巨头雇佣的一名工程师对<a href="http://kubernetes.io/"> Kubernetes </a>
这一跨主机集群的管理应用程序容器的技术产生了兴趣,并开始为其开源社区作出贡献。
<!--
Over two years ago, one of the engineers employed by the networking and telecommunications giant became interested
in <a href="http://kubernetes.io/">Kubernetes</a>,
the technology for managing application containers across clusters of hosts,
and started contributing to its open source community.
-->
<p>
<!--
And as fate would have it, at the same time, Huawei was looking for a better orchestration system for its internal enterprise I.T. department, which supports every business flow processing. "We have more than 180,000 employees worldwide, and a complicated internal procedure, so probably every week this department needs to develop some new applications," says Peixin Hou, Huawei's Chief Software Architect and Community Director for Open Source. "Very often our I.T. departments need to launch tens of thousands of containers, with tasks running across thousands of nodes across the world. It's very much a distributed system, so we found that managing all of the tasks in a more consistent way is always a challenge."
-->
与此同时,华为也在为其内部的企业 IT 部门寻找更好的编排系统,该系统应该支持每一个业务的流程处理。
华为首席软件架构师、开源社区总监侯培新表示,
“我们在全球拥有逾 18 万名员工,内部流程复杂,所以这个部门可能每周都需要开发一些新的应用程序。
我们的 IT 部门经常需要启动数万个容器,任务要跨越全球数千个节点。
这是一个超大的分布式的系统,所以我们发现以更一致的方式管理所有的任务总是一个挑战”。
</p>
随着技术和社区的发展,他不断地将这门技术告诉他的经理们。<br><br>
<!--
As the technology developed and the community grew, he kept telling his managers about it.<br><br>
-->
<p>
<!--
In the past, Huawei had used virtual machines to encapsulate applications, but "every time when we start a VM," Hou says, "whether because it's a new service or because it was a service that was shut down because of some abnormal node functioning, it takes a lot of time." Huawei turned to containerization, so the timing was right to try Kubernetes. It took a year to adopt that engineer's suggestion the process "is not overnight," says Hou but once in use, he says, "Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
-->
过去,华为曾使用虚拟机来封装应用程序,但是,“每次我们启动虚拟机时”,侯培新说,
“无论是因为它是一项新服务,还是因为它是一项由于节点功能异常而被关闭的服务,都需要花费大量时间”。
华为转向了容器化,所以是时候尝试 Kubernetes 了。
采纳了这位工程师的建议花费了一年的时间,这个过程“不是一蹴而就的”,侯说,
但一旦投入使用“Kubernetes 基本上解决了我们的大部分问题。
以前,部署时间大约需要一周,现在只需几分钟。
开发人员非常高兴。使用 Kubernetes 的那个部门也十分高兴”。
</p>
与此同时,华为也在为其内部的企业 IT 部门寻找更好的编排系统,该系统应该支持每一个业务的流程处理。
<!--
And as fate would have it, at the same time,
Huawei was looking for a better orchestration system for its internal enterprise I.T. department,
which supports every business flow processing.
-->
<p>
<!--
Hou sees great benefits to the company that come with using this technology: "Kubernetes brings agility, scale-out capability, and DevOps practice to the cloud-based applications," he says. "It provides us with the ability to customize the scheduling architecture, which makes possible the affinity between container tasks that gives greater efficiency. It supports multiple container formats. It has extensive support for various container networking solutions and container storage."
-->
侯培新看到了使用这项技术给公司带来的巨大好处,
“Kubernetes 为基于云的应用程序带来了敏捷性、扩展能力和 DevOps 实践”,他说,
“它为我们提供了自定义调度体系结构的能力,这使得容器任务之间的关联性成为可能,从而提高了效率。
它支持多种容器格式,同时广泛支持各种容器网络解决方案和容器存储方案”。
</p>
华为首席软件架构师、开源社区总监侯培新表示,
“我们在全球拥有逾 18 万名员工,内部流程复杂,所以这个部门可能每周都需要开发一些新的应用程序。
<!--
"We have more than 180,000 employees worldwide, and a complicated internal procedure,
so probably every week this department needs to develop some new applications," says Peixin Hou,
Huaweis Chief Software Architect and Community Director for Open Source.
-->
{{< case-studies/quote image="/images/case-studies/huawei/banner3.jpg" >}}
<!--
"Kubernetes basically solved most of our problems. Before, the time of deployment took about a week, now it only takes minutes. The developers are happy. That department is also quite happy."
-->
“Kubernetes 基本上解决了我们的大部分问题。
以前,部署时间大约需要一周,现在只需几分钟。
开发人员很高兴。使用 Kubernetes 的部门也很高兴。”
{{< /case-studies/quote >}}
我们的 IT 部门经常需要启动数万个容器,任务要跨越全球数千个节点。
这是一个超大的分布式的系统,所以我们发现以更一致的方式管理所有的任务总是一个挑战”。<br><br>
<!--
"Very often our I.T. departments need to launch tens of thousands of containers,
with tasks running across thousands of nodes across the world.
Its very much a distributed system, so we found that managing all of the tasks
in a more consistent way is always a challenge."<br><br>
-->
<p>
<!--
And not least of all, there's an impact on the bottom line. Says Hou: "We also see significant operating expense spending cut in some circumstances 20-30 percent, which is very helpful for our business."
-->
最重要的是,这对底线有影响。侯培新说,
“我们还看到,在某些情况下,运营开支会大幅削减 20% 到 30%,这对我们的业务非常有帮助”。
</p>
过去,华为曾使用虚拟机来封装应用程序,但是,“每次我们启动虚拟机时”,侯培新说,
“无论是因为它是一项新服务,还是因为它是一项由于节点功能异常而被关闭的服务,都需要花费大量时间”。
<!--
In the past, Huawei had used virtual machines to encapsulate applications,
but "every time when we start a VM," Hou says,
"whether because its a new service or because it was a service that was shut down
because of some abnormal node functioning, it takes a lot of time."
-->
<p>
<!--
Pleased with those initial results, and seeing a demand for cloud native technologies from its customers, Huawei doubled down on Kubernetes. In the spring of 2016, the company became not only a user but also a vendor.
-->
华为对这些初步结果感到满意,并看到客户对云原生技术的需求,因此加大了 Kubernetes 的投入。
2016 年春,公司不仅成为用户,而且成为了供应商。
</p>
华为转向了容器化,所以是时候尝试 Kubernetes 了。
采纳了这位工程师的建议花费了一年的时间,这个过程“不是一蹴而就的”,侯说,
<!--
Huawei turned to containerization, so the timing was right to try Kubernetes.
It took a year to adopt that engineers suggestion the process "is not overnight," says Hou
-->
<p>
<!--
"We built the Kubernetes technologies into our solutions," says Hou, referring to Huawei's <a href="https://support.huawei.com/enterprise/en/cloud-computing/fusionstage-pid-21733180">FusionStage™</a> PaaS offering. "Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them. We've started to work with some Chinese banks, and we see a lot of interest from our customers like <a href="https://www.chinamobileltd.com/">China Mobile</a> and <a href="https://www.telekom.com/en">Deutsche Telekom</a>."
-->
“我们构建了 Kubernetes 技术解决方案”,侯培新说,
指的是华为的<a href="https://support.huawei.com/enterprise/zh/cloud-computing/fusionstage-pid-21733180"> FusionStage™ </a> PaaS 输出。
“我们的客户,从非常大的电信运营商到银行,都喜欢云原生的想法。他们喜欢 Kubernetes 的技术。
但是他们需要花费大量的时间来分解他们的应用程序,将它们转换为微服务体系结构。
作为解决方案提供者,我们帮助他们。我们已经开始与一些中国银行合作,
我们看到<a href="https://www.chinamobileltd.com/">中国移动</a><a href="https://www.telekom.com/en">德国电信</a>等客户对我们很感兴趣”。
</p>
但一旦投入使用“Kubernetes 基本上解决了我们的大部分问题。
以前,部署时间大约需要一周,现在只需几分钟。
<!--
but once in use, he says, "Kubernetes basically solved most of our problems.
Before, the time of deployment took about a week, now it only takes minutes.
-->
<p>
<!--
"If you're just a user, you're just a user," adds Hou. "But if you're a vendor, in order to even convince your customers, you should use it yourself. Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology. We provide customer wisdom." While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huawei's solutions. It's a big selling point that most of the public cloud providers now support Kubernetes. "This makes the cross-cloud transition much easier than with other solutions," says Hou.
-->
“如果你是一个用户,你就仅仅是个用户”,侯培新补充道,“但如果你是一个供应商,为了说服你的客户,你应该自己使用它。
幸运的是,因为华为有很多员工,我们可以利用这种技术来展示我们所能构建的云的规模,向客户提供智慧服务”。
尽管华为拥有自己的私有云,但其许多客户使用华为的解决方案运行跨云应用程序。
这是一个很大的卖点,大多数公共云提供商现在都支持 Kubernetes。
侯培新说,“这使得跨云转换比其他解决方案更容易”。
</p>
开发人员非常高兴。使用 Kubernetes 的那个部门也十分高兴”。<br><br>
<!--
The developers are happy. That department is also quite happy."<br><br>
-->
{{< case-studies/quote image="/images/case-studies/huawei/banner4.jpg" >}}
<!--
"Our customers, from very big telecommunications operators to banks, love the idea of cloud native. They like Kubernetes technology. But they need to spend a lot of time to decompose their applications to turn them into microservice architecture, and as a solution provider, we help them."
-->
“我们的客户,从非常大的电信运营商到银行,都喜欢云原生的想法。他们喜欢 Kubernetes 的技术。
但是他们需要花很多时间来分解他们的应用程序,把它们变成微服务体系结构,作为一个解决方案提供商,我们帮助他们。”
{{< /case-studies/quote >}}
侯培新看到了使用这项技术给公司带来的巨大好处,
“Kubernetes 为基于云的应用程序带来了敏捷性、扩展能力和 DevOps 实践”,他说,
<!--
Hou sees great benefits to the company that come with using this technology:
"Kubernetes brings agility, scale-out capability,
and DevOps practice to the cloud-based applications," he says.
-->
“它为我们提供了自定义调度体系结构的能力,这使得容器任务之间的关联性成为可能,从而提高了效率。
它支持多种容器格式,同时广泛支持各种容器网络解决方案和容器存储方案”。
<!--
"It provides us with the ability to customize the scheduling architecture,
which makes possible the affinity between container tasks that gives greater efficiency.
It supports multiple container formats. It has extensive support for various container
networking solutions and container storage."
-->
</div>
</section>
<p>
<!--
Within Huawei itself, once his team completes the transition of the internal business procedure department to Kubernetes, Hou is looking to convince more departments to move over to the cloud native development cycle and practice. "We have a lot of software developers, so we will provide them with our platform as a service solution, our own product," he says. "We would like to see significant cuts in their iteration cycle."
-->
在华为内部,一旦他的团队完成内部业务流程部门向 Kubernetes 的转型,侯培新希望说服更多部门转向云原生开发和实践。
“我们有很多软件开发人员,所以我们将为他们提供我们的平台作为服务解决方案,我们自己的产品”,
他说,“我们希望在他们的迭代周期中看到显著的成本削减”。
</p>
<div class="banner3">
<div class="banner3text">
“Kubernetes 基本上解决了我们的大部分问题。
以前,部署时间大约需要一周,现在只需几分钟。
开发人员很高兴。使用 Kubernetes 的部门也很高兴。”
<!--
"Kubernetes basically solved most of our problems.
Before, the time of deployment took about a week, now it only takes minutes.
The developers are happy. That department is also quite happy."
-->
</div>
</div>
<p>
<!--
Having overseen the initial move to Kubernetes at Huawei, Hou has advice for other companies considering the technology: "When you start to design the architecture of your application, think about cloud native, think about microservice architecture from the beginning," he says. "I think you will benefit from that."
-->
在见证了华为最开始的向 Kubernetes 的转型之后,侯培新为其他考虑该技术的公司提供了建议,
“当你开始设计应用程序的架构时,首先考虑云原生,然后再考虑微服务架构”,他说,“我想你会从中受益”。
</p>
<section class="section3">
<div class="fullcol">
最重要的是,这对底线有影响。侯培新说,
“我们还看到,在某些情况下,运营开支会大幅削减 20% 到 30%,这对我们的业务非常有帮助”。<br><br>
<!--
And not least of all, theres an impact on the bottom line.
Says Hou: "We also see significant operating expense spending cut in some circumstances 20-30 percent,
which is very helpful for our business."<br><br>
-->
<p>
<!--
But if you already have legacy applications, "start from some microservice-friendly part of those applications first, parts that are relatively easy to be decomposed into simpler pieces and are relatively lightweight," Hou says. "Don't think from day one that within how many days I want to move the whole architecture, or move everything into microservices. Don't put that as a kind of target. You should do it in a gradual manner. And I would say for legacy applications, not every piece would be suitable for microservice architecture. No need to force it."
-->
但是如果您已经有了遗留应用程序,“首先从这些应用程序中一些对微服务友好的部分开始,
这些部分相对容易分解成更简单的部分,并且相对轻量级”,侯培新说,
“不要从一开始就认为我想在几天内将整个架构或所有东西都迁移到微服务中。
不要把它当作目标。你应该循序渐进地做这件事。
我想说的是,对于遗留应用程序,并不是每个部分都适合微服务架构”。
</p>
华为对这些初步结果感到满意,并看到客户对云原生技术的需求,因此加大了 Kubernetes 的投入。
2016 年春,公司不仅成为用户,而且成为了供应商。<br><br>
<!--
Pleased with those initial results, and seeing a demand for cloud native technologies from its customers,
Huawei doubled down on Kubernetes.
In the spring of 2016, the company became not only a user but also a vendor.<br><br>
-->
<p>
<!--
After all, as enthusiastic as Hou is about Kubernetes at Huawei, he estimates that "in the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There's still 20 percent that's not, but it's fine. If we can make 80 percent of our workload really be cloud native, to have agility, it's a much better world at the end of the day."
-->
毕竟,尽管侯培新对华为的 Kubernetes 充满热情,但他估计,
“未来 10 年,或许 80% 的工作负载可以分布式地在云原生环境中运行,但仍然有 20% 不是,但是没关系。
如果我们能够让 80% 的工作负载真正是云原生的、敏捷的,那么最终会有一个更好的世界”。
</p>
“我们构建了 Kubernetes 技术解决方案”,侯培新说,
指的是华为的<a href="http://developer.huawei.com/ict/en/site-paas"> FusionStage™ </a> PaaS 输出。
<!--
"We built the Kubernetes technologies into our solutions," says Hou, referring to Huaweis
<a href="http://developer.huawei.com/ict/en/site-paas">FusionStage™</a> PaaS offering.
-->
{{< case-studies/quote >}}
<!--
"In the next 10 years, maybe 80 percent of the workload can be distributed, can be run on the cloud native environments. There's still 20 percent that's not, but it's fine. If we can make 80 percent of our workload really be cloud native, to have agility, it's a much better world at the end of the day."
-->
“未来 10 年,可能 80% 的工作负载可以分布式地在云原生环境中运行,但仍然有 20% 不是,不过没关系。
如果我们能够让 80% 的工作负载真正是云原生的、敏捷的,那么最终会有一个更好的世界。”
{{< /case-studies/quote >}}
“我们的客户,从非常大的电信运营商到银行,都喜欢云原生的想法。他们喜欢 Kubernetes 的技术。
但是他们需要花费大量的时间来分解他们的应用程序,将它们转换为微服务体系结构。
作为解决方案提供者,我们帮助他们。
<!--
"Our customers, from very big telecommunications operators to banks, love the idea of cloud native.
They like Kubernetes technology. But they need to spend a lot of time to decompose their applications
to turn them into microservice architecture, and as a solution provider, we help them.
-->
<p>
<!--
In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes, not least of all the ones that Huawei is contributing to. Huawei engineers have worked on the federation feature (which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling, container networking and storage, and a just-announced technology called <a href="https://containerops.org/">Container Ops</a>, which is a DevOps pipeline engine. "This will put every DevOps job into a container," he explains. "And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes. With that mechanism, we can make the containerized DevOps jobs be created, shared and managed much more easily than before."
-->
在不久的将来,侯培新期待着围绕着 Kubernetes 开发的新功能,尤其是华为正在开发的那些功能。
华为的工程师已经在为联邦功能(将多个 Kubernetes 集群放在一个框架中进行无缝管理)、调度、容器网络和存储,
以及刚刚发布的一项名为 <a href="https://containerops.org/">Container Ops</a> 的技术工作,这是一个 DevOps 管道引擎。
“这将把每个 DevOps 作业放到一个容器中”,他解释说,“这种容器机制使用 Kubernetes 运行,也用于测试 Kubernetes。
有了这种机制,我们可以比以前更容易地创建、共享和管理容器化 DevOps 作业”。
</p>
我们已经开始与一些中国银行合作我们看到中国移动China Mobile和德国电信Deutsche Telekom等客户对我们很感兴趣”。<br><br>
<!--
Weve started to work with some Chinese banks, and we see a lot of interest from our customers
like <a href="http://www.chinamobileltd.com/">China Mobile</a> and
<a href="https://www.telekom.com/en">Deutsche Telekom</a>."<br><br>
-->
<p>
<!--
Still, Hou sees this technology as only halfway to its full potential. First and foremost, he'd like to expand the scale it can orchestrate, which is important for supersized companies like Huawei as well as some of its customers.
-->
尽管如此,侯培新认为这项技术只是实现其全部潜力的一半。
首先,也是最重要的,他想要扩大它可以协调的规模,
这对于华为这样的超大规模公司以及它的一些客户来说非常重要。
</p>
“如果你是一个用户,你就仅仅是个用户”,侯培新补充道,“但如果你是一个供应商,为了说服你的客户,你应该自己使用它。
<!--
"If youre just a user, youre just a user," adds Hou.
"But if youre a vendor, in order to even convince your customers, you should use it yourself.
-->
幸运的是,因为华为有很多员工,我们可以利用这种技术来展示我们所能构建的云的规模,向客户提供智慧服务”。
<!--
Luckily because Huawei has a lot of employees, we can demonstrate the scale of cloud we can build using this technology.
We provide customer wisdom."
-->
尽管华为拥有自己的私有云,但其许多客户使用华为的解决方案运行跨云应用程序。
这是一个很大的卖点,大多数公共云提供商现在都支持 Kubernetes。
侯培新说,“这使得跨云转换比其他解决方案更容易”。<br><br>
<!--
While Huawei has its own private cloud, many of its customers run cross-cloud applications using Huaweis solutions.
Its a big selling point that most of the public cloud providers now support Kubernetes.
"This makes the cross-cloud transition much easier than with other solutions," says Hou.<br><br>
-->
</div>
</section>
<div class="banner4">
<div class="banner4text">
“我们的客户,从非常大的电信运营商到银行,都喜欢云原生的想法。他们喜欢 Kubernetes 的技术。
但是他们需要花很多时间来分解他们的应用程序,把它们变成微服务体系结构,作为一个解决方案提供商,我们帮助他们。”
<!--
"Our customers, from very big telecommunications operators to banks, love the idea of cloud native.
They like Kubernetes technology. But they need to spend a lot of time to decompose their applications
to turn them into microservice architecture, and as a solution provider, we help them."
-->
</div>
</div>
<section class="section4">
<div class="fullcol">
在华为内部,一旦他的团队完成内部业务流程部门向 Kubernetes 的转型,侯培新希望说服更多部门转向云原生开发和实践。
<!--
Within Huawei itself, once his team completes the transition of the internal business procedure department to Kubernetes,
Hou is looking to convince more departments to move over to the cloud native development cycle and practice.
-->
“我们有很多软件开发人员,所以我们将为他们提供我们的平台作为服务解决方案,我们自己的产品”,
他说,“我们希望在他们的迭代周期中看到显著的成本削减”。<br><br>
<!--
"We have a lot of software developers,
so we will provide them with our platform as a service solution, our own product," he says.
"We would like to see significant cuts in their iteration cycle."<br><br>
-->
在见证了华为最开始的向 Kubernetes 的转型之后,侯培新为其他考虑该技术的公司提供了建议,
“当你开始设计应用程序的架构时,首先考虑云原生,然后再考虑微服务架构”,他说,“我想你会从中受益”。<br><br>
<!--
Having overseen the initial move to Kubernetes at Huawei, Hou has advice for other companies considering the technology:
"When you start to design the architecture of your application, think about cloud native,
think about microservice architecture from the beginning," he says.
"I think you will benefit from that."<br><br>
-->
但是如果您已经有了遗留应用程序,“首先从这些应用程序中一些对微服务友好的部分开始,
这些部分相对容易分解成更简单的部分,并且相对轻量级”,侯培新说,
<!--
But if you already have legacy applications, "start from some microservice-friendly part of those applications first,
parts that are relatively easy to be decomposed into simpler pieces and are relatively lightweight," Hou says.
-->
“不要从一开始就认为我想在几天内将整个架构或所有东西都迁移到微服务中。
不要把它当作目标。你应该循序渐进地做这件事。
我想说的是,对于遗留应用程序,并不是每个部分都适合微服务架构”。<br><br>
<!--
"Dont think from day one that within how many days I want to move the whole architecture,
or move everything into microservices. Dont put that as a kind of target.
You should do it in a gradual manner. And I would say for legacy applications,
not every piece would be suitable for microservice architecture. No need to force it."<br><br>
-->
毕竟,尽管侯培新对华为的 Kubernetes 充满热情,但他估计,
“未来 10 年,或许 80% 的工作负载可以分布式地在云原生环境中运行,但仍然有 20% 不是,但是没关系。
如果我们能够让 80% 的工作负载真正是云原生的、敏捷的,那么最终会有一个更好的世界”。
<!--
After all, as enthusiastic as Hou is about Kubernetes at Huawei, he estimates that "in the next 10 years,
maybe 80 percent of the workload can be distributed, can be run on the cloud native environments.
Theres still 20 percent thats not, but its fine.
If we can make 80 percent of our workload really be cloud native, to have agility,
its a much better world at the end of the day."
-->
</div>
</section>
<div class="banner5">
<div class="banner5text">
“未来 10 年,可能 80% 的工作负载可以分布式地在云原生环境中运行,但仍然有 20% 不是,不过没关系。
如果我们能够让 80% 的工作负载真正是云原生的、敏捷的,那么最终会有一个更好的世界。”
<!--
"In the next 10 years, maybe 80 percent of the workload can be distributed,
can be run on the cloud native environments.
Theres still 20 percent thats not, but its fine.
If we can make 80 percent of our workload really be cloud native, to have agility,
its a much better world at the end of the day."
-->
</div>
</div>
<section class="section5">
<div class="fullcol">
在不久的将来,侯培新期待着围绕着 Kubernetes 开发的新功能,尤其是华为正在开发的那些功能。
<!--
In the nearer future, Hou is looking forward to new features that are being developed around Kubernetes,
not least of all the ones that Huawei is contributing to.
-->
华为的工程师已经在为联邦功能(将多个 Kubernetes 集群放在一个框架中进行无缝管理)、调度、容器网络和存储,以及刚刚发布的一项名为
<a href="http://containerops.org/"> Container Ops </a>的技术工作,这是一个 DevOps 管道引擎。
<!--
Huawei engineers have worked on the federation feature
(which puts multiple Kubernetes clusters in a single framework to be managed seamlessly), scheduling,
container networking and storage, and a just-announced technology called
<a href="http://containerops.org/">Container Ops</a>, which is a DevOps pipeline engine.
-->
“这将把每个 DevOps 作业放到一个容器中”,他解释说,“这种容器机制使用 Kubernetes 运行,也用于测试 Kubernetes。
有了这种机制,我们可以比以前更容易地创建、共享和管理容器化 DevOps 作业”。<br><br>
<!--
"This will put every DevOps job into a container," he explains.
"And then this container mechanism is running using Kubernetes, but is also used to test Kubernetes.
With that mechanism, we can make the containerized DevOps jobs be created,
shared and managed much more easily than before."<br><br>
-->
尽管如此,侯培新认为这项技术只是实现其全部潜力的一半。
首先,也是最重要的,他想要扩大它可以协调的规模,
这对于华为这样的超大规模公司以及它的一些客户来说非常重要。<br><br>
<!--
Still, Hou sees this technology as only halfway to its full potential.
First and foremost, hed like to expand the scale it can orchestrate,
which is important for supersized companies like Huawei as well as some of its customers.<br><br>
-->
侯培新自豪地指出,在华为第一位工程师成为 Kubernetes 的贡献者和传道者两年后,华为现在已是这个社区的主要贡献者之一。
他说,“我们发现,你对社区的贡献越大,你得到的回报也就越多”。
<!--
Hou proudly notes that two years after that first Huawei engineer became a contributor to and evangelist for Kubernetes,
Huawei is now a top contributor to the community. "Weve learned that the more you contribute to the community,"
he says, "the more you get back."
-->
</div>
</section>

View File

@ -0,0 +1,101 @@
---
title: Kubernetes 自我修复
content_type: concept
weight: 50
---
<!--
title: Kubernetes Self-Healing
content_type: concept
Weight: 50
-->
<!-- overview -->
<!--
Kubernetes is designed with self-healing capabilities that help maintain the health and availability of workloads.
It automatically replaces failed containers, reschedules workloads when nodes become unavailable, and ensures that the desired state of the system is maintained.
-->
Kubernetes 旨在通过自我修复能力来维护工作负载的健康和可用性。
它能够自动替换失败的容器,在节点不可用时重新调度工作负载,
并确保系统的期望状态得以维持。
<!-- body -->
<!--
## Self-Healing capabilities {#self-healing-capabilities}
- **Container-level restarts:** If a container inside a Pod fails, Kubernetes restarts it based on the [`restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy).
- **Replica replacement:** If a Pod in a [Deployment](/docs/concepts/workloads/controllers/deployment/) or [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) fails, Kubernetes creates a replacement Pod to maintain the specified number of replicas.
If a Pod that is part of a [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) fails, the control plane
creates a replacement Pod to run on the same node.
-->
## 自我修复能力 {#self-healing-capabilities}
- **容器级重启:** 如果 Pod 中的某个容器失败Kubernetes 会根据
[`restartPolicy`](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)
定义的策略重启此容器。
- **副本替换:** 如果 [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/)
或 [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/) 中的某个 Pod 失败,
Kubernetes 会创建一个替代 Pod以维持指定的副本数量。
如果属于 [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/)
的某个 Pod 失败,控制平面会在同一节点上创建一个替代 Pod。
<!--
- **Persistent storage recovery:** If a node is running a Pod with a PersistentVolume (PV) attached, and the node fails, Kubernetes can reattach the volume to a new Pod on a different node.
- **Load balancing for Services:** If a Pod behind a [Service](/docs/concepts/services-networking/service/) fails, Kubernetes automatically removes it from the Service's endpoints to route traffic only to healthy Pods.
-->
- **持久存储恢复:** 如果某个节点正在运行一个挂载了持久卷PV
的 Pod且该节点发生故障Kubernetes 可以将该卷重新挂载到另一个节点上的新 Pod。
- **服务的负载均衡:** 如果 [Service](/zh-cn/docs/concepts/services-networking/service/)
背后的某个 Pod 失败Kubernetes 会自动将其从 Service 的端点中移除,
以确保流量仅路由到健康的 Pod。
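As a minimal sketch of the capabilities above (the `web` name and the `nginx` image are placeholder values), the Deployment below keeps three replicas running: the kubelet restarts failed containers in place according to the Pod's `restartPolicy`, and the ReplicaSet controller creates replacement Pods when replicas disappear.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # hypothetical name
spec:
  replicas: 3                 # the ReplicaSet controller recreates Pods to keep 3 replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      restartPolicy: Always   # kubelet restarts failed containers on the same node
      containers:
      - name: web
        image: nginx:1.27     # placeholder image
```

Deleting one of the resulting Pods (for example with `kubectl delete pod`) should show a replacement being created almost immediately.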
<!--
Here are some of the key components that provide Kubernetes self-healing:
- **[kubelet](/docs/concepts/architecture/#kubelet):** Ensures that containers are running, and restarts those that fail.
- **ReplicaSet, StatefulSet and DaemonSet controller:** Maintains the desired number of Pod replicas.
- **PersistentVolume controller:** Manages volume attachment and detachment for stateful workloads.
-->
以下是提供 Kubernetes 自我修复功能的一些关键组件:
- **[kubelet](/zh-cn/docs/concepts/architecture/#kubelet)**
确保容器正在运行,并重启失败的容器。
- **ReplicaSet、StatefulSet 和 DaemonSet 控制器:** 维持期望的 Pod 副本数量。
- **PersistentVolume 控制器:** 管理有状态工作负载的卷挂载和卸载。
<!--
## Considerations {#considerations}
- **Storage Failures:** If a persistent volume becomes unavailable, recovery steps may be required.
- **Application Errors:** Kubernetes can restart containers, but underlying application issues must be addressed separately.
-->
## 注意事项 {#considerations}
- **存储故障:** 如果持久卷变得不可用,可能需要执行恢复步骤。
- **应用程序错误:** Kubernetes 可以重启容器,但底层的应用程序问题需要单独解决。
## {{% heading "whatsnext" %}}
<!--
- Read more about [Pods](/docs/concepts/workloads/pods/)
- Learn about [Kubernetes Controllers](/docs/concepts/architecture/controller/)
- Explore [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
- Read about [node autoscaling](/docs/concepts/cluster-administration/node-autoscaling/). Node autoscaling
also provides automatic healing if or when nodes fail in your cluster.
-->
- 进一步阅读 [Pod](/zh-cn/docs/concepts/workloads/pods/)
- 了解 [Kubernetes 控制器](/zh-cn/docs/concepts/architecture/controller/)
- 探索 [持久卷PersistentVolume](/zh-cn/docs/concepts/storage/persistent-volumes/)
- 阅读关于[节点自动扩展](/zh-cn/docs/concepts/cluster-administration/node-autoscaling/)。
节点自动扩展还能够在集群中的节点发生故障时提供自动修复功能。

View File

@ -10,7 +10,7 @@ card:
weight: 60
anchors:
- anchor: "#securing-a-cluster"
title: 保护集群
title: 加固集群
---
<!--
title: Cluster Administration
@ -98,14 +98,14 @@ Before choosing a guide, here are some considerations:
## Managing a cluster
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/).
* Read about [Node autoscaling](/docs/concepts/cluster-administration/node-autoscaling/).
* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
-->
## 管理集群 {#managing-a-cluster}
* 学习如何[管理节点](/zh-cn/docs/concepts/architecture/nodes/)。
* 阅读[集群自动扩缩](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)。
* 阅读[节点自动扩缩](/zh-cn/docs/concepts/cluster-administration/node-autoscaling/)。
* 学习如何设定和管理集群共享的[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)。
@ -124,12 +124,15 @@ Before choosing a guide, here are some considerations:
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
explains plug-ins which intercepts requests to the Kubernetes API server after authentication
and authorization.
* [Admission Webhook Good Practices](/docs/concepts/cluster-administration/admission-webhooks-good-practices/)
provides good practices and considerations when designing mutating admission
webhooks and validating admission webhooks.
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/)
describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.
* [Auditing](/docs/tasks/debug/debug-cluster/audit/) describes how to interact with Kubernetes'
audit logs.
-->
## 保护集群 {#securing-a-cluster}
## 加固集群 {#securing-a-cluster}
* [生成证书](/zh-cn/docs/tasks/administer-cluster/certificates/)描述了使用不同的工具链生成证书的步骤。
* [Kubernetes 容器环境](/zh-cn/docs/concepts/containers/container-environment/)描述了
@ -141,6 +144,8 @@ Before choosing a guide, here are some considerations:
* [鉴权](/zh-cn/docs/reference/access-authn-authz/authorization/)与身份认证不同,用于控制如何处理 HTTP 请求。
* [使用准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers)阐述了在认证和授权之后拦截到
Kubernetes API 服务的请求的插件。
* [准入 Webhook 的最佳实践](/zh-cn/docs/concepts/cluster-administration/admission-webhooks-good-practices/)
  提供了设计变更型准入 Webhook 和验证型准入 Webhook 时的最佳实践和注意事项。
* [在 Kubernetes 集群中使用 sysctl](/zh-cn/docs/tasks/administer-cluster/sysctl-cluster/)
描述了管理员如何使用 `sysctl` 命令行工具来设置内核参数。
* [审计](/zh-cn/docs/tasks/debug/debug-cluster/audit/)描述了如何与 Kubernetes 的审计日志交互。
@ -152,7 +157,7 @@ Before choosing a guide, here are some considerations:
* [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/)
-->
### 保护 kubelet {#securing-the-kubelet}
### 加固 kubelet {#securing-the-kubelet}
* [节点与控制面之间的通信](/zh-cn/docs/concepts/architecture/control-plane-node-communication/)
* [TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
@ -172,4 +177,3 @@ Before choosing a guide, here are some considerations:
名解析到一个 Kubernetes service。
* [记录和监控集群活动](/zh-cn/docs/concepts/cluster-administration/logging/)阐述了 Kubernetes
的日志如何工作以及怎样实现。

View File

@ -1,227 +0,0 @@
---
title: 集群自动扩缩容
linkTitle: 集群自动扩缩容
description: >-
自动管理集群中的节点以适配需求。
content_type: concept
weight: 120
---
<!--
title: Cluster Autoscaling
linkTitle: Cluster Autoscaling
description: >-
Automatically manage the nodes in your cluster to adapt to demand.
content_type: concept
weight: 120
-->
<!-- overview -->
<!--
Kubernetes requires {{< glossary_tooltip text="nodes" term_id="node" >}} in your cluster to
run {{< glossary_tooltip text="pods" term_id="pod" >}}. This means providing capacity for
the workload Pods and for Kubernetes itself.
You can adjust the amount of resources available in your cluster automatically:
_node autoscaling_. You can either change the number of nodes, or change the capacity
that nodes provide. The first approach is referred to as _horizontal scaling_, while the
second is referred to as _vertical scaling_.
Kubernetes can even provide multidimensional automatic scaling for nodes.
-->
Kubernetes 需要集群中的{{< glossary_tooltip text="节点" term_id="node" >}}来运行
{{< glossary_tooltip text="Pod" term_id="pod" >}}。
这意味着需要为工作负载 Pod 以及 Kubernetes 本身提供容量。
你可以自动调整集群中可用的资源量:**节点自动扩缩容**。
你可以更改节点的数量,或者更改节点提供的容量。
第一种方法称为**水平扩缩容**,而第二种方法称为**垂直扩缩容**。
Kubernetes 甚至可以为节点提供多维度的自动扩缩容。
<!-- body -->
<!--
## Manual node management
You can manually manage node-level capacity, where you configure a fixed amount of nodes;
you can use this approach even if the provisioning (the process to set up, manage, and
decommission) for these nodes is automated.
This page is about taking the next step, and automating management of the amount of
node capacity (CPU, memory, and other node resources) available in your cluster.
-->
## 手动节点管理 {#manual-node-management}
你可以手动管理节点级别的容量,例如你可以配置固定数量的节点;
即使这些节点的制备(搭建、管理和停用过程)是自动化的,你也可以使用这种方法。
本文介绍的是下一步操作即自动化管理集群中可用的节点容量CPU、内存和其他节点资源
<!--
## Automatic horizontal scaling {#autoscaling-horizontal}
### Cluster Autoscaler
You can use the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) to manage the scale of your nodes automatically.
The cluster autoscaler can integrate with a cloud provider, or with Kubernetes'
[cluster API](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md),
to achieve the actual node management that's needed.
-->
## 自动水平扩缩容 {#autoscaling-horizontal}
### Cluster Autoscaler
你可以使用 [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
自动管理节点的数目规模。Cluster Autoscaler 可以与云驱动或 Kubernetes 的
[Cluster API](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md)
集成,以完成实际所需的节点管理。
<!--
The cluster autoscaler adds nodes when there are unschedulable Pods, and
removes nodes when those nodes are empty.
#### Cloud provider integrations {#cluster-autoscaler-providers}
The [README](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md)
for the cluster autoscaler lists some of the cloud provider integrations
that are available.
-->
当存在不可调度的 Pod 时Cluster Autoscaler 会添加节点;
当这些节点为空时Cluster Autoscaler 会移除节点。
#### 云驱动集成组件 {#cluster-autoscaler-providers}
Cluster Autoscaler 的
[README](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md)
中列举了一些可用的云驱动集成组件。
<!--
## Cost-aware multidimensional scaling {#autoscaling-multi-dimension}
### Karpenter {#autoscaler-karpenter}
[Karpenter](https://karpenter.sh/) supports direct node management, via
plugins that integrate with specific cloud providers, and can manage nodes
for you whilst optimizing for overall cost.
-->
## 成本感知多维度扩缩容 {#autoscaling-multi-dimension}
### Karpenter {#autoscaler-karpenter}
[Karpenter](https://karpenter.sh/) 支持通过继承了特定云驱动的插件来直接管理节点,
还可以在优化总体成本的同时为你管理节点。
<!--
> Karpenter automatically launches just the right compute resources to
> handle your cluster's applications. It is designed to let you take
> full advantage of the cloud with fast and simple compute provisioning
> for Kubernetes clusters.
-->
> Karpenter 自动启动适合你的集群应用的计算资源。
> Karpenter 设计为让你充分利用云资源,快速简单地为 Kubernetes 集群制备计算资源。
<!--
The Karpenter tool is designed to integrate with a cloud provider that
provides API-driven server management, and where the price information for
available servers is also available via a web API.
For example, if you start some more Pods in your cluster, the Karpenter
tool might buy a new node that is larger than one of the nodes you are
already using, and then shut down an existing node once the new node
is in service.
-->
Karpenter 工具设计为与云驱动集成,提供 API 驱动的服务器管理,
此工具可以通过 Web API 获取可用服务器的价格信息。
例如,如果你在集群中启动更多 PodKarpenter 工具可能会购买一个比你当前使用的节点更大的新节点,
然后在这个新节点投入使用后关闭现有的节点。
<!--
#### Cloud provider integrations {#karpenter-providers}
-->
#### 云驱动集成组件 {#karpenter-providers}
{{% thirdparty-content vendor="true" %}}
<!--
There are integrations available between Karpenter's core and the following
cloud providers:
- [Amazon Web Services](https://github.com/aws/karpenter-provider-aws)
- [Azure](https://github.com/Azure/karpenter-provider-azure)
-->
在 Karpenter 的核心与以下云驱动之间,存在可用的集成组件:
<!--
- [Amazon Web Services](https://github.com/aws/karpenter-provider-aws)
- [Azure](https://github.com/Azure/karpenter-provider-azure)
-->
- [亚马逊 Web 服务Amazon Web Service](https://github.com/aws/karpenter-provider-aws)
- [Azure](https://github.com/Azure/karpenter-provider-azure)
<!--
## Related components
### Descheduler
The [descheduler](https://github.com/kubernetes-sigs/descheduler) can help you
consolidate Pods onto a smaller number of nodes, to help with automatic scale down
when the cluster has spare capacity.
-->
## 相关组件 {#related-components}
### Descheduler
[Descheduler](https://github.com/kubernetes-sigs/descheduler)
可以帮助你将 Pod 集中到少量节点上,以便在集群有空闲容量时帮助自动缩容。
<!--
### Sizing a workload based on cluster size
#### Cluster proportional autoscaler
For workloads that need to be scaled based on the size of the cluster (for example
`cluster-dns` or other system components), you can use the
[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).<br />
The Cluster Proportional Autoscaler watches the number of schedulable nodes
and cores, and scales the number of replicas of the target workload accordingly.
-->
### 基于集群大小调整工作负载 {#sizing-a-workload-based-on-cluster-size}
#### Cluster Proportional Autoscaler
对于需要基于集群大小进行扩缩容的工作负载(例如 `cluster-dns` 或其他系统组件),
你可以使用 [**Cluster Proportional Autoscaler**](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)。
Cluster Proportional Autoscaler 监视可调度节点和核心的数量,并相应地调整目标工作负载的副本数量。
<!--
#### Cluster proportional vertical autoscaler
If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using
the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
This project is in **beta** and can be found on GitHub.
While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
in the cluster.
-->
#### Cluster Proportional Vertical Autoscaler
如果副本数量应该保持不变,你可以使用
[Cluster Proportional Vertical Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler)
基于集群大小垂直扩缩你的工作负载。此项目处于 **Beta** 阶段,托管在 GitHub 上。
Cluster Proportional Autoscaler 扩缩工作负载的副本数量,而 Cluster Proportional Vertical Autoscaler
基于集群中的节点和/或核心数量调整工作负载(例如 Deployment 或 DaemonSet的资源请求。
## {{% heading "whatsnext" %}}
<!--
- Read about [workload-level autoscaling](/docs/concepts/workloads/autoscaling/)
- Read about [node overprovisioning](/docs/tasks/administer-cluster/node-overprovisioning/)
-->
- 参阅[工作负载级别自动扩缩容](/zh-cn/docs/concepts/workloads/autoscaling/)
- 参阅[节点超分配](/zh-cn/docs/tasks/administer-cluster/node-overprovisioning/)

View File

@ -0,0 +1,525 @@
---
title: Node 自动扩缩容
linkTitle: Node 自动扩缩容
description: >-
自动在集群中制备和整合 Node以适应需求并优化成本。
content_type: concept
weight: 15
---
<!--
reviewers:
- gjtempleton
- jonathan-innis
- maciekpytel
title: Node Autoscaling
linkTitle: Node Autoscaling
description: >-
Automatically provision and consolidate the Nodes in your cluster to adapt to demand and optimize cost.
content_type: concept
weight: 15
-->
<!--
In order to run workloads in your cluster, you need
{{< glossary_tooltip text="Nodes" term_id="node" >}}. Nodes in your cluster can be _autoscaled_ -
dynamically [_provisioned_](#provisioning), or [_consolidated_](#consolidation) to provide needed
capacity while optimizing cost. Autoscaling is performed by Node [_autoscalers_](#autoscalers).
-->
为了在集群中运行负载,你需要 {{< glossary_tooltip text="Node" term_id="node" >}}。
集群中的 Node 可以被**自动扩缩容**
通过动态[**制备**](#provisioning)或[**整合**](#consolidation)的方式提供所需的容量并优化成本。
自动扩缩容操作是由 Node [**Autoscaler**](#autoscalers) 执行的。
<!--
## Node provisioning {#provisioning}
If there are Pods in a cluster that can't be scheduled on existing Nodes, new Nodes can be
automatically added to the cluster&mdash;_provisioned_&mdash;to accommodate the Pods. This is
especially useful if the number of Pods changes over time, for example as a result of
[combining horizontal workload with Node autoscaling](#horizontal-workload-autoscaling).
Autoscalers provision the Nodes by creating and deleting cloud provider resources backing them. Most
commonly, the resources backing the Nodes are Virtual Machines.
-->
## Node 制备 {#provisioning}
当集群中有 Pod 无法被调度到现有 Node 上时,系统将**制备**新的 Node 并将其添加到集群中,以容纳这些 Pod。
如果 Pod 的个数随时间发生变化(例如由于组合使用[水平负载自动扩缩容与 Node 自动扩缩容](#horizontal-workload-autoscaling)),
这种自动扩缩容机制将特别有用。
Autoscaler 通过创建和删除云驱动基础资源来制备 Node。最常见的支撑 Node 的资源是虚拟机VM
<!--
The main goal of provisioning is to make all Pods schedulable. This goal is not always attainable
because of various limitations, including reaching configured provisioning limits, provisioning
configuration not being compatible with a particular set of pods, or the lack of cloud provider
capacity. While provisioning, Node autoscalers often try to achieve additional goals (for example
minimizing the cost of the provisioned Nodes or balancing the number of Nodes between failure
domains).
-->
制备的主要目标是使所有 Pod 可调度。
由于各种限制(如已达到配置的制备上限、制备配置与特定 Pod 集不兼容或云驱动容量不足),此目标不一定总是可以实现。
在制备之时Node Autoscaler 通常还会尝试实现其他目标(例如最小化制备 Node 的成本或在故障域之间平衡 Node 的数量)。
<!--
There are two main inputs to a Node autoscaler when determining Nodes to
provision&mdash;[Pod scheduling constraints](#provisioning-pod-constraints),
and [Node constraints imposed by autoscaler configuration](#provisioning-node-constraints).
Autoscaler configuration may also include other Node provisioning triggers (for example the number
of Nodes falling below a configured minimum limit).
-->
在决定制备 Node 时针对 Node Autoscaler 有两个主要输入:
- [Pod 调度约束](#provisioning-pod-constraints)
- [Autoscaler 配置所施加的 Node 约束](#provisioning-node-constraints)
Autoscaler 配置也可以包含其他 Node 制备触发条件(例如 Node 个数低于配置的最小限制值)。
{{< note >}}
<!--
Provisioning was formerly known as _scale-up_ in Cluster Autoscaler.
-->
在 Cluster Autoscaler 中,制备以前称为**扩容**。
{{< /note >}}
<!--
### Pod scheduling constraints {#provisioning-pod-constraints}
Pods can express [scheduling constraints](/docs/concepts/scheduling-eviction/assign-pod-node/) to
impose limitations on the kind of Nodes they can be scheduled on. Node autoscalers take these
constraints into account to ensure that the pending Pods can be scheduled on the provisioned Nodes.
-->
### Pod 调度约束 {#provisioning-pod-constraints}
Pod 可以通过[调度约束](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/)表达只能调度到特定类别 Node 的限制。
Node Autoscaler 会考虑这些约束,确保 Pending 的 Pod 可以被调度到这些制备的 Node 上。
<!--
The most common kind of scheduling constraints are the resource requests specified by Pod
containers. Autoscalers will make sure that the provisioned Nodes have enough resources to satisfy
the requests. However, they don't directly take into account the real resource usage of the Pods
after they start running. In order to autoscale Nodes based on actual workload resource usage, you
can combine [horizontal workload autoscaling](#horizontal-workload-autoscaling) with Node
autoscaling.
Other common Pod scheduling constraints include
[Node affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity),
[inter-Pod affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity),
or a requirement for a particular [storage volume](/docs/concepts/storage/volumes/).
-->
最常见的调度约束是通过 Pod 容器所指定的资源请求。
Autoscaler 将确保制备的 Node 具有足够资源来满足这些请求。
但是Autoscaler 不会在 Pod 开始运行之后直接考虑这些 Pod 的真实资源用量。
要根据实际负载资源用量自动扩缩容 Node
你可以组合使用[水平负载自动扩缩容](#horizontal-workload-autoscaling)和 Node 自动扩缩容。
其他常见的 Pod 调度约束包括
[Node 亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)、
[Pod 间亲和性/反亲和性](/zh-cn/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)或对特定[存储卷](/zh-cn/docs/concepts/storage/volumes/)的要求。
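As a hedged illustration, the hypothetical Pod below combines the two most common constraint types a Node autoscaler has to satisfy: container resource requests and a required Node affinity. All names, labels, and the image are assumptions for the example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod                      # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone # any Node label can be used here
            operator: In
            values: ["zone-a"]
  containers:
  - name: app
    image: registry.example/app:1.0          # placeholder image
    resources:
      requests:
        cpu: "500m"      # the autoscaler only provisions Nodes with enough
        memory: 512Mi    # free CPU and memory to satisfy these requests
```

If no existing Node in `zone-a` has 500m of CPU and 512Mi of memory free, an autoscaler would have to provision one there before this Pod can run.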
<!--
### Node constraints imposed by autoscaler configuration {#provisioning-node-constraints}
The specifics of the provisioned Nodes (for example the amount of resources, the presence of a given
label) depend on autoscaler configuration. Autoscalers can either choose them from a pre-defined set
of Node configurations, or use [auto-provisioning](#autoprovisioning).
-->
### Autoscaler 配置施加的 Node 约束 {#provisioning-node-constraints}
已制备的 Node 的具体规格(例如资源量、给定标签的存在与否)取决于 Autoscaler 配置。
Autoscaler 可以从一组预定义的 Node 配置中进行选择,或使用[自动制备](#autoprovisioning)。
<!--
### Auto-provisioning {#autoprovisioning}
Node auto-provisioning is a mode of provisioning in which a user doesn't have to fully configure the
specifics of the Nodes that can be provisioned. Instead, the autoscaler dynamically chooses the Node
configuration based on the pending Pods it's reacting to, as well as pre-configured constraints (for
example, the minimum amount of resources or the need for a given label).
-->
### 自动制备 {#autoprovisioning}
Node 自动制备是一种制备模式,在这种模式下,用户无需完全配置可被制备的 Node 的具体规格。
Autoscaler 会基于 Pending 的 Pod 和预配置的约束(例如最小资源量或给定标签的需求)动态选择 Node 配置。
<!--
## Node consolidation {#consolidation}
The main consideration when running a cluster is ensuring that all schedulable pods are running,
whilst keeping the cost of the cluster as low as possible. To achieve this, the Pods' resource
requests should utilize as much of the Nodes' resources as possible. From this perspective, the
overall Node utilization in a cluster can be used as a proxy for how cost-effective the cluster is.
-->
## Node 整合 {#consolidation}
运行集群时的主要考量是确保所有可调度 Pod 都在运行,并尽可能降低集群成本。
为此Pod 的资源请求应尽可能利用 Node 的更多资源。
从这个角度看,集群中的整体 Node 利用率可以用作集群成本效益的参考指标。
{{< note >}}
<!--
Correctly setting the resource requests of your Pods is as important to the overall
cost-effectiveness of a cluster as optimizing Node utilization.
Combining Node autoscaling with [vertical workload autoscaling](#vertical-workload-autoscaling) can
help you achieve this.
-->
对于集群的整体成本效益而言,正确设置 Pod 的资源请求与优化 Node 的利用率同样重要。
将 Node 自动扩缩容与[垂直负载自动扩缩容](#vertical-workload-autoscaling)结合使用有助于实现这一目标。
{{< /note >}}
<!--
Nodes in your cluster can be automatically _consolidated_ in order to improve the overall Node
utilization, and in turn the cost-effectiveness of the cluster. Consolidation happens through
removing a set of underutilized Nodes from the cluster. Optionally, a different set of Nodes can
be [provisioned](#provisioning) to replace them.
Consolidation, like provisioning, only considers Pod resource requests and not real resource usage
when making decisions.
-->
集群中的 Node 可以被自动**整合**,以提高整体 Node 利用率以及集群的成本效益。
整合操作通过移除一组利用率低的 Node 来实现。有时会同时[制备](#provisioning)一组不同的 Node 来替代。
与制备类似,整合操作在做出决策时仅考虑 Pod 的资源请求而非实际的资源用量。
<!--
For the purpose of consolidation, a Node is considered _empty_ if it only has DaemonSet and static
Pods running on it. Removing empty Nodes during consolidation is more straightforward than non-empty
ones, and autoscalers often have optimizations designed specifically for consolidating empty Nodes.
Removing non-empty Nodes during consolidation is disruptive&mdash;the Pods running on them are
terminated, and possibly have to be recreated (for example by a Deployment). However, all such
recreated Pods should be able to schedule on existing Nodes in the cluster, or the replacement Nodes
provisioned as part of consolidation. __No Pods should normally become pending as a result of
consolidation.__
-->
在整合过程中,如果一个 Node 上仅运行 DaemonSet 和静态 Pod这个 Node 就会被视为**空的**。
在整合期间移除空的 Node 要比操作非空 Node 更简单直接Autoscaler 通常针对空 Node 整合进行优化。
在整合期间移除非空 Node 会有破坏性Node 上运行的 Pod 会被终止,且可能需要被重新创建(例如由 Deployment 重新创建)。
不过,所有被重新创建的 Pod 都应该能够被调度到集群中的现有 Node 上,或调度到作为整合一部分而制备的替代 Node 上。
__正常情况下整合操作不应导致 Pod 处于 Pending 状态。__
{{< note >}}
<!--
Autoscalers predict how a recreated Pod will likely be scheduled after a Node is provisioned or
consolidated, but they don't control the actual scheduling. Because of this, some Pods might
become pending as a result of consolidation - if for example a completely new Pod appears while
consolidation is being performed.
-->
Autoscaler 会预测在 Node 被制备或整合后重新创建的 Pod 将可能以何种方式调度,但 Autoscaler 不控制实际的调度行为。
因此,某些 Pod 可能由于整合操作而进入 Pending 状态。例如在执行整合过程中,出现一个全新的 Pod。
{{< /note >}}
<!--
Autoscaler configuration may also enable triggering consolidation by other conditions (for example,
the time elapsed since a Node was created), in order to optimize different properties (for example,
the maximum lifespan of Nodes in a cluster).
The details of how consolidation is performed depend on the configuration of a given autoscaler.
-->
Autoscaler 配置还可以设为由其他状况触发整合(例如自 Node 创建以来经过的时间),以优化不同的属性(例如集群中 Node 的最大生命期)。
执行整合的具体方式取决于给定 Autoscaler 的配置。
{{< note >}}
<!--
Consolidation was formerly known as _scale-down_ in Cluster Autoscaler.
-->
在 Cluster Autoscaler 中,整合以前称为**缩容**。
{{< /note >}}
<!--
## Autoscalers {#autoscalers}
The functionalities described in previous sections are provided by Node _autoscalers_. In addition
to the Kubernetes API, autoscalers also need to interact with cloud provider APIs to provision and
consolidate Nodes. This means that they need to be explicitly integrated with each supported cloud
provider. The performance and feature set of a given autoscaler can differ between cloud provider
integrations.
-->
## Autoscaler {#autoscalers}
上述章节中所述的功能由 Node **Autoscaler** 提供。
除了 Kubernetes API 之外Autoscaler 还需要与云驱动 API 交互来制备和整合 Node。
这意味着 Autoscaler 需要与每个支持的云驱动进行显式集成。
给定的 Autoscaler 的性能和特性集在不同云驱动集成之间可能有所不同。
{{< mermaid >}}
graph TD
na[Node Autoscaler]
k8s[Kubernetes]
cp[云驱动]
k8s --> |获取 Pod/Node|na
na --> |腾空 Node|k8s
na --> |创建/移除支撑 Node 的资源|cp
cp --> |获取支撑 Node 的资源|na
classDef white_on_blue fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
classDef blue_on_white fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5;
class na blue_on_white;
class k8s,cp white_on_blue;
{{</ mermaid >}}
<!--
### Autoscaler implementations
[Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
and [Karpenter](https://github.com/kubernetes-sigs/karpenter) are the two Node autoscalers currently
sponsored by [SIG Autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling).
From the perspective of a cluster user, both autoscalers should provide a similar Node autoscaling
experience. Both will provision new Nodes for unschedulable Pods, and both will consolidate the
Nodes that are no longer optimally utilized.
-->
### Autoscaler 实现
[Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
和 [Karpenter](https://github.com/kubernetes-sigs/karpenter)
是目前由 [SIG Autoscaling](https://github.com/kubernetes/community/tree/master/sig-autoscaling)
维护的两个 Node Autoscaler。
对于集群用户来说,这两个 Autoscaler 都应提供类似的 Node 自动扩缩容体验。
两个 Autoscaler 都将为不可调度的 Pod 制备新的 Node也都会整合利用率不高的 Node。
<!--
Different autoscalers may also provide features outside the Node autoscaling scope described on this
page, and those additional features may differ between them.
Consult the sections below, and the linked documentation for the individual autoscalers to decide
which autoscaler fits your use case better.
-->
不同的 Autoscaler 还可能提供本文所述的 Node 自动扩缩容范围之外的其他特性,且这些额外的特性也会有所不同。
请参阅以下章节和特定 Autoscaler 的关联文档,了解哪个 Autoscaler 更适合你的使用场景。
<!--
#### Cluster Autoscaler
Cluster Autoscaler adds or removes Nodes to pre-configured _Node groups_. Node groups generally map
to some sort of cloud provider resource group (most commonly a Virtual Machine group). A single
instance of Cluster Autoscaler can simultaneously manage multiple Node groups. When provisioning,
Cluster Autoscaler will add Nodes to the group that best fits the requests of pending Pods. When
consolidating, Cluster Autoscaler always selects specific Nodes to remove, as opposed to just
resizing the underlying cloud provider resource group.
-->
#### Cluster Autoscaler
Cluster Autoscaler 向预先配置的 **Node 组**中添加或从中移除 Node。
Node 组通常映射为某种云驱动资源组(最常见的是虚拟机组)。
单实例的 Cluster Autoscaler 将可以同时管理多个 Node 组。
在制备时Cluster Autoscaler 将把 Node 添加到最贴合 Pending Pod 请求的组。
在整合时Cluster Autoscaler 始终选择要移除的特定 Node而不只是重新调整云驱动资源组的大小。
<!--
Additional context:
* [Documentation overview](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md)
* [Cloud provider integrations](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md#faqdocumentation)
* [Cluster Autoscaler FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
* [Contact](https://github.com/kubernetes/community/tree/master/sig-autoscaling#contact)
-->
更多信息:
* [文档概述](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md)
* [云驱动集成](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md#faqdocumentation)
* [Cluster Autoscaler FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
* [联系方式](https://github.com/kubernetes/community/tree/master/sig-autoscaling#contact)
#### Karpenter
<!--
Karpenter auto-provisions Nodes based on [NodePool](https://karpenter.sh/docs/concepts/nodepools/)
configurations provided by the cluster operator. Karpenter handles all aspects of node lifecycle,
not just autoscaling. This includes automatically refreshing Nodes once they reach a certain
lifetime, and auto-upgrading Nodes when new worker Node images are released. It works directly with
individual cloud provider resources (most commonly individual Virtual Machines), and doesn't rely on
cloud provider resource groups.
-->
Karpenter 基于集群操作员所提供的 [NodePool](https://karpenter.sh/docs/concepts/nodepools/)
配置来自动制备 Node。Karpenter 处理 Node 生命周期的所有方面,而不仅仅是自动扩缩容。
这包括 Node 达到某个生命期后的自动刷新,以及在有新 Worker Node 镜像被发布时的自动升级。
Karpenter 直接与特定的云驱动资源(通常是单独的虚拟机)交互,不依赖云驱动资源组。
<!--
Additional context:
* [Documentation](https://karpenter.sh/)
* [Cloud provider integrations](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file#karpenter-implementations)
* [Karpenter FAQ](https://karpenter.sh/docs/faq/)
* [Contact](https://github.com/kubernetes-sigs/karpenter#community-discussion-contribution-and-support)
-->
更多上下文信息:
* [官方文档](https://karpenter.sh/)
* [云驱动集成](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file#karpenter-implementations)
* [Karpenter FAQ](https://karpenter.sh/docs/faq/)
* [联系方式](https://github.com/kubernetes-sigs/karpenter#community-discussion-contribution-and-support)
<!--
#### Implementation comparison
Main differences between Cluster Autoscaler and Karpenter:
* Cluster Autoscaler provides features related to just Node autoscaling. Karpenter has a wider
scope, and also provides features intended for managing Node lifecycle altogether (for example,
utilizing disruption to auto-recreate Nodes once they reach a certain lifetime, or auto-upgrade
them to new versions).
-->
#### 实现对比
Cluster Autoscaler 和 Karpenter 之间的主要差异:
* Cluster Autoscaler 仅提供与 Node 自动扩缩容相关的特性。
而 Karpenter 的特性范围更大,还提供 Node 生命周期管理
(例如在 Node 达到某个生命期后利用中断来自动重新创建 Node或自动将 Node 升级到新版本)。
<!--
* Cluster Autoscaler doesn't support auto-provisioning, the Node groups it can provision from have
to be pre-configured. Karpenter supports auto-provisioning, so the user only has to configure a
set of constraints for the provisioned Nodes, instead of fully configuring homogenous groups.
* Cluster Autoscaler provides cloud provider integrations directly, which means that they're a part
of the Kubernetes project. For Karpenter, the Kubernetes project publishes Karpenter as a library
that cloud providers can integrate with to build a Node autoscaler.
* Cluster Autoscaler provides integrations with numerous cloud providers, including smaller and less
popular providers. There are fewer cloud providers that integrate with Karpenter, including
[AWS](https://github.com/aws/karpenter-provider-aws), and
[Azure](https://github.com/Azure/karpenter-provider-azure).
-->
* Cluster Autoscaler 不支持自动制备,其可以制备的 Node 组必须被预先配置。
  Karpenter 支持自动制备,因此用户只需为所制备的 Node 配置一组约束,而不需要完整地配置同质化的 Node 组。
* Cluster Autoscaler 直接提供云驱动集成,这意味着这些集成组件是 Kubernetes 项目的一部分。
对于 KarpenterKubernetes 将 Karpenter 发布为一个库,云驱动可以集成这个库来构建 Node Autoscaler。
* Cluster Autoscaler 为众多云驱动提供集成,包括一些小众的云驱动。
Karpenter 支持的云驱动相对较少,目前包括
[AWS](https://github.com/aws/karpenter-provider-aws) 和
[Azure](https://github.com/Azure/karpenter-provider-azure)。
<!--
## Combine workload and Node autoscaling
### Horizontal workload autoscaling {#horizontal-workload-autoscaling}
Node autoscaling usually works in response to Pods&mdash;it provisions new Nodes to accommodate
unschedulable Pods, and then consolidates the Nodes once they're no longer needed.
-->
## 组合使用负载自动扩缩容与 Node 自动扩缩容 {#combine-workload-and-node-autoscaling}
### 水平负载自动扩缩容 {#horizontal-workload-autoscaling}
Node 自动扩缩容通常是为了响应 Pod 而发挥作用的。
它会制备新的 Node 以容纳不可调度的 Pod并在不再需要这些 Node 时对其进行整合。
<!--
[Horizontal workload autoscaling](/docs/concepts/workloads/autoscaling#scaling-workloads-horizontally)
automatically scales the number of workload replicas to maintain a desired average resource
utilization across the replicas. In other words, it automatically creates new Pods in response to
application load, and then removes the Pods once the load decreases.
You can use Node autoscaling together with horizontal workload autoscaling to autoscale the Nodes in
your cluster based on the average real resource utilization of your Pods.
-->
[水平负载自动扩缩容](/zh-cn/docs/concepts/workloads/autoscaling#scaling-workloads-horizontally)
自动扩缩负载副本的个数以保持各个副本达到预期的平均资源利用率。
换言之,它会基于应用负载而自动创建新的 Pod并在负载减少时移除 Pod。
<!--
If the application load increases, the average utilization of its Pods should also increase,
prompting workload autoscaling to create new Pods. Node autoscaling should then provision new Nodes
to accommodate the new Pods.
Once the application load decreases, workload autoscaling should remove unnecessary Pods. Node
autoscaling should, in turn, consolidate the Nodes that are no longer needed.
If configured correctly, this pattern ensures that your application always has the Node capacity to
handle load spikes if needed, but you don't have to pay for the capacity when it's not needed.
-->
如果应用负载增加,其 Pod 的平均利用率也会增加,这会促使负载自动扩缩容创建新的 Pod。
Node 自动扩缩容随之应制备新的 Node 以容纳新的 Pod。
一旦应用负载减少,负载自动扩缩容应移除不必要的 Pod。
Node 自动扩缩容进而应整合不再需要的 Node。
如果配置正确,这种模式确保你的应用在需要时始终有足够的 Node 容量处理突发负载,你也无需在闲置时为这些 Node 容量支付费用。
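One common way to wire this up is a HorizontalPodAutoscaler targeting the workload, as in the sketch below; the `web` Deployment, the replica bounds, and the 70% CPU target are placeholder values.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add or remove Pods to keep average CPU use near 70%
```

As the HPA adds Pods under load, the Node autoscaler provisions capacity for them; as it removes Pods, the now underutilized Nodes become candidates for consolidation.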
<!--
### Vertical workload autoscaling {#vertical-workload-autoscaling}
When using Node autoscaling, it's important to set Pod resource requests correctly. If the requests
of a given Pod are too low, provisioning a new Node for it might not help the Pod actually run.
If the requests of a given Pod are too high, it might incorrectly prevent consolidating its Node.
-->
### 垂直负载自动扩缩容 {#vertical-workload-autoscaling}
在使用 Node 自动扩缩容时,重要的是正确设置 Pod 资源请求。
如果给定 Pod 的请求过低,为其制备新的 Node 可能对 Pod 实际运行并无帮助。
如果给定 Pod 的请求过高,则可能对整合 Node 有所妨碍。
<!--
[Vertical workload autoscaling](/docs/concepts/workloads/autoscaling#scaling-workloads-vertically)
automatically adjusts the resource requests of your Pods based on their historical resource usage.
You can use Node autoscaling together with vertical workload autoscaling in order to adjust the
resource requests of your Pods while preserving Node autoscaling capabilities in your cluster.
-->
[垂直负载自动扩缩容](/zh-cn/docs/concepts/workloads/autoscaling#scaling-workloads-vertically)
基于其历史资源用量来自动调整 Pod 的资源请求。
你可以一起使用 Node 自动扩缩容和垂直负载自动扩缩容,以便在集群中保留 Node 自动扩缩容能力的同时调节 Pod 的资源请求。
{{< caution >}}
<!--
When using Node autoscaling, it's not recommended to set up vertical workload autoscaling for
DaemonSet Pods. Autoscalers have to predict what DaemonSet Pods on a new Node will look like in
order to predict available Node resources. Vertical workload autoscaling might make these
predictions unreliable, leading to incorrect scaling decisions.
-->
在使用 Node 自动扩缩容时,不推荐为 DaemonSet Pod 配置垂直负载自动扩缩容。
Autoscaler 需要预测新 Node 上的 DaemonSet Pod 情况,才能预测可用的 Node 资源。
垂直负载自动扩缩容可能会让这些预测不可靠,导致扩缩容决策出错。
{{</ caution >}}
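As a sketch, and assuming the separate Vertical Pod Autoscaler add-on is installed in the cluster, the object below asks the VPA to keep the resource requests of a hypothetical `web` Deployment in line with its observed usage.

```yaml
apiVersion: autoscaling.k8s.io/v1   # CRD installed by the VPA add-on
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa              # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment whose requests are adjusted
  updatePolicy:
    updateMode: "Auto"       # apply the recommended requests automatically
```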
<!--
## Related components
This section describes components providing functionality related to Node autoscaling.
### Descheduler
The [descheduler](https://github.com/kubernetes-sigs/descheduler) is a component providing Node
consolidation functionality based on custom policies, as well as other features related to
optimizing Nodes and Pods (for example deleting frequently restarting Pods).
-->
## 相关组件 {#related-components}
本节介绍提供与 Node 自动扩缩容相关功能的组件。
### Descheduler
[Descheduler](https://github.com/kubernetes-sigs/descheduler)
组件基于自定义策略提供 Node 整合功能,以及与优化 Node 和 Pod 相关的其他特性(例如删除频繁重启的 Pod
<!--
### Workload autoscalers based on cluster size
[Cluster Proportional Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler)
and [Cluster Proportional Vertical
Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler) provide
horizontal, and vertical workload autoscaling based on the number of Nodes in the cluster. You can
read more in
[autoscaling based on cluster size](/docs/concepts/workloads/autoscaling#autoscaling-based-on-cluster-size).
-->
### 基于集群规模的负载 Autoscaler
[Cluster Proportional Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler) 和
[Cluster Proportional Vertical Autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler)
基于集群中的 Node 个数进行水平和垂直负载自动扩缩容。
更多细节参阅[基于集群规模自动扩缩容](/zh-cn/docs/concepts/workloads/autoscaling#autoscaling-based-on-cluster-size)。
## {{% heading "whatsnext" %}}
<!--
- Read about [workload-level autoscaling](/docs/concepts/workloads/autoscaling/)
-->
- 阅读[负载层面的自动扩缩容](/zh-cn/docs/concepts/workloads/autoscaling/)

View File

@ -105,6 +105,24 @@ or detective controls around Pods, their containers, and the images that run in
[网络策略NetworkPolicy](/zh-cn/docs/concepts/services-networking/network-policies/)
可用来控制 Pod 之间或 Pod 与集群外部网络之间的网络流量。
<!--
### Admission control {#admission-control}
[Admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
are plugins that intercept Kubernetes API requests and can validate or mutate
the requests based on specific fields in the request. Thoughtfully designing
these controllers helps to avoid unintended disruptions as Kubernetes APIs
change across version updates. For design considerations, see
[Admission Webhook Good Practices](/docs/concepts/cluster-administration/admission-webhooks-good-practices/).
-->
### 准入控制 {#admission-control}
[准入控制器](/zh-cn/docs/reference/access-authn-authz/admission-controllers/)是拦截
Kubernetes API 请求的插件,可以根据请求中的特定字段验证或修改请求。
精心设计这些控制器有助于避免 Kubernetes API 在版本更新过程中发生意外干扰。
有关设计注意事项,请参阅
[Admission Webhook 良好实践](/zh-cn/docs/concepts/cluster-administration/admission-webhooks-good-practices/)。
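For orientation, a minimal sketch of registering a validating webhook is shown below; the webhook name, Service, and path are hypothetical, and the policy logic itself lives in the Service that backs the webhook.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy             # hypothetical name
webhooks:
- name: pods.policy.example.com    # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail              # decide explicitly what happens when the webhook is unavailable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-system    # hypothetical namespace and Service backing the webhook
      name: example-webhook
      path: /validate
```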
<!--
### Auditing

View File

@ -65,7 +65,7 @@ evaluated on its merits.
- [ ] The [Role Based Access Control Good Practices](/docs/concepts/security/rbac-good-practices/)
are followed for guidance related to authentication and authorization.
-->
## 证和鉴权 {#authentication-authorization}
## 身份验证和鉴权 {#authentication-authorization}
- [ ] 在启动后 `system:masters` 组不用于用户或组件的身份验证。
- [ ] kube-controller-manager 运行时要启用 `--use-service-account-credentials` 参数。
@ -89,7 +89,7 @@ an admin user.
<!--
## Network security
- [ ] CNI plugins in-use supports network policies.
- [ ] CNI plugins in use support network policies.
- [ ] Ingress and egress network policies are applied to all workloads in the
cluster.
- [ ] Default network policies within each namespace, selecting all pods, denying
@ -115,16 +115,15 @@ plugins provide the functionality to
restrict network resources that pods may communicate with. This is most commonly done
through [Network Policies](/docs/concepts/services-networking/network-policies/)
which provide a namespaced resource to define rules. Default network policies
blocking everything egress and ingress, in each namespace, selecting all the
pods, can be useful to adopt an allow list approach, ensuring that no workloads
is missed.
that block all egress and ingress, in each namespace, selecting all pods, can be
useful to adopt an allow list approach to ensure that no workloads are missed.
-->
许多[容器网络接口Container Network InterfaceCNI插件](/zh-cn/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)提供了限制
Pod 可能与之通信的网络资源的功能。
这种限制通常通过[网络策略](/zh-cn/docs/concepts/services-networking/network-policies/)来完成,
网络策略提供了一种名字空间作用域的资源来定义规则。
在每个名字空间中,默认的网络策略会阻塞所有的出入站流量,并选择所有 Pod
采用允许列表的方法很有用,可以确保不遗漏任何工作负载。
这种采用允许列表的方法很有用,可以确保不遗漏任何工作负载。
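A default-deny policy of the kind described here can be written as below; the namespace name is a placeholder, and one such policy is typically created per namespace.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all     # hypothetical name
  namespace: my-namespace    # placeholder; repeat per namespace
spec:
  podSelector: {}            # an empty selector matches every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress                   # no rules are listed, so all ingress and egress traffic is denied
```

Workloads then only receive the traffic that later, more specific policies explicitly allow.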
<!--
Not all CNI plugins provide encryption in transit. If the chosen plugin lacks this
@ -145,12 +144,12 @@ should be unique to etcd.
<!--
External Internet access to the Kubernetes API server should be restricted to
not expose the API publicly. Be careful as many managed Kubernetes distribution
not expose the API publicly. Be careful, as many managed Kubernetes distributions
are publicly exposing the API server by default. You can then use a bastion host
to access the server.
The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API access
should be restricted and not publicly exposed, the defaults authentication and
should be restricted and not exposed publicly, the default authentication and
authorization settings, when no configuration file specified with the `--config`
flag, are overly permissive.
-->
@ -383,8 +382,8 @@ SELinux 仅在 Linux 节点上可用,
-->
## Pod 布局 {#pod-placement}
- [ ] Pod 布局是根据应用程序的敏感级别来完成的。
- [ ] 敏感应用程序在节点上隔离运行或使用特定的沙箱运行时运行。
- [ ] Pod 布局是根据应用的敏感级别来完成的。
- [ ] 敏感应用在节点上隔离运行或使用特定的沙箱运行时运行。
<!--
Pods that are on different tiers of sensitivity, for example, an application pod
@ -395,8 +394,8 @@ pivot within the cluster. This separation should be enforced to prevent pods
accidentally being deployed onto the same node. This could be enforced with the
following features:
-->
处于不同敏感级别的 Pod例如应用程序 Pod 和 Kubernetes API 服务器应该部署到不同的节点上。
节点隔离的目的是防止应用程序容器的逃逸,进而直接访问敏感度更高的应用,
处于不同敏感级别的 Pod例如应用程序 Pod 和 Kubernetes API 服务器应该部署到不同的节点上。
节点隔离的目的是防止应用容器的逃逸,进而直接访问敏感度更高的应用,
甚至轻松地改变集群工作机制。
这种隔离应该被强制执行,以防止 Pod 集合被意外部署到同一节点上。
可以通过以下功能实现:
@ -437,7 +436,7 @@ overhead.
: RuntimeClass 是一个用于选择容器运行时配置的特性,容器运行时配置用于运行 Pod 中的容器,
并以性能开销为代价提供或多或少的主机隔离能力。
## Secrets {#secrets}
## Secret {#secrets}
<!--
- [ ] ConfigMaps are not used to hold confidential data.
@ -591,20 +590,20 @@ Production.
- [ ] 保证准入链插件和 Webhook 的配置都是安全的。
<!--
Admission controllers can help to improve the security of the cluster. However,
Admission controllers can help improve the security of the cluster. However,
they can present risks themselves as they extend the API server and
[should be properly secured](/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/).
-->
准入控制器可以帮助提高集群的安全性。
然而,由于它们是对 API 服务器的扩展,其自身可能会带来风险,
所以它们[应该得到适当的保护](/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/)。
所以它们[应该得到适当的保护](/zh-cn/blog/2022/01/19/secure-your-admission-controllers-and-webhooks/)。
<!--
The following lists present a number of admission controllers that could be
considered to enhance the security posture of your cluster and application. It
includes controllers that may be referenced in other parts of this document.
-->
下面列出了一些准入控制器,可以考虑用这些控制器来增强集群和应用程序的安全状况。
下面列出了一些准入控制器,可以考虑用这些控制器来增强集群和应用的安全状况。
列表中包括了可能在本文档其他部分曾提及的控制器。
<!--
@ -641,7 +640,7 @@ attribute') of `system:masters`.
<!--
[`LimitRanger`](/docs/reference/access-authn-authz/admission-controllers/#limitranger)
: Enforce the LimitRange API constraints.
: Enforces the LimitRange API constraints.
-->
[`LimitRanger`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#limitranger)
: 强制执行 LimitRange API 约束。
@ -649,10 +648,10 @@ attribute') of `system:masters`.
<!--
[`MutatingAdmissionWebhook`](/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)
: Allows the use of custom controllers through webhooks, these controllers may
mutate requests that it reviews.
mutate requests that they review.
-->
[`MutatingAdmissionWebhook`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook)
: 允许通过 Webhook 使用自定义控制器,这些控制器可能会变更它所审查的请求。
: 允许通过 Webhook 使用自定义控制器,这些控制器可能会变更它所审查的请求。
<!--
[`PodSecurity`](/docs/reference/access-authn-authz/admission-controllers/#podsecurity)
@ -678,8 +677,8 @@ not mutate requests that it reviews.
: 允许通过 Webhook 使用自定义控制器,这些控制器不变更它所审查的请求。
<!--
The second group includes plugin that are not enabled by default but in general
availability state and recommended to improve your security posture:
The second group includes plugins that are not enabled by default but are in general
availability state and are recommended to improve your security posture:
-->
第二组包括默认情况下没有启用、但处于正式发布状态的插件,建议启用这些插件以改善你的安全状况:

View File

@ -79,7 +79,7 @@ request specific levels of resources (CPU and Memory). Claims can request specif
size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany,
ReadWriteMany, or ReadWriteOncePod, see [AccessModes](#access-modes)).
-->
**持久卷申领PersistentVolumeClaimPVC** 表达的是用户对存储的请求概念上与 Pod 类似。
**持久卷申领PersistentVolumeClaimPVC** 表达的是用户对存储的请求概念上与 Pod 类似。
Pod 会耗用节点资源,而 PVC 申领会耗用 PV 资源。Pod 可以请求特定数量的资源CPU
和内存)。同样 PVC 申领也可以请求特定的大小和访问模式
(例如,可以挂载为 ReadWriteOnce、ReadOnlyMany、ReadWriteMany 或 ReadWriteOncePod
@ -511,7 +511,8 @@ Events: <none>
The finalizer `external-provisioner.volume.kubernetes.io/finalizer` is added for CSI volumes.
The following is an example:
-->
终结器 `external-provisioner.volume.kubernetes.io/finalizer` 会被添加到 CSI 卷上。下面是一个例子:
终结器 `external-provisioner.volume.kubernetes.io/finalizer`
会被添加到 CSI 卷上。下面是一个例子:
```none
Name: pvc-2f0bab97-85a8-4552-8044-eb8be45cf48d
@ -657,16 +658,13 @@ the following types of volumes:
现在,对扩充 PVC 申领的支持默认处于被启用状态。你可以扩充以下类型的卷:
<!--
* azureFile (deprecated)
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* {{< glossary_tooltip text="csi" term_id="csi" >}} (including some CSI migrated
volme types)
* flexVolume (deprecated)
* rbd (deprecated)
* portworxVolume (deprecated)
-->
* azureFile已弃用
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* {{< glossary_tooltip text="csi" term_id="csi" >}}(包含一些 CSI 迁移的卷类型)
* flexVolume已弃用
* rbd已弃用
* portworxVolume已弃用
<!--
@ -826,7 +824,7 @@ administrator intervention.
1. 将绑定到 PVC 申领的 PV 卷标记为 `Retain` 回收策略。
2. 删除 PVC 对象。由于 PV 的回收策略为 `Retain`,我们不会在重建 PVC 时丢失数据。
3. 删除 PV 规约中的 `claimRef` 项,这样新的 PVC 可以绑定到该卷。
这一操作会使得 PV 卷变为 "可用Available"。
这一操作会使得 PV 卷变为"可用Available"。
4. 使用小于 PV 卷大小的尺寸重建 PVC设置 PVC 的 `volumeName` 字段为 PV 卷的名称。
这一操作将把新的 PVC 对象绑定到现有的 PV 卷。
5. 不要忘记恢复 PV 卷上设置的回收策略。
@ -959,6 +957,8 @@ Older versions of Kubernetes also supported the following in-tree PersistentVolu
(**not available** starting v1.31)
* `flocker` - Flocker storage.
(**not available** starting v1.25)
* `glusterfs` - GlusterFS storage.
(**not available** starting v1.26)
* `photonPersistentDisk` - Photon controller persistent disk.
(**not available** starting v1.15)
* `quobyte` - Quobyte volume.
@ -976,6 +976,8 @@ Older versions of Kubernetes also supported the following in-tree PersistentVolu
v1.31 之后**不可用**
* `flocker` - Flocker 存储。
v1.25 之后**不可用**
* `glusterfs` - GlusterFS 存储。
v1.26 之后**不可用**
* `photonPersistentDisk` - Photon 控制器持久化盘
v1.15 之后**不可用**
* `quobyte` - Quobyte 卷。
@ -1119,15 +1121,6 @@ The access modes are:
`ReadOnlyMany`
: the volume can be mounted as read-only by many nodes.
`ReadWriteMany`
: the volume can be mounted as read-write by many nodes.
`ReadWriteOncePod`
: {{< feature-state for_k8s_version="v1.29" state="stable" >}}
the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod
access mode if you want to ensure that only one pod across the whole cluster can
read that PVC or write to it.
-->
访问模式有:
@ -1139,6 +1132,16 @@ The access modes are:
`ReadOnlyMany`
: 卷可以被多个节点以只读方式挂载。
<!--
`ReadWriteMany`
: the volume can be mounted as read-write by many nodes.
`ReadWriteOncePod`
: {{< feature-state for_k8s_version="v1.29" state="stable" >}}
the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod
access mode if you want to ensure that only one pod across the whole cluster can
read that PVC or write to it.
-->
`ReadWriteMany`
: 卷可以被多个节点以读写方式挂载。
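As a hedged example, the claim below requests the `ReadWriteOncePod` access mode so that only a single Pod in the cluster can use the volume read-write; the claim name, size, and StorageClass are placeholders, and the class must be backed by a CSI driver.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim   # hypothetical name
spec:
  accessModes:
  - ReadWriteOncePod          # only one Pod cluster-wide may mount this volume read-write
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-csi  # hypothetical CSI-backed StorageClass
```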
@ -1160,8 +1163,8 @@ to these versions or greater:
* [csi-attacher:v3.3.0+](https://github.com/kubernetes-csi/external-attacher/releases/tag/v3.3.0)
* [csi-resizer:v1.3.0+](https://github.com/kubernetes-csi/external-resizer/releases/tag/v1.3.0)
-->
`ReadWriteOncePod` 访问模式仅适用于 {{< glossary_tooltip text="CSI" term_id="csi" >}} 卷和 Kubernetes v1.22+。
要使用此特性,你需要将以下
`ReadWriteOncePod` 访问模式仅适用于 {{< glossary_tooltip text="CSI" term_id="csi" >}}
卷和 Kubernetes v1.22+。要使用此特性,你需要将以下
[CSI 边车](https://kubernetes-csi.github.io/docs/sidecar-containers.html)更新为下列或更高版本:
- [csi-provisioner:v3.0.0+](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v3.0.0)
@ -1307,23 +1310,15 @@ Not all Persistent Volume types support mount options.
<!--
The following volume types support mount options:
* `azureFile`
* `cephfs` (**deprecated** in v1.28)
* `cinder` (**deprecated** in v1.18)
* `csi` (including CSI migrated volume types)
* `iscsi`
* `nfs`
* `rbd` (**deprecated** in v1.28)
* `vsphereVolume`
-->
以下卷类型支持挂载选项:
* `azureFile`
* `cephfs`(于 v1.28 中**弃用**
* `cinder`(于 v1.18 中**弃用**
* `csi`(包含 CSI 迁移的卷类型)
* `iscsi`
* `nfs`
* `rbd`(于 v1.28 中**弃用**
* `vsphereVolume`
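For illustration, the PersistentVolume sketch below passes a mount option to an NFS volume; the server address, export path, and option value are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                     # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - nfsvers=4.1                    # passed through to the mount; not validated by Kubernetes
  nfs:
    server: nfs.example.internal   # placeholder NFS server
    path: /exports/data            # placeholder export path
```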
<!--
Mount options are not validated. If a mount option is invalid, the mount fails.
@ -1404,7 +1399,8 @@ A PersistentVolume will be in one of the following phases:
<!--
You can see the name of the PVC bound to the PV using `kubectl describe persistentvolume <name>`.
-->
你可以使用 `kubectl describe persistentvolume <name>` 查看已绑定到 PV 的 PVC 的名称。
你可以使用 `kubectl describe persistentvolume <name>` 查看已绑定到
PV 的 PVC 的名称。
<!--
#### Phase transition timestamp
@ -1491,7 +1487,6 @@ in a pending state.
如果指定的 PV 已经绑定到另一个 PVC则绑定操作将卡在 Pending 状态。
<!--
### Resources
Claims, like Pods, can request specific quantities of a resource. In this case,
@ -1504,13 +1499,27 @@ applies to both volumes and claims.
申领和 Pod 一样,也可以请求特定数量的资源。在这个上下文中,请求的资源是存储。
卷和申领都使用相同的[资源模型](https://git.k8s.io/design-proposals-archive/scheduling/resources.md)。
{{< note >}}
<!--
For `Filesystem` volumes, the storage request refers to the "outer" volume size
(i.e. the allocated size from the storage backend).
This means that the writeable size may be slightly lower for providers that
build a filesystem on top of a block device, due to filesystem overhead.
This is especially visible with XFS, where many metadata features are enabled by default.
-->
对于 `Filesystem` 类型的卷,存储请求指的是“外部”卷的大小(即从存储后端分配的大小)。
这意味着,对于在块设备之上构建文件系统的提供商来说,由于文件系统开销,可写入的大小可能会略小。
这种情况在 XFS 文件系统中尤为明显,因为默认启用了许多元数据功能。
{{< /note >}}
<!--
### Selector
Claims can specify a
[label selector](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
to further filter the set of volumes. Only the volumes whose labels match the selector
can be bound to the claim. The selector can consist of two fields:
to further filter the set of volumes.
Only the volumes whose labels match the selector can be bound to the claim.
The selector can consist of two fields:
-->
### 选择算符 {#selector}
@ -1521,15 +1530,15 @@ can be bound to the claim. The selector can consist of two fields:
<!--
* `matchLabels` - the volume must have a label with this value
* `matchExpressions` - a list of requirements made by specifying key, list of values,
and operator that relates the key and values. Valid operators include In, NotIn,
Exists, and DoesNotExist.
and operator that relates the key and values.
Valid operators include `In`, `NotIn`, `Exists`, and `DoesNotExist`.
All of the requirements, from both `matchLabels` and `matchExpressions`, are
ANDed together they must all be satisfied in order to match.
-->
* `matchLabels` - 卷必须包含带有此值的标签
* `matchExpressions` - 通过设定键key、值列表和操作符operator
来构造的需求。合法的操作符有 In、NotIn、Exists 和 DoesNotExist。
来构造的需求。合法的操作符有 `In``NotIn``Exists``DoesNotExist`
来自 `matchLabels``matchExpressions` 的所有需求都按逻辑与的方式组合在一起。
这些需求都必须被满足才被视为匹配。
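As a sketch, the claim below uses both selector fields; the label keys and values are examples only, and a PV must satisfy all of them to be bound.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: selective-claim       # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      release: stable         # the PV must carry this exact label
    matchExpressions:
    - key: environment        # and also satisfy this expression (requirements are ANDed)
      operator: In
      values: ["prod"]
```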
@ -1540,8 +1549,8 @@ ANDed together they must all be satisfied in order to match.
A claim can request a particular class by specifying the name of a
[StorageClass](/docs/concepts/storage/storage-classes/)
using the attribute `storageClassName`.
Only PVs of the requested class, ones with the same `storageClassName` as the PVC, can
be bound to the PVC.
Only PVs of the requested class, ones with the same `storageClassName` as the PVC,
can be bound to the PVC.
-->
### 类 {#class}
@ -1553,8 +1562,8 @@ be bound to the PVC.
<!--
PVCs don't necessarily have to request a class. A PVC with its `storageClassName` set
equal to `""` is always interpreted to be requesting a PV with no class, so it
can only be bound to PVs with no class (no annotation or one set equal to
`""`). A PVC with no `storageClassName` is not quite the same and is treated differently
can only be bound to PVs with no class (no annotation or one set equal to `""`).
A PVC with no `storageClassName` is not quite the same and is treated differently
by the cluster, depending on whether the
[`DefaultStorageClass` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
is turned on.
@ -1564,24 +1573,22 @@ PVC 申领不必一定要请求某个类。如果 PVC 的 `storageClassName` 属
PV 卷(未设置注解或者注解值为 `""` 的 PersistentVolumePV对象在系统中不会被删除
因为这样做可能会引起数据丢失)。未设置 `storageClassName` 的 PVC 与此大不相同,
也会被集群作不同处理。具体筛查方式取决于
[`DefaultStorageClass` 准入控制器插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
是否被启用。
[`DefaultStorageClass` 准入控制器插件](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)是否被启用。
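
To illustrate the difference, here is a hedged sketch of two claims: one requesting a hypothetical named class, and one explicitly requesting no class at all:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim             # hypothetical name
spec:
  storageClassName: fast-ssd   # hypothetical class; binds only to PVs with the same storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: classless-claim        # hypothetical name
spec:
  storageClassName: ""         # explicitly no class; binds only to PVs that also have no class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```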
<!--
* If the admission plugin is turned on, the administrator may specify a
default StorageClass. All PVCs that have no `storageClassName` can be bound only to
PVs of that default. Specifying a default StorageClass is done by setting the
annotation `storageclass.kubernetes.io/is-default-class` equal to `true` in
a StorageClass object. If the administrator does not specify a default, the
cluster responds to PVC creation as if the admission plugin were turned off. If more than one
default StorageClass is specified, the newest default is used when the
PVC is dynamically provisioned.
* If the admission plugin is turned off, there is no notion of a default
StorageClass. All PVCs that have `storageClassName` set to `""` can be
bound only to PVs that have `storageClassName` also set to `""`.
However, PVCs with missing `storageClassName` can be updated later once
default StorageClass becomes available. If the PVC gets updated it will no
longer bind to PVs that have `storageClassName` also set to `""`.
* If the admission plugin is turned on, the administrator may specify a default StorageClass.
All PVCs that have no `storageClassName` can be bound only to PVs of that default.
Specifying a default StorageClass is done by setting the annotation
`storageclass.kubernetes.io/is-default-class` equal to `true` in a StorageClass object.
If the administrator does not specify a default, the cluster responds to PVC creation
as if the admission plugin were turned off.
If more than one default StorageClass is specified, the newest default is used when
the PVC is dynamically provisioned.
* If the admission plugin is turned off, there is no notion of a default StorageClass.
All PVCs that have `storageClassName` set to `""` can be bound only to PVs
that have `storageClassName` also set to `""`.
However, PVCs with missing `storageClassName` can be updated later once default StorageClass becomes available.
If the PVC gets updated it will no longer bind to PVs that have `storageClassName` also set to `""`.
-->
* 如果准入控制器插件被启用,则管理员可以设置一个默认的 StorageClass。
所有未设置 `storageClassName` 的 PVC 都只能绑定到隶属于默认存储类的 PV 卷。
@ -1644,7 +1651,8 @@ in your cluster. In this case, the new PVC creates as you defined it, and the
-->
你可以创建 PersistentVolumeClaim而无需为新 PVC 指定 `storageClassName`
即使你的集群中不存在默认 StorageClass你也可以这样做。
在这种情况下,新的 PVC 会按照你的定义进行创建,并且在默认值可用之前,该 PVC 的 `storageClassName` 保持不设置。
在这种情况下,新的 PVC 会按照你的定义进行创建,并且在默认值可用之前,该 PVC 的
`storageClassName` 保持不设置。
<!--
When a default StorageClass becomes available, the control plane identifies any
@ -1654,8 +1662,8 @@ updates those PVCs to set `storageClassName` to match the new default StorageCla
If you have an existing PVC where the `storageClassName` is `""`, and you configure
a default StorageClass, then this PVC will not get updated.
-->
当一个默认的 StorageClass 变得可用时,控制平面会识别所有未设置 `storageClassName` 的现有 PVC。
对于 `storageClassName` 为空值或没有此主键的 PVC
当一个默认的 StorageClass 变得可用时,控制平面会识别所有未设置 `storageClassName`
的现有 PVC。对于 `storageClassName` 为空值或没有此主键的 PVC
控制平面会更新这些 PVC 以设置其 `storageClassName` 与新的默认 StorageClass 匹配。
如果你有一个现有的 PVC其中 `storageClassName``""`
并且你配置了默认 StorageClass则此 PVC 将不会得到更新。
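
For reference, marking a class as the cluster default uses the annotation mentioned above. A minimal sketch, with a placeholder name and provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                                           # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"    # marks this class as the cluster default
provisioner: example.com/provisioner                       # placeholder provisioner
```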
@ -1750,23 +1758,15 @@ applicable:
以下卷插件支持原始块卷,包括其动态制备(如果支持的话)的卷:
<!--
* CSI
* CSI (including some CSI migrated volume types)
* FC (Fibre Channel)
* iSCSI
* Local volume
* OpenStack Cinder
* RBD (deprecated)
* RBD (Ceph Block Device; deprecated)
* VsphereVolume
-->
* CSI
* CSI包含一些 CSI 迁移的卷类型)
* FC光纤通道
* iSCSI
* Local 卷
* OpenStack Cinder
* RBD已弃用
* RBDCeph 块设备,已弃用)
* VsphereVolume
<!--
### PersistentVolume using a Raw Block Volume {#persistent-volume-using-a-raw-block-volume}
@ -1874,7 +1874,7 @@ not given the combinations: Volume binding matrix for statically provisioned vol
| Filesystem | Block | NO BIND |
| Filesystem | unspecified | BIND |
-->
| PV volumeMode | PVC volumeMode | Result |
| PV volumeMode | PVC volumeMode | 结果 |
| --------------|:---------------:| ----------------:|
| 未指定 | 未指定 | 绑定 |
| 未指定 | Block | 不绑定 |
@ -1945,7 +1945,8 @@ only available for CSI volume plugins.
-->
## 卷克隆 {#volume-cloning}
[卷克隆](/zh-cn/docs/concepts/storage/volume-pvc-datasource/)功能特性仅适用于 CSI 卷插件。
[卷克隆](/zh-cn/docs/concepts/storage/volume-pvc-datasource/)功能特性仅适用于
CSI 卷插件。
<!--
### Create PersistentVolumeClaim from an existing PVC {#create-persistent-volume-claim-from-an-existing-pvc}
@ -1994,7 +1995,7 @@ kube-apiserver 和 kube-controller-manager 启用 `AnyVolumeDataSource`
卷填充器利用了 PVC 规约字段 `dataSourceRef`
不像 `dataSource` 字段只能包含对另一个持久卷申领或卷快照的引用,
`dataSourceRef` 字段可以包含对同一名字空间中任何对象的引用(不包含除 PVC 以外的核心资源)。
对于启用了特性门控的集群,使用 `dataSourceRef``dataSource` 更好。
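
A hedged sketch of a claim that uses `dataSourceRef`; the API group, kind, and object name below refer to an entirely hypothetical custom resource served by some volume populator:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  dataSourceRef:                   # unlike dataSource, may reference non-core custom objects
    apiGroup: example.com          # hypothetical API group handled by a volume populator
    kind: SampleDataSource         # hypothetical custom resource kind
    name: sample-data              # hypothetical object in the same namespace
```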
<!--
View File
@ -3,8 +3,7 @@ title: 特定于节点的卷数限制
content_type: concept
weight: 90
---
<!-- ---
<!--
reviewers:
- jsafrane
- saad-ali
@ -13,8 +12,7 @@ reviewers:
title: Node-specific Volume Limits
content_type: concept
weight: 90
---
-->
-->
<!-- overview -->
@ -22,7 +20,7 @@ weight: 90
This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.
-->
此页面描述了各个云供应商可关联至一个节点的最大卷数。
此页面描述了各个云供应商可挂接至一个节点的最大卷数。
<!--
Cloud providers like Google, Amazon, and Microsoft typically have a limit on
@ -30,8 +28,8 @@ how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.
-->
谷歌、亚马逊和微软等云供应商通常对可以关联到节点的卷数量进行限制。
Kubernetes 需要尊重这些限制。否则,在节点上调度的 Pod 可能会卡住去等待卷的关联
谷歌、亚马逊和微软等云供应商通常对可以挂接到节点的卷数量进行限制。
Kubernetes 需要尊重这些限制。否则,在节点上调度的 Pod 可能会卡住去等待卷的挂接
<!-- body -->
@ -41,9 +39,10 @@ Kubernetes 需要尊重这些限制。否则,在节点上调度的 Pod 可能
The Kubernetes scheduler has default limits on the number of volumes
that can be attached to a Node:
-->
## Kubernetes 的默认限制
## Kubernetes 的默认限制 {#kubernetes-default-limits}
Kubernetes 调度器对挂接到一个节点的卷数有默认限制:
The Kubernetes 调度器对关联于一个节点的卷数有默认限制:
<!--
<table>
<tr><th>Cloud service</th><th>Maximum volumes per Node</th></tr>
@ -73,10 +72,10 @@ the limit you set.
The limit applies to the entire cluster, so it affects all Nodes.
-->
## 自定义限制
## 自定义限制 {#custom-limits}
你可以通过设置 `KUBE_MAX_PD_VOLS` 环境变量的值来设置这些限制,然后再启动调度器。
CSI 驱动程序可能具有不同的过程,关于如何自定义其限制请参阅相关文档。
各个 CSI 驱动可能采用不同的步骤,关于如何自定义其限制请参阅相关文档。
如果设置的限制高于默认限制,请谨慎使用。请参阅云提供商的文档以确保节点可支持你设置的限制。
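
As a sketch only: on clusters where kube-scheduler runs as a static Pod, the environment variable could be injected through that manifest. The file layout, image tag, and the value 40 are assumptions, not recommendations; real manifests carry more flags and mounts:

```yaml
# Trimmed fragment of a kube-scheduler static Pod manifest (layout varies by install method).
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
    - name: kube-scheduler
      image: registry.k8s.io/kube-scheduler:v1.32.0   # placeholder version
      command:
        - kube-scheduler
      env:
        - name: KUBE_MAX_PD_VOLS   # cluster-wide per-node attach limit, read at scheduler startup
          value: "40"
```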
@ -85,7 +84,7 @@ CSI 驱动程序可能具有不同的过程,关于如何自定义其限制请
<!--
## Dynamic volume limits
-->
## 动态卷限制
## 动态卷限制 {#dynamic-volume-limits}
{{< feature-state state="stable" for_k8s_version="v1.17" >}}
@ -108,7 +107,7 @@ Dynamic volume limits are supported for following volume types.
For volumes managed by in-tree volume plugins, Kubernetes automatically determines the Node
type and enforces the appropriate maximum number of volumes for the node. For example:
-->
对于由内插件管理的卷Kubernetes 会自动确定节点类型并确保节点上可关联的卷数目合规。例如:
对于由内插件管理的卷Kubernetes 会自动确定节点类型并确保节点上可挂接的卷数目合规。例如:
<!--
* On
@ -122,25 +121,25 @@ volumes to be attached to a Node. For other instance types on
Kubernetes allows 39 volumes to be attached to a Node.
* On Azure, up to 64 disks can be attached to a node, depending on the node type. For more details, refer to [Sizes for virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes).
-->
* 在 <a href="https://cloud.google.com/compute/">Google Compute Engine</a> 环境中,
[根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将 127 个卷挂接到节点。
* 对于 M5、C5、R5、T3 和 Z1D 实例类型的 Amazon EBS 磁盘Kubernetes 仅允许 25 个卷挂接到节点。
对于 <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a> 上的其他实例类型,
Kubernetes 允许 39 个卷挂接至节点。
* 在 Azure 环境中,根据节点类型,最多 64 个磁盘可以挂接至一个节点。
更多详细信息,请参阅 [Azure 虚拟机的数量大小](https://docs.microsoft.com/zh-cn/azure/virtual-machines/windows/sizes)。
<!--
* If a CSI storage driver advertises a maximum number of volumes for a Node (using `NodeGetInfo`), the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} honors that limit.
Refer to the [CSI specifications](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo) for details.
* For volumes managed by in-tree plugins that have been migrated to a CSI driver, the maximum number of volumes will be the one reported by the CSI driver.
-->
* 在
<a href="https://cloud.google.com/compute/">Google Compute Engine</a>环境中,
[根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将 127 个卷关联到节点。
* 如果 CSI 存储驱动(使用 `NodeGetInfo`)为节点通告卷数上限,则
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} 将遵守该限制值。
参考 [CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo)获取更多详细信息。
* 对于 M5、C5、R5、T3 和 Z1D 类型实例的 Amazon EBS 磁盘Kubernetes 仅允许 25 个卷关联到节点。
对于 ec2 上的其他实例类型
<a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a>
Kubernetes 允许 39 个卷关联至节点。
* 在 Azure 环境中, 根据节点类型,最多 64 个磁盘可以关联至一个节点。
更多详细信息,请参阅 [Azure 虚拟机的数量大小](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes)。
* 如果 CSI 存储驱动程序(使用 `NodeGetInfo` )为节点通告卷数上限,则 {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} 将遵守该限制值。
参考 [CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo) 获取更多详细信息。
* 对于由已迁移到 CSI 驱动程序的树内插件管理的卷,最大卷数将是 CSI 驱动程序报告的卷数。
* 对于由已迁移到 CSI 驱动的树内插件管理的卷,最大卷数将是 CSI 驱动报告的卷数。
View File
@ -149,7 +149,18 @@ The kubelet will pick host UIDs/GIDs a pod is mapped to, and will do so in a way
to guarantee that no two pods on the same node use the same mapping.
The `runAsUser`, `runAsGroup`, `fsGroup`, etc. fields in the `pod.spec` always
refer to the user inside the container.
refer to the user inside the container. These users will be used for volume
mounts (specified in `pod.spec.volumes`) and therefore the host UID/GID will not
have any effect on writes/reads from volumes the pod can mount. In other words,
the inodes created/read in volumes mounted by the pod will be the same as if the
pod wasn't using user namespaces.
This way, a pod can easily enable and disable user namespaces (without affecting
its volume's file ownerships) and can also share volumes with pods without user
namespaces by just setting the appropriate users inside the container
(`RunAsUser`, `RunAsGroup`, `fsGroup`, etc.). This applies to any volume the pod
can mount, including `hostPath` (if the pod is allowed to mount `hostPath`
volumes).
The valid UIDs/GIDs when this feature is enabled is the range 0-65535. This
applies to files and processes (`runAsUser`, `runAsGroup`, etc.).
@ -158,7 +169,17 @@ kubelet 将挑选 Pod 所映射的主机 UID/GID
并以此保证同一节点上没有两个 Pod 使用相同的方式进行映射。
`pod.spec` 中的 `runAsUser`、`runAsGroup`、`fsGroup` 等字段总是指的是容器内的用户。
启用该功能时,有效的 UID/GID 在 0-65535 范围内。这以限制适用于文件和进程(`runAsUser`、`runAsGroup` 等)。
这些用户将用于卷挂载(在 `pod.spec.volumes` 中指定),
因此,主机上的 UID/GID 不会影响 Pod 挂载卷的读写操作。
换句话说,由 Pod 挂载卷中创建或读取的 inode将与 Pod 未使用用户命名空间时相同。
通过这种方式Pod 可以轻松启用或禁用用户命名空间(不会影响其卷中文件的所有权),
并且可以通过在容器内部设置适当的用户(`runAsUser`、`runAsGroup`、`fsGroup` 等),
即可与没有用户命名空间的 Pod 共享卷。这一点适用于 Pod 可挂载的任何卷,
包括 `hostPath`(前提是允许 Pod 挂载 `hostPath` 卷)。
启用该功能时,有效的 UID/GID 在 0-65535 范围内。
这适用于文件和进程(`runAsUser`、`runAsGroup` 等)。
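
A minimal sketch of a Pod that opts into a user namespace while still setting in-container IDs for its mounted volume; the name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo              # hypothetical name
spec:
  hostUsers: false               # request a user namespace for this Pod
  securityContext:
    runAsUser: 1000              # IDs as seen inside the container...
    runAsGroup: 1000
    fsGroup: 1000                # ...also used for ownership of the mounted volume
  containers:
    - name: app
      image: busybox:1.36        # placeholder image
      command: ["sh", "-c", "id && ls -ln /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
```

Because volume ownership follows the in-container IDs, this Pod can share the same volume with a Pod that does not use user namespaces, as the text above describes.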
<!--
Files using a UID/GID outside this range will be seen as belonging to the
View File
@ -818,7 +818,7 @@ placeholder text for the search form:
例如,这是搜索表单的德语占位符文本:
```toml
[ui_search_placeholder]
[ui_search]
other = "Suchen"
```
Some files were not shown because too many files have changed in this diff.