commit e2dc67dbc1
|
@ -140,6 +140,7 @@ aliases:
|
|||
- chenrui333
|
||||
- howieyuen
|
||||
- mengjiao-liu
|
||||
- my-git9
|
||||
- SataQiu
|
||||
- Sea-n
|
||||
- tanjunchen
|
||||
|
@ -147,6 +148,7 @@ aliases:
|
|||
- windsonsea
|
||||
- xichengliudui
|
||||
sig-docs-zh-reviews: # PR reviews for Chinese content
|
||||
- asa3311
|
||||
- chenrui333
|
||||
- chenxuc
|
||||
- howieyuen
|
||||
|
|
|
@ -39,10 +39,10 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten:
|
|||
{{< note >}}
|
||||
Um eine spezifische Version herunterzuladen, ersetze `$(curl -L -s https://dl.k8s.io/release/stable.txt)` mit der spezifischen Version.
|
||||
|
||||
Um zum Beispiel Version {{< param "fullversion" >}} auf Linux herunterzuladen:
|
||||
Um zum Beispiel Version {{< skew currentPatchVersion >}} auf Linux herunterzuladen:
|
||||
|
||||
```bash
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/linux/amd64/kubectl
|
||||
```
|
||||
{{< /note >}}
|
||||
|
||||
|
@ -139,7 +139,7 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten:
|
|||
2. Den öffentlichen Google Cloud Signaturschlüssel herunterladen:
|
||||
|
||||
```shell
|
||||
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
|
||||
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
|
||||
```
|
||||
|
||||
3. Kubernetes zum `apt` Repository hinzufügen:
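   Zur Veranschaulichung: the repository entry added in this step usually looks like the following. This is only an illustration; the suite name `kubernetes-xenial` and the `apt.kubernetes.io` mirror matched these docs at the time, but check the current install guide. It reuses the keyring downloaded in step 2:

   ```bash
   echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
   ```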
|
||||
|
@ -170,7 +170,7 @@ name=Kubernetes
|
|||
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
EOF
|
||||
sudo yum install -y kubectl
|
||||
```
|
||||
|
|
|
@ -44,16 +44,16 @@ Um kubectl auf macOS zu installieren, gibt es die folgenden Möglichkeiten:
|
|||
{{< note >}}
|
||||
Um eine spezifische Version herunterzuladen, ersetze `$(curl -L -s https://dl.k8s.io/release/stable.txt)` mit der spezifischen Version
|
||||
|
||||
Um zum Beispiel Version {{< param "fullversion" >}} auf Intel macOS herunterzuladen:
|
||||
Um zum Beispiel Version {{< skew currentPatchVersion >}} auf Intel macOS herunterzuladen:
|
||||
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl"
|
||||
curl -LO "https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/amd64/kubectl"
|
||||
```
|
||||
|
||||
Für macOS auf Apple Silicon (z.B. M1/M2):
|
||||
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/arm64/kubectl"
|
||||
curl -LO "https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/arm64/kubectl"
|
||||
```
|
||||
|
||||
{{< /note >}}
|
||||
|
|
|
@ -197,10 +197,10 @@ Sie können kubectl als Teil des Google Cloud SDK installieren.
|
|||
|
||||
Um eine bestimmte Version herunterzuladen, ersetzen Sie den Befehlsteil `$(curl -LS https://dl.k8s.io/release/stable.txt)` mit der jeweiligen Version.
|
||||
|
||||
Um beispielsweise die Version {{< param "fullversion" >}} auf macOS herunterzuladen, verwenden Sie den folgenden Befehl:
|
||||
Um beispielsweise die Version {{< skew currentPatchVersion >}} auf macOS herunterzuladen, verwenden Sie den folgenden Befehl:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/darwin/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/darwin/amd64/kubectl
|
||||
```
|
||||
|
||||
2. Machen Sie die kubectl-Binärdatei ausführbar.
|
||||
|
@ -225,10 +225,10 @@ Sie können kubectl als Teil des Google Cloud SDK installieren.
|
|||
|
||||
Um eine bestimmte Version herunterzuladen, ersetzen Sie den Befehlsteil `$(curl -LS https://dl.k8s.io/release/stable.txt)` mit der jeweiligen Version.
|
||||
|
||||
Um beispielsweise die Version {{< param "fullversion" >}} auf Linux herunterzuladen, verwenden Sie den folgenden Befehl:
|
||||
Um beispielsweise die Version {{< skew currentPatchVersion >}} auf Linux herunterzuladen, verwenden Sie den folgenden Befehl:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/linux/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/linux/amd64/kubectl
|
||||
```
|
||||
|
||||
2. Machen Sie die kubectl-Binärdatei ausführbar.
|
||||
|
@ -244,12 +244,12 @@ Sie können kubectl als Teil des Google Cloud SDK installieren.
|
|||
```
|
||||
{{% /tab %}}
|
||||
{{% tab name="Windows" %}}
|
||||
1. Laden Sie das aktuellste Release {{< param "fullversion" >}} von [diesem link](https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe) herunter.
|
||||
1. Laden Sie das aktuellste Release {{< skew currentPatchVersion >}} von [diesem link](https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe) herunter.
|
||||
|
||||
Oder, sofern Sie `curl` installiert haben, verwenden Sie den folgenden Befehl:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe
|
||||
curl -LO https://dl.k8s.io/release/v{{< skew currentPatchVersion >}}/bin/windows/amd64/kubectl.exe
|
||||
```
|
||||
|
||||
Informationen zur aktuellen stabilen Version (z. B. für scripting) finden Sie unter [https://dl.k8s.io/release/stable.txt](https://dl.k8s.io/release/stable.txt).
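   For scripted installs, the contents of `stable.txt` can be substituted directly into the download URL. A minimal sketch, assuming `curl` is available:

   ```bash
   # Download the kubectl.exe that matches the current stable release.
   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/windows/amd64/kubectl.exe"
   ```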
|
||||
|
|
|
@ -127,13 +127,13 @@ Note how we set those parameters so they are used only when you deploy to GKE. Y
|
|||
|
||||
After training, you [export your model](https://www.tensorflow.org/serving/serving_basic) to a serving location.
|
||||
|
||||
Kubeflow also includes a serving package as well. In a separate example, we trained a standard Inception model, and stored the trained model in a bucket we’ve created called ‘gs://kubeflow-models’ with the path ‘/inception’.
|
||||
Kubeflow also includes a serving package as well.
|
||||
|
||||
To deploy the trained model for serving, execute the following:
|
||||
|
||||
```
|
||||
ks generate tf-serving inception --name=inception
|
||||
---namespace=default --model\_path=gs://kubeflow-models/inception
|
||||
---namespace=default --model\_path=gs://$bucket_name/$model_loc
|
||||
ks apply gke -c inception
|
||||
```
|
||||
|
||||
|
@ -170,3 +170,6 @@ Thank you for your support so far, we could not be more excited!
|
|||
|
||||
_Jeremy Lewi & David Aronchick_
|
||||
Google
|
||||
|
||||
Note:
|
||||
* This article was amended in June 2023 to update the trained model bucket location.
|
||||
|
|
|
@ -116,7 +116,6 @@ are a bunch of registries that already supports OCI artifacts:
|
|||
- [Amazon Elastic Container Registry](https://aws.amazon.com/ecr)
|
||||
- [Google Artifact Registry](https://cloud.google.com/artifact-registry)
|
||||
- [GitHub Packages container registry](https://docs.github.com/en/packages/guides/about-github-container-registry)
|
||||
- [Bundle Bar](https://bundle.bar/docs/supported-clients/oras)
|
||||
- [Docker Hub](https://hub.docker.com)
|
||||
- [Zot Registry](https://zotregistry.io)
|
||||
|
||||
|
|
|
@ -0,0 +1,94 @@
|
|||
---
|
||||
layout: blog
|
||||
title: "dl.k8s.io to adopt a Content Delivery Network"
|
||||
date: 2023-06-09
|
||||
slug: dl-adopt-cdn
|
||||
---
|
||||
|
||||
**Authors**: Arnaud Meukam (VMware), Hannah Aubry (Fastly), Frederico
|
||||
Muñoz (SAS Institute)
|
||||
|
||||
We're happy to announce that dl.k8s.io, home of the official Kubernetes
|
||||
binaries, will soon be powered by [Fastly](https://www.fastly.com).
|
||||
|
||||
Fastly is known for its high-performance content delivery network (CDN) designed
|
||||
to deliver content quickly and reliably around the world. With its powerful
|
||||
network, Fastly will help us deliver official Kubernetes binaries to users
|
||||
faster and more reliably than ever before.
|
||||
|
||||
The decision to use Fastly was made after an extensive evaluation process in
|
||||
which we carefully evaluated several potential content delivery network
|
||||
providers. Ultimately, we chose Fastly because of their commitment to the open
|
||||
internet and proven track record of delivering fast and secure digital
|
||||
experiences to some of the most known open source projects (through their [Fast
|
||||
Forward](https://www.fastly.com/fast-forward) program).
|
||||
|
||||
## What you need to know about this change
|
||||
|
||||
- On Monday, July 24th, the IP addresses and backend storage associated with the
|
||||
dl.k8s.io domain name will change.
|
||||
- The change will not impact the vast majority of users since the domain
|
||||
name will remain the same.
|
||||
- If you restrict access to specific IP ranges, access to the dl.k8s.io domain
|
||||
could stop working.
|
||||
|
||||
If you think you may be impacted or want to know more about this change,
|
||||
please keep reading.
|
||||
|
||||
## Why are we making this change
|
||||
|
||||
The official Kubernetes binaries site, dl.k8s.io, is used by thousands of users
|
||||
all over the world, and currently serves _more than 5 petabytes of binaries each
|
||||
month_. This change will allow us to improve access to those resources by
|
||||
leveraging a world-wide CDN.
|
||||
|
||||
## Does this affect dl.k8s.io only, or are other domains also affected?
|
||||
|
||||
Only dl.k8s.io will be affected by this change.
|
||||
|
||||
## My company specifies the domain names that we are allowed to access. Will this change affect the domain name?
|
||||
|
||||
No, the domain name (`dl.k8s.io`) will remain the same: no change will be
|
||||
necessary, and access to the Kubernetes release binaries site should not be
|
||||
affected.
|
||||
|
||||
## My company uses some form of IP filtering. Will this change affect access to the site?
|
||||
|
||||
If IP-based filtering is in place, it’s possible that access to the site will be
|
||||
affected when the new IP addresses become active.
|
||||
|
||||
## If my company doesn’t use IP addresses to restrict network traffic, do we need to do anything?
|
||||
|
||||
No, the switch to the CDN should be transparent.
|
||||
|
||||
## Will there be a dual running period?
|
||||
|
||||
**No, it is a cutover.** You can, however, test your networks right now to check
|
||||
if they can route to the new public IP addresses from Fastly. You should add
|
||||
the new IPs to your network's `allowlist` before July 24th. Once the transfer is
|
||||
complete, ensure your networks use the new IP addresses to connect to
|
||||
the `dl.k8s.io` service.
|
||||
|
||||
## What are the new IP addresses?
|
||||
|
||||
If you need to manage an allow list for downloads, you can get the ranges to
|
||||
match from the Fastly API, in JSON: [public IP address
|
||||
ranges](https://api.fastly.com/public-ip-list). You don't need any credentials
|
||||
to download that list of ranges.
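For example, one way to extract just the IPv4 ranges from that list, assuming `curl` and `jq` are installed (the `addresses` field name comes from the Fastly API response):

```bash
# Fetch Fastly's public IP list and print the IPv4 CIDR ranges, one per line.
curl -s https://api.fastly.com/public-ip-list | jq -r '.addresses[]'
```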
|
||||
|
||||
## What next steps would you recommend?
|
||||
|
||||
If you have IP-based filtering in place, we recommend the following course of
|
||||
action **before July, 24th**:
|
||||
|
||||
- Add the new IP addresses to your allowlist.
|
||||
- Conduct tests with your networks/firewall to ensure your networks can route to
|
||||
the new IP addresses.
|
||||
|
||||
After the change is made, we recommend double-checking that HTTP calls are
|
||||
accessing dl.k8s.io with the new IP addresses.
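A minimal spot check, assuming `dig` and `curl` are available, might look like this:

```bash
# Show the addresses dl.k8s.io resolves to right now ...
dig +short dl.k8s.io
# ... and confirm that a download still works end to end.
curl -fsSL https://dl.k8s.io/release/stable.txt
```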
|
||||
|
||||
## What should I do if I detect some abnormality after the cutover date?
|
||||
|
||||
If you encounter any weirdness during binaries download, please [open an
|
||||
issue](https://github.com/kubernetes/k8s.io/issues/new/choose).
|
Binary file not shown.
|
@ -0,0 +1 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 450 450"><defs><style>.cls-1{fill:#0fd15d;}.cls-2{fill:#232323;}</style></defs><title>DaoCloud_logo</title><g id="Layer_9" data-name="Layer 9"><polygon class="cls-1" points="225 105.24 279.4 78.41 224.75 54.57 170.11 78.17 225 105.24"/><polygon class="cls-1" points="169.11 178.76 166.38 134.55 206.12 114.68 149.99 86.86 113.72 102.76 118.44 145.23 169.11 178.76"/><polygon class="cls-1" points="331.06 144.98 336.28 102.76 299.52 86.86 243.88 114.68 284.12 134.55 280.89 178.76 331.06 144.98"/><polygon class="cls-1" points="279.4 199.88 275.42 257.5 321.62 222.73 328.58 166.84 279.4 199.88"/><polygon class="cls-1" points="170.35 199.88 121.17 166.84 127.63 222.98 174.08 257.5 170.35 199.88"/><polygon class="cls-1" points="262.01 211.55 225.25 236.39 187.99 211.55 191.72 270.67 225 295.51 257.79 270.67 262.01 211.55"/></g><g id="Layer_3" data-name="Layer 3"><path class="cls-2" d="M50.54,405.08V339.86H74.48c13.33,0,21.62,8.2,21.62,21.25v22.73c0,13-8.29,21.24-21.62,21.24ZM63,394H74.48c5.87,0,9-3.82,9-11.27v-20.5c0-7.45-3.17-11.27-9-11.27H63Z"/><path class="cls-2" d="M154.15,375.36c0-12.2,7.74-19.66,20.31-19.66s20.22,7.46,20.22,19.66v11C194.77,398.56,187,406,174.46,406s-20.31-7.45-20.31-19.66Zm11.93,10c0,6.8,3,10.34,8.38,10.34s8.3-3.54,8.3-10.34v-8.95c0-6.8-3-10.34-8.3-10.34s-8.29,3.54-8.29,10.34Z"/><path class="cls-2" d="M144.32,405.08H134.07l-.67-4.65c-2.84,3.22-8.05,5.6-13.76,5.6-9.3,0-16.23-6.63-16.23-14.55,0-20.6,28.5-20.76,28.5-20.76,0-1.59-2.26-6.46-12.52-6.46a30.26,30.26,0,0,0-8.86,1.63l-2.32-6.4a39.34,39.34,0,0,1,16.12-3.8c12.62,0,20,5.27,20,17.31Zm-22.18-8.41a12.57,12.57,0,0,0,10-5.55l.05-13.21c-1.94,0-16.61,1.26-16.61,12.41A6.14,6.14,0,0,0,122.14,396.67Z"/><path class="cls-2" d="M226.32,406.36c-12.94,0-22.38-8.16-22.38-21V360.51c0-12.84,9.44-20.91,22.38-20.91,8.16,0,13,2.29,18.52,6.14l-4.77,8.62c-4.67-2.57-7.7-4-13.75-4s-10,4.31-10,11.28v22.65c0,7.25,4,11.28,10,11.28s9.08-1.47,13.75-4l4.77,8.71C239.34,404.07,234.48,406.36,226.32,406.36Z"/><path class="cls-2" d="M252.73,405.08V336.29h11.74v68.79Z"/><path class="cls-2" d="M274.74,375.82c0-12,7.62-19.35,20-19.35s19.9,7.34,19.9,19.35v10.82c.09,12-7.61,19.36-19.9,19.36s-20-7.34-20-19.36Zm11.74,9.82c0,6.69,2.94,10.18,8.26,10.18s8.16-3.49,8.16-10.18v-8.81c0-6.69-2.93-10.18-8.16-10.18s-8.17,3.49-8.17,10.18Z"/><path class="cls-2" d="M339.22,406c-10.91,0-14.49-6.61-14.49-18.71v-29.9h11.74v28.8c0,6.78,1.38,9.72,6,9.72a10.28,10.28,0,0,0,8.9-5.41V357.39h11.83v47.69H352.89l-.92-4.49C348.58,403.7,343.72,406,339.22,406Z"/><path class="cls-2" d="M402.78,401a18.88,18.88,0,0,1-12.93,5c-10.73,0-16.32-7.34-16.32-19.36V375c0-12.11,5.41-18.53,16-18.53a28.87,28.87,0,0,1,11.65,2.2V336.29H413v68.79h-9.63Zm-1.65-32.28a20.7,20.7,0,0,0-9-1.83c-4.49,0-7,2.75-7,9v9.73c0,6.69,1.84,10.18,7.16,10.18,4.12,0,7.15-1.84,8.8-4.86Z"/></g></svg>
|
|
@ -0,0 +1,114 @@
|
|||
---
|
||||
title: DaoCloud Case Study
|
||||
linkTitle: DaoCloud
|
||||
case_study_styles: true
|
||||
cid: caseStudies
|
||||
logo: daocloud_featured_logo.svg
|
||||
|
||||
css: /css/style_daocloud.css
|
||||
new_case_study_styles: true
|
||||
heading_background: /images/case-studies/daocloud/banner1.jpg
|
||||
heading_title_logo: /images/daocloud-light.svg
|
||||
subheading: >
|
||||
Seek Global Optimal Solutions for Digital World
|
||||
case_study_details:
|
||||
- Company: DaoCloud
|
||||
- Location: Shanghai, China
|
||||
- Industry: Cloud Native
|
||||
---
|
||||
|
||||
<h2>Challenges</h2>
|
||||
|
||||
<p><a href="https://www.daocloud.io/en/">DaoCloud</a>, founded in 2014, is an innovation leader in the field of cloud native. It boasts independent intellectual property rights of core technologies for crafting an open cloud platform to empower the digital transformation of enterprises.</p>
|
||||
|
||||
<p>DaoCloud has been engaged in cloud native since its inception. As containerization is crucial for cloud native business, a cloud platform that does not have containers as infrastructure is unlikely to attract its potential users. Therefore, the first challenge confronting DaoCloud is how to efficiently manage and schedule numerous containers while maintaining stable connectivity between them.</p>
|
||||
|
||||
<p>As cloud native technology gains momentum, cloud native solutions proliferate like mushrooms after rain. However, having more choices is not always a good thing, because choosing from various products to globally maximize benefits and minimize cost is always challenging and demanding. Therefore, another obstacle ahead of DaoCloud is how to pick out the best runner in each field and organize them into one platform that can achieve global optimum for cloud native.</p>
|
||||
|
||||
<h2>Solutions</h2>
|
||||
|
||||
<p>As the de facto standard for container orchestration, Kubernetes is undoubtedly the preferred container solution. Paco Xu, head of the Open Source and Advanced Development team at DaoCloud, stated, "Kubernetes is a fundamental tool in the current container ecosystem. Most services or applications are deployed and managed in Kubernetes clusters."</p>
|
||||
|
||||
<p>Regarding finding the global optimal solutions for cloud native technology, Peter Pan, R&D Vice President of DaoCloud, believes that "the right way is to focus on Kubernetes, coordinate relevant best practices and advanced technologies, and build a widely applicable platform."</p>
|
||||
|
||||
<h2>Results</h2>
|
||||
|
||||
<p>In the process of embracing cloud native technology, DaoCloud continues to learn from Kubernetes and other excellent CNCF open source projects. It has formed a product architecture centered on DaoCloud Enterprise, a platform for cloud native applications. Using Kubernetes and other cutting-edge cloud native technologies as a foundation, DaoCloud provides solid cloud native solutions for military, finance, manufacturing, energy, government, and retail clients. It helps promote digital transformation of many companies, such as SPD Bank, Huatai Securities, Fullgoal Fund, SAIC Motor, Haier, Fudan University, Watsons, Genius Auto Finance, State Grid Corporation of China, etc.</p>
|
||||
|
||||
{{< case-studies/quote
|
||||
image="/images/case-studies/daocloud/banner2.jpg"
|
||||
author="Kebe Liu, Service Mesh Expert, DaoCloud"
|
||||
>}}
|
||||
"As DaoCloud Enterprise becomes more powerful and attracts more users, some customers need to use Kubernetes instead of Swarm for application orchestration. We, as providers, need to meet the needs of our users."
|
||||
{{< /case-studies/quote >}}
|
||||
|
||||
<p>DaoCloud was founded to help traditional enterprises move their applications to the cloud and realize digital transformation. The first product released after the company's establishment, DaoCloud Enterprise 1.0, is a Docker-based container engine platform that can easily build images and run them in containers.</p>
|
||||
|
||||
<p>However, as applications and containers increase in number, coordinating and scheduling these containers became a bottleneck that restricted product performance. DaoCloud Enterprise 2.0 used Docker Swarm to manage containers, but the increasingly complex container scheduling system gradually went beyond the competence of Docker Swarm.</p>
|
||||
|
||||
<p>Fortunately, Kubernetes began to stand out at this time. It rapidly grew into the industrial standard for container orchestration with its competitive rich functions, stable performance, timely community support, and strong compatibility. Paco Xu said, "Enterprise container platforms need container orchestration to standardize the process of moving to the cloud. Kubernetes was accepted as the de facto standard for container orchestration around 2016 and 2017. Our products started to support it in 2017."</p>
|
||||
|
||||
<p>After thorough comparisons and evaluations, DaoCloud Enterprise 2.8, debuted in 2017, officially adopted Kubernetes (v1.6.7) as its container orchestration tool. Since then, DaoCloud Enterprise 3.0 (2018) used Kubernetes v1.10, and DaoCloud Enterprise 4.0 (2021) adopted Kubernetes v1.18. The latest version, DaoCloud Enterprise 5.0 (2022), supports Kubernetes v1.23 to v1.26.</p>
|
||||
|
||||
<p>Kubernetes served as an inseparable part of these four releases over six years, which speaks volumes about the fact that using Kubernetes in DaoCloud Enterprise was the right choice. DaoCloud has proven, through its own experience and actions, that Kubernetes is the best choice for container orchestration and that it has always been a loyal fan of Kubernetes.</p>
|
||||
|
||||
{{< case-studies/quote
|
||||
image="/images/case-studies/daocloud/banner3.jpg"
|
||||
author="Ting Ye, Vice President of Product Innovation, DaoCloud"
|
||||
>}}
|
||||
"Kubernetes is the cornerstone for refining our products towards world-class software."
|
||||
{{< /case-studies/quote >}}
|
||||
|
||||
<p>Kubernetes helped our product and research teams realize automation of the test, build, check, and release process, ensuring the quality of deliverables. It also helped build our smart systems of collaboration around product requirements and definition, multilingual product materials, debugging, and miscellaneous challenges, improving the efficiency of intra- and inter-department collaboration.</p>
|
||||
|
||||
<p>On the one hand, Kubernetes makes our products more performant and competitive. DaoCloud integrates relevant practices and technologies around Kubernetes to polish its flagship offering – DaoCloud Enterprise. The latest 5th version, released in 2022, covers application stores, application delivery, microservice governance, observability, data services, multi-cloud management, cloud-edge collaboration, and other functions. DaoCloud Enterprise 5.0 is an inclusive integration of cloud native technologies.</p>
|
||||
|
||||
<p>DaoCloud deployed a Kubernetes platform for SPD Bank, improving its application deployment efficiency by 82%, shortening its delivery cycle from half a year to one month, and promoting its transaction success rate to 99.999%.</p>
|
||||
|
||||
<p>In terms of Sichuan Tianfu Bank, the scaling time was reduced from several hours to an average of 2 minutes, product iteration cycle was shortened from two months to two weeks, and application rollout time was cut by 76.76%.</p>
|
||||
|
||||
<p>As for a joint-venture carmaker, its delivery cycle shortened from two months to one or two weeks, success rate of application deployment increased by 53%, and application rollout became ten times more efficient. In the case of a multinational retailer, application deployment issues were solved by 46%, and fault location efficiency rose by more than 90%.</p>
|
||||
|
||||
<p>For a large-scale securities firm, its business procedure efficiency was enhanced by 30%, and resource costs were lowered by about 35%.</p>
|
||||
|
||||
<p>With this product, Fullgoal Fund shortened its middleware deployment time from hours to minutes, improved middleware operation and maintenance capabilities by 50%, containerization by 60%, and resource utilization by 40%.</p>
|
||||
|
||||
<p>On the other hand, our product development is also based on Kubernetes. DaoCloud deployed Gitlab based on Kubernetes and established a product development process of "Gitlab -> PR -> Auto Tests -> Builds & Releases", which significantly improved our development efficiency, reduced repetitive tests, and realized automatic release of applications. This approach greatly saves operation and maintenance costs, enabling technicians to invest more time and energy in product development to offer better cloud native products.</p>
|
||||
|
||||
{{< case-studies/quote
|
||||
image="/images/case-studies/daocloud/banner4.jpg"
|
||||
author="Paco Xu, Header of Open Source & Advanced Development Team, DaoCloud"
|
||||
>}}
|
||||
"Our developers actively contribute to open source projects and build technical expertise. DaoCloud has established a remarkable presence in the Kubernetes and Istio communities."
|
||||
{{< /case-studies/quote >}}
|
||||
|
||||
<p>DaoCloud is deeply involved in contributing to Kubernetes and other cloud native open source projects. Our participation and contributions in these communities continue to grow. In 2022, DaoCloud ranked third globally in cumulative contributions to Kubernetes (data from Stackalytics as of January 5, 2023).</p>
|
||||
|
||||
<p>In August 2022, Kubernetes officially organized an interview with community contributors, and four outstanding contributors from the Asia-Pacific region were invited. Half of them came from DaoCloud, namely <a href="https://github.com/wzshiming">Shiming Zhang</a> and <a href="https://github.com/pacoxu">Paco Xu</a>. Both are Reviewers of SIG Node. Furthermore, at the KubeCon + CloudNative North America 2022, <a href="https://github.com/kerthcet">Kante Yin</a> from DaoCloud won the 2022 Contributor Award of Kubernetes.</p>
|
||||
|
||||
<p>In addition, DaoCloud continues to practice its cloud native beliefs and contributes to the Kubernetes ecosystem by sharing the source code of several excellent projects, including <a href="https://clusterpedia.io/">Clusterpedia</a>, <a href="https://github.com/kubean-io/kubean">Kubean</a>, <a href="https://github.com/cloudtty/cloudtty">CloudTTY</a>, <a href="https://github.com/klts-io/kubernetes-lts">KLTS</a>, <a href="https://merbridge.io/">Merbridge</a>, <a href="https://hwameistor.io/">HwameiStor</a>, <a href="https://github.com/spidernet-io/spiderpool">Spiderpool</a>, and <a href="https://github.com/kubernetes-sigs/kwok">KWOK</a>, on GitHub.</p>
|
||||
|
||||
<p>In particular:</p>
|
||||
|
||||
<ul type="disc">
|
||||
<li><strong>Clusterpedia:</strong> Designed for resource synchronization across clusters, Clusterpedia is compatible with Kubernetes OpenAPIs and offers a powerful search function for quick and effective retrieval of all resources in clusters.</li>
|
||||
<li><strong>Kubean:</strong> With Kubean, it's possible to quickly create production-ready Kubernetes clusters and integrate clusters from other providers.</li>
|
||||
<li><strong>CloudTTY:</strong> CloudTTY is a web terminal and cloud shell operator for Kubernetes cloud native environments, allowing for management of Kubernetes clusters on a web page from anywhere and at any time.</li>
|
||||
<li><strong>KLTS:</strong> Providing long-term free maintenance for earlier versions of Kubernetes, KLTS ensures stability and support for older Kubernetes deployments. Additionally, Piraeus is an easy and secure storage solution for Kubernetes with high performance and availability.</li>
|
||||
<li><strong>KWOK:</strong> Short for Kubernetes WithOut Kubelet, KWOK is a toolkit that enables the setup of a cluster of thousands of nodes in seconds. All nodes are simulated to behave like real ones, resulting in low resource usage that makes it easy to experiment on a laptop.</li>
|
||||
</ul>
|
||||
|
||||
<p>DaoCloud utilizes its practical experience across industries to contribute to Kubernetes-related open source projects, with an aim of making cloud native technologies, represented by Kubernetes, better function in production environment.</p>
|
||||
|
||||
{{< case-studies/quote
|
||||
image="/images/case-studies/daocloud/banner5.jpg"
|
||||
author="Song Zheng, Technology GM, DaoCloud"
|
||||
>}}
|
||||
"DaoCloud, as one of the first cloud native technology training partners certified by CNCF, will continue to carry out trainings to help more companies find their best ways for going to the cloud."
|
||||
{{< /case-studies/quote >}}
|
||||
|
||||
<p>Enterprise users need a global optimal solution, which can be understood as an inclusive platform that can maximize the advantages of multi-cloud management, application delivery, observability, cloud-edge collaboration, microservice governance, application store, and data services. In today's cloud native ecosystem, these functions cannot be achieved without Kubernetes as the underlying container orchestration tool. Therefore, Kubernetes is crucial to DaoCloud's mission of finding the optimal solution in the digital world, and all future product development will continue to be based on Kubernetes.</p>
|
||||
|
||||
<p>DaoCloud has always attached great importance to Kubernetes training and promotion activities. In 2017, the company took the lead in passing CNCF's Certified Kubernetes Conformance Program through its featured product, DaoCloud Enterprise. In 2018, it became a CNCF-certified Kubernetes service provider and training partner.</p>
|
||||
|
||||
<p>On November 18, 2022, the "Kubernetes Community Days" event was successfully held in Chengdu, organized by CNCF, DaoCloud, Huawei Cloud, Sichuan Tianfu Bank, and OPPO. The event brought together end-users, contributors, and technical experts from open-source communities to share best practices and innovative ideas about Kubernetes and cloud native. In the future, DaoCloud will continue to contribute to Kubernetes projects, and expand the influence of Kubernetes through project training, community contributions and other activities.</p>
|
|
@ -26,8 +26,7 @@ each Node in your cluster, so that the
|
|||
The kubelet acts as a client when connecting to the container runtime via gRPC.
|
||||
The runtime and image service endpoints have to be available in the container
|
||||
runtime, which can be configured separately within the kubelet by using the
|
||||
`--image-service-endpoint` and `--container-runtime-endpoint` [command line
|
||||
flags](/docs/reference/command-line-tools-reference/kubelet)
|
||||
`--image-service-endpoint` [command line flags](/docs/reference/command-line-tools-reference/kubelet).
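As an illustration only (containerd's default CRI socket path is assumed here; real deployments set many more flags or use the kubelet configuration file instead):

```bash
kubelet --image-service-endpoint=unix:///run/containerd/containerd.sock
```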
|
||||
|
||||
For Kubernetes v{{< skew currentVersion >}}, the kubelet prefers to use CRI `v1`.
|
||||
If a container runtime does not support `v1` of the CRI, then the kubelet tries to
|
||||
|
|
|
@ -118,7 +118,7 @@ break the kubelet behavior and remove containers that should exist.
|
|||
To configure options for unused container and image garbage collection, tune the
|
||||
kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
|
||||
and change the parameters related to garbage collection using the
|
||||
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||
resource type.
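A sketch of what such tuning could look like, assuming the kubelet reads its configuration from `/var/lib/kubelet/config.yaml` and that these keys are not already set; the threshold values are illustrative:

```bash
# Append image garbage-collection settings and restart the kubelet to pick them up.
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
EOF
sudo systemctl restart kubelet
```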
|
||||
|
||||
### Container image lifecycle
|
||||
|
|
|
@ -506,7 +506,7 @@ in a cluster,
|
|||
|`custom-class-c` | 1000 |
|
||||
|`regular/unset` | 0 |
|
||||
|
||||
Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||
the settings for `shutdownGracePeriodByPodPriority` could look like:
|
||||
|
||||
|Pod priority class value|Shutdown period|
|
||||
|
@ -625,7 +625,7 @@ onwards, swap memory support can be enabled on a per-node basis.
|
|||
|
||||
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
|
||||
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
|
||||
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||
must be set to false.
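A minimal sketch, assuming the kubelet configuration file lives at `/var/lib/kubelet/config.yaml` and does not already set these keys:

```bash
# Allow the kubelet to start on a node with swap enabled and turn on the NodeSwap feature gate.
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
failSwapOn: false
featureGates:
  NodeSwap: true
EOF
sudo systemctl restart kubelet
```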
|
||||
|
||||
{{< warning >}}
|
||||
|
|
|
@ -193,7 +193,7 @@ A PriorityLevelConfiguration represents a single priority level. Each
|
|||
PriorityLevelConfiguration has an independent limit on the number of outstanding
|
||||
requests, and limitations on the number of queued requests.
|
||||
|
||||
The nominal oncurrency limit for a PriorityLevelConfiguration is not
|
||||
The nominal concurrency limit for a PriorityLevelConfiguration is not
|
||||
specified in an absolute number of seats, but rather in "nominal
|
||||
concurrency shares." The total concurrency limit for the API Server is
|
||||
distributed among the existing PriorityLevelConfigurations in
|
||||
|
|
|
@ -81,15 +81,16 @@ See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl
|
|||
|
||||

|
||||
|
||||
A container runtime handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
|
||||
Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized
|
||||
as the _CRI logging format_.
|
||||
A container runtime handles and redirects any output generated to a containerized
|
||||
application's `stdout` and `stderr` streams.
|
||||
Different container runtimes implement this in different ways; however, the integration
|
||||
with the kubelet is standardized as the _CRI logging format_.
|
||||
|
||||
By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node,
|
||||
all corresponding containers are also evicted, along with their logs.
|
||||
By default, if a container restarts, the kubelet keeps one terminated container with its logs.
|
||||
If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
|
||||
|
||||
The kubelet makes logs available to clients via a special feature of the Kubernetes API. The usual way to access this is
|
||||
by running `kubectl logs`.
|
||||
The kubelet makes logs available to clients via a special feature of the Kubernetes API.
|
||||
The usual way to access this is by running `kubectl logs`.
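For example (the pod and container names are placeholders):

```bash
# Stream the logs of one container in a pod, then fetch the logs kept from the
# previous instance of that container after a restart.
kubectl logs my-pod -c my-container --follow
kubectl logs my-pod -c my-container --previous
```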
|
||||
|
||||
### Log rotation
|
||||
|
||||
|
@ -101,7 +102,7 @@ If you configure rotation, the kubelet is responsible for rotating container log
|
|||
The kubelet sends this information to the container runtime (using CRI),
|
||||
and the runtime writes the container logs to the given location.
|
||||
|
||||
You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
|
||||
You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
|
||||
`containerLogMaxSize` and `containerLogMaxFiles`,
|
||||
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
|
||||
These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
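A sketch of the two settings in a kubelet configuration file; the values are illustrative and the file path depends on how the kubelet was set up:

```bash
# Keep at most five log files of up to 10 MiB per container.
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
EOF
sudo systemctl restart kubelet
```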
|
||||
|
@ -201,7 +202,8 @@ as your responsibility.
|
|||
|
||||
## Cluster-level logging architectures
|
||||
|
||||
While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
|
||||
While Kubernetes does not provide a native solution for cluster-level logging, there are
|
||||
several common approaches you can consider. Here are some options:
|
||||
|
||||
* Use a node-level logging agent that runs on every node.
|
||||
* Include a dedicated sidecar container for logging in an application pod.
|
||||
|
@ -211,14 +213,18 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
|
|||
|
||||

|
||||
|
||||
You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
|
||||
You can implement cluster-level logging by including a _node-level logging agent_ on each node.
|
||||
The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
|
||||
Commonly, the logging agent is a container that has access to a directory with log files from all of the
|
||||
application containers on that node.
|
||||
|
||||
Because the logging agent must run on every node, it is recommended to run the agent
|
||||
as a `DaemonSet`.
|
||||
|
||||
Node-level logging creates only one agent per node and doesn't require any changes to the applications running on the node.
|
||||
|
||||
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
|
||||
Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
|
||||
these logs and forwards them for aggregation.
|
||||
|
||||
### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
|
||||
|
||||
|
|
|
@ -11,17 +11,16 @@ feature:
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how
|
||||
much of each resource a {{< glossary_tooltip text="container" term_id="container" >}} needs.
|
||||
The most common resources to specify are CPU and memory (RAM); there are others.
|
||||
When you specify a {{< glossary_tooltip term_id="pod" >}}, you can optionally specify how much of each resource a
|
||||
{{< glossary_tooltip text="container" term_id="container" >}} needs. The most common resources to specify are CPU and memory
|
||||
(RAM); there are others.
|
||||
|
||||
When you specify the resource _request_ for containers in a Pod, the
|
||||
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this
|
||||
information to decide which node to place the Pod on. When you specify a resource _limit_
|
||||
for a container, the kubelet enforces those limits so that the running container is not
|
||||
allowed to use more of that resource than the limit you set. The kubelet also reserves
|
||||
at least the _request_ amount of that system resource specifically for that container
|
||||
to use.
|
||||
{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} uses this information to decide which node to place the Pod on.
|
||||
When you specify a resource _limit_ for a container, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} enforces those
|
||||
limits so that the running container is not allowed to use more of that resource
|
||||
than the limit you set. The kubelet also reserves at least the _request_ amount of
|
||||
that system resource specifically for that container to use.
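A minimal example of a Pod that sets both requests and limits for one container (the Pod name and image are placeholders for illustration):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
EOF
```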
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -257,6 +256,18 @@ Your applications cannot expect any performance SLAs (disk IOPS for example)
|
|||
from local ephemeral storage.
|
||||
{{< /caution >}}
|
||||
|
||||
|
||||
{{< note >}}
|
||||
To make the resource quota work on ephemeral-storage, two things need to be done:
|
||||
|
||||
* An admin sets the resource quota for ephemeral-storage in a namespace.
|
||||
* A user needs to specify limits for the ephemeral-storage resource in the Pod spec.
|
||||
|
||||
If the user doesn't specify the ephemeral-storage resource limit in the Pod spec,
|
||||
the resource quota is not enforced on ephemeral-storage.
|
||||
|
||||
{{< /note >}}
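A small sketch of the admin half of that note (the namespace name and quota size are placeholders); any Pod created in that namespace is then expected to declare `resources.limits.ephemeral-storage`:

```bash
kubectl create namespace quota-demo
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota
  namespace: quota-demo
spec:
  hard:
    limits.ephemeral-storage: 4Gi
EOF
```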
|
||||
|
||||
Kubernetes lets you track, reserve and limit the amount
|
||||
of ephemeral local storage a Pod can consume.
|
||||
|
||||
|
|
|
@ -283,6 +283,20 @@ and to support other aspects of the Kubernetes network model.
|
|||
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
|
||||
allow Kubernetes to work with different networking topologies and technologies.
|
||||
|
||||
### Kubelet image credential provider plugins
|
||||
|
||||
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
|
||||
Kubelet image credential providers are plugins for the kubelet to dynamically retrieve image registry
|
||||
credentials. The credentials are then used when pulling images from container image registries that
|
||||
match the configuration.
|
||||
|
||||
The plugins can communicate with external services or use local files to obtain credentials. This way,
|
||||
the kubelet does not need to have static credentials for each registry, and can support various
|
||||
authentication methods and protocols.
|
||||
|
||||
For plugin configuration details, see
|
||||
[Configure a kubelet image credential provider](/docs/tasks/administer-cluster/kubelet-credential-provider/).
|
||||
|
||||
## Scheduling extensions
|
||||
|
||||
The scheduler is a special type of controller that watches pods, and assigns
|
||||
|
|
|
@ -122,6 +122,12 @@ about containers in a central database, and provides a UI for browsing that data
|
|||
A [cluster-level logging](/docs/concepts/cluster-administration/logging/) mechanism is responsible for
|
||||
saving container logs to a central log store with search/browsing interface.
|
||||
|
||||
### Network Plugins
|
||||
|
||||
[Network plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins) are software
|
||||
components that implement the container network interface (CNI) specification. They are responsible for
|
||||
allocating IP addresses to pods and enabling them to communicate with each other within the cluster.
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
|
|
@ -1,11 +1,68 @@
|
|||
---
|
||||
title: "Policies"
|
||||
weight: 90
|
||||
no_list: true
|
||||
description: >
|
||||
Policies you can configure that apply to groups of resources.
|
||||
Manage security and best-practices with policies.
|
||||
---
|
||||
|
||||
{{< note >}}
|
||||
See [Network Policies](/docs/concepts/services-networking/network-policies/)
|
||||
for documentation about NetworkPolicy in Kubernetes.
|
||||
{{< /note >}}
|
||||
<!-- overview -->
|
||||
|
||||
Kubernetes policies are configurations that manage other configurations or runtime behaviors. Kubernetes offers various forms of policies, described below:
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Apply policies using API objects
|
||||
|
||||
Some API objects act as policies. Here are some examples:
|
||||
* [NetworkPolicies](/docs/concepts/services-networking/network-policies/) can be used to restrict ingress and egress traffic for a workload.
|
||||
* [LimitRanges](/docs/concepts/policy/limit-range/) manage resource allocation constraints across different object kinds.
|
||||
* [ResourceQuotas](/docs/concepts/policy/resource-quotas/) limit resource consumption for a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
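As one small example of such a policy object, a default-deny ingress NetworkPolicy could look like this (the namespace name is a placeholder):

```bash
# Select every Pod in the namespace and allow no ingress rules, denying all incoming traffic.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```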
|
||||
|
||||
## Apply policies using admission controllers
|
||||
|
||||
An {{< glossary_tooltip text="admission controller" term_id="admission-controller" >}}
|
||||
runs in the API server
|
||||
and can validate or mutate API requests. Some admission controllers act to apply policies.
|
||||
For example, the [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) admission controller modifies a new Pod to set the image pull policy to `Always`.
|
||||
|
||||
Kubernetes has several built-in admission controllers that are configurable via the API server `--enable-admission-plugins` flag.
|
||||
|
||||
Details on admission controllers, with the complete list of available admission controllers, are documented in a dedicated section:
|
||||
|
||||
* [Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/)
|
||||
|
||||
## Apply policies using ValidatingAdmissionPolicy
|
||||
|
||||
Validating admission policies allow configurable validation checks to be executed in the API server using the Common Expression Language (CEL). For example, a `ValidatingAdmissionPolicy` can be used to disallow use of the `latest` image tag.
|
||||
|
||||
A `ValidatingAdmissionPolicy` operates on an API request and can be used to block, audit, and warn users about non-compliant configurations.
|
||||
|
||||
Details on the `ValidatingAdmissionPolicy` API, with examples, are documented in a dedicated section:
|
||||
* [Validating Admission Policy](/docs/reference/access-authn-authz/validating-admission-policy/)
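A rough sketch of the `latest`-tag example mentioned above; the API version and field layout follow the alpha API and should be checked against your cluster version, and a matching ValidatingAdmissionPolicyBinding is still required before anything is enforced:

```bash
kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: disallow-latest-tag
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "object.spec.template.spec.containers.all(c, !c.image.endsWith(':latest'))"
    message: "container images must not use the :latest tag"
EOF
```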
|
||||
|
||||
|
||||
## Apply policies using dynamic admission control
|
||||
|
||||
Dynamic admission controllers (or admission webhooks) run outside the API server as separate applications that register to receive webhook requests to perform validation or mutation of API requests.
|
||||
|
||||
Dynamic admission controllers can be used to apply policies on API requests and trigger other policy-based workflows. A dynamic admission controller can perform complex checks, including those that require retrieval of other cluster resources and external data. For example, an image verification check can look up data from OCI registries to validate the container image signatures and attestations.
|
||||
|
||||
Details on dynamic admission control are documented in a dedicated section:
|
||||
* [Dynamic Admission Control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
|
||||
|
||||
### Implementations {#implementations-admission-control}
|
||||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
Dynamic Admission Controllers that act as flexible policy engines are being developed in the Kubernetes ecosystem, such as:
|
||||
- [Kubewarden](https://github.com/kubewarden)
|
||||
- [Kyverno](https://kyverno.io)
|
||||
- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
|
||||
- [Polaris](https://polaris.docs.fairwinds.com/admission-controller/)
|
||||
|
||||
## Apply policies using Kubelet configurations
|
||||
|
||||
Kubernetes allows configuring the Kubelet on each worker node. Some Kubelet configurations act as policies:
|
||||
* [Process ID limits and reservations](/docs/concepts/policy/pid-limiting/) are used to limit and reserve allocatable PIDs.
|
||||
* [Node Resource Managers](/docs/concepts/policy/node-resource-managers/) can manage compute, memory, and device resources for latency-critical and high-throughput workloads.
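For instance, a per-node PID cap is a single kubelet configuration setting; a hedged sketch, with an illustrative value and an assumed configuration file path:

```bash
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
podPidsLimit: 4096
EOF
sudo systemctl restart kubelet
```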
|
||||
|
|
|
@ -203,7 +203,7 @@ cpu = resourceScoringFunction((2+1),8)
|
|||
= rawScoringFunction(37.5)
|
||||
= 3 # floor(37.5/10)
|
||||
|
||||
NodeScore = (7 * 5) + (5 * 1) + (3 * 3) / (5 + 1 + 3)
|
||||
NodeScore = ((7 * 5) + (5 * 1) + (3 * 3)) / (5 + 1 + 3)
|
||||
= 5
|
||||
```
|
||||
|
||||
|
@ -242,7 +242,7 @@ cpu = resourceScoringFunction((2+6),8)
|
|||
= rawScoringFunction(100)
|
||||
= 10
|
||||
|
||||
NodeScore = (5 * 5) + (7 * 1) + (10 * 3) / (5 + 1 + 3)
|
||||
NodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)
|
||||
= 7
|
||||
|
||||
```
|
||||
|
|
|
@ -10,7 +10,6 @@ weight: 50
|
|||
<!-- overview -->
|
||||
This page provides an overview of controlling access to the Kubernetes API.
|
||||
|
||||
|
||||
<!-- body -->
|
||||
Users access the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) using `kubectl`,
|
||||
client libraries, or by making REST requests. Both human users and
|
||||
|
@ -23,11 +22,15 @@ following diagram:
|
|||
|
||||
## Transport security
|
||||
|
||||
By default, the Kubernetes API server listens on port 6443 on the first non-localhost network interface, protected by TLS. In a typical production Kubernetes cluster, the API serves on port 443. The port can be changed with the `--secure-port`, and the listening IP address with the `--bind-address` flag.
|
||||
By default, the Kubernetes API server listens on port 6443 on the first non-localhost
|
||||
network interface, protected by TLS. In a typical production Kubernetes cluster, the
|
||||
API serves on port 443. The port can be changed with the `--secure-port`, and the
|
||||
listening IP address with the `--bind-address` flag.
|
||||
|
||||
The API server presents a certificate. This certificate may be signed using
|
||||
a private certificate authority (CA), or based on a public key infrastructure linked
|
||||
to a generally recognized CA. The certificate and corresponding private key can be set by using the `--tls-cert-file` and `--tls-private-key-file` flags.
|
||||
to a generally recognized CA. The certificate and corresponding private key can be set
|
||||
by using the `--tls-cert-file` and `--tls-private-key-file` flags.
|
||||
|
||||
If your cluster uses a private certificate authority, you need a copy of that CA
|
||||
certificate configured into your `~/.kube/config` on the client, so that you can
|
||||
|
@ -65,9 +68,12 @@ users in its API.
|
|||
|
||||
## Authorization
|
||||
|
||||
After the request is authenticated as coming from a specific user, the request must be authorized. This is shown as step **2** in the diagram.
|
||||
After the request is authenticated as coming from a specific user, the request must
|
||||
be authorized. This is shown as step **2** in the diagram.
|
||||
|
||||
A request must include the username of the requester, the requested action, and the object affected by the action. The request is authorized if an existing policy declares that the user has permissions to complete the requested action.
|
||||
A request must include the username of the requester, the requested action, and
|
||||
the object affected by the action. The request is authorized if an existing policy
|
||||
declares that the user has permissions to complete the requested action.
|
||||
|
||||
For example, if Bob has the policy below, then he can read pods only in the namespace `projectCaribou`:
|
||||
|
||||
|
@ -83,7 +89,9 @@ For example, if Bob has the policy below, then he can read pods only in the name
|
|||
}
|
||||
}
|
||||
```
|
||||
If Bob makes the following request, the request is authorized because he is allowed to read objects in the `projectCaribou` namespace:
|
||||
|
||||
If Bob makes the following request, the request is authorized because he is
|
||||
allowed to read objects in the `projectCaribou` namespace:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -99,14 +107,25 @@ If Bob makes the following request, the request is authorized because he is allo
|
|||
}
|
||||
}
|
||||
```
|
||||
If Bob makes a request to write (`create` or `update`) to the objects in the `projectCaribou` namespace, his authorization is denied. If Bob makes a request to read (`get`) objects in a different namespace such as `projectFish`, then his authorization is denied.
|
||||
|
||||
Kubernetes authorization requires that you use common REST attributes to interact with existing organization-wide or cloud-provider-wide access control systems. It is important to use REST formatting because these control systems might interact with other APIs besides the Kubernetes API.
|
||||
If Bob makes a request to write (`create` or `update`) to the objects in the
|
||||
`projectCaribou` namespace, his authorization is denied. If Bob makes a request
|
||||
to read (`get`) objects in a different namespace such as `projectFish`, then his authorization is denied.
|
||||
|
||||
Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC Mode, and Webhook mode. When an administrator creates a cluster, they configure the authorization modules that should be used in the API server. If more than one authorization modules are configured, Kubernetes checks each module, and if any module authorizes the request, then the request can proceed. If all of the modules deny the request, then the request is denied (HTTP status code 403).
|
||||
Kubernetes authorization requires that you use common REST attributes to interact
|
||||
with existing organization-wide or cloud-provider-wide access control systems.
|
||||
It is important to use REST formatting because these control systems might
|
||||
interact with other APIs besides the Kubernetes API.
|
||||
|
||||
To learn more about Kubernetes authorization, including details about creating policies using the supported authorization modules, see [Authorization](/docs/reference/access-authn-authz/authorization/).
|
||||
Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC Mode,
|
||||
and Webhook mode. When an administrator creates a cluster, they configure the
|
||||
authorization modules that should be used in the API server. If more than one
|
||||
authorization modules are configured, Kubernetes checks each module, and if
|
||||
any module authorizes the request, then the request can proceed. If all of
|
||||
the modules deny the request, then the request is denied (HTTP status code 403).
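One convenient way to see the outcome of authorization from the client side is `kubectl auth can-i`; for example, following the Bob policy above:

```bash
# Ask the API server whether the impersonated user may read pods in projectCaribou.
kubectl auth can-i get pods --namespace projectCaribou --as bob
```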
|
||||
|
||||
To learn more about Kubernetes authorization, including details about creating
|
||||
policies using the supported authorization modules, see [Authorization](/docs/reference/access-authn-authz/authorization/).
|
||||
|
||||
## Admission control
|
||||
|
||||
|
|
|
@ -15,7 +15,6 @@ weight: 30
|
|||
{{< feature-state for_k8s_version="v1.19" state="stable" >}}
|
||||
{{< glossary_definition term_id="ingress" length="all" >}}
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Terminology
|
||||
|
@ -23,14 +22,21 @@ weight: 30
|
|||
For clarity, this guide defines the following terms:
|
||||
|
||||
* Node: A worker machine in Kubernetes, part of a cluster.
|
||||
* Cluster: A set of Nodes that run containerized applications managed by Kubernetes. For this example, and in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
|
||||
* Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
|
||||
* Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes [networking model](/docs/concepts/cluster-administration/networking/).
|
||||
* Service: A Kubernetes {{< glossary_tooltip term_id="service" >}} that identifies a set of Pods using {{< glossary_tooltip text="label" term_id="label" >}} selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
|
||||
* Cluster: A set of Nodes that run containerized applications managed by Kubernetes.
|
||||
For this example, and in most common Kubernetes deployments, nodes in the cluster
|
||||
are not part of the public internet.
|
||||
* Edge router: A router that enforces the firewall policy for your cluster. This
|
||||
could be a gateway managed by a cloud provider or a physical piece of hardware.
|
||||
* Cluster network: A set of links, logical or physical, that facilitate communication
|
||||
within a cluster according to the Kubernetes [networking model](/docs/concepts/cluster-administration/networking/).
|
||||
* Service: A Kubernetes {{< glossary_tooltip term_id="service" >}} that identifies
|
||||
a set of Pods using {{< glossary_tooltip text="label" term_id="label" >}} selectors.
|
||||
Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
|
||||
|
||||
## What is Ingress?
|
||||
|
||||
[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1-networking-k8s-io) exposes HTTP and HTTPS routes from outside the cluster to
|
||||
[Ingress](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ingress-v1-networking-k8s-io)
|
||||
exposes HTTP and HTTPS routes from outside the cluster to
|
||||
{{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster.
|
||||
Traffic routing is controlled by rules defined on the Ingress resource.
|
||||
|
||||
|
@ -38,7 +44,11 @@ Here is a simple example where an Ingress sends all its traffic to one Service:
|
|||
|
||||
{{< figure src="/docs/images/ingress.svg" alt="ingress-diagram" class="diagram-large" caption="Figure. Ingress" link="https://mermaid.live/edit#pako:eNqNkstuwyAQRX8F4U0r2VHqPlSRKqt0UamLqlnaWWAYJygYLB59KMm_Fxcix-qmGwbuXA7DwAEzzQETXKutof0Ovb4vaoUQkwKUu6pi3FwXM_QSHGBt0VFFt8DRU2OWSGrKUUMlVQwMmhVLEV1Vcm9-aUksiuXRaO_CEhkv4WjBfAgG1TrGaLa-iaUw6a0DcwGI-WgOsF7zm-pN881fvRx1UDzeiFq7ghb1kgqFWiElyTjnuXVG74FkbdumefEpuNuRu_4rZ1pqQ7L5fL6YQPaPNiFuywcG9_-ihNyUkm6YSONWkjVNM8WUIyaeOJLO3clTB_KhL8NQDmVe-OJjxgZM5FhFiiFTK5zjDkxHBQ9_4zB4a-x20EGNSZhyaKmXrg7f5hSsvufUwTMXThtMWiot5Jh6p9ffimHijIezaSVoeN0uiqcfMJvf7w" >}}

An Ingress may be configured to give Services externally-reachable URLs,
load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An [Ingress controller](/docs/concepts/services-networking/ingress-controllers)
is responsible for fulfilling the Ingress, usually with a load balancer, though
it may also configure your edge router or additional frontends to help handle the traffic.

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#type-nodeport) or

## Prerequisites

You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers)
to satisfy an Ingress. Only creating an Ingress resource has no effect.

You may need to deploy an Ingress controller such as [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/).
You can choose from a number of [Ingress controllers](/docs/concepts/services-networking/ingress-controllers).

Ideally, all Ingress controllers should fit the reference specification. In reality, the various Ingress
controllers operate slightly differently.
|
@ -68,10 +79,10 @@ An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
|
|||
The name of an Ingress object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/examples/rewrite/README.md).
Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations.
Review the documentation for your choice of Ingress controller to learn which annotations are supported.

The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
has all the information needed to configure a load balancer or proxy server. Most importantly, it
|
||||
|
@ -100,7 +111,8 @@ Each HTTP rule contains the following information:
|
|||
incoming request before the load balancer directs traffic to the referenced
|
||||
Service.
|
||||
* A backend is a combination of Service and port names as described in the
[Service doc](/docs/concepts/services-networking/service/) or a [custom resource backend](#resource-backend)
by way of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}}. HTTP (and HTTPS) requests to the
Ingress that match the host and path of the rule are sent to the listed backend.

A `defaultBackend` is often configured in an Ingress controller to service any requests that do not
|
||||
|
@ -168,9 +180,11 @@ supported path types:
|
|||
match for path _p_ if every _p_ is an element-wise prefix of _p_ of the
|
||||
request path.
|
||||
|
||||
{{< note >}}
If the last element of the path is a substring of the last
element in request path, it is not a match (for example: `/foo/bar`
matches `/foo/bar/baz`, but does not match `/foo/barbaz`).
{{< /note >}}
|
||||
|
||||
### Examples
|
||||
|
||||
|
@ -196,12 +210,14 @@ supported path types:
|
|||
| Mixed | `/foo` (Prefix), `/foo` (Exact) | `/foo` | Yes, prefers Exact |
|
||||
|
||||
#### Multiple matches
|
||||
|
||||
In some cases, multiple paths within an Ingress will match a request. In those
|
||||
cases precedence will be given first to the longest matching path. If two paths
|
||||
are still equally matched, precedence will be given to paths with an exact path
|
||||
type over prefix path type.
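
To make the precedence rules concrete, here is a sketch of an Ingress that combines an `Exact` and a
`Prefix` path for `/foo`, matching the "Mixed" row in the table above. The host and the Service names
(`service-exact`, `service-prefix`) are assumptions chosen for illustration.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mixed-path-types
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo
        pathType: Exact        # a request for exactly /foo is sent here
        backend:
          service:
            name: service-exact
            port:
              number: 80
      - path: /foo
        pathType: Prefix       # /foo/anything-else falls back to this backend
        backend:
          service:
            name: service-prefix
            port:
              number: 80
```

With this setup, a request for `/foo` prefers the `Exact` path, while `/foo/bar` is matched by the `Prefix` path.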
|
||||
|
||||
## Hostname wildcards
|
||||
|
||||
Hosts can be precise matches (for example “`foo.bar.com`”) or a wildcard (for
|
||||
example “`*.foo.com`”). Precise matches require that the HTTP `host` header
|
||||
matches the `host` field. Wildcard matches require the HTTP `host` header is
|
||||
|
@ -248,6 +264,7 @@ the `name` of the parameters identifies a specific cluster scoped
|
|||
resource for that API.
|
||||
|
||||
For example:
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1
|
||||
|
@ -266,6 +283,7 @@ spec:
|
|||
kind: ClusterIngressParameter
|
||||
name: external-config-1
|
||||
```
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="Namespaced" %}}
|
||||
{{< feature-state for_k8s_version="v1.23" state="stable" >}}
|
||||
|
@ -295,6 +313,7 @@ The IngressClass API itself is always cluster-scoped.
|
|||
|
||||
Here is an example of an IngressClass that refers to parameters that are
|
||||
namespaced:
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: networking.k8s.io/v1
|
||||
|
@ -390,8 +409,7 @@ down to a minimum. For example, a setup like:
|
|||
|
||||
{{< figure src="/docs/images/ingressFanOut.svg" alt="ingress-fanout-diagram" class="diagram-large" caption="Figure. Ingress Fan Out" link="https://mermaid.live/edit#pako:eNqNUslOwzAQ_RXLvYCUhMQpUFzUUzkgcUBwbHpw4klr4diR7bCo8O8k2FFbFomLPZq3jP00O1xpDpjijWHtFt09zAuFUCUFKHey8vf6NE7QrdoYsDZumGIb4Oi6NAskNeOoZJKpCgxK4oXwrFVgRyi7nCVXWZKRPMlysv5yD6Q4Xryf1Vq_WzDPooJs9egLNDbolKTpT03JzKgh3zWEztJZ0Niu9L-qZGcdmAMfj4cxvWmreba613z9C0B-AMQD-V_AdA-A4j5QZu0SatRKJhSqhZR0wjmPrDP6CeikrutQxy-Cuy2dtq9RpaU2dJKm6fzI5Glmg0VOLio4_5dLjx27hFSC015KJ2VZHtuQvY2fuHcaE43G0MaCREOow_FV5cMxHZ5-oPX75UM5avuXhXuOI9yAaZjg_aLuBl6B3RYaKDDtSw4166QrcKE-emrXcubghgunDaY1kxYizDqnH99UhakzHYykpWD9hjS--fEJoIELqQ" >}}
|
||||
|
||||
|
||||
It would require an Ingress such as:
|
||||
|
||||
{{< codenew file="service/networking/simple-fanout-example.yaml" >}}
|
||||
|
||||
|
@ -435,7 +453,6 @@ Name-based virtual hosts support routing HTTP traffic to multiple host names at
|
|||
|
||||
{{< figure src="/docs/images/ingressNameBased.svg" alt="ingress-namebase-diagram" class="diagram-large" caption="Figure. Ingress Name Based Virtual hosting" link="https://mermaid.live/edit#pako:eNqNkl9PwyAUxb8KYS-atM1Kp05m9qSJJj4Y97jugcLtRqTQAPVPdN_dVlq3qUt8gZt7zvkBN7xjbgRgiteW1Rt0_zjLNUJcSdD-ZBn21WmcoDu9tuBcXDHN1iDQVWHnSBkmUMEU0xwsSuK5DK5l745QejFNLtMkJVmSZmT1Re9NcTz_uDXOU1QakxTMJtxUHw7ss-SQLhehQEODTsdH4l20Q-zFyc84-Y67pghv5apxHuweMuj9eS2_NiJdPhix-kMgvwQShOyYMNkJoEUYM3PuGkpUKyY1KqVSdCSEiJy35gnoqCzLvo5fpPAbOqlfI26UsXQ0Ho9nB5CnqesRGTnncPYvSqsdUvqp9KRdlI6KojjEkB0mnLgjDRONhqENBYm6oXbLV5V1y6S7-l42_LowlIN2uFm_twqOcAW2YlK0H_i9c-bYb6CCHNO2FFCyRvkc53rbWptaMA83QnpjMS2ZchBh1nizeNMcU28bGEzXkrV_pArN7Sc0rBTu" >}}
|
||||
|
||||
|
||||
The following Ingress tells the backing load balancer to route requests based on
|
||||
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
|
||||
|
||||
|
@ -446,7 +463,9 @@ web traffic to the IP address of your Ingress controller can be matched without
|
|||
virtual host being required.
|
||||
|
||||
For example, the following Ingress routes traffic
requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`,
and any traffic whose request host header doesn't match `first.bar.com`
and `second.bar.com` to `service3`.
|
||||
|
||||
{{< codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" >}}
|
||||
|
||||
|
@ -615,8 +634,6 @@ You can expose a Service in multiple ways that don't directly involve the Ingres
|
|||
* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)
|
||||
* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* Learn about the [Ingress](/docs/reference/kubernetes-api/service-resources/ingress-v1/) API
|
||||
|
|
|
@ -7,13 +7,11 @@ content_type: concept
|
|||
weight: 150
|
||||
---
|
||||
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
|
||||
|
||||
{{< note >}}
|
||||
|
||||
This feature, specifically the alpha `topologyKeys` API, is deprecated since
|
||||
Kubernetes v1.21.
|
||||
[Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/),
|
||||
|
@ -25,7 +23,6 @@ topology of the cluster. For example, a service can specify that traffic be
|
|||
preferentially routed to endpoints that are on the same Node as the client, or
|
||||
in the same availability zone.
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Topology-aware traffic routing
|
||||
|
@ -51,7 +48,8 @@ same top-of-rack switch for the lowest latency.
|
|||
|
||||
## Using Service Topology
|
||||
|
||||
If your cluster has the `ServiceTopology` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled, you can control Service traffic
|
||||
routing by specifying the `topologyKeys` field on the Service spec. This field
|
||||
is a preference-order list of Node labels which will be used to sort endpoints
|
||||
when accessing this Service. Traffic will be directed to a Node whose value for
|
||||
|
@ -83,8 +81,6 @@ traffic as follows.
|
|||
none are available within this zone:
|
||||
`["topology.kubernetes.io/zone", "*"]`.
|
||||
|
||||
|
||||
|
||||
## Constraints
|
||||
|
||||
* Service topology is not compatible with `externalTrafficPolicy=Local`, and
|
||||
|
@ -101,7 +97,6 @@ traffic as follows.
|
|||
* The catch-all value, `"*"`, must be the last value in the topology keys, if
|
||||
it is used.
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
The following are common examples of using the Service Topology feature.
|
||||
|
@ -147,12 +142,10 @@ spec:
|
|||
- "*"
|
||||
```
|
||||
|
||||
|
||||
### Only Zonal or Regional Endpoints
|
||||
|
||||
A Service that prefers zonal then regional endpoints. If no endpoints exist in either, traffic is dropped.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
|
|
@ -1239,7 +1239,7 @@ for that Service.
|
|||
When you define a Service, you can specify `externalIPs` for any
|
||||
[service type](#publishing-services-service-types).
|
||||
In the example below, the Service named `"my-service"` can be accessed by clients using TCP,
|
||||
on `"198.51.100.32:80"` (calculated from `.spec.externalIP` and `.spec.port`).
|
||||
on `"198.51.100.32:80"` (calculated from `.spec.externalIPs[]` and `.spec.ports[].port`).
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
|
|
@ -98,7 +98,8 @@ vendors provide their own external provisioner.
|
|||
### Reclaim Policy
|
||||
|
||||
PersistentVolumes that are dynamically created by a StorageClass will have the
[reclaim policy](/docs/concepts/storage/persistent-volumes/#reclaiming)
specified in the `reclaimPolicy` field of the class, which can be
either `Delete` or `Retain`. If no `reclaimPolicy` is specified when a
StorageClass object is created, it will default to `Delete`.
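
For instance, a minimal sketch of a StorageClass that keeps dynamically provisioned volumes after
their claims are deleted might look like this. The class name and the provisioner are placeholders;
use the provisioner or CSI driver for your own storage backend.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-retain
provisioner: example.com/external-provisioner  # placeholder provisioner name
reclaimPolicy: Retain                          # dynamically created PVs are kept after PVC deletion
```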
|
||||
|
||||
|
@ -107,8 +108,6 @@ whatever reclaim policy they were assigned at creation.
|
|||
|
||||
### Allow Volume Expansion
|
||||
|
||||
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
|
||||
|
||||
PersistentVolumes can be configured to be expandable. This feature when set to `true`,
|
||||
allows the users to resize the volume by editing the corresponding PVC object.
|
||||
|
||||
|
@ -146,8 +145,9 @@ the class or PV. If a mount option is invalid, the PV mount fails.
|
|||
|
||||
### Volume Binding Mode
|
||||
|
||||
The `volumeBindingMode` field controls when
[volume binding and dynamic provisioning](/docs/concepts/storage/persistent-volumes/#provisioning)
should occur. When unset, "Immediate" mode is used by default.
|
||||
|
||||
The `Immediate` mode indicates that volume binding and dynamic
|
||||
provisioning occurs once the PersistentVolumeClaim is created. For storage
|
||||
|
@ -176,14 +176,14 @@ The following plugins support `WaitForFirstConsumer` with pre-created Persistent
|
|||
- All of the above
|
||||
- [Local](#local)
|
||||
|
||||
{{< feature-state state="stable" for_k8s_version="v1.17" >}}
|
||||
[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning
|
||||
and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver
|
||||
to see its supported topology keys and examples.
|
||||
|
||||
{{< note >}}
|
||||
If you choose to use `WaitForFirstConsumer`, do not use `nodeName` in the Pod spec
to specify node affinity.
If `nodeName` is used in this case, the scheduler will be bypassed and the PVC will remain in a `pending` state.

Instead, you can use a node selector for hostname in this case, as shown below.
{{< /note >}}
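
A sketch of such a Pod, pinning it to a node by hostname with a node selector instead of `nodeName`,
could look like the following. The node name `kube-01`, the claim name, and the container image are
assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: kube-01   # select the node by its hostname label, not spec.nodeName
  containers:
    - name: task-pv-container
      image: nginx:1.25
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
```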
|
||||
|
@ -353,7 +353,8 @@ parameters:
|
|||
- `path`: Path that is exported by the NFS server.
|
||||
- `readOnly`: A flag indicating whether the storage will be mounted as read only (default false).
|
||||
|
||||
Kubernetes doesn't include an internal NFS provisioner.
You need to use an external provisioner to create a StorageClass for NFS.
|
||||
Here are some examples:
|
||||
|
||||
- [NFS Ganesha server and external provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
|
||||
|
@ -376,7 +377,8 @@ parameters:
|
|||
|
||||
{{< note >}}
|
||||
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
|
||||
This internal provisioner of OpenStack is deprecated. Please use
[the external cloud provider for OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
|
||||
{{< /note >}}
|
||||
|
||||
### vSphere
|
||||
|
@ -386,11 +388,15 @@ There are two types of provisioners for vSphere storage classes:
|
|||
- [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com`
|
||||
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`
|
||||
|
||||
In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi).
For more information on the CSI provisioner, see
[Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and
[vSphereVolume CSI migration](/docs/concepts/storage/volumes/#vsphere-csi-migration).
|
||||
|
||||
#### CSI Provisioner {#vsphere-provisioner-csi}
|
||||
|
||||
The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters.
For an example, refer to the [vSphere CSI repository](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml).
|
||||
|
||||
#### vCP Provisioner
|
||||
|
||||
|
@ -642,8 +648,6 @@ parameters:
|
|||
|
||||
### Local
|
||||
|
||||
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
|
|
|
@ -13,24 +13,46 @@ weight: 100
|
|||
|
||||
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
|
||||
|
||||
{{< glossary_tooltip text="CSI" term_id="csi" >}} volume health monitoring allows CSI Drivers to detect abnormal volume conditions from the underlying storage systems and report them as events on {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}} or {{< glossary_tooltip text="Pods" term_id="pod" >}}.
|
||||
{{< glossary_tooltip text="CSI" term_id="csi" >}} volume health monitoring allows
|
||||
CSI Drivers to detect abnormal volume conditions from the underlying storage systems
|
||||
and report them as events on {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}
|
||||
or {{< glossary_tooltip text="Pods" term_id="pod" >}}.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Volume health monitoring
|
||||
|
||||
Kubernetes _volume health monitoring_ is part of how Kubernetes implements the
Container Storage Interface (CSI). The volume health monitoring feature is implemented
in two components: an External Health Monitor controller, and the
{{< glossary_tooltip term_id="kubelet" text="kubelet" >}}.
|
||||
|
||||
If a CSI Driver supports the Volume Health Monitoring feature from the controller side,
an event will be reported on the related
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}} (PVC)
when an abnormal volume condition is detected on a CSI volume.
|
||||
|
||||
The External Health Monitor {{< glossary_tooltip text="controller" term_id="controller" >}}
also watches for node failure events. You can enable node failure monitoring by setting
the `enable-node-watcher` flag to true. When the external health monitor detects a node
failure event, the controller reports an Event on the PVC to indicate
that pods using this PVC are on a failed node.
|
||||
|
||||
If a CSI Driver supports the Volume Health Monitoring feature from the node side,
an Event will be reported on every Pod using the PVC when an abnormal volume
condition is detected on a CSI volume. In addition, Volume Health information
is exposed as Kubelet VolumeStats metrics. A new metric `kubelet_volume_stats_health_status_abnormal`
is added. This metric includes two labels: `namespace` and `persistentvolumeclaim`.
The count is either 1 or 0. 1 indicates the volume is unhealthy, 0 indicates the volume
is healthy. For more information, please check the
[KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#kubelet-metrics-changes).
|
||||
|
||||
{{< note >}}
|
||||
You need to enable the `CSIVolumeHealth` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
to use this feature from the node side.
|
||||
{{< /note >}}
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
See the [CSI driver documentation](https://kubernetes-csi.github.io/docs/drivers.html)
to find out which CSI drivers have implemented this feature.
|
||||
|
|
|
@ -11,36 +11,43 @@ weight: 70
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
|
||||
|
||||
|
||||
|
||||
This document describes the concept of cloning existing CSI Volumes in Kubernetes.
|
||||
Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Introduction
|
||||
|
||||
The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.
|
||||
The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds
|
||||
support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s
|
||||
in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.
|
||||
|
||||
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.
|
||||
A Clone is defined as a duplicate of an existing Kubernetes Volume that can be
|
||||
consumed as any standard Volume would be. The only difference is that upon
|
||||
provisioning, rather than creating a "new" empty Volume, the back end device
|
||||
creates an exact duplicate of the specified Volume.
|
||||
|
||||
The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).
|
||||
The implementation of cloning, from the perspective of the Kubernetes API, adds
|
||||
the ability to specify an existing PVC as a dataSource during new PVC creation.
|
||||
The source PVC must be bound and available (not in use).
|
||||
|
||||
Users need to be aware of the following when using this feature:
|
||||
|
||||
* Cloning support (`VolumePVCDataSource`) is only available for CSI drivers.
|
||||
* Cloning support is only available for dynamic provisioners.
|
||||
* CSI drivers may or may not have implemented the volume cloning functionality.
|
||||
* You can only clone a PVC when it exists in the same namespace as the destination PVC (source and destination must be in the same namespace).
|
||||
* You can only clone a PVC when it exists in the same namespace as the destination PVC
|
||||
(source and destination must be in the same namespace).
|
||||
* Cloning is supported with a different Storage Class.
|
||||
- Destination volume can be the same or a different storage class as the source.
|
||||
- Default storage class can be used and storageClassName omitted in the spec.
|
||||
* Cloning can only be performed between two volumes that use the same VolumeMode setting (if you request a block mode volume, the source MUST also be block mode)
|
||||
|
||||
- Destination volume can be the same or a different storage class as the source.
|
||||
- Default storage class can be used and storageClassName omitted in the spec.
|
||||
* Cloning can only be performed between two volumes that use the same VolumeMode setting
|
||||
(if you request a block mode volume, the source MUST also be block mode)
|
||||
|
||||
## Provisioning
|
||||
|
||||
Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace.
|
||||
Clones are provisioned like any other PVC with the exception of adding a dataSource
|
||||
that references an existing PVC in the same namespace.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -61,13 +68,18 @@ spec:
|
|||
```
|
||||
|
||||
{{< note >}}
|
||||
You must specify a capacity value for `spec.resources.requests.storage`, and the value you specify must be the same or larger than the capacity of the source volume.
|
||||
You must specify a capacity value for `spec.resources.requests.storage`, and the
|
||||
value you specify must be the same or larger than the capacity of the source volume.
|
||||
{{< /note >}}
|
||||
|
||||
The result is a new PVC with the name `clone-of-pvc-1` that has the exact same content as the specified source `pvc-1`.
|
||||
The result is a new PVC with the name `clone-of-pvc-1` that has the exact same
|
||||
content as the specified source `pvc-1`.
|
||||
|
||||
## Usage
|
||||
|
||||
Upon availability of the new PVC, the cloned PVC is consumed the same as other PVC.
It's also expected at this point that the newly created PVC is an independent object.
It can be consumed, cloned, snapshotted, or deleted independently and without
consideration for its original dataSource PVC. This also implies that the source
is not linked in any way to the newly created clone; it may also be modified or
deleted without affecting the newly created clone.
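
For example, a Pod could mount the cloned PVC just like any other claim. This is a minimal sketch;
the Pod name, image, and mount path are assumptions, and `clone-of-pvc-1` is the clone created above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: clone-consumer
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cloned-data
  volumes:
  - name: cloned-data
    persistentVolumeClaim:
      claimName: clone-of-pvc-1   # the cloned PVC is referenced like any other PVC
```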
|
||||
|
|
|
@ -17,9 +17,6 @@ This document describes the concept of VolumeSnapshotClass in Kubernetes. Famili
|
|||
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
|
||||
[storage classes](/docs/concepts/storage/storage-classes) is suggested.
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Introduction
|
||||
|
@ -40,7 +37,8 @@ of a class when first creating VolumeSnapshotClass objects, and the objects cann
|
|||
be updated once they are created.
|
||||
|
||||
{{< note >}}
|
||||
Installation of the CRDs is the responsibility of the Kubernetes distribution. Without the required CRDs present, the creation of a VolumeSnapshotClass fails.
|
||||
Installation of the CRDs is the responsibility of the Kubernetes distribution.
|
||||
Without the required CRDs present, the creation of a VolumeSnapshotClass fails.
|
||||
{{< /note >}}
|
||||
|
||||
```yaml
|
||||
|
@ -76,14 +74,17 @@ used for provisioning VolumeSnapshots. This field must be specified.
|
|||
|
||||
### DeletionPolicy
|
||||
|
||||
Volume snapshot classes have a deletionPolicy. It enables you to configure what
happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to
is to be deleted. The deletionPolicy of a volume snapshot class can either be
`Retain` or `Delete`. This field must be specified.

If the deletionPolicy is `Delete`, then the underlying storage snapshot will be
deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`,
then both the underlying snapshot and VolumeSnapshotContent remain.
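
As an illustration, a VolumeSnapshotClass whose snapshots are cleaned up together with their
VolumeSnapshotContent objects could be declared as follows. The class name and the `driver` value are
assumptions; set the driver to the CSI driver deployed in your cluster.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass-delete
driver: hostpath.csi.k8s.io       # placeholder; use your CSI driver name
deletionPolicy: Delete            # deleting the VolumeSnapshot also removes the underlying snapshot
```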
|
||||
|
||||
## Parameters
|
||||
|
||||
Volume snapshot classes have parameters that describe volume snapshots belonging to
|
||||
the volume snapshot class. Different parameters may be accepted depending on the
|
||||
`driver`.
|
||||
|
||||
|
||||
|
|
|
@ -291,7 +291,7 @@ network port spaces). Kubernetes uses pause containers to allow for worker conta
|
|||
crashing or restarting without losing any of the networking configuration.
|
||||
|
||||
Kubernetes maintains a multi-architecture image that includes support for Windows.
|
||||
For Kubernetes v{{< skew currentPatchVersion >}} the recommended pause image is `registry.k8s.io/pause:3.6`.
|
||||
The [source code](https://github.com/kubernetes/kubernetes/tree/master/build/pause)
|
||||
is available on GitHub.
|
||||
|
||||
|
|
|
@ -290,8 +290,13 @@ Jobs with _fixed completion count_ - that is, jobs that have non null
|
|||
The Job is considered complete when there is one successfully completed Pod
|
||||
for each index. For more information about how to use this mode, see
|
||||
[Indexed Job for Parallel Processing with Static Work Assignment](/docs/tasks/job/indexed-parallel-processing-static/).
|
||||
{{< note >}}
Although rare, more than one Pod could be started for the same index (due to various reasons such as node failures,
kubelet restarts, or Pod evictions). In this case, only the first Pod that completes successfully will
count towards the completion count and update the status of the Job. The other Pods that are running
or completed for the same index will be deleted by the Job controller once they are detected.
{{< /note >}}
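
As a sketch, an Indexed Job with a fixed completion count might look like the following. The image and
the command are illustrative assumptions; each Pod reads its index from the `JOB_COMPLETION_INDEX`
environment variable.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-job-example
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed          # each Pod gets an index from 0 to completions-1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo processing item $JOB_COMPLETION_INDEX"]
```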
|
||||
|
||||
|
||||
## Handling Pod and container failures
|
||||
|
|
|
@ -44,7 +44,7 @@ field).
|
|||
### Differences from regular containers
|
||||
|
||||
Init containers support all the fields and features of app containers,
|
||||
including resource limits, [volumes](/docs/concepts/storage/volumes/), and security settings. However, the
|
||||
resource requests and limits for an init container are handled differently,
|
||||
as documented in [Resources](#resources).
|
||||
|
||||
|
@ -196,7 +196,7 @@ kubectl logs myapp-pod -c init-myservice # Inspect the first init container
|
|||
kubectl logs myapp-pod -c init-mydb # Inspect the second init container
|
||||
```
|
||||
|
||||
At this point, those init containers will be waiting to discover {{< glossary_tooltip text="Services" term_id="service" >}} named
|
||||
`mydb` and `myservice`.
|
||||
|
||||
Here's a configuration you can use to make those Services appear:
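
A sketch of two such Services is shown below; the `targetPort` values are illustrative assumptions.
The important part is that Services named `myservice` and `mydb` become resolvable so the init
containers can complete.

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
```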
|
||||
|
@ -322,7 +322,7 @@ reasons:
|
|||
have to be done by someone with root access to nodes.
|
||||
* All containers in a Pod are terminated while `restartPolicy` is set to Always,
|
||||
forcing a restart, and the init container completion record has been lost due
|
||||
to {{< glossary_tooltip text="garbage collection" term_id="garbage-collection" >}}.
|
||||
|
||||
The Pod will not be restarted when the init container image is changed, or the
|
||||
init container completion record has been lost due to garbage collection. This
|
||||
|
@ -333,4 +333,5 @@ Kubernetes, consult the documentation for the version you are using.
|
|||
|
||||
* Read about [creating a Pod that has an init container](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)
|
||||
* Learn how to [debug init containers](/docs/tasks/debug/debug-application/debug-init-containers/)
|
||||
|
||||
* Read about an overview of [kubelet](/docs/reference/command-line-tools-reference/kubelet/) and [kubectl](/docs/reference/kubectl/)
|
||||
* Learn about the [types of probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe): liveness, readiness, startup probe.
|
||||
|
|
|
@ -320,6 +320,10 @@ Each probe must define exactly one of these four mechanisms:
|
|||
the port is open. If the remote system (the container) closes
|
||||
the connection immediately after it opens, this counts as healthy.
|
||||
|
||||
{{< caution >}}
Unlike the other mechanisms, the `exec` probe involves creating (forking) a new process each time it runs.
As a result, on clusters with high pod density and low `initialDelaySeconds` and `periodSeconds` values,
configuring probes with the `exec` mechanism can add noticeable overhead to the CPU usage of the node.
In such scenarios, consider using one of the alternative probe mechanisms to avoid the overhead.
{{< /caution >}}
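
For example, an `httpGet` probe avoids spawning a new process on every check. The sketch below is an
illustrative Pod; the image, path, and port are assumptions and should match whatever your container
actually serves.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    readinessProbe:
      httpGet:
        path: /          # nginx serves its default page here
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```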
|
||||
|
||||
### Probe outcome
|
||||
|
||||
Each probe has one of three results:
|
||||
|
|
|
@ -55,7 +55,7 @@ to use this feature with Kubernetes stateless pods:
|
|||
* CRI-O: version 1.25 (and later) supports user namespaces for containers.
|
||||
|
||||
Please note that containerd v1.7 supports user namespaces for containers,
|
||||
compatible with Kubernetes {{< skew currentPatchVersion >}}. It should not be used
|
||||
with Kubernetes 1.27 (and later).
|
||||
|
||||
Support for this in [cri-dockerd is not planned][CRI-dockerd-issue] yet.
|
||||
|
|
|
@ -17,8 +17,6 @@ You can register for Kubernetes Slack at https://slack.k8s.io/.
|
|||
For information on creating new content for the Kubernetes
|
||||
docs, follow the [style guide](/docs/contribute/style/style-guide).
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Overview
|
||||
|
@ -40,15 +38,19 @@ Kubernetes docs allow content for third-party projects only when:
|
|||
|
||||
### Third party content
|
||||
|
||||
Kubernetes documentation includes applied examples of projects in the Kubernetes project—projects that live in the [kubernetes](https://github.com/kubernetes) and
|
||||
Kubernetes documentation includes applied examples of projects in the Kubernetes
|
||||
project—projects that live in the [kubernetes](https://github.com/kubernetes) and
|
||||
[kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations.
|
||||
|
||||
Links to active content in the Kubernetes project are always allowed.
|
||||
|
||||
Kubernetes requires some third party content to function. Examples include container runtimes (containerd, CRI-O, Docker),
|
||||
[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins), [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/), and [logging](/docs/concepts/cluster-administration/logging/).
|
||||
[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins),
|
||||
[Ingress controllers](/docs/concepts/services-networking/ingress-controllers/),
|
||||
and [logging](/docs/concepts/cluster-administration/logging/).
|
||||
|
||||
Docs can link to third-party open source software (OSS) outside the Kubernetes project only if it's necessary for Kubernetes to function.
|
||||
Docs can link to third-party open source software (OSS) outside the Kubernetes
|
||||
project only if it's necessary for Kubernetes to function.
|
||||
|
||||
### Dual sourced content
|
||||
|
||||
|
@ -59,19 +61,14 @@ Dual-sourced content requires double the effort (or more!) to maintain
|
|||
and grows stale more quickly.
|
||||
|
||||
{{< note >}}
|
||||
|
||||
If you're a maintainer for a Kubernetes project and need help hosting your own docs,
|
||||
ask for help in [#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/messages/C1J0BPD2M/).
|
||||
|
||||
{{< /note >}}
|
||||
|
||||
### More information
|
||||
|
||||
If you have questions about allowed content, join the [Kubernetes Slack](https://slack.k8s.io/) #sig-docs channel and ask!
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Read the [Style guide](/docs/contribute/style/style-guide).
|
||||
|
|
|
@ -4,13 +4,10 @@ content_type: concept
|
|||
weight: 90
|
||||
---
|
||||
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
This site uses Hugo. In Hugo, [content organization](https://gohugo.io/content-management/organization/) is a core concept.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
{{% note %}}
|
||||
|
@ -21,7 +18,9 @@ This site uses Hugo. In Hugo, [content organization](https://gohugo.io/content-m
|
|||
|
||||
### Page Order
|
||||
|
||||
The documentation side menu, the documentation page browser etc. are listed using Hugo's default sort order, which sorts by weight (from 1), date (newest first), and finally by the link title.
|
||||
The documentation side menu, the documentation page browser etc. are listed using
|
||||
Hugo's default sort order, which sorts by weight (from 1), date (newest first),
|
||||
and finally by the link title.
|
||||
|
||||
Given that, if you want to move a page or a section up, set a weight in the page's front matter:
|
||||
|
||||
|
@ -30,24 +29,25 @@ title: My Page
|
|||
weight: 10
|
||||
```
|
||||
|
||||
|
||||
{{% note %}}
|
||||
For page weights, it can be smart not to use 1, 2, 3 ..., but some other interval, say 10, 20, 30... This allows you to insert pages where you want later.
|
||||
Additionally, each weight within the same directory (section) should not be overlapped with the other weights. This makes sure that content is always organized correctly, especially in localized content.
|
||||
For page weights, it can be smart not to use 1, 2, 3 ..., but some other interval,
|
||||
say 10, 20, 30... This allows you to insert pages where you want later.
|
||||
Additionally, each weight within the same directory (section) should not be
|
||||
overlapped with the other weights. This makes sure that content is always
|
||||
organized correctly, especially in localized content.
|
||||
{{% /note %}}
|
||||
|
||||
|
||||
### Documentation Main Menu
|
||||
|
||||
The `Documentation` main menu is built from the sections below `docs/` with the `main_menu` flag set in front matter of the `_index.md` section content file:
|
||||
The `Documentation` main menu is built from the sections below `docs/` with
|
||||
the `main_menu` flag set in front matter of the `_index.md` section content file:
|
||||
|
||||
```yaml
|
||||
main_menu: true
|
||||
```
|
||||
|
||||
|
||||
Note that the link title is fetched from the page's `linkTitle`, so if you want it to be something different than the title, change it in the content file:
|
||||
|
||||
Note that the link title is fetched from the page's `linkTitle`, so if you want
|
||||
it to be something different than the title, change it in the content file:
|
||||
|
||||
```yaml
|
||||
main_menu: true
|
||||
|
@ -55,9 +55,10 @@ title: Page Title
|
|||
linkTitle: Title used in links
|
||||
```
|
||||
|
||||
|
||||
{{% note %}}
|
||||
The above needs to be done per language. If you don't see your section in the menu, it is probably because it is not identified as a section by Hugo. Create a `_index.md` content file in the section folder.
|
||||
The above needs to be done per language. If you don't see your section in the menu,
|
||||
it is probably because it is not identified as a section by Hugo. Create a
|
||||
`_index.md` content file in the section folder.
|
||||
{{% /note %}}
|
||||
|
||||
### Documentation Side Menu
|
||||
|
@ -72,11 +73,13 @@ If you don't want to list a section or page, set the `toc_hide` flag to `true` i
|
|||
toc_hide: true
|
||||
```
|
||||
|
||||
When you navigate to a section that has content, the specific section or page (e.g. `_index.md`) is shown. Else, the first page inside that section is shown.
|
||||
When you navigate to a section that has content, the specific section or page
|
||||
(e.g. `_index.md`) is shown. Else, the first page inside that section is shown.
|
||||
|
||||
### Documentation Browser
|
||||
|
||||
The page browser on the documentation home page is built using all the sections and pages that are directly below the `docs section`.
|
||||
The page browser on the documentation home page is built using all the sections
|
||||
and pages that are directly below the `docs section`.
|
||||
|
||||
If you don't want to list a section or page, set the `toc_hide` flag to `true` in front matter:
|
||||
|
||||
|
@ -86,14 +89,18 @@ toc_hide: true
|
|||
|
||||
### The Main Menu
|
||||
|
||||
The site links in the top-right menu -- and also in the footer -- are built by page-lookups. This is to make sure that the page actually exists. So, if the `case-studies` section does not exist in a site (language), it will not be linked to.
|
||||
|
||||
The site links in the top-right menu -- and also in the footer -- are built by
|
||||
page-lookups. This is to make sure that the page actually exists. So, if the
|
||||
`case-studies` section does not exist in a site (language), it will not be linked to.
|
||||
|
||||
## Page Bundles
|
||||
|
||||
In addition to standalone content pages (Markdown files), Hugo supports [Page Bundles](https://gohugo.io/content-management/page-bundles/).
|
||||
In addition to standalone content pages (Markdown files), Hugo supports
|
||||
[Page Bundles](https://gohugo.io/content-management/page-bundles/).
|
||||
|
||||
One example is [Custom Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/). It is considered a `leaf bundle`. Everything below the directory, including the `index.md`, will be part of the bundle. This also includes page-relative links, images that can be processed etc.:
|
||||
One example is [Custom Hugo Shortcodes](/docs/contribute/style/hugo-shortcodes/).
|
||||
It is considered a `leaf bundle`. Everything below the directory, including the `index.md`,
|
||||
will be part of the bundle. This also includes page-relative links, images that can be processed etc.:
|
||||
|
||||
```bash
|
||||
en/docs/home/contribute/includes
|
||||
|
@ -103,7 +110,8 @@ en/docs/home/contribute/includes
|
|||
└── podtemplate.json
|
||||
```
|
||||
|
||||
Another widely used example is the `includes` bundle. It sets `headless: true` in front matter, which means that it does not get its own URL. It is only used in other pages.
|
||||
Another widely used example is the `includes` bundle. It sets `headless: true` in
|
||||
front matter, which means that it does not get its own URL. It is only used in other pages.
|
||||
|
||||
```bash
|
||||
en/includes
|
||||
|
@ -118,22 +126,22 @@ en/includes
|
|||
|
||||
Some important notes to the files in the bundles:
|
||||
|
||||
* For translated bundles, any missing non-content files will be inherited from languages above. This avoids duplication.
|
||||
* All the files in a bundle are what Hugo calls `Resources` and you can provide metadata per language, such as parameters and title, even if it does not supports front matter (YAML files etc.). See [Page Resources Metadata](https://gohugo.io/content-management/page-resources/#page-resources-metadata).
|
||||
* The value you get from `.RelPermalink` of a `Resource` is page-relative. See [Permalinks](https://gohugo.io/content-management/urls/#permalinks).
|
||||
|
||||
* For translated bundles, any missing non-content files will be inherited from
|
||||
languages above. This avoids duplication.
|
||||
* All the files in a bundle are what Hugo calls `Resources` and you can provide
|
||||
metadata per language, such as parameters and title, even if it does not supports
|
||||
front matter (YAML files etc.).
|
||||
See [Page Resources Metadata](https://gohugo.io/content-management/page-resources/#page-resources-metadata).
|
||||
* The value you get from `.RelPermalink` of a `Resource` is page-relative.
|
||||
See [Permalinks](https://gohugo.io/content-management/urls/#permalinks).
|
||||
|
||||
## Styles
|
||||
|
||||
The [SASS](https://sass-lang.com/) source of the stylesheets for this site is stored in `assets/sass` and is automatically built by Hugo.
|
||||
|
||||
|
||||
The [SASS](https://sass-lang.com/) source of the stylesheets for this site is
|
||||
stored in `assets/sass` and is automatically built by Hugo.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
* Learn about [custom Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/)
|
||||
* Learn about the [Style guide](/docs/contribute/style/style-guide)
|
||||
* Learn about the [Content guide](/docs/contribute/style/content-guide)
|
||||
|
||||
|
||||
|
|
|
@ -14,8 +14,8 @@ For additional information on creating new content for the Kubernetes
|
|||
documentation, read the [Documentation Content Guide](/docs/contribute/style/content-guide/).
|
||||
|
||||
Changes to the style guide are made by SIG Docs as a group. To propose a change
|
||||
or addition, [add it to the agenda](https://bit.ly/sig-docs-agenda) for an upcoming
SIG Docs meeting, and attend the meeting to participate in the discussion.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
|
@ -42,11 +42,17 @@ The English-language documentation uses U.S. English spelling and grammar.
|
|||
|
||||
### Use upper camel case for API objects
|
||||
|
||||
When you refer specifically to interacting with an API object, use [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as Pascal case. You may see different capitalization, such as "configMap", in the [API Reference](/docs/reference/kubernetes-api/). When writing general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
|
||||
When you refer specifically to interacting with an API object, use
|
||||
[UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as
|
||||
Pascal case. You may see different capitalization, such as "configMap",
|
||||
in the [API Reference](/docs/reference/kubernetes-api/). When writing
|
||||
general documentation, it's better to use upper camel case, calling it "ConfigMap" instead.
|
||||
|
||||
When you are generally discussing an API object, use [sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
|
||||
When you are generally discussing an API object, use
|
||||
[sentence-style capitalization](https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization).
|
||||
|
||||
The following examples focus on capitalization. For more information about formatting API object names, review the related guidance on [Code Style](#code-style-inline-code).
|
||||
The following examples focus on capitalization. For more information about formatting
|
||||
API object names, review the related guidance on [Code Style](#code-style-inline-code).
|
||||
|
||||
{{< table caption = "Do and Don't - Use Pascal case for API objects" >}}
|
||||
Do | Don't
|
||||
|
@ -130,7 +136,9 @@ Remove trailing spaces in the code. | Add trailing spaces in the code, where the
|
|||
{{< /table >}}
|
||||
|
||||
{{< note >}}
|
||||
The website supports syntax highlighting for code samples, but specifying a language is optional. Syntax highlighting in the code block should conform to the [contrast guidelines.](https://www.w3.org/WAI/WCAG21/quickref/?versions=2.0&showtechniques=141%2C143#contrast-minimum)
|
||||
The website supports syntax highlighting for code samples, but specifying a language
|
||||
is optional. Syntax highlighting in the code block should conform to the
|
||||
[contrast guidelines.](https://www.w3.org/WAI/WCAG21/quickref/?versions=2.0&showtechniques=141%2C143#contrast-minimum)
|
||||
{{< /note >}}
|
||||
|
||||
### Use code style for object field names and namespaces
|
||||
|
@ -189,7 +197,10 @@ This section talks about how we reference API resources in the documentation.
|
|||
|
||||
### Clarification about "resource"
|
||||
|
||||
Kubernetes uses the word "resource" to refer to API resources, such as `pod`, `deployment`, and so on. We also use "resource" to talk about CPU and memory requests and limits. Always refer to API resources as "API resources" to avoid confusion with CPU and memory resources.
|
||||
Kubernetes uses the word "resource" to refer to API resources, such as `pod`,
|
||||
`deployment`, and so on. We also use "resource" to talk about CPU and memory
|
||||
requests and limits. Always refer to API resources as "API resources" to avoid
|
||||
confusion with CPU and memory resources.
|
||||
|
||||
### When to use Kubernetes API terminologies
|
||||
|
||||
|
@ -197,21 +208,27 @@ The different Kubernetes API terminologies are:
|
|||
|
||||
- Resource type: the name used in the API URL (such as `pods`, `namespaces`)
|
||||
- Resource: a single instance of a resource type (such as `pod`, `secret`)
|
||||
- Object: a resource that serves as a "record of intent". An object is a desired state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
|
||||
- Object: a resource that serves as a "record of intent". An object is a desired
|
||||
state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
|
||||
|
||||
Always use "resource" or "object" when referring to an API resource in docs. For example, use "a `Secret` object" over just "a `Secret`".
|
||||
Always use "resource" or "object" when referring to an API resource in docs.
|
||||
For example, use "a `Secret` object" over just "a `Secret`".
|
||||
|
||||
### API resource names
|
||||
|
||||
Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case), also known as PascalCase, and code formatting.
|
||||
Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case),
|
||||
also known as PascalCase, and code formatting.
|
||||
|
||||
For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``).
|
||||
|
||||
Don't split an API object name into separate words. For example, use `PodTemplateList`, not Pod Template List.
|
||||
|
||||
For more information about PascalCase and code formatting, please review the related guidance on [Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects) and [Use code style for inline code, commands, and API objects](/docs/contribute/style/style-guide/#code-style-inline-code).
|
||||
For more information about PascalCase and code formatting, please review the related guidance on
|
||||
[Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects)
|
||||
and [Use code style for inline code, commands, and API objects](/docs/contribute/style/style-guide/#code-style-inline-code).
|
||||
|
||||
For more information about Kubernetes API terminologies, please review the related guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).
|
||||
For more information about Kubernetes API terminologies, please review the related
|
||||
guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/#standard-api-terminology).
|
||||
|
||||
## Code snippet formatting
|
||||
|
||||
|
@ -240,17 +257,23 @@ nginx 1/1 Running 0 13s 10.200.0.4 worker0
|
|||
|
||||
### Versioning Kubernetes examples
|
||||
|
||||
Code examples and configuration examples that include version information should be consistent with the accompanying text.
|
||||
Code examples and configuration examples that include version information should
|
||||
be consistent with the accompanying text.
|
||||
|
||||
If the information is version specific, the Kubernetes version needs to be defined in the `prerequisites` section of the [Task template](/docs/contribute/style/page-content-types/#task) or the [Tutorial template](/docs/contribute/style/page-content-types/#tutorial). Once the page is saved, the `prerequisites` section is shown as **Before you begin**.
|
||||
If the information is version specific, the Kubernetes version needs to be defined
|
||||
in the `prerequisites` section of the [Task template](/docs/contribute/style/page-content-types/#task)
|
||||
or the [Tutorial template](/docs/contribute/style/page-content-types/#tutorial).
|
||||
Once the page is saved, the `prerequisites` section is shown as **Before you begin**.
|
||||
|
||||
To specify the Kubernetes version for a task or tutorial page, include `min-kubernetes-server-version` in the front matter of the page.
|
||||
To specify the Kubernetes version for a task or tutorial page, include
|
||||
`min-kubernetes-server-version` in the front matter of the page.
|
||||
|
||||
If the example YAML is in a standalone file, find and review the topics that include it as a reference.
|
||||
Verify that any topics using the standalone YAML have the appropriate version information defined.
|
||||
If a stand-alone YAML file is not referenced from any topics, consider deleting it instead of updating it.
|
||||
|
||||
For example, if you are writing a tutorial that is relevant to Kubernetes version 1.8, the front-matter of your markdown file should look something like:
|
||||
For example, if you are writing a tutorial that is relevant to Kubernetes version 1.8,
|
||||
the front-matter of your markdown file should look something like:
|
||||
|
||||
```yaml
|
||||
---
|
||||
|
@ -283,7 +306,10 @@ On-premises | On-premises or On-prem rather than On-premise or other variations.
|
|||
|
||||
## Shortcodes
|
||||
|
||||
Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create different rhetorical appeal levels. Our documentation supports three different shortcodes in this category: **Note** `{{</* note */>}}`, **Caution** `{{</* caution */>}}`, and **Warning** `{{</* warning */>}}`.
|
||||
Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create
|
||||
different rhetorical appeal levels. Our documentation supports three different
|
||||
shortcodes in this category: **Note** `{{</* note */>}}`,
|
||||
**Caution** `{{</* caution */>}}`, and **Warning** `{{</* warning */>}}`.
|
||||
|
||||
1. Surround the text with an opening and closing shortcode.
|
||||
|
||||
|
@ -412,7 +438,8 @@ The output is:
|
|||
|
||||
### Include Statements
|
||||
|
||||
Shortcodes inside include statements will break the build. You must insert them in the parent document, before and after you call the include. For example:
|
||||
Shortcodes inside include statements will break the build. You must insert them
|
||||
in the parent document, before and after you call the include. For example:
|
||||
|
||||
```
|
||||
{{</* note */>}}
|
||||
|
@ -424,11 +451,19 @@ Shortcodes inside include statements will break the build. You must insert them
|
|||
|
||||
### Line breaks
|
||||
|
||||
Use a single newline to separate block-level content like headings, lists, images, code blocks, and others. The exception is second-level headings, where it should be two newlines. Second-level headings follow the first-level (or the title) without any preceding paragraphs or texts. A two line spacing helps visualize the overall structure of content in a code editor better.
|
||||
Use a single newline to separate block-level content like headings, lists, images,
|
||||
code blocks, and others. The exception is second-level headings, where it should
|
||||
be two newlines. Second-level headings follow the first-level (or the title) without
|
||||
any preceding paragraphs or texts. A two line spacing helps visualize the overall
|
||||
structure of content in a code editor better.
|
||||
|
||||
### Headings and titles {#headings}
|
||||
|
||||
People accessing this documentation may use a screen reader or other assistive technology (AT). [Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices, they output items on a page one at a time. If there is a lot of content on a page, you can use headings to give the page an internal structure. A good page structure helps all readers to easily navigate the page or filter topics of interest.
|
||||
People accessing this documentation may use a screen reader or other assistive technology (AT).
|
||||
[Screen readers](https://en.wikipedia.org/wiki/Screen_reader) are linear output devices,
|
||||
they output items on a page one at a time. If there is a lot of content on a page, you can
|
||||
use headings to give the page an internal structure. A good page structure helps all readers
|
||||
to easily navigate the page or filter topics of interest.
|
||||
|
||||
{{< table caption = "Do and Don't - Headings" >}}
|
||||
Do | Don't
|
||||
|
@ -460,12 +495,20 @@ Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](
|
|||
|
||||
### Lists
|
||||
|
||||
Group items in a list that are related to each other and need to appear in a specific order or to indicate a correlation between multiple items. When a screen reader comes across a list—whether it is an ordered or unordered list—it will be announced to the user that there is a group of list items. The user can then use the arrow keys to move up and down between the various items in the list.
|
||||
Website navigation links can also be marked up as list items; after all they are nothing but a group of related links.
|
||||
Group items in a list that are related to each other and need to appear in a specific
|
||||
order or to indicate a correlation between multiple items. When a screen reader comes
|
||||
across a list—whether it is an ordered or unordered list—it will be announced to the
|
||||
user that there is a group of list items. The user can then use the arrow keys to move
|
||||
up and down between the various items in the list. Website navigation links can also be
|
||||
marked up as list items; after all they are nothing but a group of related links.
|
||||
|
||||
- End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences.
|
||||
- End each item in a list with a period if one or more items in the list are complete
|
||||
sentences. For the sake of consistency, normally either all items or none should be complete sentences.
|
||||
|
||||
{{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}}
|
||||
{{< note >}}
|
||||
Ordered lists that are part of an incomplete introductory sentence can be in lowercase
|
||||
and punctuated as if each item was a part of the introductory sentence.
|
||||
{{< /note >}}
|
||||
|
||||
- Use the number one (`1.`) for ordered lists.
|
||||
|
||||
|
@ -475,11 +518,15 @@ Website navigation links can also be marked up as list items; after all they are
|
|||
|
||||
- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).
|
||||
|
||||
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab.
|
||||
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list
|
||||
item must be indented by either four spaces or one tab.
|
||||
|
||||
### Tables
|
||||
|
||||
The semantic purpose of a data table is to present tabular data. Sighted users can quickly scan the table but a screen reader goes through line by line. A table caption is used to create a descriptive title for a data table. Assistive technologies (AT) use the HTML table caption element to identify the table contents to the user within the page structure.
|
||||
The semantic purpose of a data table is to present tabular data. Sighted users can
|
||||
quickly scan the table but a screen reader goes through line by line. A table caption
|
||||
is used to create a descriptive title for a data table. Assistive technologies (AT)
|
||||
use the HTML table caption element to identify the table contents to the user within the page structure.
|
||||
|
||||
- Add table captions using [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions) for tables.
|
||||
|
||||
|
|
|
@ -182,14 +182,14 @@ kubelet [flags]
|
|||
<td colspan="2">--container-log-max-files int32 Default: 5</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be >= 2. This flag can only be used with <code>--container-runtime=remote</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><Warning: Beta feature> Set the maximum number of container log files that can be present for a container. The number must be >= 2. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
<td colspan="2">--container-log-max-size string Default: <code>10Mi</code></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><Warning: Beta feature> Set the maximum size (e.g. <code>10Mi</code>) of container log file before it is rotated. This flag can only be used with <code>--container-runtime=remote</code>. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
|
||||
<td></td><td style="line-height: 130%; word-wrap: break-word;"><Warning: Beta feature> Set the maximum size (e.g. <code>10Mi</code>) of container log file before it is rotated. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td>
|
||||
</tr>
|
||||
|
||||
<tr>
|
||||
|
|
File diff suppressed because it is too large
|
@ -372,7 +372,7 @@ NAME PARENTREF
|
|||
|
||||
#### IP address ranges for Service virtual IP addresses {#service-ip-static-sub-range}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.25" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
|
||||
|
||||
Kubernetes divides the `ClusterIP` range into two bands, based on
|
||||
the size of the configured `service-cluster-ip-range` by using the following formula
|
||||
|
@ -396,7 +396,7 @@ to control how Kubernetes routes traffic to healthy (“ready”) backends.
|
|||
|
||||
### Internal traffic policy
|
||||
|
||||
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
|
||||
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
|
||||
|
||||
You can set the `.spec.internalTrafficPolicy` field to control how traffic from
|
||||
internal sources is routed. Valid values are `Cluster` and `Local`. Set the field to
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 10
|
|||
|
||||
{{% thirdparty-content %}}
|
||||
|
||||
{{<note>}}
|
||||
This page is deprecated and will be removed in Kubernetes 1.27.
|
||||
{{</note>}}
|
||||
|
||||
`crictl` is a command-line interface for {{<glossary_tooltip term_id="cri" text="CRI">}}-compatible container runtimes.
|
||||
You can use it to inspect and debug container runtimes and applications on a
|
||||
Kubernetes node. `crictl` and its source are hosted in the
|
||||
|
@ -74,4 +70,4 @@ crictl | Description
|
|||
`runp` | Run a new pod
|
||||
`rmp` | Remove one or more pods
|
||||
`stopp` | Stop one or more running pods
|
||||
{{< /table >}}
|
||||
{{< /table >}}
|
||||
|
|
|
@ -1066,9 +1066,9 @@ Continue Token, Exact
|
|||
|
||||
{{< note >}}
|
||||
When you **list** resources and receive a collection response, the response includes the
|
||||
[list metadata](/docs/reference/generated/kubernetes-api/v{{ skew currentVersion >}}/#listmeta-v1-meta)
|
||||
[list metadata](/docs/reference/generated/kubernetes-api/v{{< skew currentVersion >}}/#listmeta-v1-meta)
|
||||
of the collection as well as
|
||||
[object metadata](/docs/reference/generated/kubernetes-api/v{{ skew currentVersion >}}/#objectmeta-v1-meta)
|
||||
[object metadata](/docs/reference/generated/kubernetes-api/v{{< skew currentVersion >}}/#objectmeta-v1-meta)
|
||||
for each item in that collection. For individual objects found within a collection response,
|
||||
`.metadata.resourceVersion` tracks when that object was last updated, and not how up-to-date
|
||||
the object is when served.
|
||||
|
|
|
@ -171,7 +171,7 @@ It augments the basic
|
|||
|
||||
{{< note >}}
|
||||
The contents below are just an example. If you don't want to use a package manager
|
||||
follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2))
|
||||
follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2)
|
||||
section.
|
||||
{{< /note >}}
|
||||
|
||||
|
|
|
@ -63,12 +63,22 @@ on Kubernetes dual-stack support see [Dual-stack support with kubeadm](/docs/set
|
|||
that has higher precedence than the kubeadm-provided kubelet unit file.
|
||||
|
||||
```sh
|
||||
cat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf
|
||||
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
|
||||
# Replace the value of "containerRuntimeEndpoint" for a different container runtime if needed.
|
||||
#
|
||||
apiVersion: kubelet.config.k8s.io/v1beta1
|
||||
kind: KubeletConfiguration
|
||||
cgroupDriver: systemd
|
||||
address: 127.0.0.1
|
||||
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
|
||||
staticPodPath: /etc/kubernetes/manifests
|
||||
EOF
|
||||
|
||||
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
|
||||
[Service]
|
||||
ExecStart=
|
||||
# Replace "systemd" with the cgroup driver of your container runtime. The default value in the kubelet is "cgroupfs".
|
||||
# Replace the value of "--container-runtime-endpoint" for a different container runtime if needed.
|
||||
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
|
||||
ExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf
|
||||
Restart=always
|
||||
EOF
|
||||
|
||||
|
|
|
@ -397,7 +397,7 @@ Before you start an upgrade, please back up your etcd cluster first.
|
|||
|
||||
## Maintaining etcd clusters
|
||||
|
||||
Fore more details on etcd maintenance, please refer to the [etcd maintenance](https://etcd.io/docs/latest/op-guide/maintenance/) documentation.
|
||||
For more details on etcd maintenance, please refer to the [etcd maintenance](https://etcd.io/docs/latest/op-guide/maintenance/) documentation.
|
||||
|
||||
{{% thirdparty-content single="true" %}}
|
||||
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
reviewers:
|
||||
- jpbetz
|
||||
- cheftako
|
||||
title: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
|
||||
title: Migrate Replicated Control Plane To Use Cloud Controller Manager
|
||||
linkTitle: "Migrate Replicated Control Plane To Use Cloud Controller Manager"
|
||||
content_type: task
|
||||
weight: 250
|
||||
|
@ -14,45 +14,92 @@ weight: 250
|
|||
|
||||
## Background
|
||||
|
||||
As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/), all cloud specific controllers must be moved out of the `kube-controller-manager`. All existing clusters that run cloud controllers in the `kube-controller-manager` must migrate to instead run the controllers in a cloud provider specific `cloud-controller-manager`.
|
||||
As part of the [cloud provider extraction effort](/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/),
|
||||
all cloud specific controllers must be moved out of the `kube-controller-manager`.
|
||||
All existing clusters that run cloud controllers in the `kube-controller-manager`
|
||||
must migrate to instead run the controllers in a cloud provider specific
|
||||
`cloud-controller-manager`.
|
||||
|
||||
Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud specific" controllers between the `kube-controller-manager` and the `cloud-controller-manager` via a shared resource lock between the two components while upgrading the replicated control plane. For a single-node control plane, or if unavailability of controller managers can be tolerated during the upgrade, Leader Migration is not needed and this guide can be ignored.
|
||||
Leader Migration provides a mechanism in which HA clusters can safely migrate "cloud
|
||||
specific" controllers between the `kube-controller-manager` and the
|
||||
`cloud-controller-manager` via a shared resource lock between the two components
|
||||
while upgrading the replicated control plane. For a single-node control plane, or if
|
||||
unavailability of controller managers can be tolerated during the upgrade, Leader
|
||||
Migration is not needed and this guide can be ignored.
|
||||
|
||||
Leader Migration can be enabled by setting `--enable-leader-migration` on `kube-controller-manager` or `cloud-controller-manager`. Leader Migration only applies during the upgrade and can be safely disabled or left enabled after the upgrade is complete.
|
||||
Leader Migration can be enabled by setting `--enable-leader-migration` on
|
||||
`kube-controller-manager` or `cloud-controller-manager`. Leader Migration only
|
||||
applies during the upgrade and can be safely disabled or left enabled after the
|
||||
upgrade is complete.
|
||||
|
||||
This guide walks you through the manual process of upgrading the control plane from `kube-controller-manager` with built-in cloud provider to running both `kube-controller-manager` and `cloud-controller-manager`. If you use a tool to deploy and manage the cluster, please refer to the documentation of the tool and the cloud provider for specific instructions of the migration.
|
||||
This guide walks you through the manual process of upgrading the control plane from
|
||||
`kube-controller-manager` with built-in cloud provider to running both
|
||||
`kube-controller-manager` and `cloud-controller-manager`. If you use a tool to deploy
|
||||
and manage the cluster, please refer to the documentation of the tool and the cloud
|
||||
provider for specific instructions of the migration.
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
It is assumed that the control plane is running Kubernetes version N and to be upgraded to version N + 1. Although it is possible to migrate within the same version, ideally the migration should be performed as part of an upgrade so that changes of configuration can be aligned to each release. The exact versions of N and N + 1 depend on each cloud provider. For example, if a cloud provider builds a `cloud-controller-manager` to work with Kubernetes 1.24, then N can be 1.23 and N + 1 can be 1.24.
|
||||
It is assumed that the control plane is running Kubernetes version N and to be
|
||||
upgraded to version N + 1. Although it is possible to migrate within the same
|
||||
version, ideally the migration should be performed as part of an upgrade so that
|
||||
changes of configuration can be aligned to each release. The exact versions of N and
|
||||
N + 1 depend on each cloud provider. For example, if a cloud provider builds a
|
||||
`cloud-controller-manager` to work with Kubernetes 1.24, then N can be 1.23 and N + 1
|
||||
can be 1.24.
|
||||
|
||||
The control plane nodes should run `kube-controller-manager` with Leader Election enabled, which is the default. As of version N, an in-tree cloud provider must be set with `--cloud-provider` flag and `cloud-controller-manager` should not yet be deployed.
|
||||
The control plane nodes should run `kube-controller-manager` with Leader Election
|
||||
enabled, which is the default. As of version N, an in-tree cloud provider must be set
|
||||
with the `--cloud-provider` flag, and `cloud-controller-manager` should not yet be
|
||||
deployed.
|
||||
|
||||
The out-of-tree cloud provider must have built a `cloud-controller-manager` with Leader Migration implementation. If the cloud provider imports `k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later, Leader Migration will be available. However, for version before v0.22.0, Leader Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be enabled in `cloud-controller-manager`.
|
||||
The out-of-tree cloud provider must have built a `cloud-controller-manager` with
|
||||
Leader Migration implementation. If the cloud provider imports
|
||||
`k8s.io/cloud-provider` and `k8s.io/controller-manager` of version v0.21.0 or later,
|
||||
Leader Migration will be available. However, for versions before v0.22.0, Leader
|
||||
Migration is alpha and requires feature gate `ControllerManagerLeaderMigration` to be
|
||||
enabled in `cloud-controller-manager`.
|
||||
|
||||
This guide assumes that kubelet of each control plane node starts `kube-controller-manager` and `cloud-controller-manager` as static pods defined by their manifests. If the components run in a different setting, please adjust the steps accordingly.
|
||||
This guide assumes that kubelet of each control plane node starts
|
||||
`kube-controller-manager` and `cloud-controller-manager` as static pods defined by
|
||||
their manifests. If the components run in a different setting, please adjust the
|
||||
steps accordingly.
|
||||
|
||||
For authorization, this guide assumes that the cluster uses RBAC. If another authorization mode grants permissions to `kube-controller-manager` and `cloud-controller-manager` components, please grant the needed access in a way that matches the mode.
|
||||
For authorization, this guide assumes that the cluster uses RBAC. If another
|
||||
authorization mode grants permissions to `kube-controller-manager` and
|
||||
`cloud-controller-manager` components, please grant the needed access in a way that
|
||||
matches the mode.
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
### Grant access to Migration Lease
|
||||
|
||||
The default permissions of the controller manager allow only accesses to their main Lease. In order for the migration to work, accesses to another Lease are required.
|
||||
The default permissions of the controller managers allow only access to their main
|
||||
Lease. In order for the migration to work, access to another Lease is required.
|
||||
|
||||
You can grant `kube-controller-manager` full access to the leases API by modifying the `system::leader-locking-kube-controller-manager` role. This task guide assumes that the name of the migration lease is `cloud-provider-extraction-migration`.
|
||||
You can grant `kube-controller-manager` full access to the leases API by modifying
|
||||
the `system::leader-locking-kube-controller-manager` role. This task guide assumes
|
||||
that the name of the migration lease is `cloud-provider-extraction-migration`.
|
||||
|
||||
`kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
|
||||
```shell
|
||||
kubectl patch -n kube-system role 'system::leader-locking-kube-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge
|
||||
```
|
||||
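For reference, the JSON patch above merges a rule equivalent to the following into the Role, shown here as YAML only for readability:

```yaml
# Rule added to system::leader-locking-kube-controller-manager so the
# controller manager can also manage the migration Lease.
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  resourceNames: ["cloud-provider-extraction-migration"]
  verbs: ["create", "list", "get", "update"]
```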
|
||||
Do the same to the `system::leader-locking-cloud-controller-manager` role.
|
||||
|
||||
`kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge`
|
||||
```shell
|
||||
kubectl patch -n kube-system role 'system::leader-locking-cloud-controller-manager' -p '{"rules": [ {"apiGroups":[ "coordination.k8s.io"], "resources": ["leases"], "resourceNames": ["cloud-provider-extraction-migration"], "verbs": ["create", "list", "get", "update"] } ]}' --type=merge
|
||||
```
|
||||
|
||||
### Initial Leader Migration configuration
|
||||
|
||||
Leader Migration optionally takes a configuration file representing the state of controller-to-manager assignment. At this moment, with in-tree cloud provider, `kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The following example configuration shows the assignment.
|
||||
Leader Migration optionally takes a configuration file representing the state of
|
||||
controller-to-manager assignment. At this moment, with an in-tree cloud provider,
|
||||
`kube-controller-manager` runs `route`, `service`, and `cloud-node-lifecycle`. The
|
||||
following example configuration shows the assignment.
|
||||
|
||||
Leader Migration can be enabled without a configuration. Please see [Default Configuration](#default-configuration) for details.
|
||||
Leader Migration can be enabled without a configuration. Please see
|
||||
[Default Configuration](#default-configuration) for details.
|
||||
|
||||
```yaml
|
||||
kind: LeaderMigrationConfiguration
|
||||
|
@ -67,8 +114,9 @@ controllerLeaders:
|
|||
component: kube-controller-manager
|
||||
```
|
||||
|
||||
Alternatively, because the controllers can run under either controller managers, setting `component` to `*`
|
||||
for both sides makes the configuration file consistent between both parties of the migration.
|
||||
Alternatively, because the controllers can run under either controller manager,
|
||||
setting `component` to `*` for both sides makes the configuration file consistent
|
||||
between both parties of the migration.
|
||||
|
||||
```yaml
|
||||
# wildcard version
|
||||
|
@ -84,16 +132,25 @@ controllerLeaders:
|
|||
component: *
|
||||
```
|
||||
|
||||
On each control plane node, save the content to `/etc/leadermigration.conf`, and update the manifest of `kube-controller-manager` so that the file is mounted inside the container at the same location. Also, update the same manifest to add the following arguments:
|
||||
On each control plane node, save the content to `/etc/leadermigration.conf`, and
|
||||
update the manifest of `kube-controller-manager` so that the file is mounted inside
|
||||
the container at the same location. Also, update the same manifest to add the
|
||||
following arguments:
|
||||
|
||||
- `--enable-leader-migration` to enable Leader Migration on the controller manager
|
||||
- `--leader-migration-config=/etc/leadermigration.conf` to set the configuration file (see the sketch below)
|
||||
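On a kubeadm-style control plane where `kube-controller-manager` runs as a static Pod, the additions might look like the following sketch. The manifest path and the surrounding structure are assumptions here; keep your existing manifest and only add the two flags and the mount:

```yaml
# Excerpt of a kube-controller-manager static Pod manifest, for example
# /etc/kubernetes/manifests/kube-controller-manager.yaml (illustrative path).
# Only the Leader Migration additions are shown; leave existing fields unchanged.
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # ... existing flags ...
    - --enable-leader-migration
    - --leader-migration-config=/etc/leadermigration.conf
    volumeMounts:
    - name: leadermigration-conf
      mountPath: /etc/leadermigration.conf
      readOnly: true
  volumes:
  - name: leadermigration-conf
    hostPath:
      path: /etc/leadermigration.conf
      type: File
```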
|
||||
Restart `kube-controller-manager` on each node. At this moment, `kube-controller-manager` has leader migration enabled and is ready for the migration.
|
||||
Restart `kube-controller-manager` on each node. At this moment,
|
||||
`kube-controller-manager` has leader migration enabled and is ready for the
|
||||
migration.
|
||||
|
||||
### Deploy Cloud Controller Manager
|
||||
|
||||
In version N + 1, the desired state of controller-to-manager assignment can be represented by a new configuration file, shown as follows. Please note `component` field of each `controllerLeaders` changing from `kube-controller-manager` to `cloud-controller-manager`. Alternatively, use the wildcard version mentioned above, which has the same effect.
|
||||
In version N + 1, the desired state of controller-to-manager assignment can be
|
||||
represented by a new configuration file, shown as follows. Please note the `component`
|
||||
field of each entry in `controllerLeaders` changing from `kube-controller-manager` to
|
||||
`cloud-controller-manager`. Alternatively, use the wildcard version mentioned above,
|
||||
which has the same effect.
|
||||
|
||||
```yaml
|
||||
kind: LeaderMigrationConfiguration
|
||||
|
@ -108,35 +165,70 @@ controllerLeaders:
|
|||
component: cloud-controller-manager
|
||||
```
|
||||
|
||||
When creating control plane nodes of version N + 1, the content should be deployed to `/etc/leadermigration.conf`. The manifest of `cloud-controller-manager` should be updated to mount the configuration file in the same manner as `kube-controller-manager` of version N. Similarly, add `--enable-leader-migration` and `--leader-migration-config=/etc/leadermigration.conf` to the arguments of `cloud-controller-manager`.
|
||||
When creating control plane nodes of version N + 1, the content should be deployed to
|
||||
`/etc/leadermigration.conf`. The manifest of `cloud-controller-manager` should be
|
||||
updated to mount the configuration file in the same manner as
|
||||
`kube-controller-manager` of version N. Similarly, add `--enable-leader-migration`
|
||||
and `--leader-migration-config=/etc/leadermigration.conf` to the arguments of
|
||||
`cloud-controller-manager`.
|
||||
|
||||
Create a new control plane node of version N + 1 with the updated `cloud-controller-manager` manifest, and with the `--cloud-provider` flag set to `external` for `kube-controller-manager`. `kube-controller-manager` of version N + 1 MUST NOT have Leader Migration enabled because, with an external cloud provider, it does not run the migrated controllers anymore, and thus it is not involved in the migration.
|
||||
Create a new control plane node of version N + 1 with the updated
|
||||
`cloud-controller-manager` manifest, and with the `--cloud-provider` flag set to
|
||||
`external` for `kube-controller-manager`. `kube-controller-manager` of version N + 1
|
||||
MUST NOT have Leader Migration enabled because, with an external cloud provider, it
|
||||
does not run the migrated controllers anymore, and thus it is not involved in the
|
||||
migration.
|
||||
|
||||
Please refer to [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/) for more detail on how to deploy `cloud-controller-manager`.
|
||||
Please refer to [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/)
|
||||
for more detail on how to deploy `cloud-controller-manager`.
|
||||
|
||||
### Upgrade Control Plane
|
||||
|
||||
The control plane now contains nodes of both version N and N + 1. The nodes of version N run `kube-controller-manager` only, and these of version N + 1 run both `kube-controller-manager` and `cloud-controller-manager`. The migrated controllers, as specified in the configuration, are running under either `kube-controller-manager` of version N or `cloud-controller-manager` of version N + 1 depending on which controller manager holds the migration lease. No controller will ever be running under both controller managers at any time.
|
||||
The control plane now contains nodes of both version N and N + 1. The nodes of
|
||||
version N run `kube-controller-manager` only, and those of version N + 1 run both
|
||||
`kube-controller-manager` and `cloud-controller-manager`. The migrated controllers,
|
||||
as specified in the configuration, are running under either `kube-controller-manager`
|
||||
of version N or `cloud-controller-manager` of version N + 1 depending on which
|
||||
controller manager holds the migration lease. No controller will ever be running
|
||||
under both controller managers at any time.
|
||||
|
||||
In a rolling manner, create a new control plane node of version N + 1 and bring down one of version N + 1 until the control plane contains only nodes of version N + 1.
|
||||
If a rollback from version N + 1 to N is required, add nodes of version N with Leader Migration enabled for `kube-controller-manager` back to the control plane, replacing one of version N + 1 each time until there are only nodes of version N.
|
||||
In a rolling manner, create a new control plane node of version N + 1 and bring down
|
||||
one of version N until the control plane contains only nodes of version N + 1.
|
||||
If a rollback from version N + 1 to N is required, add nodes of version N with Leader
|
||||
Migration enabled for `kube-controller-manager` back to the control plane, replacing
|
||||
one of version N + 1 each time until there are only nodes of version N.
|
||||
|
||||
### (Optional) Disable Leader Migration {#disable-leader-migration}
|
||||
|
||||
Now that the control plane has been upgraded to run both `kube-controller-manager` and `cloud-controller-manager` of version N + 1, Leader Migration has finished its job and can be safely disabled to save one Lease resource. It is safe to re-enable Leader Migration for the rollback in the future.
|
||||
Now that the control plane has been upgraded to run both `kube-controller-manager`
|
||||
and `cloud-controller-manager` of version N + 1, Leader Migration has finished its
|
||||
job and can be safely disabled to save one Lease resource. It is safe to re-enable
|
||||
Leader Migration for the rollback in the future.
|
||||
|
||||
In a rolling manager, update manifest of `cloud-controller-manager` to unset both `--enable-leader-migration` and `--leader-migration-config=` flag, also remove the mount of `/etc/leadermigration.conf`, and finally remove `/etc/leadermigration.conf`. To re-enable Leader Migration, recreate the configuration file and add its mount and the flags that enable Leader Migration back to `cloud-controller-manager`.
|
||||
In a rolling manner, update the manifest of `cloud-controller-manager` to unset both
|
||||
`--enable-leader-migration` and `--leader-migration-config=` flags, and also remove the
|
||||
mount of `/etc/leadermigration.conf`, and finally remove `/etc/leadermigration.conf`.
|
||||
To re-enable Leader Migration, recreate the configuration file and add its mount and
|
||||
the flags that enable Leader Migration back to `cloud-controller-manager`.
|
||||
|
||||
### Default Configuration
|
||||
|
||||
Starting Kubernetes 1.22, Leader Migration provides a default configuration suitable for the default controller-to-manager assignment.
|
||||
The default configuration can be enabled by setting `--enable-leader-migration` but without `--leader-migration-config=`.
|
||||
Starting with Kubernetes 1.22, Leader Migration provides a default configuration suitable
|
||||
for the default controller-to-manager assignment.
|
||||
The default configuration can be enabled by setting `--enable-leader-migration` but
|
||||
without `--leader-migration-config=`.
|
||||
|
||||
For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags that enable any in-tree cloud provider or change ownership of controllers, the default configuration can be used to avoid manual creation of the configuration file.
|
||||
For `kube-controller-manager` and `cloud-controller-manager`, if there are no flags
|
||||
that enable any in-tree cloud provider or change ownership of controllers, the
|
||||
default configuration can be used to avoid manual creation of the configuration file.
|
||||
|
||||
### Special case: migrating the Node IPAM controller {#node-ipam-controller-migration}
|
||||
|
||||
If your cloud provider provides an implementation of Node IPAM controller, you should switch to the implementation in `cloud-controller-manager`. Disable Node IPAM controller in `kube-controller-manager` of version N + 1 by adding `--controllers=*,-nodeipam` to its flags. Then add `nodeipam` to the list of migrated controllers.
|
||||
If your cloud provider provides an implementation of Node IPAM controller, you should
|
||||
switch to the implementation in `cloud-controller-manager`. Disable Node IPAM
|
||||
controller in `kube-controller-manager` of version N + 1 by adding
|
||||
`--controllers=*,-nodeipam` to its flags. Then add `nodeipam` to the list of migrated
|
||||
controllers.
|
||||
|
||||
```yaml
|
||||
# wildcard version, with nodeipam
|
||||
|
@ -156,5 +248,6 @@ controllerLeaders:
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration) enhancement proposal.
|
||||
- Read the [Controller Manager Leader Migration](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2436-controller-manager-leader-migration)
|
||||
enhancement proposal.
|
||||
|
||||
|
|
|
@ -14,39 +14,69 @@ This page shows how to configure and enable the `ip-masq-agent`.
|
|||
<!-- discussion -->
|
||||
## IP Masquerade Agent User Guide
|
||||
|
||||
The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
|
||||
The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster
|
||||
node's IP address. This is typically done when sending traffic to destinations outside the
|
||||
cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
|
||||
|
||||
### **Key Terms**
|
||||
### Key Terms
|
||||
|
||||
* **NAT (Network Address Translation)**
|
||||
Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
|
||||
* **Masquerading**
|
||||
A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
|
||||
* **CIDR (Classless Inter-Domain Routing)**
|
||||
Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
|
||||
* **Link Local**
|
||||
A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
|
||||
* **NAT (Network Address Translation)**:
|
||||
A method of remapping one IP address to another by modifying either the source and/or
|
||||
destination address information in the IP header. Typically performed by a device doing IP routing.
|
||||
* **Masquerading**:
|
||||
A form of NAT that is typically used to perform a many to one address translation, where
|
||||
multiple source IP addresses are masked behind a single address, which is typically the
|
||||
device doing the IP routing. In Kubernetes this is the Node's IP address.
|
||||
* **CIDR (Classless Inter-Domain Routing)**:
|
||||
Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes.
|
||||
CIDR introduced a new method of representation for IP addresses, now commonly known as
|
||||
**CIDR notation**, in which an address or routing prefix is written with a suffix indicating
|
||||
the number of bits of the prefix, such as 192.168.2.0/24.
|
||||
* **Link Local**:
|
||||
A link-local address is a network address that is valid only for communications within the
|
||||
network segment or the broadcast domain that the host is connected to. Link-local addresses
|
||||
for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
|
||||
|
||||
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
|
||||
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when
|
||||
sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This
|
||||
essentially hides pod IP addresses behind the cluster node's IP address. In some environments,
|
||||
traffic to "external" addresses must come from a known machine address. For example, in Google
|
||||
Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in
|
||||
Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide
|
||||
the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the
|
||||
agent is configured to treat the three private IP ranges specified by
|
||||
[RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade
|
||||
[CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
|
||||
These ranges are `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16`.
|
||||
The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default.
|
||||
The agent is configured to reload its configuration from the location
|
||||
*/etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
|
||||
|
||||

|
||||
|
||||
The agent configuration file must be written in YAML or JSON syntax, and may contain three optional keys:
|
||||
The agent configuration file must be written in YAML or JSON syntax, and may contain three
|
||||
optional keys:
|
||||
|
||||
* `nonMasqueradeCIDRs`: A list of strings in
|
||||
[CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
|
||||
[CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify
|
||||
the non-masquerade ranges.
|
||||
* `masqLinkLocal`: A Boolean (true/false) which indicates whether to masquerade traffic to the
|
||||
link local prefix `169.254.0.0/16`. False by default.
|
||||
* `resyncInterval`: A time interval at which the agent attempts to reload config from disk.
|
||||
For example: '30s', where 's' means seconds, 'ms' means milliseconds.
|
||||
|
||||
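Putting the three keys together, a complete agent configuration might look like the following; the CIDRs and interval shown are only examples:

```yaml
# Example contents of /etc/config/ip-masq-agent; the values are illustrative.
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 192.168.0.0/16
masqLinkLocal: false
resyncInterval: 60s
```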
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent:
|
||||
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 ranges will NOT be masqueraded. Any
|
||||
other traffic (assumed to be internet) will be masqueraded. An example of a local destination
|
||||
from a pod could be its Node's IP address as well as another node's address or one of the IP
|
||||
addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The
|
||||
below entries show the default set of rules that are applied by the ip-masq-agent:
|
||||
|
||||
```shell
|
||||
iptables -t nat -L IP-MASQ-AGENT
|
||||
```
|
||||
|
||||
```none
|
||||
target prot opt source destination
|
||||
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
|
||||
|
@ -64,24 +94,33 @@ to your cluster.
|
|||
<!-- steps -->
|
||||
|
||||
## Create an ip-masq-agent
|
||||
|
||||
To create an ip-masq-agent, run the following kubectl command:
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml
|
||||
```
|
||||
|
||||
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
|
||||
You must also apply the appropriate node label to any nodes in your cluster that you want the
|
||||
agent to run on.
|
||||
|
||||
```shell
|
||||
kubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true
|
||||
```
|
||||
|
||||
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-sigs/ip-masq-agent)
|
||||
More information can be found in the [ip-masq-agent documentation](https://github.com/kubernetes-sigs/ip-masq-agent).
|
||||
|
||||
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config".
|
||||
In most cases, the default set of rules should be sufficient; however, if this is not the case
|
||||
for your cluster, you can create and apply a
|
||||
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP
|
||||
ranges that are affected. For example, to allow
|
||||
only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following
|
||||
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called
|
||||
"config".
|
||||
|
||||
{{< note >}}
|
||||
It is important that the file is called config since, by default, that will be used as the key for lookup by the `ip-masq-agent`:
|
||||
It is important that the file is called `config` since, by default, that will be used as the key
|
||||
for lookup by the `ip-masq-agent`:
|
||||
|
||||
```yaml
|
||||
nonMasqueradeCIDRs:
|
||||
|
@ -90,13 +129,14 @@ resyncInterval: 60s
|
|||
```
|
||||
{{< /note >}}
|
||||
|
||||
Run the following command to add the config map to your cluster:
|
||||
Run the following command to add the ConfigMap to your cluster:
|
||||
|
||||
```shell
|
||||
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
|
||||
```
|
||||
|
||||
This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked every `resyncInterval` and applied to the cluster node.
|
||||
This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked
|
||||
every `resyncInterval` and applied to the cluster node.
|
||||
After the resync interval has expired, you should see the iptables rules reflect your changes:
|
||||
|
||||
```shell
|
||||
|
@ -111,7 +151,9 @@ RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent:
|
|||
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
|
||||
```
|
||||
|
||||
By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set `masqLinkLocal` to true in the ConfigMap.
|
||||
By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which
|
||||
sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can
|
||||
set `masqLinkLocal` to true in the ConfigMap.
|
||||
|
||||
```yaml
|
||||
nonMasqueradeCIDRs:
|
||||
|
@ -119,4 +161,3 @@ nonMasqueradeCIDRs:
|
|||
resyncInterval: 60s
|
||||
masqLinkLocal: true
|
||||
```
|
||||
|
||||
|
|
|
@ -53,13 +53,13 @@ setting up a cluster to use an external CA.
|
|||
|
||||
You can use the `check-expiration` subcommand to check when certificates expire:
|
||||
|
||||
```
|
||||
```shell
|
||||
kubeadm certs check-expiration
|
||||
```
|
||||
|
||||
The output is similar to this:
|
||||
|
||||
```
|
||||
```console
|
||||
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
|
||||
admin.conf Dec 30, 2020 23:36 UTC 364d no
|
||||
apiserver Dec 30, 2020 23:36 UTC 364d ca no
|
||||
|
@ -268,7 +268,7 @@ serverTLSBootstrap: true
|
|||
If you have already created the cluster you must adapt it by doing the following:
|
||||
- Find and edit the `kubelet-config-{{< skew currentVersion >}}` ConfigMap in the `kube-system` namespace.
|
||||
In that ConfigMap, the `kubelet` key has a
|
||||
[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
|
||||
[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||
document as its value. Edit the KubeletConfiguration document to set `serverTLSBootstrap: true`.
|
||||
- On each node, add the `serverTLSBootstrap: true` field in `/var/lib/kubelet/config.yaml`
|
||||
and restart the kubelet with `systemctl restart kubelet` (see the example below).
|
||||
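As a sketch of what the second bullet looks like in practice, the per-node kubelet configuration gains one field; keep all of your existing settings and only add the new line:

```yaml
# /var/lib/kubelet/config.yaml (excerpt); other existing fields stay unchanged.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
```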
|
@ -284,6 +284,8 @@ These CSRs can be viewed using:
|
|||
|
||||
```shell
|
||||
kubectl get csr
|
||||
```
|
||||
```console
|
||||
NAME AGE SIGNERNAME REQUESTOR CONDITION
|
||||
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
|
||||
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending
|
||||
|
|
|
@ -98,8 +98,7 @@ then run the following commands:
|
|||
|
||||
## Configure the kubelet to use containerd as its container runtime
|
||||
|
||||
Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime to the flags.
|
||||
`--container-runtime=remote` and
|
||||
Edit the file `/var/lib/kubelet/kubeadm-flags.env` and add the containerd runtime endpoint to the flags:
|
||||
`--container-runtime-endpoint=unix:///run/containerd/containerd.sock`.
|
||||
|
||||
Users using kubeadm should be aware that the `kubeadm` tool stores the CRI socket for each host as
|
||||
|
|
|
@ -41,9 +41,9 @@ node-2 Ready v1.16.15 docker://19.3.1
|
|||
node-3 Ready v1.16.15 docker://19.3.1
|
||||
```
|
||||
If your runtime shows as Docker Engine, you still might not be affected by the
|
||||
removal of dockershim in Kubernetes v1.24. [Check the runtime
|
||||
endpoint](#which-endpoint) to see if you use dockershim. If you don't use
|
||||
dockershim, you aren't affected.
|
||||
removal of dockershim in Kubernetes v1.24.
|
||||
[Check the runtime endpoint](#which-endpoint) to see if you use dockershim.
|
||||
If you don't use dockershim, you aren't affected.
|
||||
|
||||
For containerd, the output is similar to this:
|
||||
|
||||
|
@ -88,7 +88,8 @@ nodes.
|
|||
|
||||
* If your nodes use Kubernetes v1.23 and earlier and these flags aren't
|
||||
present or if the `--container-runtime` flag is not `remote`,
|
||||
you use the dockershim socket with Docker Engine.
|
||||
you use the dockershim socket with Docker Engine. The `--container-runtime` command line
|
||||
argument is not available in Kubernetes v1.27 and later.
|
||||
* If the `--container-runtime-endpoint` flag is present, check the socket
|
||||
name to find out which runtime you use. For example,
|
||||
`unix:///run/containerd/containerd.sock` is the containerd endpoint.
|
||||
|
@ -96,4 +97,4 @@ nodes.
|
|||
If you want to change the Container Runtime on a Node from Docker Engine to containerd,
|
||||
you can find out more information on [migrating from Docker Engine to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/),
|
||||
or, if you want to continue using Docker Engine in Kubernetes v1.24 and later, migrate to a
|
||||
CRI-compatible adapter like [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
|
||||
CRI-compatible adapter like [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd).
|
||||
|
|
|
@ -8,24 +8,39 @@ weight: 30
|
|||
|
||||
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
|
||||
|
||||
This page shows how to configure [Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA) for Pods and containers that will run on Windows nodes. Group Managed Service Accounts are a specific type of Active Directory account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers.
|
||||
This page shows how to configure
|
||||
[Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA)
|
||||
for Pods and containers that will run on Windows nodes. Group Managed Service Accounts
|
||||
are a specific type of Active Directory account that provides automatic password management,
|
||||
simplified service principal name (SPN) management, and the ability to delegate the management
|
||||
to other administrators across multiple servers.
|
||||
|
||||
In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope as Custom Resources. Windows Pods, as well as individual containers within a Pod, can be configured to use a GMSA for domain based functions (e.g. Kerberos authentication) when interacting with other Windows services.
|
||||
In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope
|
||||
as Custom Resources. Windows Pods, as well as individual containers within a Pod,
|
||||
can be configured to use a GMSA for domain based functions (e.g. Kerberos authentication)
|
||||
when interacting with other Windows services.
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
You need to have a Kubernetes cluster and the `kubectl` command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes. This section covers a set of initial steps required once for each cluster:
|
||||
You need to have a Kubernetes cluster and the `kubectl` command-line tool must be
|
||||
configured to communicate with your cluster. The cluster is expected to have Windows worker nodes.
|
||||
This section covers a set of initial steps required once for each cluster:
|
||||
|
||||
### Install the GMSACredentialSpec CRD
|
||||
|
||||
A [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)(CRD) for GMSA credential spec resources needs to be configured on the cluster to define the custom resource type `GMSACredentialSpec`. Download the GMSA CRD [YAML](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-crd.yml) and save it as gmsa-crd.yaml.
|
||||
Next, install the CRD with `kubectl apply -f gmsa-crd.yaml`
|
||||
A [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD)
|
||||
for GMSA credential spec resources needs to be configured on the cluster to define
|
||||
the custom resource type `GMSACredentialSpec`. Download the GMSA CRD
|
||||
[YAML](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-crd.yml)
|
||||
and save it as `gmsa-crd.yaml`. Next, install the CRD with `kubectl apply -f gmsa-crd.yaml`.
|
||||
|
||||
### Install webhooks to validate GMSA users
|
||||
|
||||
Two webhooks need to be configured on the Kubernetes cluster to populate and validate GMSA credential spec references at the Pod or container level:
|
||||
Two webhooks need to be configured on the Kubernetes cluster to populate and
|
||||
validate GMSA credential spec references at the Pod or container level:
|
||||
|
||||
1. A mutating webhook that expands references to GMSAs (by name from a Pod specification) into the full credential spec in JSON form within the Pod spec.
|
||||
1. A mutating webhook that expands references to GMSAs (by name from a Pod specification)
|
||||
into the full credential spec in JSON form within the Pod spec.
|
||||
|
||||
1. A validating webhook that ensures all references to GMSAs are authorized to be used by the Pod service account.
|
||||
|
||||
|
@ -39,29 +54,49 @@ Installing the above webhooks and associated objects require the steps below:
|
|||
|
||||
1. Create the validating and mutating webhook configurations referring to the deployment.
|
||||
|
||||
A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) can be used to deploy and configure the GMSA webhooks and associated objects mentioned above. The script can be run with a ```--dry-run=server``` option to allow you to review the changes that would be made to your cluster.
|
||||
A [script](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh)
|
||||
can be used to deploy and configure the GMSA webhooks and associated objects
|
||||
mentioned above. The script can be run with a `--dry-run=server` option to
|
||||
allow you to review the changes that would be made to your cluster.
|
||||
|
||||
The [YAML template](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl) used by the script may also be used to deploy the webhooks and associated objects manually (with appropriate substitutions for the parameters)
|
||||
The [YAML template](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl)
|
||||
used by the script may also be used to deploy the webhooks and associated objects
|
||||
manually (with appropriate substitutions for the parameters)
|
||||
|
||||
<!-- steps -->

## Configure GMSAs and Windows nodes in Active Directory

Before Pods in Kubernetes can be configured to use GMSAs, the desired GMSAs need
to be provisioned in Active Directory as described in the
[Windows GMSA documentation](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#BKMK_Step1).
Windows worker nodes (that are part of the Kubernetes cluster) need to be configured
in Active Directory to access the secret credentials associated with the desired GMSA as described in the
[Windows GMSA documentation](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#to-add-member-hosts-using-the-set-adserviceaccount-cmdlet).

## Create GMSA credential spec resources

With the GMSACredentialSpec CRD installed (as described earlier), custom resources
containing GMSA credential specs can be configured. The GMSA credential spec does
not contain secret or sensitive data. It is information that a container runtime
can use to describe the desired GMSA of a container to Windows. GMSA credential
specs can be generated in YAML format with a utility
[PowerShell script](https://github.com/kubernetes-sigs/windows-gmsa/tree/master/scripts/GenerateCredentialSpecResource.ps1).

Following are the steps for generating a GMSA credential spec YAML manually in JSON format and then converting it:

1. Import the CredentialSpec
   [module](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1): `ipmo CredentialSpec.psm1`

1. Create a credential spec in JSON format using `New-CredentialSpec`.
   To create a GMSA credential spec named WebApp1, invoke
   `New-CredentialSpec -Name WebApp1 -AccountName WebApp1 -Domain $(Get-ADDomain -Current LocalComputer)`

1. Use `Get-CredentialSpec` to show the path of the JSON file.

1. Convert the credspec file from JSON to YAML format and apply the necessary
   header fields `apiVersion`, `kind`, `metadata` and `credspec` to make it a
   GMSACredentialSpec custom resource that can be configured in Kubernetes.

The following YAML configuration describes a GMSA credential spec named `gmsa-WebApp1`:

@ -69,33 +104,38 @@ The following YAML configuration describes a GMSA credential spec named `gmsa-We
```yaml
apiVersion: windows.k8s.io/v1
kind: GMSACredentialSpec
metadata:
  name: gmsa-WebApp1 # This is an arbitrary name but it will be used as a reference
credspec:
  ActiveDirectoryConfig:
    GroupManagedServiceAccounts:
    - Name: WebApp1 # Username of the GMSA account
      Scope: CONTOSO # NETBIOS Domain Name
    - Name: WebApp1 # Username of the GMSA account
      Scope: contoso.com # DNS Domain Name
  CmsPlugins:
  - ActiveDirectory
  DomainJoinConfig:
    DnsName: contoso.com # DNS Domain Name
    DnsTreeName: contoso.com # DNS Domain Name Root
    Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID
    MachineAccountName: WebApp1 # Username of the GMSA account
    NetBiosName: CONTOSO # NETBIOS Domain Name
    Sid: S-1-5-21-2126449477-2524075714-3094792973 # SID of GMSA
```
The above credential spec resource may be saved as `gmsa-Webapp1-credspec.yaml`
and applied to the cluster using: `kubectl apply -f gmsa-Webapp1-credspec.yaml`

## Configure cluster role to enable RBAC on specific GMSA credential specs

A cluster role needs to be defined for each GMSA credential spec resource. This
authorizes the `use` verb on a specific GMSA resource by a subject which is typically
a service account. The following example shows a cluster role that authorizes usage
of the `gmsa-WebApp1` credential spec from above. Save the file as `gmsa-webapp1-role.yaml`
and apply it using `kubectl apply -f gmsa-webapp1-role.yaml`.

```yaml
# Create the Role to read the credspec
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
```

@ -109,7 +149,10 @@ rules:

## Assign role to service accounts to use specific GMSA credspecs

A service account (that Pods will be configured with) needs to be bound to the
cluster role created above. This authorizes the service account to use the desired
GMSA credential spec resource. The following shows the default service account
being bound to a cluster role `webapp1-role` to use the `gmsa-WebApp1` credential spec resource created above.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
```

@ -129,7 +172,10 @@ roleRef:
## Configure GMSA credential spec reference in Pod spec

The Pod spec field `securityContext.windowsOptions.gmsaCredentialSpecName` is used to
specify references to desired GMSA credential spec custom resources in Pod specs.
This configures all containers in the Pod spec to use the specified GMSA. A sample
Pod spec with the field populated to refer to `gmsa-WebApp1`:

```yaml
apiVersion: apps/v1
```

@ -160,7 +206,8 @@ spec:

```yaml
kubernetes.io/os: windows
```

Individual containers in a Pod spec can also specify the desired GMSA credspec
using a per-container `securityContext.windowsOptions.gmsaCredentialSpecName` field. For example:

```yaml
apiVersion: apps/v1
```

@ -191,31 +238,39 @@ spec:

```yaml
kubernetes.io/os: windows
```
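The manifests above are shown only in fragments. For orientation, here is a minimal, self-contained sketch of a Pod that sets the Pod-level field; the Pod name, image and namespace are illustrative assumptions, not values taken from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gmsa-demo                # illustrative name
spec:
  serviceAccountName: default    # must be bound to a role allowed to `use` the credspec
  securityContext:
    windowsOptions:
      gmsaCredentialSpecName: gmsa-WebApp1   # references the GMSACredentialSpec by name
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019  # illustrative image
  nodeSelector:
    kubernetes.io/os: windows
```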
As Pod specs with GMSA fields populated (as described above) are applied in a cluster,
the following sequence of events takes place:

1. The mutating webhook resolves and expands all references to GMSA credential spec
   resources to the contents of the GMSA credential spec.

1. The validating webhook ensures the service account associated with the Pod is
   authorized for the `use` verb on the specified GMSA credential spec.

1. The container runtime configures each Windows container with the specified GMSA
   credential spec so that the container can assume the identity of the GMSA in
   Active Directory and access services in the domain using that identity.

## Authenticating to network shares using hostname or FQDN

If you are experiencing issues connecting to SMB shares from Pods using hostname or FQDN,
but are able to access the shares via their IPv4 address, then make sure the following
registry key is set on the Windows nodes.

```cmd
reg add "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v EnableCompartmentNamespace /t REG_DWORD /d 1
```

Running Pods will then need to be recreated to pick up the behavior changes.
More information on how this registry key is used can be found
[here](https://github.com/microsoft/hcsshim/blob/885f896c5a8548ca36c88c4b87fd2208c8d16543/internal/uvm/create.go#L74-L83).

## Troubleshooting

If you are having difficulties getting GMSA to work in your environment,
there are a few troubleshooting steps you can take.

First, make sure the credspec has been passed to the Pod. To do this you will need
to `exec` into one of your Pods and check the output of the `nltest.exe /parentdomain` command.

In the example below the Pod did not get the credspec correctly:

@ -229,7 +284,8 @@ kubectl exec -it iis-auth-7776966999-n5nzr powershell.exe

```
Getting parent domain failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE
```

If your Pod did get the credspec correctly, then next check communication with the domain.
First, from inside of your Pod, quickly do an nslookup to find the root of your domain.

This will tell us 3 things:

@ -237,7 +293,9 @@ This will tell us 3 things:

1. The DC can reach the Pod
1. DNS is working correctly.

If the DNS and communication test passes, next you will need to check if the Pod has
established secure channel communication with the domain. To do this, again,
`exec` into your Pod and run the `nltest.exe /query` command.

```PowerShell
nltest.exe /query
```

@ -249,7 +307,8 @@ Results in the following output:

```
I_NetLogonControl failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE
```

This tells us that for some reason, the Pod was unable to logon to the domain using
the account specified in the credspec. You can try to repair the secure channel by running the following:

```PowerShell
nltest /sc_reset:domain.example
```

@ -264,7 +323,9 @@ Trusted DC Connection Status Status = 0 0x0 NERR_Success

```
The command completed successfully
```

If the above corrects the error, you can automate the step by adding the following
lifecycle hook to your Pod spec. If it did not correct the error, you will need to
examine your credspec again and confirm that it is correct and complete.

```yaml
image: registry.domain.example/iis-auth:1809v1
```

@ -275,4 +336,5 @@ If the above corrects the error, you can automate the step by adding the followi

```yaml
imagePullPolicy: IfNotPresent
```
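The lifecycle hook itself is only partially visible in the excerpt above. As a rough sketch (the exact PowerShell loop is an assumption for illustration, not taken from this page), such a hook could look like this:

```yaml
  lifecycle:
    postStart:
      exec:
        command:
        - powershell.exe
        - -Command
        # Illustrative only: keep restarting netlogon until `nltest.exe /query` succeeds
        - "while (-not ((nltest.exe /query) -like '*NERR_Success*')) { Restart-Service -Name netlogon; Start-Sleep -Seconds 5 }"
```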
If you add the `lifecycle` section shown above to your Pod spec, the Pod will execute
the commands listed to restart the `netlogon` service until the `nltest.exe /query` command exits without error.
@ -44,11 +44,8 @@ Understand the difference between readiness and liveness probes and when to appl

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Define a liveness command

@ -95,14 +92,14 @@ kubectl describe pod liveness-exec

The output indicates that no liveness probes have failed yet:

```none
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  11s   default-scheduler  Successfully assigned default/liveness-exec to node01
Normal  Pulling    9s    kubelet, node01    Pulling image "registry.k8s.io/busybox"
Normal  Pulled     7s    kubelet, node01    Successfully pulled image "registry.k8s.io/busybox"
Normal  Created    7s    kubelet, node01    Created container liveness
Normal  Started    7s    kubelet, node01    Started container liveness
```

After 35 seconds, view the Pod events again:

@ -114,16 +111,16 @@ kubectl describe pod liveness-exec

At the bottom of the output, there are messages indicating that the liveness
probes have failed, and the failed containers have been killed and recreated.

```none
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  57s                default-scheduler  Successfully assigned default/liveness-exec to node01
Normal   Pulling    55s                kubelet, node01    Pulling image "registry.k8s.io/busybox"
Normal   Pulled     53s                kubelet, node01    Successfully pulled image "registry.k8s.io/busybox"
Normal   Created    53s                kubelet, node01    Created container liveness
Normal   Started    53s                kubelet, node01    Started container liveness
Warning  Unhealthy  10s (x3 over 20s)  kubelet, node01    Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal   Killing    10s                kubelet, node01    Container liveness failed liveness probe, will be restarted
```

Wait another 30 seconds, and verify that the container has been restarted:

@ -132,9 +129,10 @@ Wait another 30 seconds, and verify that the container has been restarted:

```shell
kubectl get pod liveness-exec
```

The output shows that `RESTARTS` has been incremented. Note that the `RESTARTS` counter
increments as soon as a failed container comes back to the running state:

```none
NAME            READY  STATUS   RESTARTS  AGE
liveness-exec   1/1    Running  1         1m
```

@ -142,8 +140,7 @@ liveness-exec 1/1 Running 1 1m

## Define a liveness HTTP request

Another kind of liveness probe uses an HTTP GET request. Here is the configuration
file for a Pod that runs a container based on the `registry.k8s.io/liveness` image.

{{< codenew file="pods/probe/http-liveness.yaml" >}}

@ -196,9 +193,6 @@ the container has been restarted:

```shell
kubectl describe pod liveness-http
```

In releases prior to v1.13 (including v1.13), if the environment variable
`http_proxy` (or `HTTP_PROXY`) is set on the node where a Pod is running,
the HTTP liveness probe uses that proxy.
In releases after v1.13, local HTTP proxy environment variable settings do not
affect the HTTP liveness probe.

@ -240,7 +234,8 @@ kubectl describe pod goproxy

{{< feature-state for_k8s_version="v1.24" state="beta" >}}

If your application implements the
[gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
this example shows how to configure Kubernetes to use it for application liveness checks.
Similarly you can configure readiness and startup probes.

@ -251,19 +246,19 @@ Here is an example manifest:

To use a gRPC probe, `port` must be configured. If you want to distinguish probes of different types
and probes for different features you can use the `service` field.
You can set `service` to the value `liveness` and make your gRPC Health Checking endpoint
respond to this request differently than when you set `service` to `readiness`.
This lets you use the same endpoint for different kinds of container health check
rather than listening on two different ports.
If you want to specify your own custom service name and also specify a probe type,
the Kubernetes project recommends that you use a name that concatenates
those. For example: `myservice-liveness` (using `-` as a separator).

{{< note >}}
Unlike HTTP or TCP probes, you cannot specify the health check port by name, and you
cannot configure a custom hostname.
{{< /note >}}

Configuration problems (for example: incorrect port or service, unimplemented health checking protocol)
are considered a probe failure, similar to HTTP and TCP probes.
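As a rough illustration of the `port` and `service` fields discussed above, a gRPC liveness probe might be declared as follows; the container name, image and port number are assumptions for the sketch, not values from this page:

```yaml
# Illustrative sketch of a container with a built-in gRPC liveness probe.
containers:
- name: my-grpc-app
  image: registry.example/my-grpc-app:1.0   # illustrative image
  ports:
  - containerPort: 50051
  livenessProbe:
    grpc:
      port: 50051
      service: liveness    # optional; lets one health endpoint distinguish probe types
    initialDelaySeconds: 10
    periodSeconds: 10
```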
To try the gRPC liveness check, create a Pod using the command below.

@ -279,23 +274,24 @@ After 15 seconds, view Pod events to verify that the liveness check has not fail

```shell
kubectl describe pod etcd-with-grpc
```

Before Kubernetes 1.23, gRPC health probes were often implemented using
[grpc-health-probe](https://github.com/grpc-ecosystem/grpc-health-probe/),
as described in the blog post
[Health checking gRPC servers on Kubernetes](/blog/2018/10/01/health-checking-grpc-servers-on-kubernetes/).
The built-in gRPC probe's behavior is similar to the one implemented by grpc-health-probe.
When migrating from grpc-health-probe to built-in probes, remember the following differences:

- Built-in probes run against the pod IP address, unlike grpc-health-probe that often runs against
  `127.0.0.1`. Be sure to configure your gRPC endpoint to listen on the Pod's IP address.
- Built-in probes do not support any authentication parameters (like `-tls`).
- There are no error codes for built-in probes. All errors are considered as probe failures.
- If `ExecProbeTimeout` feature gate is set to `false`, grpc-health-probe does **not**
  respect the `timeoutSeconds` setting (which defaults to 1s), while built-in probe would fail on timeout.

## Use a named port

You can use a named [`port`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#ports)
for HTTP and TCP probes. gRPC probes do not support named ports.

For example:
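The original example is not included in this excerpt; a minimal sketch, assuming a container port named `liveness-port`, could look like this:

```yaml
ports:
- name: liveness-port      # the named port
  containerPort: 8080
  protocol: TCP

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port    # refer to the port by its name
```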
@ -367,7 +363,9 @@ Readiness probes runs on the container during its whole lifecycle.

{{< /note >}}

{{< caution >}}
Liveness probes *do not* wait for readiness probes to succeed.
If you want to wait before executing a liveness probe you should use
`initialDelaySeconds` or a `startupProbe`.
{{< /caution >}}
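For instance, a startup probe can hold off the liveness probe until the application has started; the endpoint, port and thresholds below are illustrative assumptions, not values from this page:

```yaml
# Sketch: the liveness probe only starts once the startup probe has succeeded.
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # allow up to 30 * 10s = 300s for a slow start
  periodSeconds: 10

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```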
Readiness probes are configured similarly to liveness probes. The only difference

@ -392,37 +390,34 @@ for it, and that containers are restarted when they fail.

## Configure Probes

<!--Eventually, some of this section could be moved to a concept topic.-->

[Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)
have a number of fields that you can use to more precisely control the behavior of startup,
liveness and readiness checks:

* `initialDelaySeconds`: Number of seconds after the container has started before startup,
  liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
* `periodSeconds`: How often (in seconds) to perform the probe. Defaults to 10 seconds.
  The minimum value is 1.
* `timeoutSeconds`: Number of seconds after which the probe times out.
  Defaults to 1 second. Minimum value is 1.
* `successThreshold`: Minimum consecutive successes for the probe to be considered successful
  after having failed. Defaults to 1. Must be 1 for liveness and startup Probes.
  Minimum value is 1.
* `failureThreshold`: After a probe fails `failureThreshold` times in a row, Kubernetes
  considers that the overall check has failed: the container is _not_ ready/healthy/live.
  For the case of a startup or liveness probe, if at least `failureThreshold` probes have
  failed, Kubernetes treats the container as unhealthy and triggers a restart for that
  specific container. The kubelet honors the setting of `terminationGracePeriodSeconds`
  for that container.
  For a failed readiness probe, the kubelet continues running the container that failed
  checks, and also continues to run more probes; because the check failed, the kubelet
  sets the `Ready` [condition](/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)
  on the Pod to `false`.
* `terminationGracePeriodSeconds`: configure a grace period for the kubelet to wait between
  triggering a shut down of the failed container, and then forcing the container runtime to stop
  that container.
  The default is to inherit the Pod-level value for `terminationGracePeriodSeconds`
  (30 seconds if not specified), and the minimum value is 1.
  See [probe-level `terminationGracePeriodSeconds`](#probe-level-terminationgraceperiodseconds)

@ -435,16 +430,16 @@ until a result was returned.

This defect was corrected in Kubernetes v1.20. You may have been relying on the previous behavior,
even without realizing it, as the default timeout is 1 second.
As a cluster administrator, you can disable the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`ExecProbeTimeout` (set it to `false`) on each kubelet to restore the behavior from older versions,
then remove that override once all the exec probes in the cluster have a `timeoutSeconds` value set.
If you have pods that are impacted from the default 1 second timeout, you should update their
probe timeout so that you're ready for the eventual removal of that feature gate.

With the fix of the defect, for exec probes, on Kubernetes `1.20+` with the `dockershim` container runtime,
the process inside the container may keep running even after probe returned failure because of the timeout.
{{< /note >}}

{{< caution >}}
Incorrect implementation of readiness probes may result in an ever growing number
of processes in the container, and resource starvation if this is left unchecked.

@ -456,15 +451,15 @@ of processes in the container, and resource starvation if this is left unchecked

have additional fields that can be set on `httpGet`:

* `host`: Host name to connect to, defaults to the pod IP. You probably want to
  set "Host" in `httpHeaders` instead.
* `scheme`: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to "HTTP".
* `path`: Path to access on the HTTP server. Defaults to "/".
* `httpHeaders`: Custom headers to set in the request. HTTP allows repeated headers.
* `port`: Name or number of the port to access on the container. Number must be
  in the range 1 to 65535.

For an HTTP probe, the kubelet sends an HTTP request to the specified port and
path to perform the check. The kubelet sends the probe to the Pod's IP address,
unless the address is overridden by the optional `host` field in `httpGet`. If
`scheme` field is set to `HTTPS`, the kubelet sends an HTTPS request skipping the
certificate verification. In most scenarios, you do not want to set the `host` field.
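Putting the `httpGet` fields and the timing fields together, a liveness probe sketch could look like the following; the path, port and timing values are illustrative assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /healthz           # defaults to "/" if omitted
    port: 8080
    scheme: HTTP             # defaults to HTTP
  initialDelaySeconds: 5     # wait 5 seconds after the container starts
  periodSeconds: 10          # probe every 10 seconds
  timeoutSeconds: 1          # fail the attempt after 1 second
  failureThreshold: 3        # restart the container after 3 consecutive failures
```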
@ -474,10 +469,12 @@ to 127.0.0.1. If your pod relies on virtual hosts, which is probably the more co

case, you should not use `host`, but rather set the `Host` header in `httpHeaders`.

For an HTTP probe, the kubelet sends two request headers in addition to the mandatory `Host` header:

- `User-Agent`: The default value is `kube-probe/{{< skew currentVersion >}}`,
  where `{{< skew currentVersion >}}` is the version of the kubelet.
- `Accept`: The default value is `*/*`.

You can override the default headers by defining `httpHeaders` for the probe.
For example:

```yaml
livenessProbe:
```

@ -511,7 +508,7 @@ startupProbe:

### TCP probes

For a TCP probe, the kubelet makes the probe connection at the node, not in the Pod, which
means that you can not use a service name in the `host` parameter since the kubelet is unable
to resolve it.

@ -519,13 +516,13 @@ to resolve it.

{{< feature-state for_k8s_version="v1.27" state="stable" >}}

Prior to release 1.21, the Pod-level `terminationGracePeriodSeconds` was used
for terminating a container that failed its liveness or startup probe. This
coupling was unintended and may have resulted in failed containers taking an
unusually long time to restart when a Pod-level `terminationGracePeriodSeconds`
was set.

In 1.25 and above, users can specify a probe-level `terminationGracePeriodSeconds`
as part of the probe specification. When both a pod- and probe-level
`terminationGracePeriodSeconds` are set, the kubelet will use the probe-level value.

@ -534,20 +531,20 @@ Beginning in Kubernetes 1.25, the `ProbeTerminationGracePeriod` feature is enabl

by default. For users choosing to disable this feature, please note the following:

* The `ProbeTerminationGracePeriod` feature gate is only available on the API Server.
  The kubelet always honors the probe-level `terminationGracePeriodSeconds` field if
  it is present on a Pod.

* If you have existing Pods where the `terminationGracePeriodSeconds` field is set and
  you no longer wish to use per-probe termination grace periods, you must delete
  those existing Pods.

* When you (or the control plane, or some other component) create replacement
  Pods, and the feature gate `ProbeTerminationGracePeriod` is disabled, then the
  API server ignores the Probe-level `terminationGracePeriodSeconds` field, even if
  a Pod or pod template specifies it.
{{< /note >}}

For example:

```yaml
spec:
```

@ -577,10 +574,11 @@ It will be rejected by the API server.

## {{% heading "whatsnext" %}}

* Learn more about
  [Container Probes](/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).

You can also read the API references for:

* [Pod](/docs/reference/kubernetes-api/workload-resources/pod-v1/), and specifically:
  * [container(s)](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container)
  * [probe(s)](/docs/reference/kubernetes-api/workload-resources/pod-v1/#Probe)
@ -9,16 +9,16 @@ weight: 10

<!-- overview -->

This guide is to help users debug applications that are deployed into Kubernetes
and not behaving correctly. This is *not* a guide for people who want to debug their cluster.
For that you should check out [this guide](/docs/tasks/debug/debug-cluster).

<!-- body -->

## Diagnosing the problem

The first step in troubleshooting is triage. What is the problem?
Is it your Pods, your Replication Controller or your Service?

* [Debugging Pods](#debugging-pods)
* [Debugging Replication Controllers](#debugging-replication-controllers)

@ -26,36 +26,43 @@ your Service?

### Debugging Pods

The first step in debugging a Pod is taking a look at it. Check the current
state of the Pod and recent events with the following command:

```shell
kubectl describe pods ${POD_NAME}
```

Look at the state of the containers in the pod. Are they all `Running`?
Have there been recent restarts?

Continue debugging depending on the state of the pods.

#### My pod stays pending

If a Pod is stuck in `Pending` it means that it can not be scheduled onto a node.
Generally this is because there are insufficient resources of one type or another
that prevent scheduling. Look at the output of the `kubectl describe ...` command above.
There should be messages from the scheduler about why it can not schedule your pod.
Reasons include:

* **You don't have enough resources**: You may have exhausted the supply of CPU
  or Memory in your cluster, in this case you need to delete Pods, adjust resource
  requests, or add new nodes to your cluster. See [Compute Resources document](/docs/concepts/configuration/manage-resources-containers/)
  for more information.

* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a
  limited number of places that pod can be scheduled. In most cases, `hostPort`
  is unnecessary, try using a Service object to expose your Pod. If you do require
  `hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.

#### My pod stays waiting

If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node,
but it can't run on that machine. Again, the information from `kubectl describe ...`
should be informative. The most common cause of `Waiting` pods is a failure to pull the image.
There are three things to check:

* Make sure that you have the name of the image correct.
* Have you pushed the image to the registry?

@ -64,8 +71,9 @@ Again, the information from `kubectl describe ...` should be informative. The m

#### My pod is crashing or otherwise unhealthy

Once your pod has been scheduled, the methods described in
[Debug Running Pods](/docs/tasks/debug/debug-application/debug-running-pod/)
are available for debugging.

#### My pod is running but not doing what I told it to do

@ -92,25 +100,27 @@ The next thing to check is whether the pod on the apiserver

matches the pod you meant to create (e.g. in a yaml file on your local machine).
For example, run `kubectl get pods/mypod -o yaml > mypod-on-apiserver.yaml` and then
manually compare the original pod description, `mypod.yaml` with the one you got
back from apiserver, `mypod-on-apiserver.yaml`. There will typically be some
lines on the "apiserver" version that are not on the original version. This is
expected. However, if there are lines on the original that are not on the apiserver
version, then this may indicate a problem with your pod spec.

### Debugging Replication Controllers

Replication controllers are fairly straightforward. They can either create Pods or they can't.
If they can't create pods, then please refer to the
[instructions above](#debugging-pods) to debug your pods.

You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events
related to the replication controller.

### Debugging Services

Services provide load balancing across a set of pods. There are several common problems that can make Services
not work properly. The following instructions should help debug Service problems.

First, verify that there are endpoints for the service. For every Service object,
the apiserver makes an `endpoints` resource available.

You can view this resource with:

@ -124,8 +134,8 @@ IP addresses in the Service's endpoints.

#### My service is missing endpoints

If you are missing endpoints, try listing pods using the labels that Service uses.
Imagine that you have a Service where the labels are:

```yaml
...
```

@ -141,7 +151,7 @@ You can use:

```shell
kubectl get pods --selector=name=nginx,type=frontend
```

to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.
Verify that the pod's `containerPort` matches up with the Service's `targetPort`.
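As a quick illustration of that last check, the Service's `selector` and `targetPort` must line up with the Pod's labels and `containerPort`; the names and port numbers below are assumptions for the sketch, not values from this page:

```yaml
# Service side (sketch)
apiVersion: v1
kind: Service
metadata:
  name: nginx-frontend
spec:
  selector:
    name: nginx
    type: frontend
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # must match the containerPort below
---
# Pod side (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
    type: frontend
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 8080
```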
#### Network traffic is not forwarded

@ -157,4 +167,3 @@ actually serving; you have DNS working, iptables rules installed, and kube-proxy

does not seem to be misbehaving.

You may also visit [troubleshooting document](/docs/tasks/debug/) for more information.
@ -144,10 +144,12 @@ You can configure the log audit backend using the following `kube-apiserver` fla

If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the `hostPath`
to the location of the policy file and log file, so that audit records are persisted. For example:

```yaml
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
```

then mount the volumes:

```yaml
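# The exact volume configuration for this commit is not shown in this excerpt;
# the volumeMounts stanza below is an illustrative sketch only.
volumeMounts:
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit
  readOnly: true
- mountPath: /var/log/kubernetes/audit/
  name: audit-log
  readOnly: false
```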
@ -7,11 +7,20 @@ content_type: task

{{% thirdparty-content %}}

Kubernetes applications usually consist of multiple, separate services,
each running in its own container. Developing and debugging these services
on a remote Kubernetes cluster can be cumbersome, requiring you to
[get a shell on a running container](/docs/tasks/debug/debug-application/get-shell-running-container/)
in order to run debugging tools.

`telepresence` is a tool to ease the process of developing and debugging
services locally while proxying the service to a remote Kubernetes cluster.
Using `telepresence` allows you to use custom tools, such as a debugger and
IDE, for a local service and provides the service full access to ConfigMap,
secrets, and the services running on the remote cluster.

This document describes using `telepresence` to develop and debug services
running on a remote cluster locally.

## {{% heading "prerequisites" %}}

@ -24,7 +33,8 @@ This document describes using `telepresence` to develop and debug services runni

## Connecting your local machine to a remote Kubernetes cluster

After installing `telepresence`, run `telepresence connect` to launch
its Daemon and connect your local workstation to the cluster.

```
$ telepresence connect
```

@ -38,9 +48,14 @@ You can curl services using the Kubernetes syntax e.g. `curl -ik https://kuberne

## Developing or debugging an existing service

When developing an application on Kubernetes, you typically program
or debug a single service. The service might require access to other
services for testing and debugging. One option is to use the continuous
deployment pipeline, but even the fastest deployment pipeline introduces
a delay in the program or debug cycle.

Use the `telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT`
command to create an "intercept" for rerouting remote service traffic.

Where:

@ -48,14 +63,27 @@ Where:

- `$LOCAL_PORT` is the port that your service is running on your local workstation
- And `$REMOTE_PORT` is the port your service listens to in the cluster

Running this command tells Telepresence to send remote traffic to your
local service instead of the service in the remote Kubernetes cluster.
Make edits to your service source code locally, save, and see the corresponding
changes when accessing your remote application take effect immediately.
You can also run your local service using a debugger or any other local development tool.

## How does Telepresence work?

Telepresence installs a traffic-agent sidecar next to your existing
application's container running in the remote cluster. It then captures
all traffic requests going into the Pod, and instead of forwarding this
to the application in the remote cluster, it routes all traffic (when you
create a [global intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#global-intercept))
or a subset of the traffic (when you create a
[personal intercept](https://www.getambassador.io/docs/telepresence/latest/concepts/intercepts/#personal-intercept))
to your local development environment.

## {{% heading "whatsnext" %}}

If you're interested in a hands-on tutorial, check out
[this tutorial](https://cloud.google.com/community/tutorials/developing-services-with-k8s)
that walks through locally developing the Guestbook application on Google Kubernetes Engine.

For further reading, visit the [Telepresence website](https://www.telepresence.io).
@ -10,31 +10,44 @@ weight: 10

<!-- overview -->

Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
allows the Kubernetes apiserver to be extended with additional APIs, which are not
part of the core Kubernetes APIs.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{< note >}}
There are a few setup requirements for getting the aggregation layer working in
your environment to support mutual TLS auth between the proxy and extension apiservers.
Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is
signed by the aggregation layer CA and not by something else, like the Kubernetes general CA.
{{< /note >}}

{{< caution >}}
Reusing the same CA for different client types can negatively impact the cluster's
ability to function. For more information, see [CA Reusage and Conflicts](#ca-reusage-and-conflicts).
{{< /caution >}}

<!-- steps -->

## Authentication Flow

Unlike Custom Resource Definitions (CRDs), the Aggregation API involves
another server - your Extension apiserver - in addition to the standard Kubernetes apiserver.
The Kubernetes apiserver will need to communicate with your extension apiserver,
and your extension apiserver will need to communicate with the Kubernetes apiserver.
In order for this communication to be secured, the Kubernetes apiserver uses x509
certificates to authenticate itself to the extension apiserver.

This section describes how the authentication and authorization flows work,
and how to configure them.

The high-level flow is as follows:

1. Kubernetes apiserver: authenticate the requesting user and authorize their
   rights to the requested API path.
2. Kubernetes apiserver: proxy the request to the extension apiserver
3. Extension apiserver: authenticate the request from the Kubernetes apiserver
4. Extension apiserver: authorize the request from the original user
@ -63,7 +76,8 @@ note:
|
|||
kube-apiserver / aggregator -> kube-apiserver / aggregator: authentication
|
||||
|
||||
note:
|
||||
2. The Kube API server authenticates the incoming request using any configured authentication methods (e.g. OIDC or client certs)
|
||||
2. The Kube API server authenticates the incoming request using any configured
|
||||
authentication methods (e.g. OIDC or client certs)
|
||||
|
||||
kube-apiserver / aggregator -> kube-apiserver / aggregator: authorization
|
||||
|
||||
|
@ -73,8 +87,10 @@ note:
|
|||
kube-apiserver / aggregator -> aggregated apiserver:
|
||||
|
||||
note:
|
||||
4. The aggregator opens a connection to the aggregated API server using `--proxy-client-cert-file`/`--proxy-client-key-file` client certificate/key to secure the channel
|
||||
5. The aggregator sends the user info from step 1 to the aggregated API server as http headers, as defined by the following flags:
|
||||
4. The aggregator opens a connection to the aggregated API server using
|
||||
`--proxy-client-cert-file`/`--proxy-client-key-file` client certificate/key to secure the channel
|
||||
5. The aggregator sends the user info from step 1 to the aggregated API server as
|
||||
http headers, as defined by the following flags:
|
||||
* `--requestheader-username-headers`
|
||||
* `--requestheader-group-headers`
|
||||
* `--requestheader-extra-headers-prefix`
|
||||
|
@ -86,27 +102,41 @@ note:
|
|||
* verifies the request has a recognized auth proxy client certificate
|
||||
* pulls user info from the incoming request's http headers
|
||||
|
||||
By default, it pulls the configuration information for this from a configmap in the kube-system namespace that is published by the kube-apiserver, containing the info from the `--requestheader-...` flags provided to the kube-apiserver (CA bundle to use, auth proxy client certificate names to allow, http header names to use, etc)
|
||||
By default, it pulls the configuration information for this from a configmap
|
||||
in the kube-system namespace that is published by the kube-apiserver,
|
||||
containing the info from the `--requestheader-...` flags provided to the
|
||||
kube-apiserver (CA bundle to use, auth proxy client certificate names to allow,
|
||||
http header names to use, etc)
|
||||
|
||||
aggregated apiserver -> kube-apiserver / aggregator: authorization
|
||||
|
||||
note:
|
||||
7. The aggregated apiserver authorizes the incoming request by making a SubjectAccessReview call to the kube-apiserver
|
||||
7. The aggregated apiserver authorizes the incoming request by making a
|
||||
SubjectAccessReview call to the kube-apiserver
|
||||
|
||||
aggregated apiserver -> aggregated apiserver: admission
|
||||
|
||||
note:
|
||||
8. For mutating requests, the aggregated apiserver runs admission checks. by default, the namespace lifecycle admission plugin ensures namespaced resources are created in a namespace that exists in the kube-apiserver
|
||||
8. For mutating requests, the aggregated apiserver runs admission checks.
|
||||
By default, the namespace lifecycle admission plugin ensures namespaced
|
||||
resources are created in a namespace that exists in the kube-apiserver
|
||||
-----END-----
|
||||
-->
|
||||
|
||||
### Kubernetes Apiserver Authentication and Authorization
|
||||
|
||||
A request to an API path that is served by an extension apiserver begins the same way as all API requests: communication to the Kubernetes apiserver. This path already has been registered with the Kubernetes apiserver by the extension apiserver.
|
||||
A request to an API path that is served by an extension apiserver begins
|
||||
the same way as all API requests: communication to the Kubernetes apiserver.
|
||||
This path has already been registered with the Kubernetes apiserver by the extension apiserver.
|
||||
|
||||
The user communicates with the Kubernetes apiserver, requesting access to the path. The Kubernetes apiserver uses standard authentication and authorization configured with the Kubernetes apiserver to authenticate the user and authorize access to the specific path.
|
||||
The user communicates with the Kubernetes apiserver, requesting access to the path.
|
||||
The Kubernetes apiserver uses standard authentication and authorization configured
|
||||
with the Kubernetes apiserver to authenticate the user and authorize access to the specific path.
|
||||
|
||||
For an overview of authenticating to a Kubernetes cluster, see ["Authenticating to a Cluster"](/docs/reference/access-authn-authz/authentication/). For an overview of authorization of access to Kubernetes cluster resources, see ["Authorization Overview"](/docs/reference/access-authn-authz/authorization/).
|
||||
For an overview of authenticating to a Kubernetes cluster, see
|
||||
["Authenticating to a Cluster"](/docs/reference/access-authn-authz/authentication/).
|
||||
For an overview of authorization of access to Kubernetes cluster resources, see
|
||||
["Authorization Overview"](/docs/reference/access-authn-authz/authorization/).
|
||||
|
||||
Everything to this point has been standard Kubernetes API requests, authentication and authorization.
|
||||
|
||||
|
@ -114,50 +144,75 @@ The Kubernetes apiserver now is prepared to send the request to the extension ap
|
|||
|
||||
### Kubernetes Apiserver Proxies the Request
|
||||
|
||||
The Kubernetes apiserver now will send, or proxy, the request to the extension apiserver that registered to handle the request. In order to do so, it needs to know several things:
|
||||
The Kubernetes apiserver will now send, or proxy, the request to the extension
|
||||
apiserver that registered to handle the request. In order to do so,
|
||||
it needs to know several things:
|
||||
|
||||
1. How should the Kubernetes apiserver authenticate to the extension apiserver, informing the extension apiserver that the request, which comes over the network, is coming from a valid Kubernetes apiserver?
|
||||
2. How should the Kubernetes apiserver inform the extension apiserver of the username and group for which the original request was authenticated?
|
||||
1. How should the Kubernetes apiserver authenticate to the extension apiserver,
|
||||
informing the extension apiserver that the request, which comes over the network,
|
||||
is coming from a valid Kubernetes apiserver?
|
||||
2. How should the Kubernetes apiserver inform the extension apiserver of the
|
||||
username and group for which the original request was authenticated?
|
||||
|
||||
In order to provide for these two, you must configure the Kubernetes apiserver using several flags.
|
||||
|
||||
#### Kubernetes Apiserver Client Authentication
|
||||
|
||||
The Kubernetes apiserver connects to the extension apiserver over TLS, authenticating itself using a client certificate. You must provide the following to the Kubernetes apiserver upon startup, using the provided flags:
|
||||
The Kubernetes apiserver connects to the extension apiserver over TLS,
|
||||
authenticating itself using a client certificate. You must provide the
|
||||
following to the Kubernetes apiserver upon startup, using the provided flags:
|
||||
|
||||
* private key file via `--proxy-client-key-file`
|
||||
* signed client certificate file via `--proxy-client-cert-file`
|
||||
* certificate of the CA that signed the client certificate file via `--requestheader-client-ca-file`
|
||||
* valid Common Name values (CNs) in the signed client certificate via `--requestheader-allowed-names`
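
For illustration, a kubeadm-provisioned control plane typically supplies these flags with values along the following lines; the file paths are only examples and depend on how your cluster's certificates are laid out:

```
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
```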
|
||||
|
||||
The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file` to authenticate to the extension apiserver. In order for the request to be considered valid by a compliant extension apiserver, the following conditions must be met:
|
||||
The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file`
|
||||
to authenticate to the extension apiserver. In order for the request to be considered
|
||||
valid by a compliant extension apiserver, the following conditions must be met:
|
||||
|
||||
1. The connection must be made using a client certificate that is signed by the CA whose certificate is in `--requestheader-client-ca-file`.
|
||||
2. The connection must be made using a client certificate whose CN is one of those listed in `--requestheader-allowed-names`.
|
||||
1. The connection must be made using a client certificate that is signed by
|
||||
the CA whose certificate is in `--requestheader-client-ca-file`.
|
||||
2. The connection must be made using a client certificate whose CN is one of
|
||||
those listed in `--requestheader-allowed-names`.
|
||||
|
||||
{{< note >}}You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable.
|
||||
{{< note >}}
|
||||
You can set this option to blank as `--requestheader-allowed-names=""`.
|
||||
This will indicate to an extension apiserver that _any_ CN is acceptable.
|
||||
{{< /note >}}
|
||||
|
||||
When started with these options, the Kubernetes apiserver will:
|
||||
|
||||
1. Use them to authenticate to the extension apiserver.
|
||||
2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`, in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved by extension apiservers to validate requests.
|
||||
2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`,
|
||||
in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved
|
||||
by extension apiservers to validate requests.
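
For example, you can inspect what the Kubernetes apiserver published with a standard kubectl query (the contents vary per cluster):

```shell
kubectl get configmap extension-apiserver-authentication \
  --namespace kube-system -o yaml
```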
|
||||
|
||||
Note that the same client certificate is used by the Kubernetes apiserver to authenticate against _all_ extension apiservers. It does not create a client certificate per extension apiserver, but rather a single one to authenticate as the Kubernetes apiserver. This same one is reused for all extension apiserver requests.
|
||||
Note that the same client certificate is used by the Kubernetes apiserver to authenticate
|
||||
against _all_ extension apiservers. It does not create a client certificate per extension
|
||||
apiserver, but rather a single one to authenticate as the Kubernetes apiserver.
|
||||
This same one is reused for all extension apiserver requests.
|
||||
|
||||
#### Original Request Username and Group
|
||||
|
||||
When the Kubernetes apiserver proxies the request to the extension apiserver, it informs the extension apiserver of the username and group with which the original request successfully authenticated. It provides these in http headers of its proxied request. You must inform the Kubernetes apiserver of the names of the headers to be used.
|
||||
When the Kubernetes apiserver proxies the request to the extension apiserver,
|
||||
it informs the extension apiserver of the username and group with which the
|
||||
original request successfully authenticated. It provides these in http headers
|
||||
of its proxied request. You must inform the Kubernetes apiserver of the names
|
||||
of the headers to be used.
|
||||
|
||||
* the header in which to store the username via `--requestheader-username-headers`
|
||||
* the header in which to store the group via `--requestheader-group-headers`
|
||||
* the prefix to append to all extra headers via `--requestheader-extra-headers-prefix`
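
For example, many clusters (including those set up with kubeadm) use the conventional `X-Remote-*` header names shown below; the exact values are up to you:

```
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
```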
|
||||
|
||||
These header names are also placed in the `extension-apiserver-authentication` configmap, so they can be retrieved and used by extension apiservers.
|
||||
These header names are also placed in the `extension-apiserver-authentication` configmap,
|
||||
so they can be retrieved and used by extension apiservers.
|
||||
|
||||
### Extension Apiserver Authenticates the Request
|
||||
|
||||
The extension apiserver, upon receiving a proxied request from the Kubernetes apiserver, must validate that the request actually did come from a valid authenticating proxy, which role the Kubernetes apiserver is fulfilling. The extension apiserver validates it via:
|
||||
The extension apiserver, upon receiving a proxied request from the Kubernetes apiserver,
|
||||
must validate that the request actually did come from a valid authenticating proxy,
|
||||
a role that the Kubernetes apiserver is fulfilling. The extension apiserver validates this by:
|
||||
|
||||
1. Retrieve the following from the configmap in `kube-system`, as described above:
|
||||
* Client CA certificate
|
||||
|
@ -168,17 +223,28 @@ The extension apiserver, upon receiving a proxied request from the Kubernetes ap
|
|||
* Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed.
|
||||
* Extract the username and group from the appropriate headers
|
||||
|
||||
If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver.
|
||||
If the above passes, then the request is a valid proxied request from a legitimate
|
||||
authenticating proxy, in this case the Kubernetes apiserver.
|
||||
|
||||
Note that it is the responsibility of the extension apiserver implementation to provide the above. Many do it by default, leveraging the `k8s.io/apiserver/` package. Others may provide options to override it using command-line options.
|
||||
Note that it is the responsibility of the extension apiserver implementation to provide
|
||||
the above. Many do it by default, leveraging the `k8s.io/apiserver/` package.
|
||||
Others may provide options to override it using command-line options.
|
||||
|
||||
In order to have permission to retrieve the configmap, an extension apiserver requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader` in the `kube-system` namespace which can be assigned.
|
||||
In order to have permission to retrieve the configmap, an extension apiserver
|
||||
requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader`
|
||||
in the `kube-system` namespace which can be assigned.
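
As a sketch, a RoleBinding granting that role to a hypothetical extension apiserver service account (`my-extension-apiserver` in `my-namespace`; both names are placeholders) could look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # The binding name is arbitrary; the Role and its namespace are the defaults described above
  name: my-extension-apiserver-authentication-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: my-extension-apiserver   # placeholder service account
  namespace: my-namespace        # placeholder namespace
```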
|
||||
|
||||
### Extension Apiserver Authorizes the Request
|
||||
|
||||
The extension apiserver now can validate that the user/group retrieved from the headers are authorized to execute the given request. It does so by sending a standard [SubjectAccessReview](/docs/reference/access-authn-authz/authorization/) request to the Kubernetes apiserver.
|
||||
The extension apiserver can now validate that the user/group retrieved from
|
||||
the headers are authorized to execute the given request. It does so by sending
|
||||
a standard [SubjectAccessReview](/docs/reference/access-authn-authz/authorization/)
|
||||
request to the Kubernetes apiserver.
|
||||
|
||||
In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account.
|
||||
In order for the extension apiserver to be authorized itself to submit the
|
||||
`SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions.
|
||||
Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that
|
||||
has the appropriate permissions. It can be granted to the extension apiserver's service account.
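
A corresponding sketch, using the same placeholder service account as above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-extension-apiserver-auth-delegator   # arbitrary binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: my-extension-apiserver   # placeholder service account
  namespace: my-namespace        # placeholder namespace
```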
|
||||
|
||||
### Extension Apiserver Executes
|
||||
|
||||
|
@ -187,7 +253,8 @@ If the `SubjectAccessReview` passes, the extension apiserver executes the reques
|
|||
|
||||
## Enable Kubernetes Apiserver flags
|
||||
|
||||
Enable the aggregation layer via the following `kube-apiserver` flags. They may have already been taken care of by your provider.
|
||||
Enable the aggregation layer via the following `kube-apiserver` flags.
|
||||
They may have already been taken care of by your provider.
|
||||
|
||||
--requestheader-client-ca-file=<path to aggregator CA cert>
|
||||
--requestheader-allowed-names=front-proxy-client
|
||||
|
@ -204,20 +271,44 @@ The Kubernetes apiserver has two client CA options:
|
|||
* `--client-ca-file`
|
||||
* `--requestheader-client-ca-file`
|
||||
|
||||
Each of these functions independently and can conflict with each other, if not used correctly.
|
||||
Each of these functions independently and can conflict with each other,
|
||||
if not used correctly.
|
||||
|
||||
* `--client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file referenced by `--client-ca-file`, then the request is treated as a legitimate request, and the user is the value of the common name `CN=`, while the group is the organization `O=`. See the [documentation on TLS authentication](/docs/reference/access-authn-authz/authentication/#x509-client-certs).
|
||||
* `--requestheader-client-ca-file`: When a request arrives to the Kubernetes apiserver, if this option is enabled, the Kubernetes apiserver checks the certificate of the request. If it is signed by one of the CA certificates in the file reference by `--requestheader-client-ca-file`, then the request is treated as a potentially legitimate request. The Kubernetes apiserver then checks if the common name `CN=` is one of the names in the list provided by `--requestheader-allowed-names`. If the name is allowed, the request is approved; if it is not, the request is not.
|
||||
* `--client-ca-file`: When a request arrives to the Kubernetes apiserver,
|
||||
if this option is enabled, the Kubernetes apiserver checks the certificate
|
||||
of the request. If it is signed by one of the CA certificates in the file referenced by
|
||||
`--client-ca-file`, then the request is treated as a legitimate request,
|
||||
and the user is the value of the common name `CN=`, while the group is the organization `O=`.
|
||||
See the [documentation on TLS authentication](/docs/reference/access-authn-authz/authentication/#x509-client-certs).
|
||||
* `--requestheader-client-ca-file`: When a request arrives to the Kubernetes apiserver,
|
||||
if this option is enabled, the Kubernetes apiserver checks the certificate of the request.
|
||||
If it is signed by one of the CA certificates in the file referenced by `--requestheader-client-ca-file`,
|
||||
then the request is treated as a potentially legitimate request. The Kubernetes apiserver then
|
||||
checks if the common name `CN=` is one of the names in the list provided by `--requestheader-allowed-names`.
|
||||
If the name is allowed, the request is approved; if it is not, the request is not approved.
|
||||
|
||||
If _both_ `--client-ca-file` and `--requestheader-client-ca-file` are provided, then the request first checks the `--requestheader-client-ca-file` CA and then the `--client-ca-file`. Normally, different CAs, either root CAs or intermediate CAs, are used for each of these options; regular client requests match against `--client-ca-file`, while aggregation requests match against `--requestheader-client-ca-file`. However, if both use the _same_ CA, then client requests that normally would pass via `--client-ca-file` will fail, because the CA will match the CA in `--requestheader-client-ca-file`, but the common name `CN=` will **not** match one of the acceptable common names in `--requestheader-allowed-names`. This can cause your kubelets and other control plane components, as well as end-users, to be unable to authenticate to the Kubernetes apiserver.
|
||||
If _both_ `--client-ca-file` and `--requestheader-client-ca-file` are provided,
|
||||
then the request first checks the `--requestheader-client-ca-file` CA and then the
|
||||
`--client-ca-file`. Normally, different CAs, either root CAs or intermediate CAs,
|
||||
are used for each of these options; regular client requests match against `--client-ca-file`,
|
||||
while aggregation requests match against `--requestheader-client-ca-file`. However,
|
||||
if both use the _same_ CA, then client requests that normally would pass via `--client-ca-file`
|
||||
will fail, because the CA will match the CA in `--requestheader-client-ca-file`,
|
||||
but the common name `CN=` will **not** match one of the acceptable common names in
|
||||
`--requestheader-allowed-names`. This can cause your kubelets and other control plane components,
|
||||
as well as end-users, to be unable to authenticate to the Kubernetes apiserver.
|
||||
|
||||
For this reason, use different CA certs for the `--client-ca-file` option - to authorize control plane components and end-users - and the `--requestheader-client-ca-file` option - to authorize aggregation apiserver requests.
|
||||
For this reason, use different CA certs for the `--client-ca-file`
|
||||
option - to authorize control plane components and end-users - and the `--requestheader-client-ca-file` option - to authorize aggregation apiserver requests.
|
||||
|
||||
{{< warning >}}
|
||||
Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage.
|
||||
Do **not** reuse a CA that is used in a different context unless you understand
|
||||
the risks and the mechanisms to protect the CA's usage.
|
||||
{{< /warning >}}
|
||||
|
||||
If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following `kube-apiserver` flag:
|
||||
If you are not running kube-proxy on a host running the API server,
|
||||
then you must make sure that the system is enabled with the following
|
||||
`kube-apiserver` flag:
|
||||
|
||||
--enable-aggregator-routing=true
|
||||
|
||||
|
@ -276,6 +367,8 @@ spec:
|
|||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* [Set up an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
|
||||
* For a high level overview, see [Extending the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
|
||||
* [Set up an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/)
|
||||
to work with the aggregation layer.
|
||||
* For a high level overview, see
|
||||
[Extending the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
|
||||
* Learn how to [Extend the Kubernetes API Using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).
|
||||
|
|
|
@ -18,6 +18,11 @@ In Kubernetes, there are two ways to expose Pod and container fields to a runnin
|
|||
Together, these two ways of exposing Pod and container fields are called the
|
||||
downward API.
|
||||
|
||||
As Services are the primary mode of communication between containerized applications managed by Kubernetes,
|
||||
it is helpful to be able to discover them at runtime.
|
||||
|
||||
Read more about accessing Services [here](/docs/tutorials/services/connect-applications-service/#accessing-the-service).
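
For example, a container can read the host and port of a Service that existed when its Pod started from automatically injected environment variables; the Service name and addresses below are illustrative only:

```shell
# Run inside a container; Kubernetes injects <SERVICE_NAME>_SERVICE_HOST / _PORT
env | grep MY_SERVICE_SERVICE
# MY_SERVICE_SERVICE_HOST=10.0.0.11
# MY_SERVICE_SERVICE_PORT=80
```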
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}}
|
||||
|
|
|
@ -30,7 +30,5 @@ _build:
|
|||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
|
|
|
@ -29,7 +29,5 @@ _build:
|
|||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
|
|
|
@ -29,7 +29,5 @@ _build:
|
|||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
|
|
|
@ -29,8 +29,6 @@ _build:
|
|||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
|
||||
|
|
|
@ -29,8 +29,6 @@ _build:
|
|||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
|
||||
|
|
|
@ -29,8 +29,6 @@ _build:
|
|||
|
||||
</main>
|
||||
|
||||
</div>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
|
||||
|
|
|
@ -50,7 +50,7 @@ earlier versions of this tutorial.
|
|||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
The example shown on this page works with `kubectl` 1.14 and above.
|
||||
The example shown on this page works with `kubectl` 1.27 and above.
|
||||
|
||||
Download the following configuration files:
|
||||
|
||||
|
|
|
@ -45,7 +45,7 @@ spec:
|
|||
tier: mysql
|
||||
spec:
|
||||
containers:
|
||||
- image: mysql:5.6
|
||||
- image: mysql:8.0
|
||||
name: mysql
|
||||
env:
|
||||
- name: MYSQL_ROOT_PASSWORD
|
||||
|
@ -53,6 +53,15 @@ spec:
|
|||
secretKeyRef:
|
||||
name: mysql-pass
|
||||
key: password
|
||||
- name: MYSQL_DATABASE
|
||||
value: wordpress
|
||||
- name: MYSQL_USER
|
||||
value: wordpress
|
||||
- name: MYSQL_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: mysql-pass
|
||||
key: password
|
||||
ports:
|
||||
- containerPort: 3306
|
||||
name: mysql
|
||||
|
|
|
@ -45,7 +45,7 @@ spec:
|
|||
tier: frontend
|
||||
spec:
|
||||
containers:
|
||||
- image: wordpress:4.8-apache
|
||||
- image: wordpress:6.2.1-apache
|
||||
name: wordpress
|
||||
env:
|
||||
- name: WORDPRESS_DB_HOST
|
||||
|
@ -55,6 +55,8 @@ spec:
|
|||
secretKeyRef:
|
||||
name: mysql-pass
|
||||
key: password
|
||||
- name: WORDPRESS_DB_USER
|
||||
value: wordpress
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: wordpress
|
||||
|
|
|
@ -5,4 +5,4 @@ cluster, you can create one by using
|
|||
or you can use one of these Kubernetes playgrounds:
|
||||
|
||||
* [Killercoda](https://killercoda.com/playgrounds/scenario/kubernetes)
|
||||
* [Play with Kubernetes](http://labs.play-with-k8s.com/)
|
||||
* [Play with Kubernetes](https://labs.play-with-k8s.com/)
|
||||
|
|
|
@ -531,7 +531,7 @@ Selecciona una de las pestañas.
|
|||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
cloud.google.com/load-balancer-type: "Internal"
|
||||
networking.gke.io/load-balancer-type: "Internal"
|
||||
[...]
|
||||
```
|
||||
|
||||
|
|
|
@ -0,0 +1,551 @@
|
|||
---
|
||||
reviewers:
|
||||
- edithturn
|
||||
- raelga
|
||||
- electrocucaracha
|
||||
title: StorageClass (Clases de Almacenamiento)
|
||||
content_type: concept
|
||||
weight: 40
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Este documento describe el concepto de una StorageClass (Clases de Almacenamiento) en Kubernetes. Necesita estar familiarizado con
|
||||
[volumes](/docs/concepts/storage/volumes/) y
|
||||
[persistent volumes](/docs/concepts/storage/persistent-volumes).
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Introducción
|
||||
|
||||
Una StorageClass proporciona una forma para que los administradores describan las "clases" de
|
||||
almacenamiento que ofrecen. Diferentes clases pueden corresponder a niveles de calidad de servicio,
|
||||
o a políticas de copia de seguridad, o a políticas arbitrarias determinadas por los administradores del clúster
|
||||
de Kubernetes. Kubernetes en sí no tiene opiniones sobre lo que representan las clases. Este concepto a veces se denomina "profiles" en otros sistemas de almacenamiento.
|
||||
|
||||
## El recurso StorageClass
|
||||
|
||||
Cada StorageClass contiene los campos `provisioner`, `parameters` y
|
||||
`reclaimPolicy`, que se utilizan cuando un PersistentVolume que pertenece a
|
||||
la clase debe aprovisionarse dinámicamente.
|
||||
|
||||
El nombre de un objeto StorageClass es significativo y es la forma en que los usuarios pueden
|
||||
solicitar una clase en particular. Los administradores establecen el nombre y otros parámetros
|
||||
de una clase al crear objetos StorageClass por primera vez, y los objetos no pueden
|
||||
actualizarse una vez creados.
|
||||
|
||||
Los administradores pueden especificar una StorageClass predeterminada solo para los PVC que no
|
||||
soliciten ninguna clase en particular a la que vincularse: vea la
|
||||
[sección PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
|
||||
para detalles.
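
Por ejemplo, un administrador puede marcar una StorageClass existente como predeterminada mediante la anotación `storageclass.kubernetes.io/is-default-class`; el nombre `standard` es solo ilustrativo:

```shell
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```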
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: standard
|
||||
provisioner: kubernetes.io/aws-ebs
|
||||
parameters:
|
||||
type: gp2
|
||||
reclaimPolicy: Retain
|
||||
allowVolumeExpansion: true
|
||||
mountOptions:
|
||||
- debug
|
||||
volumeBindingMode: Immediate
|
||||
```
|
||||
|
||||
### Proveedor
|
||||
|
||||
Cada StorageClass tiene un aprovisionador que determina qué complemento de volumen se usa
|
||||
para el aprovisionamiento de PV. Este campo debe ser especificado.
|
||||
|
||||
| Complemento de volumen | Aprovisionador interno | Ejemplo de configuración |
|
||||
| :--------------------- | :--------------------: | :-----------------------------------: |
|
||||
| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) |
|
||||
| AzureFile | ✓ | [Azure File](#azure-file) |
|
||||
| AzureDisk | ✓ | [Azure Disk](#azure-disk) |
|
||||
| CephFS | - | - |
|
||||
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) |
|
||||
| FC | - | - |
|
||||
| FlexVolume | - | - |
|
||||
| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) |
|
||||
| iSCSI | - | - |
|
||||
| NFS | - | [NFS](#nfs) |
|
||||
| RBD | ✓ | [Ceph RBD](#ceph-rbd) |
|
||||
| VsphereVolume | ✓ | [vSphere](#vsphere) |
|
||||
| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) |
|
||||
| Local | - | [Local](#local) |
|
||||
|
||||
No está limitado a especificar únicamente los aprovisionadores "internos"
|
||||
enumerados aquí (cuyos nombres tienen el prefijo "kubernetes.io" y se envían
|
||||
junto con Kubernetes). También puede ejecutar y especificar aprovisionadores externos,
|
||||
que son programas independientes que siguen una [especificación](https://git.k8s.io/design-proposals-archive/storage/volume-provisioning.md)
|
||||
definida por Kubernetes. Los autores de proveedores externos tienen total discreción
|
||||
sobre dónde vive su código, cómo se envía el aprovisionador, cómo debe ser
|
||||
ejecutado, qué complemento de volumen usa (incluido Flex), etc. El repositorio
|
||||
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner)
|
||||
alberga una biblioteca para escribir aprovisionadores externos que implementa la mayor parte de
|
||||
la especificación. Algunos proveedores externos se enumeran en el repositorio
|
||||
[kubernetes-sigs/sig-storage-lib-external-provisioner](https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner).
|
||||
|
||||
Por ejemplo, NFS no proporciona un aprovisionador interno, pero se puede usar un aprovisionador externo. También hay casos en los que los proveedores de almacenamiento de terceros proporcionan su propio aprovisionador externo.
|
||||
|
||||
### Política de reclamación
|
||||
|
||||
Los PersistentVolumes creados dinámicamente por StorageClass tendrán la política de recuperación especificada en el campo `reclaimPolicy` de la clase, que puede ser `Delete` o `Retain`. Si no se especifica `reclaimPolicy` cuando se crea un objeto StorageClass, el valor predeterminado será `Delete`.
|
||||
|
||||
Los PersistentVolumes que se crean manualmente y se administran a través de StorageClass tendrán la política de recuperación que se les asignó en el momento de la creación.
|
||||
|
||||
### Permitir expansión de volumen
|
||||
|
||||
{{< feature-state for_k8s_version="v1.11" state="beta" >}}
|
||||
|
||||
Los PersistentVolumes se pueden configurar para que sean ampliables. Esta función, cuando se establece en `true`, permite a los usuarios cambiar el tamaño del volumen editando el objeto PVC correspondiente.
|
||||
|
||||
Los siguientes tipos de volúmenes admiten la expansión de volumen, cuando el StorageClass subyacente tiene el campo `allowVolumeExpansion` establecido en `true`.
|
||||
|
||||
{{< table caption = "Table of Volume types and the version of Kubernetes they require" >}}
|
||||
|
||||
| Tipo de volumen | Versión requerida de Kubernetes |
|
||||
| :------------------- | :------------------------------ |
|
||||
| gcePersistentDisk | 1.11 |
|
||||
| awsElasticBlockStore | 1.11 |
|
||||
| Cinder | 1.11 |
|
||||
| rbd | 1.11 |
|
||||
| Azure File | 1.11 |
|
||||
| Azure Disk | 1.11 |
|
||||
| Portworx | 1.11 |
|
||||
| FlexVolume | 1.13 |
|
||||
| CSI | 1.14 (alpha), 1.16 (beta) |
|
||||
|
||||
{{< /table >}}
|
||||
|
||||
{{< note >}}
|
||||
Solo puede usar la función de expansión de volumen para aumentar un volumen, no para reducirlo.
|
||||
{{< /note >}}
|
||||
|
||||
### Opciones de montaje
|
||||
|
||||
Los PersistentVolumes creados dinámicamente por StorageClass tendrán las opciones de montaje especificadas en el campo `mountOptions` de la clase.
|
||||
|
||||
Si el complemento de volumen no admite opciones de montaje pero estas se especifican, el aprovisionamiento fallará. Las opciones de montaje no se validan ni en la clase ni en el PV. Si una opción de montaje no es válida, el montaje del PV falla.
|
||||
|
||||
### Modo de enlace de volumen
|
||||
|
||||
El campo `volumeBindingMode` controla cuándo deben ocurrir el [enlace de volumen y el aprovisionamiento dinámico](/docs/concepts/storage/persistent-volumes/#provisioning). Cuando no está configurado, el modo `Immediate` se usa de manera predeterminada.
|
||||
|
||||
El modo `Immediate` indica que el enlace del volumen y el aprovisionamiento dinámico
|
||||
ocurren una vez que se crea el PersistentVolumeClaim. Para los backends de almacenamiento que están restringidos por topología y no son accesibles globalmente desde todos los nodos del clúster, los PersistentVolumes se vincularán o aprovisionarán sin conocimiento de los requisitos de programación del Pod. Esto puede dar lugar a Pods no programables.
|
||||
|
||||
Un administrador de clústeres puede abordar este problema especificando el modo `WaitForFirstConsumer` que retrasará el enlace y el aprovisionamiento de un PersistentVolume hasta que se cree un Pod que use PersistentVolumeClaim.
|
||||
PersistentVolumes se seleccionarán o aprovisionarán de acuerdo con la topología especificada por las restricciones de programación del pod. Estos incluyen, pero no se limitan a [requerimientos de recursos](/docs/concepts/configuration/manage-resources-containers/),
|
||||
[node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector),
|
||||
[pod affinity and anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
|
||||
y [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration).
|
||||
|
||||
Los siguientes complementos admiten `WaitForFirstConsumer` con aprovisionamiento dinámico:
|
||||
|
||||
- [AWSElasticBlockStore](#aws-ebs)
|
||||
- [GCEPersistentDisk](#gce-pd)
|
||||
- [AzureDisk](#azure-disk)
|
||||
|
||||
Los siguientes complementos admiten `WaitForFirstConsumer` con enlace PersistentVolume creado previamente:
|
||||
|
||||
- Todo lo anterior
|
||||
- [Local](#local)
|
||||
|
||||
{{< feature-state state="stable" for_k8s_version="v1.17" >}}
|
||||
[CSI volumes](/docs/concepts/storage/volumes/#csi) también son compatibles con el aprovisionamiento dinámico y los PV creados previamente, pero deberá consultar la documentación de un controlador CSI específico para ver sus claves de topología y ejemplos compatibles.
|
||||
|
||||
{{< note >}}
|
||||
Si elige usar `WaitForFirstConsumer`, no use `nodeName` en la especificación del Pod para especificar la afinidad de nodo. Si se utiliza `nodeName` en este caso, se omitirá el planificador y el PVC permanecerá en estado `Pending`.
|
||||
|
||||
En su lugar, puede usar el selector de nodos para el nombre de host en este caso, como se muestra a continuación.
|
||||
{{< /note >}}
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: task-pv-pod
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/hostname: kube-01
|
||||
volumes:
|
||||
- name: task-pv-storage
|
||||
persistentVolumeClaim:
|
||||
claimName: task-pv-claim
|
||||
containers:
|
||||
- name: task-pv-container
|
||||
image: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: "http-server"
|
||||
volumeMounts:
|
||||
- mountPath: "/usr/share/nginx/html"
|
||||
name: task-pv-storage
|
||||
```
|
||||
|
||||
### Topologías permitidas
|
||||
|
||||
Cuando un operador de clúster especifica el modo de enlace de volumen `WaitForFirstConsumer`, ya no es necesario restringir el aprovisionamiento a topologías específicas en la mayoría de las situaciones. Sin embargo, si todavía es necesario, se puede especificar `allowedTopologies`.
|
||||
|
||||
Este ejemplo demuestra cómo restringir la topología de los volúmenes aprovisionados a determinadas
|
||||
zonas y debe usarse como reemplazo de los parámetros `zone` y `zones` para los
|
||||
complementos compatibles.
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: standard
|
||||
provisioner: kubernetes.io/gce-pd
|
||||
parameters:
|
||||
type: pd-standard
|
||||
volumeBindingMode: WaitForFirstConsumer
|
||||
allowedTopologies:
|
||||
- matchLabelExpressions:
|
||||
- key: failure-domain.beta.kubernetes.io/zone
|
||||
values:
|
||||
- us-central-1a
|
||||
- us-central-1b
|
||||
```
|
||||
|
||||
## Parámetros
|
||||
|
||||
Las clases de almacenamiento tienen parámetros que describen los volúmenes que pertenecen a la clase de almacenamiento. Se pueden aceptar diferentes parámetros dependiendo del `provisioner`. Por ejemplo, el valor `io1` para el parámetro `type` y el parámetro `iopsPerGB` son específicos de EBS. Cuando se omite un parámetro, se utiliza un valor predeterminado.
|
||||
|
||||
Puede haber como máximo 512 parámetros definidos para StorageClass.
|
||||
La longitud total del objeto de parámetros, incluidas sus claves y valores, no puede superar los 256 KiB.
|
||||
|
||||
### AWS EBS
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/aws-ebs
|
||||
parameters:
|
||||
type: io1
|
||||
iopsPerGB: "10"
|
||||
fsType: ext4
|
||||
```
|
||||
|
||||
- `type`: `io1`, `gp2`, `sc1`, `st1`. Ver
|
||||
[AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
|
||||
para detalles. Por defecto: `gp2`.
|
||||
- `zone` (Obsoleto): AWS zona. Si no se especifica `zone` ni `zones`, los volúmenes generalmente se distribuyen por turnos en todas las zonas activas donde el clúster de Kubernetes tiene un nodo. Los parámetros `zone` y `zones` no deben usarse al mismo tiempo.
|
||||
- `zones` (Obsoleto): una lista separada por comas de las zonas de AWS. Si no se especifica `zone` ni `zones`, los volúmenes generalmente se distribuyen por turnos en todas las zonas activas donde el clúster de Kubernetes tiene un nodo. Los parámetros `zone` y `zones` no deben usarse al mismo tiempo.
|
||||
|
||||
- `iopsPerGB`: solo para volúmenes `io1`. Operaciones de E/S por segundo por GiB. El complemento de volumen de AWS multiplica esto por el tamaño del volumen solicitado para calcular las IOPS del volumen y lo limita a 20 000 IOPS (máximo admitido por AWS, consulte [Documentos de AWS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)). Aquí se espera una cadena, es decir, `"10"`, no `10`.
|
||||
- `fsType`: fsType que es compatible con kubernetes. Predeterminado: `"ext4"`.
|
||||
- `encrypted`: indica si el volumen de EBS debe cifrarse o no. Los valores válidos son `"true"` o `"false"`. Aquí se espera una cadena, es decir, `"true"`, no `true`.
|
||||
- `kmsKeyId`: opcional. El nombre de recurso de Amazon completo de la clave que se utilizará al cifrar el volumen. Si no se proporciona ninguno pero `encrypted` es verdadero, AWS genera una clave. Consulte los documentos de AWS para obtener un valor de ARN válido.
|
||||
|
||||
{{< note >}}
|
||||
Los parámetros `zone` y `zones` están en desuso y se reemplazan por
|
||||
[allowedTopologies](#allowed-topologies)
|
||||
{{< /note >}}
|
||||
|
||||
### GCE PD
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/gce-pd
|
||||
parameters:
|
||||
type: pd-standard
|
||||
fstype: ext4
|
||||
replication-type: none
|
||||
```
|
||||
|
||||
- `type`: `pd-standard` o `pd-ssd`. Por defecto: `pd-standard`
|
||||
- `zone` (Obsoleto): zona GCE. Si no se especifica `zone` ni `zones`, los volúmenes generalmente se distribuyen por turnos en todas las zonas activas donde el clúster de Kubernetes tiene un nodo. Los parámetros `zone` y `zones` no deben usarse al mismo tiempo.
|
||||
- `zones` (Obsoleto): Una lista separada por comas de zona(s) GCE. Si no se especifica `zone` ni `zones`, los volúmenes generalmente se distribuyen por turnos en todas las zonas activas donde el clúster de Kubernetes tiene un nodo. Los parámetros `zone` y `zones` no deben usarse al mismo tiempo.
|
||||
|
||||
- `fstype`: `ext4` o `xfs`. Por defecto: `ext4`. El tipo de sistema de archivos definido debe ser compatible con el sistema operativo host.
|
||||
- `replication-type`: `none` o `regional-pd`. Por defecto: `none`.
|
||||
|
||||
Si `replication-type` se establece en `none`, se aprovisionará un PD regular (zonal).
|
||||
|
||||
Si `replication-type` se establece en `regional-pd`, un
|
||||
[Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)
|
||||
será aprovisionado. Es muy recomendable tener
|
||||
`volumeBindingMode: WaitForFirstConsumer` establecido, en cuyo caso cuando crea un Pod que consume un PersistentVolumeClaim que usa esta clase de almacenamiento, un disco persistente regional se aprovisiona con dos zonas. Una zona es la misma que la zona en la que está programado el Pod. La otra zona se selecciona aleatoriamente de las zonas disponibles para el clúster. Las zonas de disco se pueden restringir aún más usando `allowedTopologies`.
|
||||
|
||||
{{< note >}}
|
||||
Los parámetros `zone` y `zones` están en desuso y se reemplazan por
|
||||
[allowedTopologies](#allowed-topologies)
|
||||
{{< /note >}}
|
||||
|
||||
### NFS
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: example-nfs
|
||||
provisioner: example.com/external-nfs
|
||||
parameters:
|
||||
server: nfs-server.example.com
|
||||
path: /share
|
||||
readOnly: "false"
|
||||
```
|
||||
|
||||
- `server`: Servidor es el nombre de host o la dirección IP del servidor NFS.
|
||||
- `path`: Ruta que exporta el servidor NFS.
|
||||
- `readOnly`: Una bandera que indica si el almacenamiento se montará como solo lectura (falso por defecto)
|
||||
|
||||
Kubernetes no incluye un proveedor de NFS interno. Debe usar un aprovisionador externo para crear una StorageClass para NFS.
|
||||
Aquí hay unos ejemplos:
|
||||
|
||||
- [Servidor NFS Ganesha y aprovisionador externo](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner)
|
||||
- [Aprovisionador externo de subdirección NFS](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
|
||||
|
||||
### OpenStack Cinder
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: gold
|
||||
provisioner: kubernetes.io/cinder
|
||||
parameters:
|
||||
availability: nova
|
||||
```
|
||||
|
||||
- `availability`: Zona de disponibilidad. Si no se especifica, los volúmenes generalmente se distribuyen por turnos en todas las zonas activas donde el clúster de Kubernetes tiene un nodo.
|
||||
|
||||
{{< note >}}
|
||||
{{< feature-state state="deprecated" for_k8s_version="v1.11" >}}
|
||||
Este proveedor interno de OpenStack está obsoleto. Por favor use [el proveedor de nube externo para OpenStack](https://github.com/kubernetes/cloud-provider-openstack).
|
||||
{{< /note >}}
|
||||
|
||||
### vSphere
|
||||
|
||||
Hay dos tipos de aprovisionadores para las clases de almacenamiento de vSphere:
|
||||
|
||||
- [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com`
|
||||
- [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume`
|
||||
|
||||
Los proveedores in-tree están [obsoletos](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi). Para obtener más información sobre el aprovisionador de CSI, consulte [Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) y [vSphereVolume CSI migration](/docs/concepts/storage/volumes/#vsphere-csi-migration).
|
||||
|
||||
#### Aprovisionador de CSI {#vsphere-provisioner-csi}
|
||||
|
||||
El aprovisionador vSphere CSI StorageClass funciona con clústeres de Tanzu Kubernetes. Para ver un ejemplo, consulte el [vSphere CSI repository](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml).
|
||||
|
||||
#### Aprovisionador de vCP
|
||||
|
||||
Los siguientes ejemplos utilizan el aprovisionador StorageClass de VMware Cloud Provider (vCP).
|
||||
|
||||
1. Cree una StorageClass con un formato de disco especificado por el usuario.
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: fast
|
||||
provisioner: kubernetes.io/vsphere-volume
|
||||
parameters:
|
||||
diskformat: zeroedthick
|
||||
```
|
||||
|
||||
`diskformat`: `thin`, `zeroedthick` y `eagerzeroedthick`. Por defecto: `"thin"`.
|
||||
|
||||
2. Cree una StorageClass con un formato de disco en un almacén de datos especificado por el usuario.
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: fast
|
||||
provisioner: kubernetes.io/vsphere-volume
|
||||
parameters:
|
||||
diskformat: zeroedthick
|
||||
datastore: VSANDatastore
|
||||
```
|
||||
|
||||
`datastore`: el usuario también puede especificar el almacén de datos en StorageClass. El volumen se creará en el almacén de datos especificado en StorageClass, que en este caso es `VSANDatastore`. Este campo es opcional. Si no se especifica el almacén de datos, el volumen se creará en el almacén de datos especificado en el archivo de configuración de vSphere utilizado para inicializar vSphere Cloud Provider.
|
||||
|
||||
3. Gestión de políticas de almacenamiento dentro de Kubernetes
|
||||
|
||||
- Uso de la política de vCenter SPBM existente
|
||||
|
||||
Una de las características más importantes de vSphere for Storage Management es la administración basada en políticas. La gestión basada en políticas de almacenamiento (SPBM) es un marco de políticas de almacenamiento que proporciona un único plano de control unificado en una amplia gama de servicios de datos y soluciones de almacenamiento. SPBM permite a los administradores de vSphere superar los desafíos iniciales de aprovisionamiento de almacenamiento, como la planificación de la capacidad, los niveles de servicio diferenciados y la gestión del margen de capacidad.
|
||||
|
||||
Las políticas de SPBM se pueden especificar en StorageClass mediante el parámetro `storagePolicyName`.
|
||||
|
||||
- Soporte de políticas Virtual SAN dentro de Kubernetes
|
||||
|
||||
Los administradores de vSphere Infrastructure (VI) tendrán la capacidad de especificar capacidades de almacenamiento Virtual SAN personalizadas durante el aprovisionamiento dinámico de volúmenes. Ahora puede definir los requisitos de almacenamiento, como el rendimiento y la disponibilidad, en forma de capacidades de almacenamiento durante el aprovisionamiento dinámico de volúmenes.
|
||||
Los requisitos de capacidad de almacenamiento se convierten en una política de Virtual SAN que luego se transfiere a la capa de Virtual SAN cuando se crea un volumen persistente (disco virtual). El disco virtual se distribuye en el almacén de datos de Virtual SAN para cumplir con los requisitos.
|
||||
|
||||
Puedes ver la [Administración basada en políticas de almacenamiento para el aprovisionamiento dinámico de volúmenes](https://github.com/vmware-archive/vsphere-storage-for-kubernetes/blob/fa4c8b8ad46a85b6555d715dd9d27ff69839df53/documentation/policy-based-mgmt.md)
|
||||
para obtener más detalles sobre cómo utilizar las políticas de almacenamiento para la gestión de volúmenes persistentes.
|
||||
|
||||
Hay algunos
|
||||
[ejemplos de vSphere](https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere)
|
||||
que puede probar para la administración de volúmenes persistentes dentro de Kubernetes para vSphere.
|
||||
|
||||
### Ceph RBD
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: fast
|
||||
provisioner: kubernetes.io/rbd
|
||||
parameters:
|
||||
monitors: 10.16.153.105:6789
|
||||
adminId: kube
|
||||
adminSecretName: ceph-secret
|
||||
adminSecretNamespace: kube-system
|
||||
pool: kube
|
||||
userId: kube
|
||||
userSecretName: ceph-secret-user
|
||||
userSecretNamespace: default
|
||||
fsType: ext4
|
||||
imageFormat: "2"
|
||||
imageFeatures: "layering"
|
||||
```
|
||||
|
||||
- `monitors`: Monitores Ceph, delimitados por comas. Este parámetro es obligatorio.
|
||||
- `adminId`: ID de cliente de Ceph que es capaz de crear imágenes en el grupo.
|
||||
El valor predeterminado es "admin".
|
||||
- `adminSecretName`: Nombre secreto para `adminId`. Este parámetro es obligatorio. El secreto proporcionado debe tener el tipo "kubernetes.io/rbd".
|
||||
- `adminSecretNamespace`: El espacio de nombres para `adminSecretName`. El valor predeterminado es "default".
|
||||
- `pool`: Grupo Ceph RBD. El valor predeterminado es "rbd".
|
||||
- `userId`: ID de cliente de Ceph que se utiliza para asignar la imagen RBD. El valor predeterminado es el mismo que `adminId`.
|
||||
- `userSecretName`: El nombre del Secret de Ceph para `userId` para mapear la imagen RBD.
|
||||
Debe existir en el mismo espacio de nombres que los PVC. Este parámetro es obligatorio.
|
||||
El secreto proporcionado debe tener el tipo "kubernetes.io/rbd", por ejemplo creado de esta manera:
|
||||
|
||||
```shell
|
||||
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
|
||||
--from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \
|
||||
--namespace=kube-system
|
||||
```
|
||||
|
||||
- `userSecretNamespace`: El espacio de nombres para `userSecretName`.
|
||||
- `fsType`: fsType que es compatible con Kubernetes. Por defecto: `"ext4"`.
|
||||
- `imageFormat`: Ceph RBD formato de imagen, "1" o "2". El valor predeterminado es "2".
|
||||
- `imageFeatures`: Este parámetro es opcional y solo debe usarse si
|
||||
establece `imageFormat` a "2". Las características admitidas actualmente son `layering` solamente.
|
||||
El valor predeterminado es "" y no hay funciones activadas.
|
||||
|
||||
### Azure Disk
|
||||
|
||||
#### Clase de almacenamiento Azure Unmanaged Disk {#azure-unmanaged-disk-storage-class}
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/azure-disk
|
||||
parameters:
|
||||
skuName: Standard_LRS
|
||||
location: eastus
|
||||
storageAccount: azure_storage_account_name
|
||||
```
|
||||
|
||||
- `skuName`: Nivel de SKU de la cuenta de almacenamiento de Azure. El valor predeterminado está vacío.
|
||||
- `location`: Ubicación de la cuenta de almacenamiento de Azure. El valor predeterminado está vacío.
|
||||
- `storageAccount`: Nombre de la cuenta de almacenamiento de Azure. Si se proporciona una cuenta de almacenamiento, debe residir en el mismo grupo de recursos que el clúster y se ignora la `location`. Si no se proporciona una cuenta de almacenamiento, se creará una nueva cuenta de almacenamiento en el mismo grupo de recursos que el clúster.
|
||||
|
||||
#### Clase de almacenamiento Azure Disk (empezando desde v1.7.2) {#azure-disk-storage-class}
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: slow
|
||||
provisioner: kubernetes.io/azure-disk
|
||||
parameters:
|
||||
storageaccounttype: Standard_LRS
|
||||
kind: managed
|
||||
```
|
||||
|
||||
- `storageaccounttype`: Nivel de SKU de la cuenta de almacenamiento de Azure. El valor predeterminado está vacío.
|
||||
- `kind`: Los valores posibles son `shared`, `dedicated` y `managed` (por defecto).
|
||||
Cuando `kind` es `shared`, todos los discos no administrados se crean en algunas cuentas de almacenamiento compartido en el mismo grupo de recursos que el clúster. Cuando `kind` es
|
||||
`dedicated`, se creará una nueva cuenta de almacenamiento dedicada para el nuevo disco no administrado en el mismo grupo de recursos que el clúster. Cuando `kind` es
|
||||
`managed`, todos los discos administrados se crean en el mismo grupo de recursos que el clúster.
|
||||
- `resourceGroup`: Especifique el grupo de recursos en el que se creará el disco de Azure.
|
||||
Debe ser un nombre de grupo de recursos existente. Si no se especifica, el disco se colocará en el mismo grupo de recursos que el clúster de Kubernetes actual.
|
||||
|
||||
* Premium VM puede conectar discos Standard_LRS y Premium_LRS, mientras que Standard VM solo puede conectar discos Standard_LRS.
|
||||
* La VM administrada solo puede adjuntar discos administrados y la VM no administrada solo puede adjuntar discos no administrados.
|
||||
|
||||
### Azure File
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
name: azurefile
|
||||
provisioner: kubernetes.io/azure-file
|
||||
parameters:
|
||||
skuName: Standard_LRS
|
||||
location: eastus
|
||||
storageAccount: azure_storage_account_name
|
||||
```
|
||||
|
||||
- `skuName`: Nivel de SKU de la cuenta de almacenamiento de Azure. El valor predeterminado está vacío.
|
||||
- `location`: Ubicación de la cuenta de almacenamiento de Azure. El valor predeterminado está vacío.
|
||||
- `storageAccount`: Nombre de la cuenta de almacenamiento de Azure. El valor predeterminado está vacío. Si no se proporciona
|
||||
una cuenta de almacenamiento, se buscan todas las cuentas de almacenamiento asociadas con el grupo de recursos para encontrar una que coincida con `skuName` y `location`. Si se proporciona una cuenta de almacenamiento, debe residir en el mismo grupo de recursos que el clúster y se ignoran `skuName` y `location`.
|
||||
- `secretNamespace`: el espacio de nombres del secreto que contiene el nombre y la clave de la cuenta de Azure Storage. El valor predeterminado es el mismo que el Pod.
|
||||
- `secretName`: el nombre del secreto que contiene el nombre y la clave de la cuenta de Azure Storage. El valor predeterminado es `azure-storage-account-<accountName>-secret`
|
||||
- `readOnly`: una bandera que indica si el almacenamiento se montará como de solo lectura. El valor predeterminado es falso, lo que significa un montaje de lectura/escritura. Esta configuración también afectará la configuración `ReadOnly` en VolumeMounts.
|
||||
|
||||
Durante el aprovisionamiento de almacenamiento, se crea un secreto denominado `secretName` para las credenciales de montaje. Si el clúster ha habilitado ambos [RBAC](/docs/reference/access-authn-authz/rbac/) y [Controller Roles](/docs/reference/access-authn-authz/rbac/#controller-roles), agregue el permiso de `create` de recurso `secret` para clusterrole
|
||||
`system:controller:persistent-volume-binder`.
|
||||
|
||||
En un contexto de tenencia múltiple, se recomienda enfáticamente establecer el valor para `secretNamespace` explícitamente; de lo contrario, las credenciales de la cuenta de almacenamiento pueden ser leído por otros usuarios.
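
One way to grant that permission is with a small ClusterRole and ClusterRoleBinding; this is a sketch under the assumption that the volume binder controller runs as the `persistent-volume-binder` service account in `kube-system` (the role and binding names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:azure-cloud-provider   # illustrative name
rules:
  - apiGroups: ['']                   # core API group
    resources: ['secrets']
    verbs: ['get', 'create']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:azure-cloud-provider   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:azure-cloud-provider
subjects:
  - kind: ServiceAccount
    name: persistent-volume-binder    # assumed service account of the PV binder controller
    namespace: kube-system
```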

### Portworx volume

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  snap_interval: "70"
  priority_io: "high"
```

- `fs`: filesystem to lay out: `none/xfs/ext4` (default: `ext4`).
- `block_size`: block size in Kbytes (default: `32`).
- `repl`: number of synchronous replicas to be provided, in the form of a replication factor `1..3` (default: `1`). A string is expected here, that is, `"1"` and not `1`.
- `priority_io`: determines whether the volume will be created from higher-performance or lower-priority storage `high/medium/low` (default: `low`).
- `snap_interval`: clock/time interval in minutes for when to trigger snapshots. Snapshots are incremental based on the difference from the previous snapshot; 0 disables snapshots (default: `0`). A string is expected here, that is, `"70"` and not `70`.
- `aggregation_level`: specifies the number of chunks the volume would be distributed into; 0 indicates a non-aggregated volume (default: `0`). A string is expected here, that is, `"0"` and not `0`.
- `ephemeral`: specifies whether the volume should be cleaned up after unmounting or should be persistent. The `emptyDir` use case can set this value to true, and the `persistent volumes` use case, such as for databases like Cassandra, should set it to false; `true/false` (default: `false`). A string is expected here, that is, `"true"` and not `true`.

### Local

{{< feature-state for_k8s_version="v1.14" state="stable" >}}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

Local volumes do not currently support dynamic provisioning; however, a StorageClass should still be created to delay volume binding until Pod scheduling. This is specified by the `WaitForFirstConsumer` volume binding mode.

Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.

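Because there is no dynamic provisioner, each local PersistentVolume is created by hand. A minimal sketch of such a volume using the class above (the path, capacity, and node name are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv        # illustrative name
spec:
  capacity:
    storage: 100Gi              # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # illustrative path on the node
  nodeAffinity:                 # pins the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node  # illustrative node name
```
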
@ -148,9 +148,9 @@ To see the automatically generated labels for each Pod, run the command

```shell
NAME                                READY     STATUS    RESTARTS   AGE       LABELS
nginx-deployment-75675f5897-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=75675f5897
nginx-deployment-75675f5897-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=75675f5897
```

The created ReplicaSet guarantees that there are three `nginx` Pods running at all times.

@ -1,95 +0,0 @@
---
reviewers:
- raelga
title: Pod Preset
content_type: concept
weight: 50
---

<!-- overview -->
{{< feature-state for_k8s_version="v1.6" state="alpha" >}}

This page provides an overview of PodPresets, which are objects used to inject certain information into Pods at creation time. This information can include secrets, volumes, volume mounts, and environment variables.

<!-- body -->
## Understanding Pod Presets

A PodPreset is an API resource used to inject additional runtime requirements into a Pod at creation time. [Label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) are used to specify the Pods to which a given PodPreset applies.

Using a PodPreset means Pod template authors do not have to explicitly provide all of the information for every Pod. This way, authors of Pod templates that consume a given service do not need to know all the details of that service.

## Enabling a PodPreset in your cluster

In order to use Pod Presets in a cluster, you must ensure the following:

1. The API type `settings.k8s.io/v1alpha1/podpreset` has been enabled. This can be done, for example, by including `settings.k8s.io/v1alpha1=true` as a value of the `--runtime-config` option for the API server. In minikube, add the flag `--extra-config=apiserver.runtime-config=settings.k8s.io/v1alpha1=true` when the cluster is starting up.
2. The `PodPreset` admission controller has been enabled. One way to do this is to include `PodPreset` as a value of the `--enable-admission-plugins` option specified for the API server. In minikube, add the flag

```shell
--extra-config=apiserver.enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset
```

when the cluster is starting up.

## How it works

Kubernetes provides an admission controller (`PodPreset`) that, when enabled, applies Pod Presets to incoming Pod creation requests. When a Pod creation request occurs, the system does the following:

1. Retrieves all of the `PodPresets` available for use.
2. Checks whether the label selectors of any `PodPreset` match the labels of the Pod being created.
3. Attempts to merge the various resources defined by the `PodPreset` into the Pod being created.
4. If an error occurs while trying to merge the resources into the Pod, it raises an event documenting the error, then creates the Pod _without_ any injected resources from the `PodPreset`.
5. Annotates the resulting modified Pod spec to indicate that it has been modified by a `PodPreset`. The annotation has the form `podpreset.admission.kubernetes.io/podpreset-<pod-preset name>: "<resource version>"`.

Each Pod can be matched by zero or more Pod Presets; and each `PodPreset` can be applied to zero or more Pods. When a `PodPreset` is applied to one or more Pods, Kubernetes modifies the Pod spec. For changes to `env`, `envFrom`, and `volumeMounts`, Kubernetes modifies the container spec for all containers in the Pod; for changes to `volumes`, Kubernetes modifies the Pod spec.

{{< note >}}
A Pod Preset is capable of modifying the following fields in a Pod spec when necessary:
- The `.spec.containers` field.
- The `.spec.initContainers` field.
{{< /note >}}

### Disable a Pod Preset for a specific Pod

There may be cases where you wish for a Pod to not be altered by any Pod Preset mutations. In these cases, you can add an annotation to the Pod `.spec` of the following form: `podpreset.admission.kubernetes.io/exclude: "true"`.

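To make the shape of the API concrete, here is a minimal sketch of a PodPreset that targets Pods labeled `role: frontend` and injects an environment variable and a cache volume (all names and values are illustrative):

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database          # illustrative name
spec:
  selector:
    matchLabels:
      role: frontend            # applied to Pods carrying this label
  env:
    - name: DB_PORT
      value: "6379"             # illustrative value
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
```
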
## {{% heading "whatsnext" %}}

See [Injecting data into a Pod using PodPreset](/docs/tasks/inject-data-application/podpreset/)

For more information about the background details, see the [PodPreset design proposal](https://git.k8s.io/design-proposals-archive/service-catalog/pod-preset.md).

@ -136,7 +136,7 @@ baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
|
|||
enabled=1
|
||||
gpgcheck=1
|
||||
repo_gpgcheck=1
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
|
||||
EOF
|
||||
yum install -y kubectl
|
||||
{{< /tab >}}
|
||||
|
|
|
@ -1,3 +1,3 @@
|
|||
---
|
||||
headless: true
|
||||
---
|
||||
---
|
||||
|
|
|
@ -8,3 +8,9 @@ menu:
|
|||
post: >
|
||||
<p>Lisez les dernières nouvelles à propos de Kubernetes et des conteneurs en général. Obtenez les derniers tutoriels techniques.</p>
|
||||
---
|
||||
{{< comment >}}
|
||||
|
||||
Pour savoir comment contribuer sur le blog, voir
|
||||
https://kubernetes.io/docs/contribute/new-content/blogs-case-studies/#write-a-blog-post
|
||||
|
||||
{{< /comment >}}
|
|
@ -67,7 +67,7 @@ Si l'application peut fonctionner dans un conteneur, elle devrait fonctionner co
|
|||
Ces composants peuvent être lancés dans Kubernetes et/ou être accessibles à des applications tournant dans Kubernetes via des mécaniques d'intermédiation tel que Open Service Broker.
|
||||
- N'impose pas de solutions de logging, monitoring, ou alerting.
|
||||
Kubernetes fournit quelques intégrations primaires et des mécanismes de collecte et export de métriques.
|
||||
- Ne fournit ou n'impose un langague/système de configuration (e.g., [jsonnet](https://github.com/google/jsonnet)).
|
||||
- Ne fournit ou n'impose pas un langage/système de configuration (e.g., [jsonnet](https://github.com/google/jsonnet)).
|
||||
Il fournit une API déclarative qui peut être ciblée par n'importe quelle forme de spécifications déclaratives.
|
||||
- Ne fournit ou n'adopte aucune mécanique de configuration des machines, de maintenance, de gestion ou de contrôle de la santé des systèmes.
|
||||
|
||||
|
|
|
@ -470,7 +470,7 @@ Sélectionnez l'un des onglets.
|
|||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
cloud.google.com/load-balancer-type: "Internal"
|
||||
networking.gke.io/load-balancer-type: "Internal"
|
||||
[...]
|
||||
```
|
||||
{{% /tab %}}
|
||||
|
|
|
@ -145,9 +145,9 @@ Avant de commencer, assurez-vous que votre cluster Kubernetes est opérationnel.
|
|||
|
||||
```text
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
```
|
||||
|
||||
Le ReplicaSet créé garantit qu'il y a trois pods `nginx`.
|
||||
|
|
|
@ -5,83 +5,11 @@ weight: 50
|
|||
content_type: concept
|
||||
---
|
||||
|
||||
{{< toc >}}
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Cette section de la documentation de Kubernetes contient des pages qui montrent comment effectuer des tâches individuelles.
|
||||
Une page montre comment effectuer une seule chose, généralement en donnant une courte séquence d'étapes.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Interface web (Dashboard) {#dashboard}
|
||||
|
||||
Déployer et accéder au dashboard web de votre cluster pour vous aider à le gérer et administrer un cluster Kubernetes.
|
||||
|
||||
## Utilisation de la ligne de commande kubectl
|
||||
|
||||
Installez et configurez l’outil en ligne de commande `kubectl` utilisé pour gérer directement les clusters Kubernetes.
|
||||
|
||||
## Configuration des Pods et des Conteneurs
|
||||
|
||||
Effectuer des tâches de configuration courantes pour les pods et les conteneurs.
|
||||
|
||||
## Exécution d'applications
|
||||
|
||||
Effectuez des tâches courantes de gestion des applications, telles que les mises à jour progressives, l'injection de données dans les pods et la mise à l'échelle automatique des pods.
|
||||
|
||||
## Executez des jobs
|
||||
|
||||
Exécuter des jobs en utilisant un traitement parallèle
|
||||
|
||||
## Accéder aux applications dans un cluster
|
||||
|
||||
Configuration du load balancing, du port forwarding, ou mise en place d'un firewall ou la configuration DNS pour accéder aux applications dans un cluster.
|
||||
|
||||
## Monitoring, Logging, and Debugging
|
||||
|
||||
Mettre en place le monitoring et le logging pour diagnostiquer un cluster ou debugguer une application conteneurisée.
|
||||
|
||||
## Accéder à l'API Kubernetes
|
||||
|
||||
Apprenez diverses méthodes pour accéder directement à l'API Kubernetes.
|
||||
|
||||
## Utiliser TLS
|
||||
|
||||
Configurer votre application pour faire confiance à et utiliser le certificat racine de votre Certificate Authority (CA).
|
||||
|
||||
## Administration d'un cluster
|
||||
|
||||
Apprenez les tâches courantes pour administrer un cluster.
|
||||
|
||||
## Administration d'une fédération
|
||||
|
||||
Configurez les composants dans une fédération de cluster.
|
||||
|
||||
## Gestion des applications avec état
|
||||
|
||||
Effectuez des taches communes pour gérer des applications avec état, notamment la mise à l'échelle, la suppression et le debugging des objets StatefulSets.
|
||||
|
||||
## Gestion des démons cluster
|
||||
|
||||
Effectuez des tâches courantes pour gérer un DaemonSet, telles que la mise à jour progressive.
|
||||
|
||||
## Gestion des GPU
|
||||
|
||||
Configurer des GPUs NVIDIA pour les utiliser dans des noeuds dans un cluster.
|
||||
|
||||
## Gestion des HugePages
|
||||
|
||||
Configuration des huge pages comme une ressource planifiable dans un cluster.
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
Si vous souhaitez écrire une page, consultez
|
||||
[Création d'une PullRequest de documentation](/docs/home/contribute/create-pull-request/).
|
||||
|
||||
Cette section de la documentation de Kubernetes contient différentes pages montrant
|
||||
comment effectuer des tâches individuelles. Une page montre comment effectuer qu'une
|
||||
seule chose, généralement en donnant une courte séquence d'étapes.
|
||||
|
||||
Si vous souhaitez écrire une nouvelle page, consultez
|
||||
[Créer une Pull Request pour la documentation](/fr/docs/contribute/new-content/open-a-pr/).
|
||||
|
|
|
@ -110,10 +110,10 @@ kubectl version --client
|
|||
1. Téléchargez la dernière version:
|
||||
|
||||
```
|
||||
curl -LO https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl
|
||||
curl -LO https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl
|
||||
```
|
||||
|
||||
Pour télécharger une version spécifique, remplacez `$(curl -s https://dl.k8s.io/release/stable.txt)` avec la version spécifique.
|
||||
Pour télécharger une version spécifique, remplacez `$(curl -Ls https://dl.k8s.io/release/stable.txt)` avec la version spécifique.
|
||||
|
||||
Par exemple, pour télécharger la version {{< param "fullversion" >}} sur macOS, tapez :
|
||||
|
||||
|
|
|
@ -116,9 +116,9 @@ Dalam contoh ini:
|
|||
6. Untuk melihat label yang dibangkitkan secara otomatis untuk tiap Pod, jalankan `kubectl get pods --show-labels`. Perintah akan menghasilkan keluaran berikut:
|
||||
```shell
|
||||
NAME READY STATUS RESTARTS AGE LABELS
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
|
||||
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=75675f5897
|
||||
```
|
||||
ReplicaSet yang dibuat menjamin bahwa ada tiga Pod `nginx`.
|
||||
|
||||
|
|
|
@ -30,6 +30,8 @@ Googleが週に何十億ものコンテナを実行することを可能とし
|
|||
|
||||
Kubernetesはオープンソースなので、オンプレミスやパブリッククラウド、それらのハイブリッドなどの利点を自由に得ることができ、簡単に移行することができます。
|
||||
|
||||
Kubernetesをダウンロードするには、[ダウンロード](/releases/download/)セクションを訪れてください。
|
||||
|
||||
{{% /blocks/feature %}}
|
||||
|
||||
{{< /blocks/section >}}
|
||||
|
@ -41,12 +43,12 @@ Kubernetesはオープンソースなので、オンプレミスやパブリッ
|
|||
<button id="desktopShowVideoButton" onclick="kub.showVideo()">ビデオを見る</button>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu22" button id="desktopKCButton">2022年5月16日〜20日のKubeCon EUバーチャルに参加する</a>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">2023年4月18日〜21日のKubeCon + CloudNativeCon Europeに参加する</a>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccncna22" button id="desktopKCButton">2022年10月24日-28日のKubeCon NAバーチャルに参加する</a>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">2023年11月6日〜9日のKubeCon + CloudNativeCon North Americaに参加する</a>
|
||||
</div>
|
||||
<div id="videoPlayer">
|
||||
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
|
||||
|
|
|
@ -14,7 +14,7 @@ weight: 150
|
|||
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
|
||||
|
||||
{{< note >}}
|
||||
この機能、特にアルファ版の`topologyKeys`APIは、Kubernetes v1.21以降では非推奨です。Kubernetes v1.21で導入された、[トポロジーを意識したヒント](/ja/docs/concepts/services-networking/topology-aware-hints/)が同様の機能を提供します。
|
||||
この機能、特にアルファ版の`topologyKeys`APIは、Kubernetes v1.21以降では非推奨です。Kubernetes v1.21で導入された、[トポロジーを意識したルーティング](/ja/docs/concepts/services-networking/topology-aware-routing/)が同様の機能を提供します。
|
||||
{{</ note >}}
|
||||
|
||||
*Serviceトポロジー*を利用すると、Serviceのトラフィックをクラスターのノードトポロジーに基づいてルーティングできるようになります。たとえば、あるServiceのトラフィックに対して、できるだけ同じノードや同じアベイラビリティゾーン上にあるエンドポイントを優先してルーティングするように指定できます。
|
||||
|
|
|
@ -466,7 +466,7 @@ Split-HorizonなDNS環境において、ユーザーは2つのServiceを外部
|
|||
metadata:
|
||||
name: my-service
|
||||
annotations:
|
||||
cloud.google.com/load-balancer-type: "Internal"
|
||||
networking.gke.io/load-balancer-type: "Internal"
|
||||
[...]
|
||||
```
|
||||
|
||||
|
|
|
@ -1,7 +1,9 @@
|
|||
---
|
||||
title: トポロジーを意識したヒント
|
||||
title: トポロジーを意識したルーティング
|
||||
content_type: concept
|
||||
weight: 100
|
||||
description: >-
|
||||
*Topology Aware Routing*は、ネットワークトラフィックを発信元のゾーン内に留めておくのに役立つメカニズムを提供します。クラスター内のPod間で同じゾーンのトラフィックを優先することで、信頼性、パフォーマンス(ネットワークの待ち時間やスループット)の向上、またはコストの削減に役立ちます。
|
||||
---
|
||||
|
||||
|
||||
|
@ -9,34 +11,46 @@ weight: 100
|
|||
|
||||
{{< feature-state for_k8s_version="v1.23" state="beta" >}}
|
||||
|
||||
*Topology Aware Hint*は、クライアントがendpointをどのように使用するかについての提案を含めることにより、トポロジーを考慮したルーティングを可能にします。このアプローチでは、EndpointSliceおよび/またはEndpointオブジェクトの消費者が、これらのネットワークエンドポイントへのトラフィックを、それが発生した場所の近くにルーティングできるように、メタデータを追加します。
|
||||
{{< note >}}
|
||||
Kubernetes 1.27より前には、この機能は、*Topology Aware Hint*として知られていました。
|
||||
{{</ note >}}
|
||||
|
||||
たとえば、局所的にトラフィックをルーティングすることで、コストを削減したり、ネットワークパフォーマンスを向上させたりできます。
|
||||
*Topology Aware Routing*は、トラフィックを発信元のゾーンに維持するようにルーティング動作を調整します。場合によっては、コストを削減したり、ネットワークパフォーマンスを向上させたりすることができます。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## 動機
|
||||
|
||||
Kubernetesクラスターは、マルチゾーン環境で展開されることが多くなっています。
|
||||
*Topology Aware Hint*は、トラフィックを発信元のゾーン内に留めておくのに役立つメカニズムを提供します。このコンセプトは、一般に「Topology Aware Routing」と呼ばれています。EndpointSliceコントローラーは{{< glossary_tooltip term_id="Service" >}}のendpointを計算する際に、各endpointのトポロジー(リージョンとゾーン)を考慮し、ゾーンに割り当てるためのヒントフィールドに値を入力します。
|
||||
EndpointSliceコントローラーは、各endpointのトポロジー(リージョンとゾーン)を考慮し、ゾーンに割り当てるためのヒントフィールドに入力します。
|
||||
{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}のようなクラスターコンポーネントは、次にこれらのヒントを消費し、それらを使用してトラフィックがルーティングされる方法に影響を与えることが可能です(トポロジー的に近いendpointを優先します)。
|
||||
*Topology Aware Routing*は、トラフィックを発信元のゾーン内に留めておくのに役立つメカニズムを提供します。EndpointSliceコントローラーは{{< glossary_tooltip term_id="Service" >}}のendpointを計算する際に、各endpointのトポロジー(リージョンとゾーン)を考慮し、ゾーンに割り当てるためのヒントフィールドに値を入力します。{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}のようなクラスターコンポーネントは、次にこれらのヒントを消費し、それらを使用してトラフィックがルーティングされる方法に影響を与えることが可能です(トポロジー的に近いendpointを優先します)。
|
||||
|
||||
## Topology Aware Routingを有効にする
|
||||
|
||||
## Topology Aware Hintを使う
|
||||
{{< note >}}
|
||||
Kubernetes 1.27より前には、この動作は`service.kubernetes.io/topology-aware-hints`アノテーションを使用して制御されていました。
|
||||
{{</ note >}}
|
||||
|
||||
`service.kubernetes.io/topology-aware-hints`アノテーションを`auto`に設定すると、サービスに対してTopology Aware Hintを有効にすることができます。これはEndpointSliceコントローラーが安全と判断した場合に、トポロジーヒントを設定するように指示します。
|
||||
重要なのは、これはヒントが常に設定されることを保証するものではないことです。
|
||||
`service.kubernetes.io/topology-mode`アノテーションを`auto`に設定すると、サービスに対してTopology Aware Routingを有効にすることができます。各ゾーンに十分なendpointがある場合、個々のendpointを特定のゾーンに割り当てるために、トポロジーヒントがEndpointSliceに入力され、その結果、トラフィックは発信元の近くにルーティングされます。
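
As a sketch of what that looks like on a Service (the Service name, selector, and ports are illustrative; the annotation key and the `auto` value are as described above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                        # illustrative name
  annotations:
    service.kubernetes.io/topology-mode: auto
spec:
  selector:
    app.kubernetes.io/name: my-app        # illustrative selector
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```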
|
||||
|
||||
## 使い方 {#implementation}
|
||||
## 最も効果的なとき
|
||||
|
||||
この機能を有効にする機能は、EndpointSliceコントローラーとkube-proxyの2つのコンポーネントに分かれています。このセクションでは、各コンポーネントがこの機能をどのように実装しているか、高レベルの概要を説明します。
|
||||
この機能は、次の場合に最も効果的に動作します。
|
||||
|
||||
### 1. 受信トラフィックが均等に分散されている
|
||||
|
||||
トラフィックの大部分が単一のゾーンから発信されている場合、トラフィックはそのゾーンに割り当てられたendpointのサブセットに過負荷を与える可能性があります。受信トラフィックが単一のゾーンから発信されることが予想される場合、この機能は推奨されません。
|
||||
|
||||
### 2. 1つのゾーンに3つ以上のendpointを持つサービス {#three-or-more-endpoints-per-zone}
|
||||
|
||||
3つのゾーンからなるクラスターでは、これは9つ以上のendpointがあることを意味します。ゾーン毎のendpointが3つ未満の場合、EndpointSliceコントローラーはendpointを均等に割り当てることができず、代わりにデフォルトのクラスター全体のルーティングアプローチに戻る可能性が高く(約50%)なります。
|
||||
|
||||
## 使い方 {#how-it-works}
|
||||
|
||||
「Auto」ヒューリスティックは、各ゾーンに多数のendpointを比例的に割り当てようとします。このヒューリスティックは、非常に多くのendpointを持つサービスに最適です。
|
||||
|
||||
### EndpointSliceコントローラー {#implementation-control-plane}
|
||||
|
||||
この機能が有効な場合、EndpointSliceコントローラーはEndpointSliceにヒントを設定する役割を担います。
|
||||
コントローラーは、各ゾーンに比例した量のendpointを割り当てます。
|
||||
この割合は、そのゾーンで実行されているノードの[割り当て可能な](/ja/docs/task/administer-cluster/reserve-compute-resources/#node-allocatable)CPUコアを基に決定されます。
|
||||
このヒューリスティックが有効な場合、EndpointSliceコントローラーはEndpointSliceにヒントを設定する役割を担います。コントローラーは、各ゾーンに比例した量のendpointを割り当てます。この割合は、そのゾーンで実行されているノードの[割り当て可能な](/ja/docs/task/administer-cluster/reserve-compute-resources/#node-allocatable)CPUコアを基に決定されます。
|
||||
|
||||
たとえば、あるゾーンに2つのCPUコアがあり、別のゾーンに1つのCPUコアしかない場合、コントローラーは2つのCPUコアを持つゾーンに2倍のendpointを割り当てます。
|
||||
|
||||
|
@ -92,10 +106,17 @@ kube-proxyは、EndpointSliceコントローラーによって設定されたヒ
|
|||
|
||||
* EndpointSliceコントローラーは、各ゾーンの比率を計算するときに、準備ができていないノードを無視します。ノードの大部分の準備ができていない場合、これは意図しない結果をもたらす可能性があります。
|
||||
|
||||
* EndpointSliceコントローラーは、`node-role.kubernetes.io/control-plane`または`node-role.kubernetes.io/master`ラベルが設定されたノードを無視します。それらのノードでワークロードが実行されている場合、これは問題になる可能性があります。
|
||||
|
||||
* EndpointSliceコントローラーは、各ゾーンの比率を計算するデプロイ時に{{< glossary_tooltip text="toleration" term_id="toleration" >}}を考慮しません。サービスをバックアップするPodがクラスター内のノードのサブセットに制限されている場合、これは考慮されません。
|
||||
|
||||
* これはオートスケーリングと相性が悪いかもしれません。例えば、多くのトラフィックが1つのゾーンから発信されている場合、そのゾーンに割り当てられたendpointのみがそのトラフィックを処理することになります。その結果、{{< glossary_tooltip text="Horizontal Pod Autoscaler" term_id="horizontal-pod-autoscaler" >}}がこのイベントを拾えなくなったり、新しく追加されたPodが別のゾーンで開始されたりする可能性があります。
|
||||
|
||||
## カスタムヒューリスティック {#custom-heuristics}
|
||||
|
||||
Kubernetesは様々な方法でデプロイされ、endpointをゾーンに割り当てるための単独のヒューリスティックは、すべてのユースケースに通用するわけではありません。
|
||||
この機能の主な目的は、内蔵のヒューリスティックがユースケースに合わない場合に、カスタムヒューリスティックを開発できるようにすることです。カスタムヒューリスティックを有効にするための最初のステップは、1.27リリースに含まれています。これは限定的な実装であり、関連する妥当と思われる状況をまだカバーしていない可能性があります。
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* [サービスとアプリケーションの接続](/ja/docs/concepts/services-networking/connect-applications-service/)を読む。
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: 利用可能なドキュメントバージョン
|
||||
content_type: custom
|
||||
layout: supported-versions
|
||||
layout: supported-versions
|
||||
card:
|
||||
name: about
|
||||
weight: 10
|
||||
|
@ -9,3 +9,6 @@ card:
|
|||
---
|
||||
|
||||
本ウェブサイトには、現行版とその直前4バージョンのKubernetesドキュメントがあります。
|
||||
|
||||
Kubernetesバージョンのドキュメントの入手性は、そのリリースが現在サポートされているかどうかで分かれます。
|
||||
どのKubernetesバージョンが公式にどのくらいの期間サポートされるかについて知るには、[サポート期間](/releases/patch-releases/#support-period)を参照してください。
|
||||
|
|
|
@ -4,22 +4,26 @@ linkTitle: "リファレンス"
|
|||
main_menu: true
|
||||
weight: 70
|
||||
content_type: concept
|
||||
no_list: true
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
本セクションには、Kubernetesのドキュメントのリファレンスが含まれています。
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## APIリファレンス
|
||||
|
||||
* [Kubernetes API概要](/docs/reference/using-api/) - Kubernetes APIの概要です。
|
||||
* [Kubernetes APIリファレンス {{< latest-version >}}](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/)
|
||||
* [標準化用語集](/ja/docs/reference/glossary) - Kubernetesの用語の包括的で標準化されたリストです。
|
||||
|
||||
## APIクライアントライブラリー
|
||||
* [Kubernetes APIリファレンス](/docs/reference/using-api/)
|
||||
* [Kubernetes {{< param "version" >}}の単一ページのAPIリファレンス](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
|
||||
* [Kubernetes APIの使用](/ja/docs/reference/using-api/) - KubernetesのAPIの概要です。
|
||||
* [API アクセスコントロール](/docs/reference/access-authn-authz/) - KubernetesがAPIアクセスをどのように制御するかの詳細です。
|
||||
* [よく知られたラベル、アノテーション、テイント](/docs/reference/labels-annotations-taints/)
|
||||
|
||||
## 公式にサポートされているクライアントライブラリー
|
||||
|
||||
プログラミング言語からKubernetesのAPIを呼ぶためには、[クライアントライブラリー](/docs/reference/using-api/client-libraries/)を使うことができます。公式にサポートしているクライアントライブラリー:
|
||||
|
||||
|
@ -27,11 +31,13 @@ content_type: concept
|
|||
- [Kubernetes Python client library](https://github.com/kubernetes-client/python)
|
||||
- [Kubernetes Java client library](https://github.com/kubernetes-client/java)
|
||||
- [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript)
|
||||
- [Kubernetes C# client library](https://github.com/kubernetes-client/csharp)
|
||||
- [Kubernetes Haskell client library](https://github.com/kubernetes-client/haskell)
|
||||
|
||||
## CLIリファレンス
|
||||
|
||||
* [kubectl](/ja/docs/reference/kubectl/) - コマンドの実行やKubernetesクラスターの管理に使う主要なCLIツールです。
|
||||
* [JSONPath](/ja/docs/reference/kubectl/jsonpath/) - kubectlで[JSONPath記法](https://goessner.net/articles/JsonPath/)を使うための構文ガイドです。
|
||||
* [JSONPath](/ja/docs/reference/kubectl/jsonpath/) - kubectlで[JSONPath記法](https://goessner.net/articles/JsonPath/)を使うための構文ガイドです。
|
||||
* [kubeadm](/ja/docs/reference/setup-tools/kubeadm/) - セキュアなKubernetesクラスターを簡単にプロビジョニングするためのCLIツールです。
|
||||
|
||||
## コンポーネントリファレンス
|
||||
|
@ -41,11 +47,43 @@ content_type: concept
|
|||
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) - Kubernetesに同梱された、コアのコントロールループを埋め込むデーモンです。
|
||||
* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) - 単純なTCP/UDPストリームのフォワーディングや、一連のバックエンド間でTCP/UDPのラウンドロビンでのフォワーディングを実行できます。
|
||||
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/) - 可用性、パフォーマンス、およびキャパシティを管理するスケジューラーです。
|
||||
* [kube-schedulerポリシー](/docs/reference/scheduling/policies)
|
||||
* [kube-schedulerプロファイル](/docs/reference/scheduling/profiles)
|
||||
* [kube-schedulerポリシー](/ja/docs/reference/scheduling/policies)
|
||||
* [Schedulerプロファイル](/ja/docs/reference/scheduling/config#プロファイル)
|
||||
|
||||
* コントロールプレーンとワーカーノードで開いておくべき[ポートとプロトコル](/ja/docs/reference/networking/ports-and-protocols/)の一覧
|
||||
|
||||
## 設定APIリファレンス
|
||||
|
||||
このセクションでは、Kubernetesのコンポーネントやツールを設定するのに使われている「未公開」のAPIのドキュメントをまとめています。
|
||||
クラスターを使ったり管理したりするユーザーやオペレーターにとって必要不可欠ではありますが、これらのAPIの大半はRESTful方式のAPIサーバーでは提供されません。
|
||||
|
||||
* [kubeconfig (v1)](/docs/reference/config-api/kubeconfig.v1/)
|
||||
* [kube-apiserver admission (v1)](/docs/reference/config-api/apiserver-admission.v1/)
|
||||
* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/)および
|
||||
* [kube-apiserver configuration (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/)および
|
||||
[kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/)
|
||||
* [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/)
|
||||
* [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/)
|
||||
* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/)および
|
||||
[kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/)
|
||||
[kubelet configuration (v1)](/docs/reference/config-api/kubelet-config.v1/)
|
||||
* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/)、
|
||||
[kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/)および
|
||||
[kubelet credential providers (v1)](/docs/reference/config-api/kubelet-credentialprovider.v1/)
|
||||
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/)、
|
||||
[kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)および
|
||||
[kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/)
|
||||
* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/)
|
||||
* [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/)
|
||||
* [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/)
|
||||
* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)および
|
||||
[Client authentication API (v1)](/docs/reference/config-api/client-authentication.v1/)
|
||||
* [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/)
|
||||
* [ImagePolicy API (v1alpha1)](/docs/reference/config-api/imagepolicy.v1alpha1/)
|
||||
## kubeadmの設定APIリファレンス
|
||||
|
||||
* [v1beta3](/docs/reference/config-api/kubeadm-config.v1beta3/)
|
||||
|
||||
## 設計のドキュメント
|
||||
|
||||
Kubernetesの機能に関する設計ドキュメントのアーカイブです。[Kubernetesアーキテクチャ](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) と[Kubernetesデザイン概要](https://git.k8s.io/community/contributors/design-proposals)から読み始めると良いでしょう。
|
||||
|
||||
|
||||
Kubernetesの機能に関する設計ドキュメントのアーカイブです。[Kubernetesアーキテクチャ](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) と[Kubernetesデザイン概要](https://git.k8s.io/design-proposals-archive)から読み始めると良いでしょう。
|
||||
|
|
|
@ -108,7 +108,8 @@ content_type: concept
|
|||
| `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 |
|
||||
| `KubeletPodResourcesGetAllocatable` | `true` | Beta | 1.23 | |
|
||||
| `KubeletTracing` | `false` | Alpha | 1.25 | |
|
||||
| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.25 | |
|
||||
| `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.26 | 1.26 |
|
||||
| `LegacyServiceAccountTokenTracking` | `true` | Beta | 1.27 | |
|
||||
| `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | - |
|
||||
| `LogarithmicScaleDown` | `false` | Alpha | 1.21 | 1.21 |
|
||||
| `LogarithmicScaleDown` | `true` | Beta | 1.22 | |
|
||||
|
|
|
@ -132,7 +132,7 @@ kubeadmは`kubelet`や`kubectl`をインストールまたは管理**しない**
|
|||
2. Google Cloudの公開鍵をダウンロードします:
|
||||
|
||||
```shell
|
||||
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
|
||||
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
|
||||
```
|
||||
|
||||
3. Kubernetesの`apt`リポジトリを追加します:
|
||||
|
|
|
@ -54,7 +54,7 @@ Kubesprayは環境のプロビジョニングを支援するために次のユ
|
|||
* 下記のクラウドプロバイダー用の[Terraform](https://www.terraform.io/)スクリプト:
|
||||
* [AWS](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/aws)
|
||||
* [OpenStack](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/openstack)
|
||||
* [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/metal)
|
||||
* [Equinix Metal](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform/equinix)
|
||||
|
||||
|
||||
### (2/5) インベントリーファイルの用意
|
||||
|
|
|
@ -0,0 +1,13 @@
|
|||
---
|
||||
title: ターンキークラウドソリューション
|
||||
content_type: concept
|
||||
weight: 40
|
||||
---
|
||||
<!-- overview -->
|
||||
|
||||
このページは、Kubernetes認定ソリューションプロバイダーのリストを提供します。
|
||||
各プロバイダーのページから、本番環境でも利用可能なクラスターのインストール方法やセットアップ方法を学ぶことができます。
|
||||
|
||||
<!-- body -->
|
||||
|
||||
{{< cncf-landscape helpers=true category="certified-kubernetes-hosted" >}}
|
|
@ -201,8 +201,8 @@ web-0
|
|||
web-1
|
||||
```
|
||||
その後、次のコマンドを実行します。
|
||||
```
|
||||
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm /bin/sh
|
||||
```shell
|
||||
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
|
||||
```
|
||||
これにより、新しいシェルが起動します。新しいシェルで、次のコマンドを実行します。
|
||||
```shell
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
title: Community
|
||||
title: 커뮤니티
|
||||
layout: basic
|
||||
cid: community
|
||||
community_styles_migrated: true
|
||||
|
|
|
@ -9,7 +9,7 @@ community_styles_migrated: true
|
|||
<p>
|
||||
쿠버네티스는
|
||||
<a href="https://github.com/cncf/foundation/blob/main/code-of-conduct.md">CNCF의 행동 강령</a>을 따르고 있습니다.
|
||||
<a href="https://github.com/cncf/foundation/blob/71b12a2f8b4589788ef2d69b351a3d035c68d927/code-of-conduct.md">커밋 71b12a2</a>
|
||||
<a href="https://github.com/cncf/foundation/blob/fff715fb000ba4d7422684eca1d50d80676be254/code-of-conduct.md">커밋 fff715fb0</a>
|
||||
에 따라 CNCF 행동 강령의 내용이 아래에 복제됩니다.
|
||||
만약 최신 버전이 아닌 경우에는
|
||||
<a href="https://github.com/kubernetes/website/issues/new">이슈를 제기해 주세요</a>.
|
||||
|
|
|
@ -107,7 +107,7 @@ IP 주소, 네트워크 패킷 필터링 그리고 대상 상태 확인과 같
|
|||
|
||||
### 서비스 컨트롤러 {#authorization-service-controller}
|
||||
|
||||
서비스 컨트롤러는 서비스 오브젝트 생성, 업데이트 그리고 삭제 이벤트를 수신한 다음 해당 서비스에 대한 엔드포인트를 적절하게 구성한다.
|
||||
서비스 컨트롤러는 서비스 오브젝트 생성, 업데이트 그리고 삭제 이벤트를 수신한 다음 해당 서비스에 대한 엔드포인트를 적절하게 구성한다(엔드포인트슬라이스(EndpointSlice)의 경우, kube-controller-manager가 필요에 따라 이들을 관리한다).
|
||||
|
||||
서비스에 접근하려면, 목록과 감시 접근 권한이 필요하다. 서비스를 업데이트하려면, 패치와 업데이트 접근 권한이 필요하다.
|
||||
|
||||
|
|
|
@ -13,7 +13,7 @@ weight: 30
|
|||
|
||||
사용자는 온도를 설정해서, 사용자가 *의도한 상태* 를
|
||||
온도 조절기에 알려준다.
|
||||
*현재 상태* 이다. 온도 조절기는 장비를 켜거나 꺼서
|
||||
실제 실내 온도는 *현재 상태* 이다. 온도 조절기는 장비를 켜거나 꺼서
|
||||
현재 상태를 의도한 상태에 가깝게 만든다.
|
||||
|
||||
{{< glossary_definition term_id="controller" length="short">}}
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: 컨테이너 런타임 인터페이스(CRI)
|
||||
content_type: concept
|
||||
weight: 50
|
||||
weight: 60
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
title: 가비지(Garbage) 수집
|
||||
content_type: concept
|
||||
weight: 50
|
||||
weight: 70
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
|
|
@ -0,0 +1,80 @@
|
|||
---
|
||||
title: 리스(Lease)
|
||||
content_type: concept
|
||||
weight: 30
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
분산 시스템에는 종종 공유 리소스를 잠그고 노드 간의 활동을 조정하는 메커니즘을 제공하는 "리스(Lease)"가 필요하다.
|
||||
쿠버네티스에서 "리스" 개념은 `coordination.k8s.io` API 그룹에 있는 `Lease` 오브젝트로 표현되며,
|
||||
노드 하트비트 및 컴포넌트 수준의 리더 선출과 같은 시스템 핵심 기능에서 사용된다.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## 노드 하트비트
|
||||
|
||||
쿠버네티스는 리스 API를 사용하여 kubelet 노드의 하트비트를 쿠버네티스 API 서버에 전달한다.
|
||||
모든 `노드`에는 같은 이름을 가진 `Lease` 오브젝트가 `kube-node-lease` 네임스페이스에 존재한다.
|
||||
내부적으로, 모든 kubelet 하트비트는 이 `Lease` 오브젝트에 대한 업데이트 요청이며,
|
||||
이 업데이트 요청은 `spec.renewTime` 필드를 업데이트한다.
|
||||
쿠버네티스 컨트롤 플레인은 이 필드의 타임스탬프를 사용하여 해당 `노드`의 가용성을 확인한다.
|
||||
|
||||
자세한 내용은 [노드 리스 오브젝트](/ko/docs/concepts/architecture/nodes/#heartbeats)를 참조한다.
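
A quick way to see these node heartbeat objects is to list the Leases in that namespace (the names in the output depend on your cluster's nodes; each Lease is named after its Node):

```shell
kubectl -n kube-node-lease get lease
```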
|
||||
|
||||
## 리더 선출
|
||||
|
||||
리스는 쿠버네티스에서도 특정 시간 동안 컴포넌트의 인스턴스 하나만 실행되도록 보장하는 데에도 사용된다.
|
||||
이는 구성 요소의 한 인스턴스만 활성 상태로 실행되고 다른 인스턴스는 대기 상태여야 하는
|
||||
`kube-controller-manager` 및 `kube-scheduler`와 같은 컨트롤 플레인 컴포넌트의
|
||||
고가용성 설정에서 사용된다.
|
||||
|
||||
## API 서버 신원
|
||||
|
||||
{{< feature-state for_k8s_version="v1.26" state="beta" >}}
|
||||
|
||||
쿠버네티스 v1.26부터, 각 `kube-apiserver`는 리스 API를 사용하여 시스템의 나머지 부분에 자신의 신원을 게시한다.
|
||||
그 자체로는 특별히 유용하지는 않지만, 이것은 클라이언트가 쿠버네티스 컨트롤 플레인을 운영 중인 `kube-apiserver` 인스턴스 수를
|
||||
파악할 수 있는 메커니즘을 제공한다.
|
||||
kube-apiserver 리스의 존재는 향후 각 kube-apiserver 간의 조정이 필요할 때
|
||||
기능을 제공해 줄 수 있다.
|
||||
|
||||
각 kube-apiserver가 소유한 리스는 `kube-system` 네임스페이스에서`kube-apiserver-<sha256-hash>`라는 이름의
|
||||
리스 오브젝트를 확인하여 볼 수 있다. 또는 `k8s.io/component=kube-apiserver` 레이블 설렉터를 사용하여 볼 수도 있다.
|
||||
|
||||
```shell
|
||||
$ kubectl -n kube-system get lease -l k8s.io/component=kube-apiserver
|
||||
NAME HOLDER AGE
|
||||
kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4 5m33s
|
||||
kube-apiserver-dz2dqprdpsgnm756t5rnov7yka kube-apiserver-dz2dqprdpsgnm756t5rnov7yka_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s
|
||||
kube-apiserver-fyloo45sdenffw2ugwaz3likua kube-apiserver-fyloo45sdenffw2ugwaz3likua_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s
|
||||
```
|
||||
|
||||
리스 이름에 사용된 SHA256 해시는 kube-apiserver가 보는 OS 호스트 이름을 기반으로 한다.
|
||||
각 kube-apiserver는 클러스터 내에서 고유한 호스트 이름을 사용하도록 구성해야 한다.
|
||||
동일한 호스트명을 사용하는 새로운 kube-apiserver 인스턴스는 새 리스 오브젝트를 인스턴스화하는 대신 새로운 소유자 ID를 사용하여 기존 리스를 차지할 수 있다.
|
||||
kube-apiserver가 사용하는 호스트네임은 `kubernetes.io/hostname` 레이블의 값을 확인하여 확인할 수 있다.
|
||||
|
||||
```shell
|
||||
$ kubectl -n kube-system get lease kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a -o yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: coordination.k8s.io/v1
|
||||
kind: Lease
|
||||
metadata:
|
||||
creationTimestamp: "2022-11-30T15:37:15Z"
|
||||
labels:
|
||||
k8s.io/component: kube-apiserver
|
||||
kubernetes.io/hostname: kind-control-plane
|
||||
name: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a
|
||||
namespace: kube-system
|
||||
resourceVersion: "18171"
|
||||
uid: d6c68901-4ec5-4385-b1ef-2d783738da6c
|
||||
spec:
|
||||
holderIdentity: kube-apiserver-c4vwjftbvpc5os2vvzle4qg27a_9cbf54e5-1136-44bd-8f9a-1dcd15c346b4
|
||||
leaseDurationSeconds: 3600
|
||||
renewTime: "2022-11-30T18:04:27.912073Z"
|
||||
```
|
||||
|
||||
더 이상 존재하지 않는 kube-apiserver의 만료된 임대는 1시간 후에 새로운 kube-apiserver에 의해 가비지 컬렉션된다.
|
|
@ -456,7 +456,7 @@ Message: Pod was terminated in response to imminent node shutdown.
|
|||
|
||||
## 논 그레이스풀 노드 셧다운 {#non-graceful-node-shutdown}
|
||||
|
||||
{{< feature-state state="alpha" for_k8s_version="v1.24" >}}
|
||||
{{< feature-state state="beta" for_k8s_version="v1.26" >}}
|
||||
|
||||
전달한 명령이 kubelet에서 사용하는 금지 잠금 메커니즘(inhibitor locks mechanism)을 트리거하지 않거나,
|
||||
또는 사용자 오류(예: ShutdownGracePeriod 및 ShutdownGracePeriodCriticalPods가 제대로 설정되지 않음)로 인해
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
title: 애드온 설치
|
||||
content_type: concept
|
||||
weight: 120
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
|
|
@ -9,24 +9,35 @@ weight: 60
|
|||
|
||||
<!-- overview -->
|
||||
|
||||
애플리케이션 로그는 애플리케이션 내부에서 발생하는 상황을 이해하는 데 도움이 된다. 로그는 문제를 디버깅하고 클러스터 활동을 모니터링하는 데 특히 유용하다. 대부분의 최신 애플리케이션에는 일종의 로깅 메커니즘이 있다. 마찬가지로, 컨테이너 엔진들도 로깅을 지원하도록 설계되었다. 컨테이너화된 애플리케이션에 가장 쉽고 가장 널리 사용되는 로깅 방법은 표준 출력과 표준 에러 스트림에 작성하는 것이다.
|
||||
애플리케이션 로그는 애플리케이션 내부에서 발생하는 상황을 이해하는 데 도움이 된다.
|
||||
로그는 문제를 디버깅하고 클러스터 활동을 모니터링하는 데 특히 유용하다.
|
||||
대부분의 최신 애플리케이션에는 일종의 로깅 메커니즘이 있다.
|
||||
마찬가지로, 컨테이너 엔진들도 로깅을 지원하도록 설계되었다.
|
||||
컨테이너화된 애플리케이션에 가장 쉽고 가장 널리 사용되는 로깅 방법은 표준 출력과 표준 에러 스트림에 작성하는 것이다.
|
||||
|
||||
그러나, 일반적으로 컨테이너 엔진이나 런타임에서 제공하는 기본 기능은 완전한 로깅 솔루션으로 충분하지 않다.
|
||||
그러나, 일반적으로 컨테이너 엔진이나 런타임에서 제공하는 기본 기능은
|
||||
완전한 로깅 솔루션으로 충분하지 않다.
|
||||
|
||||
예를 들어, 컨테이너가 크래시되거나, 파드가 축출되거나, 노드가 종료된 경우에도 애플리케이션의 로그에 접근하고 싶을 것이다.
|
||||
예를 들어, 컨테이너가 크래시되거나, 파드가 축출되거나, 노드가 종료된 경우에
|
||||
애플리케이션의 로그에 접근하고 싶을 것이다.
|
||||
|
||||
클러스터에서 로그는 노드, 파드 또는 컨테이너와는 독립적으로 별도의 스토리지와 라이프사이클을 가져야 한다. 이 개념을 _클러스터-레벨-로깅_ 이라고 한다.
|
||||
클러스터에서 로그는 노드, 파드 또는 컨테이너와는 독립적으로
|
||||
별도의 스토리지와 라이프사이클을 가져야 한다.
|
||||
이 개념을 [클러스터-레벨 로깅](#cluster-level-logging-architectures)이라고 한다.
|
||||
|
||||
클러스터-레벨 로깅은 로그를 저장, 분석, 쿼리하기 위해서는 별도의 백엔드가 필요하다.
|
||||
쿠버네티스가 로그 데이터를 위한 네이티브 스토리지 솔루션을 제공하지는 않지만,
|
||||
쿠버네티스에 통합될 수 있는 기존의 로깅 솔루션이 많이 있다.
|
||||
아래 내용은 로그를 어떻게 처리하고 관리하는지 설명한다.
|
||||
|
||||
<!-- body -->
|
||||
|
||||
클러스터-레벨 로깅은 로그를 저장, 분석, 쿼리하기 위해서는 별도의 백엔드가 필요하다. 쿠버네티스가
|
||||
로그 데이터를 위한 네이티브 스토리지 솔루션을 제공하지는 않지만,
|
||||
쿠버네티스에 통합될 수 있는 기존의 로깅 솔루션이 많이 있다.
|
||||
## 파드와 컨테이너 로그 {#basic-logging-in-kubernetes}
|
||||
|
||||
## 쿠버네티스의 기본 로깅
|
||||
쿠버네티스는 실행중인 파드의 컨테이너에서 출력하는 로그를 감시한다.
|
||||
|
||||
이 예시는 텍스트를 초당 한 번씩 표준 출력에 쓰는
|
||||
컨테이너에 대한 `Pod` 명세를 사용한다.
|
||||
아래 예시는, 초당 한 번씩 표준 출력에 텍스트를 기록하는
|
||||
컨테이너를 포함하는 `파드` 매니페스트를 사용한다.
|
||||
|
||||
{{< codenew file="debug/counter-pod.yaml" >}}
|
||||
|
||||
|
@ -51,10 +62,9 @@ kubectl logs counter
|
|||
출력은 다음과 같다.
|
||||
|
||||
```console
|
||||
0: Mon Jan 1 00:00:00 UTC 2001
|
||||
1: Mon Jan 1 00:00:01 UTC 2001
|
||||
2: Mon Jan 1 00:00:02 UTC 2001
|
||||
...
|
||||
0: Fri Apr 1 11:42:23 UTC 2022
|
||||
1: Fri Apr 1 11:42:24 UTC 2022
|
||||
2: Fri Apr 1 11:42:25 UTC 2022
|
||||
```
|
||||
|
||||
`kubectl logs --previous` 를 사용해서 컨테이너의 이전 인스턴스에 대한 로그를 검색할 수 있다.
|
||||
|
@ -67,72 +77,129 @@ kubectl logs counter -c count
|
|||
|
||||
자세한 내용은 [`kubectl logs` 문서](/docs/reference/generated/kubectl/kubectl-commands#logs)를 참조한다.
|
||||
|
||||
## 노드 레벨에서의 로깅
|
||||
### 노드가 컨테이너 로그를 처리하는 방법
|
||||
|
||||

|
||||
|
||||
컨테이너화된 애플리케이션의 `stdout(표준 출력)` 및 `stderr(표준 에러)` 스트림에 의해 생성된 모든 출력은 컨테이너 엔진이 처리 및 리디렉션 한다.
|
||||
예를 들어, 도커 컨테이너 엔진은 이 두 스트림을 [로깅 드라이버](https://docs.docker.com/engine/admin/logging/overview)로 리디렉션 한다. 이 드라이버는 쿠버네티스에서 JSON 형식의 파일에 작성하도록 구성된다.
|
||||
컨테이너화된 애플리케이션의 `stdout(표준 출력)` 및 `stderr(표준 에러)` 스트림에 의해 생성된 모든 출력은 컨테이너 런타임이 처리하고 리디렉션 시킨다.
|
||||
다양한 컨테이너 런타임들은 이를 각자 다른 방법으로 구현하였지만,
|
||||
kubelet과의 호환성은 _CRI 로깅 포맷_ 으로 표준화되어 있다.
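
For orientation, a CRI-format log line carries a timestamp, the stream it came from, a tag, and the message itself; the line below is an illustrative example rather than output from any particular cluster:

```console
2016-10-06T00:17:09.669794202Z stdout F This is the actual log line
```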
|
||||
|
||||
{{< note >}}
|
||||
도커 JSON 로깅 드라이버는 각 라인을 별도의 메시지로 취급한다. 도커 로깅 드라이버를 사용하는 경우, 멀티-라인 메시지를 직접 지원하지 않는다. 로깅 에이전트 레벨 이상에서 멀티-라인 메시지를 처리해야 한다.
|
||||
{{< /note >}}
|
||||
기본적으로 컨테이너가 재시작하는 경우, kubelet은 종료된 컨테이너 하나를 로그와 함께 유지한다.
|
||||
파드가 노드에서 축출되면, 해당하는 모든 컨테이너와 로그가 함께 축출된다.
|
||||
|
||||
기본적으로, 컨테이너가 다시 시작되면, kubelet은 종료된 컨테이너 하나를 로그와 함께 유지한다. 파드가 노드에서 축출되면, 해당하는 모든 컨테이너도 로그와 함께 축출된다.
|
||||
kubelet은 쿠버네티스의 특정 API를 통해 사용자들에게 로그를 공개하며,
|
||||
일반적으로 `kubectl logs`를 통해 접근할 수 있다.
|
||||
|
||||
노드-레벨 로깅에서 중요한 고려 사항은 로그 로테이션을 구현하여,
|
||||
로그가 노드에서 사용 가능한 모든 스토리지를 사용하지 않도록 하는 것이다. 쿠버네티스는
|
||||
로그 로테이션에 대한 의무는 없지만, 디플로이먼트 도구로
|
||||
이를 해결하기 위한 솔루션을 설정해야 한다.
|
||||
예를 들어, `kube-up.sh` 스크립트에 의해 배포된 쿠버네티스 클러스터에는,
|
||||
매시간 실행되도록 구성된 [`logrotate`](https://linux.die.net/man/8/logrotate)
|
||||
도구가 있다. 애플리케이션의 로그를 자동으로
|
||||
로테이션하도록 컨테이너 런타임을 설정할 수도 있다.
|
||||
### 로그 로테이션
|
||||
|
||||
예를 들어, `kube-up.sh` 가 GCP의 COS 이미지 로깅을 설정하는 방법은
|
||||
[`configure-helper` 스크립트](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)를 통해
|
||||
자세히 알 수 있다.
|
||||
{{< feature-state for_k8s_version="v1.21" state="stable" >}}
|
||||
|
||||
**CRI 컨테이너 런타임** 을 사용할 때, kubelet은 로그를 로테이션하고 로깅 디렉터리 구조를 관리한다.
|
||||
kubelet은 이 정보를 CRI 컨테이너 런타임에 전송하고 런타임은 컨테이너 로그를 지정된 위치에 기록한다.
|
||||
[kubelet config file](/docs/tasks/administer-cluster/kubelet-config-file/)에 있는
|
||||
두 개의 kubelet 파라미터 [`containerLogMaxSize` 및 `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)를
|
||||
사용하여 각 로그 파일의 최대 크기와 각 컨테이너에 허용되는 최대 파일 수를 각각 구성할 수 있다.
|
||||
kubelet이 로그를 자동으로 로테이트하도록 설정할 수 있다.
|
||||
|
||||
로테이션을 구성해놓으면, kubelet은 컨테이너 로그를 로테이트하고 로깅 경로 구조를 관리한다.
|
||||
kubelet은 이 정보를 컨테이너 런타임에 전송하고(CRI를 사용),
|
||||
런타임은 지정된 위치에 컨테이너 로그를 기록한다.
|
||||
|
||||
[kubelet 설정 파일](/docs/tasks/administer-cluster/kubelet-config-file/)을 사용하여
|
||||
두 개의 kubelet 파라미터
|
||||
[`containerLogMaxSize` 및 `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)를 설정 가능하다.
|
||||
이러한 설정을 통해 각 로그 파일의 최대 크기와 각 컨테이너에 허용되는 최대 파일 수를 각각 구성할 수 있다.
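
A minimal sketch of those two fields in a kubelet configuration file (the values are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi   # illustrative: rotate each container log file at 10 MiB
containerLogMaxFiles: 5     # illustrative: keep at most 5 log files per container
```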
|
||||
|
||||
기본 로깅 예제에서와 같이 [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs)를
|
||||
실행하면, 노드의 kubelet이 요청을 처리하고
|
||||
로그 파일에서 직접 읽는다. kubelet은 로그 파일의 내용을 반환한다.
|
||||
실행하면, 노드의 kubelet이 요청을 처리하고 로그 파일에서 직접 읽는다.
|
||||
kubelet은 로그 파일의 내용을 반환한다.
|
||||
|
||||
|
||||
{{< note >}}
|
||||
만약, 일부 외부 시스템이 로테이션을 수행했거나 CRI 컨테이너 런타임이 사용된 경우,
|
||||
`kubectl logs` 를 통해 최신 로그 파일의 내용만
|
||||
사용할 수 있다. 예를 들어, 10MB 파일이 있으면, `logrotate` 가
|
||||
로테이션을 수행하고 두 개의 파일이 생긴다. (크기가 10MB인 파일 하나와 비어있는 파일)
|
||||
`kubectl logs` 는 이 예시에서는 빈 응답에 해당하는 최신 로그 파일을 반환한다.
|
||||
`kubectl logs`를 통해서는
|
||||
최신 로그만 확인할 수 있다.
|
||||
|
||||
예를 들어, 파드가 40MiB 크기의 로그를 기록했고 kubelet이 10MiB 마다 로그를 로테이트하는 경우
|
||||
`kubectl logs`는 최근의 10MiB 데이터만 반환한다.
|
||||
{{< /note >}}
|
||||
|
||||
### 시스템 컴포넌트 로그
|
||||
## 시스템 컴포넌트 로그
|
||||
|
||||
시스템 컴포넌트에는 컨테이너에서 실행되는 것과 컨테이너에서 실행되지 않는 두 가지 유형이 있다.
|
||||
시스템 컴포넌트에는 두 가지 유형이 있는데, 컨테이너에서 실행되는 것과 실행 중인 컨테이너와 관련된 것이다.
|
||||
예를 들면 다음과 같다.
|
||||
|
||||
* 쿠버네티스 스케줄러와 kube-proxy는 컨테이너에서 실행된다.
|
||||
* Kubelet과 컨테이너 런타임은 컨테이너에서 실행되지 않는다.
|
||||
* kubelet과 컨테이너 런타임은 컨테이너에서 실행되지 않는다.
|
||||
kubelet이 컨테이너({{< glossary_tooltip text="파드" term_id="pod" >}}와 그룹화된)를 실행시킨다.
|
||||
* 쿠버네티스의 스케줄러, 컨트롤러 매니저, API 서버는
|
||||
파드(일반적으로 {{< glossary_tooltip text="스태틱 파드" term_id="static-pod" >}})로 실행된다.
|
||||
etcd는 컨트롤 플레인에서 실행되며, 대부분의 경우 역시 스태틱 파드로써 실행된다.
|
||||
클러스터가 kube-proxy를 사용하는 경우는 `데몬셋(DaemonSet)`으로써 실행된다.
|
||||
|
||||
systemd를 사용하는 시스템에서는, kubelet과 컨테이너 런타임은 journald에 작성한다.
|
||||
systemd를 사용하지 않으면, kubelet과 컨테이너 런타임은 `/var/log` 디렉터리의
|
||||
`.log` 파일에 작성한다. 컨테이너 내부의 시스템 컴포넌트는 기본 로깅 메커니즘을 무시하고,
|
||||
항상 `/var/log` 디렉터리에 기록한다.
|
||||
시스템 컴포넌트는 [klog](https://github.com/kubernetes/klog)
|
||||
로깅 라이브러리를 사용한다. [로깅에 대한 개발 문서](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)에서
|
||||
해당 컴포넌트의 로깅 심각도(severity)에 대한 규칙을 찾을 수 있다.
|
||||
### 로그의 위치 {#log-location-node}
|
||||
|
||||
컨테이너 로그와 마찬가지로, `/var/log` 디렉터리의 시스템 컴포넌트 로그를
|
||||
로테이트해야 한다. `kube-up.sh` 스크립트로 구축한 쿠버네티스 클러스터에서
|
||||
로그는 매일 또는 크기가 100MB를 초과하면
|
||||
`logrotate` 도구에 의해 로테이트가 되도록 구성된다.
|
||||
kubelet과 컨테이너 런타임이 로그를 기록하는 방법은,
|
||||
노드의 운영체제에 따라 다르다.
|
||||
|
||||
## 클러스터 레벨 로깅 아키텍처
|
||||
{{< tabs name="log_location_node_tabs" >}}
|
||||
{{% tab name="리눅스" %}}
|
||||
|
||||
systemd를 사용하는 시스템에서는 kubelet과 컨테이너 런타임은 기본적으로 로그를 journald에 작성한다.
|
||||
`journalctl`을 사용하여 이를 확인할 수 있다.
|
||||
예를 들어 `journalctl -u kubelet`.
|
||||
|
||||
systemd를 사용하지 않는 시스템에서, kubelet과 컨테이너 런타임은 로그를 `/var/log` 디렉터리의 `.log` 파일에 작성한다.
|
||||
다른 경로에 로그를 기록하고 싶은 경우에는, `kube-log-runner`를 통해
|
||||
간접적으로 kubelet을 실행하여
|
||||
kubelet의 로그를 지정한 디렉토리로 리디렉션할 수 있다.
|
||||
|
||||
kubelet을 실행할 때 `--log-dir` 인자를 통해 로그가 저장될 디렉토리를 지정할 수 있다.
|
||||
그러나 해당 인자는 더 이상 지원되지 않으며(deprecated), kubelet은 항상 컨테이너 런타임으로 하여금
|
||||
`/var/log/pods` 아래에 로그를 기록하도록 지시한다.
|
||||
|
||||
`kube-log-runner`에 대한 자세한 정보는 [시스템 로그](/ko/docs/concepts/cluster-administration/system-logs/#klog)를 확인한다.
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab name="윈도우" %}}
|
||||
|
||||
kubelet은 기본적으로 `C:\var\logs` 아래에 로그를 기록한다
|
||||
(`C:\var\log`가 아님에 주의한다).
|
||||
|
||||
`C:\var\log` 경로가 쿠버네티스에 설정된 기본값이지만,
|
||||
몇몇 클러스터 배포 도구들은 윈도우 노드의 로그 경로로 `C:\var\log\kubelet`를 사용하기도 한다.
|
||||
|
||||
다른 경로에 로그를 기록하고 싶은 경우에는, `kube-log-runner`를 통해
|
||||
간접적으로 kubelet을 실행하여
|
||||
kubelet의 로그를 지정한 디렉토리로 리디렉션할 수 있다.
|
||||
|
||||
그러나, kubelet은 항상 컨테이너 런타임으로 하여금
|
||||
`C:\var\log\pods` 아래에 로그를 기록하도록 지시한다.
|
||||
|
||||
`kube-log-runner`에 대한 자세한 정보는 [시스템 로그](/ko/docs/concepts/cluster-administration/system-logs/#klog)를 확인한다.
|
||||
{{% /tab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
<br /><!-- work around rendering nit -->
|
||||
|
||||
파드로 실행되는 쿠버네티스 컴포넌트의 경우,
|
||||
기본 로깅 메커니즘을 따르지 않고 `/var/log` 아래에 로그를 기록한다
|
||||
(즉, 해당 컴포넌트들은 systemd의 journal에 로그를 기록하지 않는다).
|
||||
쿠버네티스의 저장 메커니즘을 사용하여, 컴포넌트를 실행하는 컨테이너에 영구적으로 사용 가능한 저장 공간을 연결할 수 있다.
|
||||
|
||||
etcd와 etcd의 로그를 기록하는 방식에 대한 자세한 정보는 [etcd 공식 문서](https://etcd.io/docs/)를 확인한다.
|
||||
다시 언급하자면, 쿠버네티스의 저장 메커니즘을 사용하여
|
||||
컴포넌트를 실행하는 컨테이너에 영구적으로 사용 가능한 저장 공간을 연결할 수 있다.
|
||||
|
||||
{{< note >}}
|
||||
스케줄러와 같은 쿠버네티스 클러스터의 컴포넌트를 배포하여 상위 노드에서 공유된 볼륨에 로그를 기록하는 경우,
|
||||
해당 로그들이 로테이트되는지 확인하고 관리해야 한다.
|
||||
**쿠버네티스는 로그 로테이션을 관리하지 않는다**.
|
||||
|
||||
몇몇 로그 로테이션은 운영체제가 자동적으로 구현할 수도 있다.
|
||||
예를 들어, 컴포넌트를 실행하는 스태틱 파드에 `/var/log` 디렉토리를 공유하여 로그를 기록하면,
|
||||
노드-레벨 로그 로테이션은 해당 경로의 파일을
|
||||
쿠버네티스 외부의 다른 컴포넌트들이 기록한 파일과 동일하게 취급한다.
|
||||
|
||||
몇몇 배포 도구들은 로그 로테이션을 자동화하지만,
|
||||
나머지 도구들은 이를 사용자의 책임으로 둔다.
|
||||
{{< /note >}}
|
||||
|
||||
## 클러스터-레벨 로깅 아키텍처 {#cluster-level-logging-architectures}
|
||||
|
||||
쿠버네티스는 클러스터-레벨 로깅을 위한 네이티브 솔루션을 제공하지 않지만, 고려해야 할 몇 가지 일반적인 접근 방법을 고려할 수 있다. 여기 몇 가지 옵션이 있다.
|
||||
|
||||
|
@ -165,7 +232,7 @@ systemd를 사용하지 않으면, kubelet과 컨테이너 런타임은 `/var/lo
|
|||

|
||||
|
||||
사이드카 컨테이너가 자체 `stdout` 및 `stderr` 스트림으로
|
||||
쓰도록 하면, 각 노드에서 이미 실행 중인 kubelet과 로깅 에이전트를
|
||||
기록하도록 하면, 각 노드에서 이미 실행 중인 kubelet과 로깅 에이전트를
|
||||
활용할 수 있다. 사이드카 컨테이너는 파일, 소켓 또는 journald에서 로그를 읽는다.
|
||||
각 사이드카 컨테이너는 자체 `stdout` 또는 `stderr` 스트림에 로그를 출력한다.
|
||||
|
||||
|
@ -177,8 +244,8 @@ systemd를 사용하지 않으면, kubelet과 컨테이너 런타임은 `/var/lo
|
|||
빌트인 도구를 사용할 수 있다.
|
||||
|
||||
예를 들어, 파드는 단일 컨테이너를 실행하고, 컨테이너는
|
||||
서로 다른 두 가지 형식을 사용하여 서로 다른 두 개의 로그 파일에 기록한다. 파드에 대한
|
||||
구성 파일은 다음과 같다.
|
||||
서로 다른 두 가지 형식을 사용하여 서로 다른 두 개의 로그 파일에 기록한다.
|
||||
다음은 파드에 대한 매니페스트이다.
|
||||
|
||||
{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
|
||||
|
||||
|
@ -188,7 +255,7 @@ systemd를 사용하지 않으면, kubelet과 컨테이너 런타임은 `/var/lo
|
|||
컨테이너는 공유 볼륨에서 특정 로그 파일을 테일(tail)한 다음 로그를
|
||||
자체 `stdout` 스트림으로 리디렉션할 수 있다.
|
||||
|
||||
다음은 사이드카 컨테이너가 두 개인 파드에 대한 구성 파일이다.
|
||||
다음은 사이드카 컨테이너가 두 개인 파드에 대한 매니페스트이다.
|
||||
|
||||
{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
|
||||
|
||||
|
@ -202,9 +269,9 @@ kubectl logs counter count-log-1
|
|||
출력은 다음과 같다.
|
||||
|
||||
```console
|
||||
0: Mon Jan 1 00:00:00 UTC 2001
|
||||
1: Mon Jan 1 00:00:01 UTC 2001
|
||||
2: Mon Jan 1 00:00:02 UTC 2001
|
||||
0: Fri Apr 1 11:42:26 UTC 2022
|
||||
1: Fri Apr 1 11:42:27 UTC 2022
|
||||
2: Fri Apr 1 11:42:28 UTC 2022
|
||||
...
|
||||
```
|
||||
|
||||
|
@ -215,27 +282,28 @@ kubectl logs counter count-log-2
|
|||
출력은 다음과 같다.
|
||||
|
||||
```console
|
||||
Mon Jan 1 00:00:00 UTC 2001 INFO 0
|
||||
Mon Jan 1 00:00:01 UTC 2001 INFO 1
|
||||
Mon Jan 1 00:00:02 UTC 2001 INFO 2
|
||||
Fri Apr 1 11:42:29 UTC 2022 INFO 0
|
||||
Fri Apr 1 11:42:30 UTC 2022 INFO 0
|
||||
Fri Apr 1 11:42:31 UTC 2022 INFO 0
|
||||
...
|
||||
```
|
||||
|
||||
클러스터에 설치된 노드-레벨 에이전트는 추가 구성없이
|
||||
클러스터에 노드-레벨 에이전트를 설치했다면, 에이전트는 추가적인 설정 없이도
|
||||
자동으로 해당 로그 스트림을 선택한다. 원한다면, 소스 컨테이너에
|
||||
따라 로그 라인을 파싱(parse)하도록 에이전트를 구성할 수 있다.
|
||||
따라 로그 라인을 파싱(parse)하도록 에이전트를 구성할 수도 있다.
|
||||
|
||||
참고로, CPU 및 메모리 사용량이 낮음에도 불구하고(cpu에 대한 몇 밀리코어의
|
||||
요구와 메모리에 대한 몇 메가바이트의 요구), 로그를 파일에 기록한 다음
|
||||
`stdout` 으로 스트리밍하면 디스크 사용량은 두 배가 될 수 있다. 단일 파일에
|
||||
쓰는 애플리케이션이 있는 경우, 일반적으로 스트리밍
|
||||
사이드카 컨테이너 방식을 구현하는 대신 `/dev/stdout` 을 대상으로
|
||||
설정하는 것을 추천한다.
|
||||
CPU 및 메모리 사용량이 낮은(몇 밀리코어 수준의 CPU와 몇 메가바이트 수준의 메모리 요청) 파드라고 할지라도,
|
||||
로그를 파일에 기록한 다음 `stdout` 으로 스트리밍하는 것은
|
||||
노드가 필요로 하는 스토리지 양을 두 배로 늘릴 수 있다.
|
||||
단일 파일에 로그를 기록하는 애플리케이션이 있는 경우,
|
||||
일반적으로 스트리밍 사이드카 컨테이너 방식을 구현하는 대신
|
||||
`/dev/stdout` 을 대상으로 설정하는 것을 추천한다.
|
||||
|
||||
사이드카 컨테이너를 사용하여 애플리케이션 자체에서 로테이션할 수 없는
|
||||
로그 파일을 로테이션할 수도 있다. 이 방법의 예시는 정기적으로 `logrotate` 를 실행하는 작은 컨테이너를 두는 것이다.
|
||||
사이드카 컨테이너를 사용하여
|
||||
애플리케이션 자체에서 로테이션할 수 없는 로그 파일을 로테이션할 수도 있다.
|
||||
이 방법의 예시는 정기적으로 `logrotate` 를 실행하는 작은 컨테이너를 두는 것이다.
|
||||
그러나, `stdout` 및 `stderr` 을 직접 사용하고 로테이션과
|
||||
유지 정책을 kubelet에 두는 것이 권장된다.
|
||||
유지 정책을 kubelet에 두는 것이 더욱 직관적이다.
|
||||
|
||||
#### 로깅 에이전트가 있는 사이드카 컨테이너
|
||||
|
||||
|
@ -252,24 +320,30 @@ Mon Jan 1 00:00:02 UTC 2001 INFO 2
|
|||
접근할 수 없다.
|
||||
{{< /note >}}
|
||||
|
||||
여기에 로깅 에이전트가 포함된 사이드카 컨테이너를 구현하는 데 사용할 수 있는 두 가지 구성 파일이 있다. 첫 번째 파일에는
|
||||
fluentd를 구성하기 위한 [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)이 포함되어 있다.
|
||||
아래는 로깅 에이전트가 포함된 사이드카 컨테이너를 구현하는 데 사용할 수 있는 두 가지 매니페스트이다.
|
||||
첫 번째 매니페스트는 fluentd를 구성하는
|
||||
[`컨피그맵(ConfigMap)`](/docs/tasks/configure-pod-container/configure-pod-configmap/)이 포함되어 있다.
|
||||
|
||||
{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
|
||||
|
||||
{{< note >}}
|
||||
fluentd를 구성하는 것에 대한 자세한 내용은, [fluentd 문서](https://docs.fluentd.org/)를 참고한다.
|
||||
예제 매니페스트에서, 꼭 fluentd가 아니더라도,
|
||||
애플리케이션 컨테이너 내의 모든 소스에서 로그를 읽어올 수 있는 다른 로깅 에이전트를 사용할 수 있다.
|
||||
{{< /note >}}
|
||||
|
||||
두 번째 파일은 fluentd가 실행되는 사이드카 컨테이너가 있는 파드를 설명한다.
|
||||
두 번째 매니페스트는 fluentd가 실행되는 사이드카 컨테이너가 있는 파드를 설명한다.
|
||||
파드는 fluentd가 구성 데이터를 가져올 수 있는 볼륨을 마운트한다.
|
||||
|
||||
{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
|
||||
|
||||
이 예시 구성에서, 사용자는 애플리케이션 컨테이너 내의 모든 소스을 읽는 fluentd를 다른 로깅 에이전트로 대체할 수 있다.
|
||||
|
||||
### 애플리케이션에서 직접 로그 노출
|
||||
|
||||

|
||||
|
||||
애플리케이션에서 직접 로그를 노출하거나 푸시하는 클러스터-로깅은 쿠버네티스의 범위를 벗어난다.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
* [쿠버네티스 시스템 로그](/ko/docs/concepts/cluster-administration/system-logs/) 살펴보기.
|
||||
* [쿠버네티스 시스템 컴포넌트에 대한 추적(trace)](/docs/concepts/cluster-administration/system-traces/) 살펴보기.
|
||||
* 파드가 실패했을 때 쿠버네티스가 어떻게 로그를 남기는지에 대해, [종료 메시지를 사용자가 정의하는 방법](/ko/docs/tasks/debug/debug-application/determine-reason-pod-failure/#종료-메시지-사용자-정의하기) 살펴보기.
|