Merge branch 'dev-1.30' into kep-4292

pull/45223/head
Arda Güçlü 2024-03-13 07:35:46 +03:00 committed by GitHub
commit 8067009f5b
204 changed files with 6413 additions and 2137 deletions

.well-known/security.txt Normal file
View File

@ -0,0 +1,5 @@
Contact: mailto:security@kubernetes.io
Expires: 2031-01-11T06:30:00.000Z
Preferred-Languages: en
Canonical: https://kubernetes.io/.well-known/security.txt
Policy: https://github.com/kubernetes/website/blob/main/SECURITY.md
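Once deployed, the file is served from the site root; a quick way to check the published copy (assuming the production site is reachable):

```bash
# fetch the published security contact details
curl -s https://kubernetes.io/.well-known/security.txt
```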

View File

@ -24,6 +24,7 @@ aliases:
- rlenferink
sig-docs-en-owners: # Admins for English content
- celestehorgan
- dipesh-rawat
- divya-mohan0209
- drewhagen # RT 1.30 Docs Lead
- katcosgrove # RT 1.30 Lead
@ -33,12 +34,12 @@ aliases:
- sftim
- tengqm
sig-docs-en-reviews: # PR reviews for English content
- bradtopol
- celestehorgan
- dipesh-rawat
- divya-mohan0209
- kbhawkey
- mehabhalodiya
- mengjiao-liu
- mickeyboxell
- natalisucks
- nate-double-u
- reylejano

View File

@ -1,71 +1,213 @@
# The Kubernetes documentation
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Welcome! This repository houses all of the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're very pleased that you want to contribute.
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
## Contributing to the docs
- [Contributing to the docs](#contributing-to-the-docs)
- [Localization READMEs](#localization-readmes)
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
## Using this repository
Once your pull request is created, a reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback provided by the reviewer.** Also, note that you may end up having more than one reviewer provide you feedback, or you may end up getting feedback from a reviewer who is different from the one initially assigned. Furthermore, in some cases, one of your reviewers might ask for a technical review from a [Kubernetes tech reviewer](https://github.com/kubernetes/website/wiki/Tech-reviewers) when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
You can run the website locally using [Hugo (Extended version)](https://gohugo.io/), or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
For more information about contributing to the Kubernetes documentation, see:
## Prerequisites
* [Start contributing](https://kubernetes.io/docs/contribute/start/)
* [Staging your documentation changes](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Using page templates](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
To use this repository, you need the following installed locally:
- [npm](https://www.npmjs.com/)
- [Go](https://go.dev/)
- [Hugo (Extended version)](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
## Running the website locally using Docker
The recommended way to run the Kubernetes website locally is to use a [Docker](https://docker.com) image that includes the [Hugo](https://gohugo.io) static site generator.
> If you are running on Windows, you'll need a few more tools, which you can install with [Chocolatey](https://chocolatey.org). `choco install make`
> If you'd prefer to run the website locally without Docker, see [Running the website locally using Hugo](#chạy-website-cục-bộ-dùng-hugo) below.
If you have Docker [up and running](https://www.docker.com/get-started), build the `kubernetes-hugo` Docker image locally:
Before starting, install the dependencies. Clone the repository and navigate to the directory:
```bash
make container-image
git clone https://github.com/kubernetes/website.git
cd website
```
Once the image has been built, you can run the website locally:
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
### Windows
```powershell
# fetch submodule dependencies
git submodule update --init --recursive --depth 1
```
### Linux / other Unix
```bash
# fetch submodule dependencies
make module-init
```
## Running the website using a container
To build the site in a container, run the following:
```bash
# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool
make container-serve
```
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
If you see errors, it probably means that the Hugo container did not have enough computing resources available. To solve it, increase the amount of CPU and memory allowed for Docker on your machine ([macOS](https://docs.docker.com/desktop/settings/mac/) and [Windows](https://docs.docker.com/desktop/settings/windows/)).
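For example, a minimal sketch (assuming Podman is installed and you run the command from the repository root) that points the build at a different Docker-like engine:

```bash
# use Podman instead of Docker for the containerized build and serve
CONTAINER_ENGINE=podman make container-serve
```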
## Running the website locally using Hugo
Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L9) file.
## Running the website locally using Hugo
To run the website locally when you have Hugo installed:
Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L11) file.
To install dependencies, deploy and test the site locally, run:
- For macOS and Linux
```bash
npm ci
make serve
```
- For Windows (PowerShell)
```powershell
npm ci
hugo.exe server --buildFuture --environment development
```
This will start the local Hugo server on port 1313. Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Building the API reference pages
The API reference pages located in `content/en/docs/reference/kubernetes-api` are built from the Swagger specification, also known as the OpenAPI specification, using <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>.
To update the reference pages for a new Kubernetes release, follow these steps:
1. Pull in the `api-ref-generator` submodule:
```bash
git submodule update --init --recursive --depth 1
```
2. Update the Swagger specification:
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-assets/api/swagger.json
```
3. In `api-ref-assets/config/`, adjust the files `toc.yaml` and `fields.yaml` to reflect the changes of the new release.
4. Next, build the pages:
```bash
make api-reference
```
You can test the results locally by building and serving the site from a container:
```bash
make container-serve
```
In a web browser, go to <http://localhost:1313/docs/reference/kubernetes-api/> to view the API reference.
5. When all changes of the new contract are reflected in the configuration files `toc.yaml` and `fields.yaml`, create a pull request with the newly generated API reference pages.
## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
Hugo is shipped in two sets of binaries for technical reasons. The current website runs based on the **Hugo Extended** version only. On the [releases page](https://github.com/gohugoio/hugo/releases), look for archives with `extended` in the name. To confirm, run `hugo version` and look for the word `extended`.
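For example, a quick check (assuming `hugo` is on your PATH):

```bash
# the reported version string should contain the word "extended"
hugo version
```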
### Troubleshooting macOS for too many open files
If you run `make serve` on macOS and receive the following error:
```bash
make serve
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
The above command will start a local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
Check your current limit for open files:
## Community, discussion, contribution, and support
`launchctl limit maxfiles`
Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).
Then run the following commands (adapted from <https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c>):
You can reach the maintainers of this project at:
```shell
#!/bin/sh
# These are the original gist links, linking to my gists now.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
This works for macOS Catalina as well as macOS Mojave.
## Get involved with SIG Docs
Learn more about the SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Get an invite for this Slack](https://slack.k8s.io/)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
### Code of conduct
## Contributing to the docs
Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a _fork_. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
## Thank you!
Once your pull request is created, a Kubernetes reviewer will take responsibility for providing clear, actionable feedback. As the owner of the pull request, **it is your responsibility to modify your pull request to address the feedback that has been provided to you by the Kubernetes reviewer.**
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!
Also, note that you may end up having more than one Kubernetes reviewer provide you feedback, or you may end up getting feedback from a Kubernetes reviewer who is different from the one initially assigned to provide you feedback.
Furthermore, in some cases, one of your reviewers might ask for a technical review from a Kubernetes tech reviewer when needed. Reviewers will do their best to provide feedback in a timely fashion, but response time can vary based on circumstances.
For more information about contributing to the Kubernetes documentation, see:
- [Contribute to Kubernetes docs](https://kubernetes.io/docs/contribute/)
- [Page Content Types](https://kubernetes.io/docs/contribute/style/page-content-types/)
- [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
- [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
- [Introduction to Kubernetes Docs](https://www.youtube.com/watch?v=pprMgmNzDcw)
### New contributor ambassadors
If you need help at any point when contributing, the [New Contributor Ambassadors](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador) are a good point of contact. These are SIG Docs approvers whose responsibilities include mentoring new contributors and helping them through their first few pull requests. The best place to contact the New Contributor Ambassadors is on the [Kubernetes Slack](https://slack.k8s.io/). Current New Contributor Ambassadors for SIG Docs:
| Name | Slack | GitHub |
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |
## Localization READMEs
| Language | Language |
| -------------------------- | -------------------------- |
| [Chinese](README-zh.md) | [Korean](README-ko.md) |
| [French](README-fr.md) | [Polish](README-pl.md) |
| [German](README-de.md) | [Portuguese](README-pt.md) |
| [Hindi](README-hi.md) | [Russian](README-ru.md) |
| [Indonesian](README-id.md) | [Spanish](README-es.md) |
| [Italian](README-it.md) | [Ukrainian](README-uk.md) |
| [Japanese](README-ja.md) | [Vietnamese](README-vi.md) |
## Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
## Thank you
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

View File

@ -99,10 +99,9 @@ To update the reference pages for a new Kubernetes release follow these steps:
make api-reference
```
You can test the results locally by making and serving the site from a container image:
You can test the results locally by building and serving the site from a container:
```bash
make container-image
make container-serve
```

View File

@ -2,6 +2,6 @@
title: "Container"
weight: 40
description: >
Methoden, um Anwendungen und ihre Abhängigkeiten zu zusammenzufassen.
Methoden, um Anwendungen und ihre Abhängigkeiten zusammenzufassen.
---

View File

@ -0,0 +1,16 @@
---
title: cAdvisor
id: cadvisor
date: 2021-12-09
full_link: https://github.com/google/cadvisor/
short_description: >
A tool for understanding the resource usage and performance characteristics of containers
aka:
tags:
- tool
---
cAdvisor (Container Advisor) gives container users a better understanding of the resource usage and performance characteristics of their running {{< glossary_tooltip text="containers" term_id="container" >}}.
<!--more-->
It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps the resource isolation parameters, the historical resource usage, the histograms of complete historical resource usage, and the network statistics. This data is exported per container and machine-wide.

View File

@ -0,0 +1,18 @@
---
title: Certificate
id: certificate
date: 2018-04-12
full_link: /docs/tasks/tls/managing-tls-in-a-cluster/
short_description: >
A cryptographically secure file used to validate access to the Kubernetes cluster.
aka:
tags:
- security
---
A cryptographically secure file used to confirm access to the Kubernetes cluster.
<!--more-->
Certificates enable applications within a Kubernetes cluster to access the Kubernetes API securely. Certificates validate that clients are allowed to access the API.

View File

@ -0,0 +1,18 @@
---
title: CIDR
id: cidr
date: 2019-11-12
full_link:
short_description: >
CIDR is a notation for describing blocks of IP addresses and is widely used in various networking configurations.
aka:
tags:
- networking
---
CIDR (Classless Inter-Domain Routing) is a notation for describing blocks of IP addresses and is widely used in various networking configurations.
<!--more-->
In the Kubernetes context, each {{< glossary_tooltip text="node" term_id="node" >}} is assigned a range of IP addresses, defined by a start address and a subnet mask, using CIDR. This allows nodes to assign each {{< glossary_tooltip text="Pod" term_id="pod" >}} its own IP address. Although originally a concept for IPv4, CIDR has been extended to also include IPv6.

View File

@ -0,0 +1,18 @@
---
title: CLA (Contributor License Agreement)
id: cla
date: 2018-04-12
full_link: https://github.com/kubernetes/community/blob/master/CLA.md
short_description: >
The terms under which a contributor grants a license to an open source project for their contributions.
aka:
tags:
- community
---
The terms under which a {{< glossary_tooltip text="contributor" term_id="contributor" >}} grants a license to an open source project for their contributions.
<!--more-->
CLAs help resolve legal disputes involving contributions and intellectual property (IP).

View File

@ -50,25 +50,25 @@ card:
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
<a href="/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_01.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Erstellen Sie einen Kubernetes-Cluster</h5></a>
<a href="/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/"><h5>1. Erstellen Sie einen Kubernetes-Cluster</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
<a href="/de/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_02.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Stellen Sie eine App bereit</h5></a>
<a href="/de/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/"><h5>2. Stellen Sie eine App bereit</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
<a href="/de/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Erkunden Sie Ihre App</h5></a>
<a href="/de/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Erkunden Sie Ihre App</h5></a>
</div>
</div>
</div>
@ -78,25 +78,25 @@ card:
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<a href="/de/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Machen Sie Ihre App öffentlich zugänglich</h5></a>
<a href="/de/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Machen Sie Ihre App öffentlich zugänglich</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
<a href="/de/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Skalieren Sie Ihre App</h5></a>
<a href="/de/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Skalieren Sie Ihre App</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/update/update-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347" alt=""></a>
<a href="/de/docs/tutorials/kubernetes-basics/update/update-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/update/update-intro/"><h5>6. Aktualisieren Sie Ihre App</h5></a>
<a href="/de/docs/tutorials/kubernetes-basics/update/update-intro/"><h5>6. Aktualisieren Sie Ihre App</h5></a>
</div>
</div>
</div>
@ -107,7 +107,7 @@ card:
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button">Starten Sie das Tutorial<span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/de/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button">Starten Sie das Tutorial<span class="btn__next"></span></a>
</div>
</div>

View File

@ -62,7 +62,7 @@ It was actually just making— again, startup, small company, small team, so rea
**ADAM GLICK: What time frame was this?**
JORGE ALARCÓN: Three, four years ago, so definitely not 1.13. That's the best guesstimate that I can give at this point. But I wasn't able to find any good examples, any tutorials. The only book that I was able to get my hands on was the one written by Joe Beda, Kelsey Hightower, and I forget the other author. But what is it? "[Kubernetes— Up and Running](](http://shop.oreilly.com/product/0636920223788.do))"?
JORGE ALARCÓN: Three, four years ago, so definitely not 1.13. That's the best guesstimate that I can give at this point. But I wasn't able to find any good examples, any tutorials. The only book that I was able to get my hands on was the one written by Joe Beda, Kelsey Hightower, and I forget the other author. But what is it? "[Kubernetes— Up and Running](http://shop.oreilly.com/product/0636920223788.do)"?
And in general, right now I use it as reference— it's really good. But as a beginner, I still was lost. They give all these amazing examples, they provide the applications, but I had no idea why someone might need a Pod, why someone might need a Deployment. So my last resort was to try and find someone who actually knew Kubernetes.

View File

@ -112,7 +112,7 @@ future. However, the Kubernetes project can't provide _any_ guarantees on how lo
is that going to be. The deprecated legacy repositories, and their contents, might
be removed at any time in the future and without a further notice period.~~
**UPDATE**: The legacy packages are expected to go away in January 2024.
**UPDATE**: The legacy packages are expected to go away by the end of February 2024.
The Kubernetes project **strongly recommends** migrating to the new community-owned
repositories **as soon as possible**.

Binary file not shown.


View File

@ -0,0 +1,126 @@
---
layout: blog
title: "A look into the Kubernetes Book Club"
slug: k8s-book-club
date: 2024-02-22
canonicalUrl: https://www.k8s.dev/blog/2024/02/22/k8s-book-club/
---
**Author**: Frederico Muñoz (SAS Institute)
Learning Kubernetes and the entire ecosystem of technologies around it is not without its
challenges. In this interview, we will talk with [Carlos Santana
(AWS)](https://www.linkedin.com/in/csantanapr/) to learn a bit more about how he created the
[Kubernetes Book Club](https://community.cncf.io/kubernetes-virtual-book-club/), how it works, and
how anyone can join in to take advantage of a community-based learning experience.
![Carlos Santana speaking at KubeCon NA 2023](csantana_k8s_book_club.jpg)
**Frederico Muñoz (FSM)**: Hello Carlos, thank you so much for your availability. To start with,
could you tell us a bit about yourself?
**Carlos Santana (CS)**: Of course. My experience in deploying Kubernetes in production six
years ago opened the door for me to join [Knative](https://knative.dev/) and then contribute to
Kubernetes through the Release Team. Working on upstream Kubernetes has been one of the best
experiences I've had in open-source. Over the past two years, in my role as a Senior Specialist
Solutions Architect at AWS, I have been assisting large enterprises build their internal developer
platforms (IDP) on top of Kubernetes. Going forward, my open source contributions are directed
towards [CNOE](https://cnoe.io/) and CNCF projects like [Argo](https://github.com/argoproj),
[Crossplane](https://www.crossplane.io/), and [Backstage](https://www.cncf.io/projects/backstage/).
## Creating the Book Club
**FSM**: So your path led you to Kubernetes, and at that point what was the motivating factor for
starting the Book Club?
**CS**: The idea for the Kubernetes Book Club sprang from a casual suggestion during a
[TGIK](https://github.com/vmware-archive/tgik) livestream. For me, it was more than just about
reading a book; it was about creating a learning community. This platform has not only been a source
of knowledge but also a support system, especially during the challenging times of the
pandemic. It's gratifying to see how this initiative has helped members cope and grow. The first
book [Production
Kubernetes](https://www.oreilly.com/library/view/production-kubernetes/9781492092292/) took 36
weeks, when we started on March 5th, 2021. Currently, we don't take that long to cover a book;
we do one or two chapters per week.
**FSM**: Could you describe the way the Kubernetes Book Club works? How do you select the books and how
do you go through them?
**CS**: We collectively choose books based on the interests and needs of the group. This practical
approach helps members, especially beginners, grasp complex concepts more easily. We have two weekly
series, one for the EMEA timezone, and I organize the US one. Each organizer works with their co-host
and picks a book on Slack, then sets up a lineup of hosts for a couple of weeks to discuss each
chapter.
**FSM**: If I'm not mistaken, the Kubernetes Book Club is in its 17th book, which is significant: is
there any secret recipe for keeping things active?
**CS**: The secret to keeping the club active and engaging lies in a couple of key factors.
Firstly, consistency has been crucial. We strive to maintain a regular schedule, only cancelling
meetups for major events like holidays or KubeCon. This regularity helps members stay engaged and
builds a reliable community.
Secondly, making the sessions interesting and interactive has been vital. For instance, I often
introduce pop-up quizzes during the meetups, which not only tests members' understanding but also
adds an element of fun. This approach keeps the content relatable and helps members understand how
theoretical concepts are applied in real-world scenarios.
## Topics covered in the Book Club
**FSM**: The main topics of the books have been Kubernetes, GitOps, Security, SRE, and
Observability: is this a reflection of the cloud native landscape, especially in terms of
popularity?
**CS**: Our journey began with 'Production Kubernetes', setting the tone for our focus on practical,
production-ready solutions. Since then, we've delved into various aspects of the CNCF landscape,
aligning our books with a different theme. Each theme, whether it be Security, Observability, or
Service Mesh, is chosen based on its relevance and demand within the community. For instance, in our
recent themes on Kubernetes Certifications, we brought the book authors into our fold as active
hosts, enriching our discussions with their expertise.
**FSM**: I know that the project had recent changes, namely being integrated into the CNCF as a
[Cloud Native Community Group](https://community.cncf.io/). Could you talk a bit about this change?
**CS**: The CNCF graciously accepted the book club as a Cloud Native Community Group. This is a
significant development that has streamlined our operations and expanded our reach. This alignment
has been instrumental in enhancing our administrative capabilities, similar to those used by
Kubernetes Community Days (KCD) meetups. Now, we have a more robust structure for memberships, event
scheduling, mailing lists, hosting web conferences, and recording sessions.
**FSM**: How has your involvement with the CNCF impacted the growth and engagement of the Kubernetes
Book Club over the past six months?
**CS**: Since becoming part of the CNCF community six months ago, we've witnessed significant
quantitative changes within the Kubernetes Book Club. Our membership has surged to over 600 members,
and we've successfully organized and conducted more than 40 events during this period. What's even
more promising is the consistent turnout, with an average of 30 attendees per event. This growth and
engagement are clear indicators of the positive influence of our CNCF affiliation on the Kubernetes
Book Club's reach and impact in the community.
## Joining the Book Club
**FSM**: For anyone wanting to join, what should they do?
**CS**: There are three steps to join:
- First, join the [Kubernetes Book Club
Community](https://community.cncf.io/kubernetes-virtual-book-club/)
- Then RSVP to the
[events](https://community.cncf.io/kubernetes-virtual-book-club/)
on the community page
- Lastly, join the CNCF Slack channel
[#kubernetes-book-club](https://cloud-native.slack.com/archives/C05EYA14P37).
**FSM**: Excellent, thank you! Any final comments you would like to share?
**CS**: The Kubernetes Book Club is more than just a group of professionals discussing books; it's a
vibrant community and amazing volunteers that help organize and host [Neependra
Khare](https://www.linkedin.com/in/neependra/), [Eric
Smalling](https://www.linkedin.com/in/ericsmalling/), [Sevi
Karakulak](https://www.linkedin.com/in/sevikarakulak/), [Chad
M. Crowell](https://www.linkedin.com/in/chadmcrowell/), and [Walid (CNJ)
Shaari](https://www.linkedin.com/in/walidshaari/). Look us up at KubeCon and get your Kubernetes
Book Club sticker!

View File

@ -0,0 +1,148 @@
---
layout: blog
title: "Spotlight on SIG Cloud Provider"
slug: sig-cloud-provider-spotlight-2024
date: 2024-03-01
canonicalUrl: https://www.k8s.dev/blog/2024/03/01/sig-cloud-provider-spotlight-2024/
---
**Author**: Arujjwal Negi
One of the most popular ways developers use Kubernetes-related services is via cloud providers, but
have you ever wondered how cloud providers can do that? How does this whole process of integration
of Kubernetes to various cloud providers happen? To answer that, let's put the spotlight on [SIG
Cloud Provider](https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md).
SIG Cloud Provider works to create seamless integrations between Kubernetes and various cloud
providers. Their mission? Keeping the Kubernetes ecosystem fair and open for all. By setting clear
standards and requirements, they ensure every cloud provider plays nicely with Kubernetes. It is
their responsibility to configure cluster components to enable cloud provider integrations.
In this blog of the SIG Spotlight series, [Arujjwal Negi](https://twitter.com/arujjval) interviews
[Michael McCune](https://github.com/elmiko) (Red Hat), also known as _elmiko_, co-chair of SIG Cloud
Provider, to give us an insight into the workings of this group.
## Introduction
**Arujjwal**: Let's start by getting to know you. Can you give us a small intro about yourself and
how you got into Kubernetes?
**Michael**: Hi, I'm Michael McCune, most people around the community call me by my handle,
_elmiko_. I've been a software developer for a long time now (Windows 3.1 was popular when I
started!), and I've been involved with open-source software for most of my career. I first got
involved with Kubernetes as a developer of machine learning and data science applications; the team
I was on at the time was creating tutorials and examples to demonstrate the use of technologies like
Apache Spark on Kubernetes. That said, I've been interested in distributed systems for many years
and when an opportunity arose to join a team working directly on Kubernetes, I jumped at it!
## Functioning and working
**Arujjwal**: Can you give us an insight into what SIG Cloud Provider does and how it functions?
**Michael**: SIG Cloud Provider was formed to help ensure that Kubernetes provides a neutral
integration point for all infrastructure providers. Our largest task to date has been the extraction
and migration of in-tree cloud controllers to out-of-tree components. The SIG meets regularly to
discuss progress and upcoming tasks and also to answer questions and bugs that
arise. Additionally, we act as a coordination point for cloud provider subprojects such as the cloud
provider framework, specific cloud controller implementations, and the [Konnectivity proxy
project](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/).
**Arujjwal:** After going through the project
[README](https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md), I
learned that SIG Cloud Provider works with the integration of Kubernetes with cloud providers. How
does this whole process go?
**Michael:** One of the most common ways to run Kubernetes is by deploying it to a cloud environment
(AWS, Azure, GCP, etc). Frequently, the cloud infrastructures have features that enhance the
performance of Kubernetes, for example, by providing elastic load balancing for Service objects. To
ensure that cloud-specific services can be consistently consumed by Kubernetes, the Kubernetes
community has created cloud controllers to address these integration points. Cloud providers can
create their own controllers either by using the framework maintained by the SIG or by following
the API guides defined in the Kubernetes code and documentation. One thing I would like to point out
is that SIG Cloud Provider does not deal with the lifecycle of nodes in a Kubernetes cluster;
for those types of topics, SIG Cluster Lifecycle and the Cluster API project are more appropriate
venues.
## Important subprojects
**Arujjwal:** There are a lot of subprojects within this SIG. Can you highlight some of the most
important ones and what job they do?
**Michael:** I think the two most important subprojects today are the [cloud provider
framework](https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#kubernetes-cloud-provider)
and the [extraction/migration
project](https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#cloud-provider-extraction-migration). The
cloud provider framework is a common library to help infrastructure integrators build a cloud
controller for their infrastructure. This project is most frequently the starting point for new
people coming to the SIG. The extraction and migration project is the other big subproject and a
large part of why the framework exists. A little history might help explain further: for a long
time, Kubernetes needed some integration with the underlying infrastructure, not
necessarily to add features but to be aware of cloud events like instance termination. The cloud
provider integrations were built into the Kubernetes code tree, and thus the term "in-tree" was
created (check out this [article on the topic](https://kaslin.rocks/out-of-tree/) for more
info). The activity of maintaining provider-specific code in the main Kubernetes source tree was
considered undesirable by the community. The community's decision inspired the creation of the
extraction and migration project to remove the "in-tree" cloud controllers in favor of
"out-of-tree" components.
**Arujjwal:** What makes [the cloud provider framework] a good place to start? Does it have consistent good beginner work? What
kind?
**Michael:** I feel that the cloud provider framework is a good place to start as it encodes the
community's preferred practices for cloud controller managers and, as such, will give a newcomer a
strong understanding of how and what the managers do. Unfortunately, there is not a consistent
stream of beginner work on this component; this is due in part to the mature nature of the framework
and that of the individual providers as well. For folks who are interested in getting more involved,
having some [Go language](https://go.dev/) knowledge is good and also having an understanding of
how at least one cloud API (e.g., AWS, Azure, GCP) works is also beneficial. In my personal opinion,
being a newcomer to SIG Cloud Provider can be challenging as most of the code around this project
deals directly with specific cloud provider interactions. My best advice to people wanting to do
more work on cloud providers is to grow your familiarity with one or two cloud APIs, then look
for open issues on the controller managers for those clouds, and always communicate with the other
contributors as much as possible.
## Accomplishments
**Arujjwal:** Can you share about an accomplishment(s) of the SIG that you are proud of?
**Michael:** Since I joined the SIG, more than a year ago, we have made great progress in advancing
the extraction and migration subproject. We have moved from an alpha status on the defining
[KEP](https://github.com/kubernetes/enhancements/blob/master/keps/README.md) to a beta status and
are inching ever closer to removing the old provider code from the Kubernetes source tree. I've been
really proud to see the active engagement from our community members and to see the progress we have
made towards extraction. I have a feeling that, within the next few releases, we will see the final
removal of the in-tree cloud controllers and the completion of the subproject.
## Advice for new contributors
**Arujjwal:** Is there any suggestion or advice for new contributors on how they can start at SIG
Cloud Provider?
**Michael:** This is a tricky question in my opinion. SIG Cloud Provider is focused on the code
pieces that integrate between Kubernetes and an underlying infrastructure. It is very common, but
not necessary, for members of the SIG to be representing a cloud provider in an official capacity. I
recommend that anyone interested in this part of Kubernetes should come to an SIG meeting to see how
we operate and also to study the cloud provider framework project. We have some interesting ideas
for future work, such as a common testing framework, that will cut across all cloud providers and
will be a great opportunity for anyone looking to expand their Kubernetes involvement.
**Arujjwal:** Are there any specific skills you're looking for that we should highlight? To give you
an example from our own [SIG ContribEx](https://github.com/kubernetes/community/blob/master/sig-contributor-experience/README.md):
if you're an expert in [Hugo](https://gohugo.io/), we can always use some help with k8s.dev!
**Michael:** The SIG is currently working through the final phases of our extraction and migration
process, but we are looking toward the future and starting to plan what will come next. One of the
big topics that the SIG has discussed is testing. Currently, we do not have a generic common set of
tests that can be exercised by each cloud provider to confirm the behaviour of their controller
manager. If you are an expert in Ginkgo and the Kubetest framework, we could probably use your help
in designing and implementing the new tests.
---
This is where the conversation ends. I hope this gave you some insights about SIG Cloud Provider's
aim and working. This is just the tip of the iceberg. To know more and get involved with SIG Cloud
Provider, try attending their meetings
[here](https://github.com/kubernetes/community/blob/master/sig-cloud-provider/README.md#meetings).

View File

@ -139,7 +139,7 @@ until disk usage reaches the `LowThresholdPercent` value.
#### Garbage collection for unused container images {#image-maximum-age-gc}
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
{{< feature-state feature_gate_name="ImageMaximumGCAge" >}}
As an alpha feature, you can specify the maximum time a local image can be unused for,
regardless of disk usage. This is a kubelet setting that you configure for each node.

View File

@ -33,7 +33,7 @@ instances are on stand-by.
## API server identity
{{< feature-state for_k8s_version="v1.26" state="beta" >}}
{{< feature-state feature_gate_name="APIServerIdentity" >}}
Starting in Kubernetes v1.26, each `kube-apiserver` uses the Lease API to publish its identity to the
rest of the system. While not particularly useful on its own, this provides a mechanism for clients to

View File

@ -8,7 +8,7 @@ weight: 220
<!-- overview -->
{{< feature-state state="alpha" for_k8s_version="v1.28" >}}
{{< feature-state feature_gate_name="UnknownVersionInteroperabilityProxy" >}}
Kubernetes {{< skew currentVersion >}} includes an alpha feature that lets an
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}

View File

@ -280,7 +280,7 @@ If you want to explicitly reserve resources for non-Pod processes, see
## Node topology
{{< feature-state state="stable" for_k8s_version="v1.27" >}}
{{< feature-state feature_gate_name="TopologyManager" >}}
If you have enabled the `TopologyManager`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then
@ -290,7 +290,7 @@ for more information.
## Graceful node shutdown {#graceful-node-shutdown}
{{< feature-state state="beta" for_k8s_version="v1.21" >}}
{{< feature-state feature_gate_name="GracefulNodeShutdown" >}}
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
@ -374,7 +374,7 @@ Message: Pod was terminated in response to imminent node shutdown.
### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}
{{< feature-state state="beta" for_k8s_version="v1.24" >}}
{{< feature-state feature_gate_name="GracefulNodeShutdownBasedOnPodPriority" >}}
To provide more flexibility during graceful node shutdown around the ordering
of pods during shutdown, graceful node shutdown honors the PriorityClass for
@ -471,7 +471,7 @@ are emitted under the kubelet subsystem to monitor node shutdowns.
## Non-graceful node shutdown handling {#non-graceful-node-shutdown}
{{< feature-state state="stable" for_k8s_version="v1.28" >}}
{{< feature-state feature_gate_name="NodeOutOfServiceVolumeDetach" >}}
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
either because the command does not trigger the inhibitor locks mechanism used by
@ -515,12 +515,13 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
## Swap memory management {#swap-memory}
{{< feature-state state="beta" for_k8s_version="v1.28" >}}
{{< feature-state feature_gate_name="NodeSwap" >}}
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
the kubelet (default is true), and the `--fail-swap-on` command line flag or `failSwapOn`
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
must be set to false.
must be set to false.
To allow Pods to utilize swap, `swapBehavior` should not be set to `NoSwap` (which is the default behavior) in the kubelet config.
{{< warning >}}
When the memory swap feature is turned on, Kubernetes data such as the content
@ -532,17 +533,16 @@ specify how a node will use swap memory. For example,
```yaml
memorySwap:
swapBehavior: UnlimitedSwap
swapBehavior: LimitedSwap
```
- `UnlimitedSwap` (default): Kubernetes workloads can use as much swap memory as they
request, up to the system limit.
- `NoSwap` (default): Kubernetes workloads will not use swap.
- `LimitedSwap`: The utilization of swap memory by Kubernetes workloads is subject to limitations.
Only Pods of Burstable QoS are permitted to employ swap.
If configuration for `memorySwap` is not specified and the feature gate is
enabled, by default the kubelet will apply the same behaviour as the
`UnlimitedSwap` setting.
`NoSwap` setting.
With `LimitedSwap`, Pods that do not fall under the Burstable QoS classification (i.e.
`BestEffort`/`Guaranteed` Qos Pods) are prohibited from utilizing swap memory.

View File

@ -238,7 +238,7 @@ The `logrotate` tool rotates logs daily, or once the log size is greater than 10
## Log query
{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
{{< feature-state feature_gate_name="NodeLogQuery" >}}
To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows viewing logs of services
running on the node. To use the feature, ensure that the `NodeLogQuery`

View File

@ -76,7 +76,7 @@ For more information about the `TracingConfiguration` struct, see
### kubelet traces
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
{{< feature-state feature_gate_name="KubeletTracing" >}}
The kubelet CRI interface and authenticated http servers are instrumented to generate
trace spans. As with the apiserver, the endpoint and sampling rate are configurable.

View File

@ -161,7 +161,7 @@ which is 300 seconds (5 minutes).
### Image pull per runtime class
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
{{< feature-state feature_gate_name="RuntimeClassInImageCriApi" >}}
Kubernetes includes alpha support for performing image pulls based on the RuntimeClass of a Pod.
If you enable the `RuntimeClassInImageCriApi` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/),

View File

@ -454,7 +454,7 @@ pluginapi.Device{ID: "25102017", Health: pluginapi.Healthy, Topology:&pluginapi.
Here are some examples of device plugin implementations:
* The [AMD GPU device plugin](https://github.com/RadeonOpenCompute/k8s-device-plugin)
* The [AMD GPU device plugin](https://github.com/ROCm/k8s-device-plugin)
* The [generic device plugin](https://github.com/squat/generic-device-plugin) for generic Linux devices and USB devices
* The [Intel device plugins](https://github.com/intel/intel-device-plugins-for-kubernetes) for
Intel GPU, FPGA, QAT, VPU, SGX, DSA, DLB and IAA devices

View File

@ -54,19 +54,6 @@ that plugin or [networking provider](/docs/concepts/cluster-administration/netwo
## Network Plugin Requirements
For plugin developers and users who regularly build or deploy Kubernetes, the plugin may also need
specific configuration to support kube-proxy. The iptables proxy depends on iptables, and the
plugin may need to ensure that container traffic is made available to iptables. For example, if
the plugin connects containers to a Linux bridge, the plugin must set the
`net/bridge/bridge-nf-call-iptables` sysctl to `1` to ensure that the iptables proxy functions
correctly. If the plugin does not use a Linux bridge, but uses something like Open vSwitch or
some other mechanism instead, it should ensure container traffic is appropriately routed for the
proxy.
By default, if no kubelet network plugin is specified, the `noop` plugin is used, which sets
`net/bridge/bridge-nf-call-iptables=1` to ensure simple configurations (like Docker with a bridge)
work correctly with the iptables proxy.
### Loopback CNI
In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network

View File

@ -31,7 +31,7 @@ as well as detecting and responding to cluster events (for example, starting up
`{{< glossary_tooltip text="replicas" term_id="replica" >}}` field is unsatisfied).
Control plane components can be run on any machine in the cluster. However,
for simplicity, set up scripts typically start all control plane components on
for simplicity, setup scripts typically start all control plane components on
the same machine, and do not run user containers on this machine. See
[Creating Highly Available clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
for an example control plane setup that runs across multiple machines.
@ -150,4 +150,4 @@ Learn more about the following:
* Etcd's official [documentation](https://etcd.io/docs/).
* Several [container runtimes](/docs/setup/production-environment/container-runtimes/) in Kubernetes.
* Integrating with cloud providers using [cloud-controller-manager](/docs/concepts/architecture/cloud-controller/).
* [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands.
* [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands.

View File

@ -22,21 +22,139 @@ external components communicate with one another.
The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes
(for example: Pods, Namespaces, ConfigMaps, and Events).
Most operations can be performed through the
[kubectl](/docs/reference/kubectl/) command-line interface or other
command-line tools, such as
[kubeadm](/docs/reference/setup-tools/kubeadm/), which in turn use the
API. However, you can also access the API directly using REST calls.
Most operations can be performed through the [kubectl](/docs/reference/kubectl/)
command-line interface or other command-line tools, such as
[kubeadm](/docs/reference/setup-tools/kubeadm/), which in turn use the API.
However, you can also access the API directly using REST calls. Kubernetes
provides a set of [client libraries](/docs/reference/using-api/client-libraries/)
for those looking to
write applications using the Kubernetes API.
Consider using one of the [client libraries](/docs/reference/using-api/client-libraries/)
if you are writing an application using the Kubernetes API.
Each Kubernetes cluster publishes the specification of the APIs that the cluster serves.
There are two mechanisms that Kubernetes uses to publish these API specifications; both are useful
to enable automatic interoperability. For example, the `kubectl` tool fetches and caches the API
specification for enabling command-line completion and other features.
The two supported mechanisms are as follows:
- [The Discovery API](#discovery-api) provides information about the Kubernetes APIs:
API names, resources, versions, and supported operations. This is a Kubernetes
specific term as it is a separate API from the Kubernetes OpenAPI.
It is intended to be a brief summary of the available resources and it does not
detail specific schema for the resources. For reference about resource schemas,
please refer to the OpenAPI document.
- The [Kubernetes OpenAPI Document](#openapi-specification) provides (full)
[OpenAPI v2.0 and 3.0 schemas](https://www.openapis.org/) for all Kubernetes API
endpoints.
The OpenAPI v3 is the preferred method for accessing OpenAPI as it
provides
a more comprehensive and accurate view of the API. It includes all the available
API paths, as well as all resources consumed and produced for every operation
on every endpoint. It also includes any extensibility components that a cluster supports.
The data is a complete specification and is significantly larger than that from the
Discovery API.
## Discovery API
Kubernetes publishes a list of all group versions and resources supported via
the Discovery API. This includes the following for each resource:
- Name
- Cluster or namespaced scope
- Endpoint URL and supported verbs
- Alternative names
- Group, version, kind
The API is available in both aggregated and unaggregated form. The aggregated
discovery serves two endpoints while the unaggregated discovery serves a
separate endpoint for each group version.
### Aggregated discovery
{{< feature-state state="beta" for_k8s_version="v1.27" >}}
Kubernetes offers beta support for aggregated discovery, publishing
all resources supported by a cluster through two endpoints (`/api` and
`/apis`). Requesting this
endpoint drastically reduces the number of requests sent to fetch the
discovery data from the cluster. You can access the data by
requesting the respective endpoints with an `Accept` header indicating
the aggregated discovery resource:
`Accept: application/json;v=v2beta1;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList`.
Without indicating the resource type using the `Accept` header, the default
response for the `/api` and `/apis` endpoint is an unaggregated discovery
document.
The [discovery document](https://github.com/kubernetes/kubernetes/blob/release-v{{< skew currentVersion >}}/api/discovery/aggregated_v2beta1.json)
for the built-in resources can be found in the Kubernetes GitHub repository.
This GitHub document can be used as a reference for the base set of the available resources
if a Kubernetes cluster is not available to query.
The endpoint also supports ETag and protobuf encoding.
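As a rough sketch of how you might fetch the aggregated document (assuming `kubectl` is configured for a cluster and `curl` is installed):

```bash
# expose the API server on localhost
kubectl proxy --port=8001 &

# request aggregated discovery for all API groups using the Accept header shown above
curl -H 'Accept: application/json;v=v2beta1;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList' \
  http://127.0.0.1:8001/apis
```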
### Unaggregated discovery
Without discovery aggregation, discovery is published in levels, with the root
endpoints publishing discovery information for downstream documents.
A list of all group versions supported by a cluster is published at
the `/api` and `/apis` endpoints. Example:
```
{
"kind": "APIGroupList",
"apiVersion": "v1",
"groups": [
{
"name": "apiregistration.k8s.io",
"versions": [
{
"groupVersion": "apiregistration.k8s.io/v1",
"version": "v1"
}
],
"preferredVersion": {
"groupVersion": "apiregistration.k8s.io/v1",
"version": "v1"
}
},
{
"name": "apps",
"versions": [
{
"groupVersion": "apps/v1",
"version": "v1"
}
],
"preferredVersion": {
"groupVersion": "apps/v1",
"version": "v1"
}
},
...
}
```
Additional requests are needed to obtain the discovery document for each group version at
`/apis/<group>/<version>` (for example:
`/apis/rbac.authorization.k8s.io/v1alpha1`), which advertises the list of
resources served under a particular group version. These endpoints are used by
kubectl to fetch the list of resources supported by a cluster.
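For instance, reusing a local `kubectl proxy` as in the earlier sketch, you could fetch a single group-version document:

```bash
# list the resources served under the apps/v1 group version
curl http://127.0.0.1:8001/apis/apps/v1
```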
<!-- body -->
## OpenAPI specification {#api-specification}
<a id="#api-specification" />
Complete API details are documented using [OpenAPI](https://www.openapis.org/).
## OpenAPI interface definition
For details about the OpenAPI specifications, see the [OpenAPI documentation](https://www.openapis.org/).
Kubernetes serves both OpenAPI v2.0 and OpenAPI v3.0. OpenAPI v3 is the
preferred method of accessing the OpenAPI because it offers a more comprehensive
(lossless) representation of Kubernetes resources. Due to limitations of OpenAPI
version 2, certain fields are dropped from the published OpenAPI including but not
limited to `default`, `nullable`, `oneOf`.
### OpenAPI V2
The Kubernetes API server serves an aggregated OpenAPI v2 spec via the
@ -74,15 +192,10 @@ request headers as follows:
</tbody>
</table>
Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://git.k8s.io/design-proposals-archive/api-machinery/protobuf.md) design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.
### OpenAPI V3
{{< feature-state state="stable" for_k8s_version="v1.27" >}}
{{< feature-state feature_gate_name="OpenAPIV3" >}}
Kubernetes supports publishing a description of its APIs as OpenAPI v3.
@ -149,7 +262,20 @@ Refer to the table below for accepted request headers.
</tbody>
</table>
A Golang implementation to fetch the OpenAPI V3 is provided in the package `k8s.io/client-go/openapi3`.
A Golang implementation to fetch the OpenAPI V3 is provided in the package
[`k8s.io/client-go/openapi3`](https://pkg.go.dev/k8s.io/client-go/openapi3).
Kubernetes {{< skew currentVersion >}} publishes
OpenAPI v2.0 and v3.0; there are no plans to support 3.1 in the near future.
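As a sketch, assuming `kubectl` access to a cluster, you can retrieve the OpenAPI v3 documents
directly from the API server:
```shell
# Discover the available OpenAPI v3 paths (one entry per group/version)
kubectl get --raw /openapi/v3 | head

# Fetch the OpenAPI v3 document for the apps/v1 API group
kubectl get --raw /openapi/v3/apis/apps/v1 | head
```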
### Protobuf serialization
Kubernetes implements an alternative Protobuf based serialization format that
is primarily intended for intra-cluster communication. For more information
about this format, see the [Kubernetes Protobuf serialization](https://git.k8s.io/design-proposals-archive/api-machinery/protobuf.md)
design proposal and the
Interface Definition Language (IDL) files for each schema located in the Go
packages that define the API objects.
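For illustration, a client that prefers the Protobuf encoding can ask for it using an `Accept`
header; this sketch assumes `kubectl proxy` is running on `localhost:8001` and that the `default`
namespace contains Pods:
```shell
# Request a Pod list encoded as Kubernetes Protobuf instead of JSON
curl -H 'Accept: application/vnd.kubernetes.protobuf' \
  http://localhost:8001/api/v1/namespaces/default/pods \
  --output pods.pb
```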
## Persistence
@ -167,7 +293,7 @@ cluster.
### Aggregated Discovery
{{< feature-state state="beta" for_k8s_version="v1.27" >}}
{{< feature-state feature_gate_name="AggregatedDiscoveryEndpoint" >}}
Kubernetes offers beta support for aggregated discovery, publishing
all resources supported by a cluster through two endpoints (`/api` and
@ -238,8 +364,6 @@ ways that require deleting all existing alpha objects prior to upgrade.
Refer to [API versions reference](/docs/reference/using-api/#api-versioning)
for more details on the API version level definitions.
## API Extension
The Kubernetes API can be extended in one of two ways:

View File

@ -360,7 +360,7 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
#### matchLabelKeys
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
{{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}
{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->
@ -391,26 +391,27 @@ metadata:
...
spec:
template:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- database
topologyKey: topology.kubernetes.io/zone
# Only Pods from a given rollout are taken into consideration when calculating pod affinity.
# If you update the Deployment, the replacement Pods follow their own affinity rules
# (if there are any defined in the new Pod template)
matchLabelKeys:
- pod-template-hash
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- database
topologyKey: topology.kubernetes.io/zone
# Only Pods from a given rollout are taken into consideration when calculating pod affinity.
# If you update the Deployment, the replacement Pods follow their own affinity rules
# (if there are any defined in the new Pod template)
matchLabelKeys:
- pod-template-hash
```
#### mismatchLabelKeys
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
{{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}
{{< note >}}
<!-- UPDATE THIS WHEN PROMOTING TO BETA -->

View File

@ -60,7 +60,7 @@ spec:
# Configure a topology spread constraint
topologySpreadConstraints:
- maxSkew: <integer>
minDomains: <integer> # optional; beta since v1.25
minDomains: <integer> # optional
topologyKey: <string>
whenUnsatisfiable: <string>
labelSelector: <object>
@ -96,11 +96,11 @@ your cluster. Those fields are:
A domain is a particular instance of a topology. An eligible domain is a domain whose
nodes match the node selector.
<!-- OK to remove this note once v1.29 Kubernetes is out of support -->
{{< note >}}
The `MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enables `minDomains` for pod topology spread. Starting from v1.28,
the `MinDomainsInPodTopologySpread` gate
is enabled by default. In older Kubernetes clusters it might be explicitly
Before Kubernetes v1.30, the `minDomains` field was only available if the
`MinDomainsInPodTopologySpread` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
was enabled (it has been enabled by default since v1.28). In older Kubernetes clusters it might be explicitly
disabled or the field might not be available.
{{< /note >}}

View File

@ -3,4 +3,127 @@ title: "Security"
weight: 85
description: >
Concepts for keeping your cloud-native workload secure.
simple_list: true
---
This section of the Kubernetes documentation aims to help you learn to run
workloads more securely, and to learn about the essential aspects of keeping a
Kubernetes cluster secure.
Kubernetes is based on a cloud-native architecture, and draws on advice from the
{{< glossary_tooltip text="CNCF" term_id="cncf" >}} about good practice for
cloud native information security.
Read [Cloud Native Security and Kubernetes](/docs/concepts/security/cloud-native-security/)
for the broader context about how to secure your cluster and the applications that
you're running on it.
## Kubernetes security mechanisms {#security-mechanisms}
Kubernetes includes several APIs and security controls, as well as ways to
define [policies](#policies) that can form part of how you manage information security.
### Control plane protection
A key security mechanism for any Kubernetes cluster is to
[control access to the Kubernetes API](/docs/concepts/security/controlling-access).
Kubernetes expects you to configure and use TLS to provide
[data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/)
within the control plane, and between the control plane and its clients.
You can also enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
for the data stored within Kubernetes control plane; this is separate from using
encryption at rest for your own workloads' data, which might also be a good idea.
### Secrets
The [Secret](/docs/concepts/configuration/secret/) API provides basic protection for
configuration values that require confidentiality.
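For example, you can create a Secret imperatively with `kubectl`; the Secret name and values below
are illustrative only:
```shell
# Create a Secret holding credentials for a workload
kubectl create secret generic db-credentials \
  --from-literal=username=app \
  --from-literal=password='replace-with-a-real-secret'
```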
### Workload protection
Enforce [Pod security standards](/docs/concepts/security/pod-security-standards/) to
ensure that Pods and their containers are isolated appropriately. You can also use
[RuntimeClasses](/docs/concepts/containers/runtime-class) to define custom isolation
if you need it.
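One common way to enforce a Pod Security Standard is with the namespace labels understood by the
built-in Pod Security admission controller; the namespace name below is illustrative:
```shell
# Enforce the "baseline" Pod Security Standard for Pods in the my-app namespace
kubectl label namespace my-app \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest
```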
[Network policies](/docs/concepts/services-networking/network-policies/) let you control
network traffic between Pods, or between Pods and the network outside your cluster.
You can deploy security controls from the wider ecosystem to implement preventative
or detective controls around Pods, their containers, and the images that run in them.
### Auditing
Kubernetes [audit logging](/docs/tasks/debug/debug-cluster/audit/) provides a
security-relevant, chronological set of records documenting the sequence of actions
in a cluster. The cluster audits the activities generated by users, by applications
that use the Kubernetes API, and by the control plane itself.
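Audit logging is configured on the kube-apiserver. As a rough sketch (the file paths and retention
values are illustrative), the flags added to a kube-apiserver invocation look like this:
```shell
# Flags added to the kube-apiserver command line to enable audit logging
# with a policy file and a local log file backend
kube-apiserver \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/kube-apiserver-audit.log \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=10
```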
## Cloud provider security
{{% thirdparty-content vendor="true" %}}
If you are running a Kubernetes cluster on your own hardware or a different cloud provider,
consult your documentation for security best practices.
Here are links to some of the popular cloud providers' security documentation:
{{< table caption="Cloud provider security" >}}
IaaS Provider | Link |
-------------------- | ------------ |
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
Amazon Web Services | https://aws.amazon.com/security |
Google Cloud Platform | https://cloud.google.com/security |
Huawei Cloud | https://www.huaweicloud.com/intl/en-us/securecenter/overallsafety |
IBM Cloud | https://www.ibm.com/cloud/security |
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
Oracle Cloud Infrastructure | https://www.oracle.com/security |
VMware vSphere | https://www.vmware.com/security/hardening-guides |
{{< /table >}}
## Policies
You can define security policies using Kubernetes-native mechanisms,
such as [NetworkPolicy](/docs/concepts/services-networking/network-policies/)
(declarative control over network packet filtering) or
[ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/) (declarative restrictions on what changes
someone can make using the Kubernetes API).
However, you can also rely on policy implementations from the wider
ecosystem around Kubernetes. Kubernetes provides extension mechanisms
to let those ecosystem projects implement their own policy controls
on source code review, container image approval, API access controls,
networking, and more.
For more information about policy mechanisms and Kubernetes,
read [Policies](/docs/concepts/policy/).
## {{% heading "whatsnext" %}}
Learn about related Kubernetes security topics:
* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
* [Known vulnerabilities](/docs/reference/issues-security/official-cve-feed/)
in Kubernetes (and links to further information)
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
* [Network policies](/docs/concepts/services-networking/network-policies/) for Pods
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
* [Pod security standards](/docs/concepts/security/pod-security-standards/)
* [RuntimeClasses](/docs/concepts/containers/runtime-class)
Learn the context:
<!-- if changing this, also edit the front matter of content/en/docs/concepts/security/cloud-native-security.md to match; check the no_list setting -->
* [Cloud Native Security and Kubernetes](/docs/concepts/security/cloud-native-security/)
Get certified:
* [Certified Kubernetes Security Specialist](https://training.linuxfoundation.org/certification/certified-kubernetes-security-specialist/)
certification and official training course.
Read more in this section:

View File

@ -0,0 +1,226 @@
---
title: "Cloud Native Security and Kubernetes"
linkTitle: "Cloud Native Security"
weight: 10
# The section index lists this explicitly
hide_summary: true
description: >
Concepts for keeping your cloud-native workload secure.
---
Kubernetes is based on a cloud-native architecture, and draws on advice from the
{{< glossary_tooltip text="CNCF" term_id="cncf" >}} about good practice for
cloud native information security.
Read on through this page for an overview of how Kubernetes is designed to
help you deploy a secure cloud native platform.
## Cloud native information security
{{< comment >}}
There are localized versions available of this whitepaper; if you can link to one of those
when localizing, that's even better.
{{< /comment >}}
The CNCF [white paper](https://github.com/cncf/tag-security/tree/main/security-whitepaper)
on cloud native security defines security controls and practices that are
appropriate to different _lifecycle phases_.
## _Develop_ lifecycle phase {#lifecycle-phase-develop}
- Ensure the integrity of development environments.
- Design applications following good practice for information security,
appropriate for your context.
- Consider end user security as part of solution design.
To achieve this, you can:
1. Adopt an architecture, such as [zero trust](https://glossary.cncf.io/zero-trust-architecture/),
that minimizes attack surfaces, even for internal threats.
1. Define a code review process that considers security concerns.
1. Build a _threat model_ of your system or application that identifies
trust boundaries. Use that model to identify risks and to help find
ways to treat those risks.
1. Incorporate advanced security automation, such as _fuzzing_ and
[security chaos engineering](https://glossary.cncf.io/security-chaos-engineering/),
where it's justified.
## _Distribute_ lifecycle phase {#lifecycle-phase-distribute}
- Ensure the security of the supply chain for container images you execute.
- Ensure the security of the supply chain for the cluster and other components
that execute your application. An example of another component might be an
external database that your cloud-native application uses for persistence.
To achieve this, you can:
1. Scan container images and other artifacts for known vulnerabilities.
1. Ensure that software distribution uses encryption in transit, with
a chain of trust for the software source.
1. Adopt and follow processes to update dependencies when updates are
available, especially in response to security announcements.
1. Use validation mechanisms such as digital certificates for supply
chain assurance.
1. Subscribe to feeds and other mechanisms to alert you to security
risks.
1. Restrict access to artifacts. Place container images in a
[private registry](/docs/concepts/containers/images/#using-a-private-registry)
that only allows authorized clients to pull images.
## _Deploy_ lifecycle phase {#lifecycle-phase-deploy}
Ensure appropriate restrictions on what can be deployed, who can deploy it,
and where it can be deployed to.
You can enforce measures from the _distribute_ phase, such as verifying the
cryptographic identity of container image artifacts.
When you deploy Kubernetes, you also set the foundation for your
applications' runtime environment: a Kubernetes cluster (or
multiple clusters).
That IT infrastructure must provide the security guarantees that higher
layers expect.
## _Runtime_ lifecycle phase {#lifecycle-phase-runtime}
The Runtime phase comprises three critical areas: [compute](#protection-runtime-compute),
[access](#protection-runtime-access), and [storage](#protection-runtime-storage).
### Runtime protection: access {#protection-runtime-access}
The Kubernetes API is what makes your cluster work. Protecting this API is key
to providing effective cluster security.
Other pages in the Kubernetes documentation have more detail about how to set up
specific aspects of access control. The [security checklist](/docs/concepts/security/security-checklist/)
has a set of suggested basic checks for your cluster.
Beyond that, securing your cluster means implementing effective
[authentication](/docs/concepts/security/controlling-access/#authentication) and
[authorization](/docs/concepts/security/controlling-access/#authorization) for API access. Use [ServiceAccounts](/docs/concepts/security/service-accounts/) to
provide and manage security identities for workloads and cluster
components.
Kubernetes uses TLS to protect API traffic; make sure to deploy the cluster using
TLS (including for traffic between nodes and the control plane), and protect the
encryption keys. If you use Kubernetes' own API for
[CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/#certificate-signing-requests),
pay special attention to restricting misuse there.
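For example (the ServiceAccount name and namespace are illustrative), you can give a workload its
own identity and issue a short-lived token for it:
```shell
# Create a dedicated ServiceAccount for the workload
kubectl create serviceaccount payments-api --namespace prod

# Request a short-lived token bound to that ServiceAccount
kubectl create token payments-api --namespace prod --duration=10m
```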
### Runtime protection: compute {#protection-runtime-compute}
{{< glossary_tooltip text="Containers" term_id="container" >}} provide two
things: isolation between different applications, and a mechanism to combine
those isolated applications to run on the same host computer. Those two
aspects, isolation and aggregation, mean that runtime security involves
trade-offs and finding an appropriate balance.
Kubernetes relies on a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
to actually set up and run containers. The Kubernetes project does
not recommend a specific container runtime and you should make sure that
the runtime(s) that you choose meet your information security needs.
To protect your compute at runtime, you can:
1. Enforce [Pod security standards](/docs/concepts/security/pod-security-standards/)
for applications, to help ensure they run with only the necessary privileges.
1. Run a specialized operating system on your nodes that is designed specifically
for running containerized workloads. This is typically based on a read-only
operating system (_immutable image_) that provides only the services
essential for running containers.
Container-specific operating systems help to isolate system components and
present a reduced attack surface in case of a container escape.
1. Define [ResourceQuotas](/docs/concepts/policy/resource-quotas/) to
fairly allocate shared resources, and use
mechanisms such as [LimitRanges](/docs/concepts/policy/limit-range/)
to ensure that Pods specify their resource requirements.
1. Partition workloads across different nodes.
Use [node isolation](/docs/concepts/scheduling-eviction/assign-pod-node/#node-isolation-restriction)
mechanisms, either from Kubernetes itself or from the ecosystem, to ensure that
Pods with different trust contexts are run on separate sets of nodes.
1. Use a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
that provides security restrictions.
1. On Linux nodes, use a Linux security module such as [AppArmor](/docs/tutorials/security/apparmor/) (beta)
or [seccomp](/docs/tutorials/security/seccomp/).
### Runtime protection: storage {#protection-runtime-storage}
To protect storage for your cluster and the applications that run there, you can:
1. Integrate your cluster with an external storage plugin that provides encryption at
rest for volumes.
1. Enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) for
API objects.
1. Protect data durability using backups. Verify that you can restore these, whenever you need to.
1. Authenticate connections between cluster nodes and any network storage they rely
upon.
1. Implement data encryption within your own application.
For encryption keys, generating these within specialized hardware provides
the best protection against disclosure risks. A _hardware security module_
can let you perform cryptographic operations without allowing the security
key to be copied elsewhere.
### Networking and security
You should also consider network security measures, such as
[NetworkPolicy](/docs/concepts/services-networking/network-policies/) or a
[service mesh](https://glossary.cncf.io/service-mesh/).
Some network plugins for Kubernetes provide encryption for your
cluster network, using technologies such as a virtual
private network (VPN) overlay.
By design, Kubernetes lets you use your own networking plugin for your
cluster (if you use managed Kubernetes, the person or organization
managing your cluster may have chosen a network plugin for you).
The network plugin you choose and the way you integrate it can have a
strong impact on the security of information in transit.
### Observability and runtime security
Kubernetes lets you extend your cluster with extra tooling. You can set up third
party solutions to help you monitor or troubleshoot your applications and the
clusters they are running. You also get some basic observability features built
in to Kubernetes itself. Your code running in containers can generate logs,
publish metrics or provide other observability data; at deploy time, you need to
make sure your cluster provides an appropriate level of protection there.
If you set up a metrics dashboard or something similar, review the chain of components
that populate data into that dashboard, as well as the dashboard itself. Make sure
that the whole chain is designed with enough resilience and enough integrity protection
that you can rely on it even during an incident where your cluster might be degraded.
Where appropriate, deploy security measures below the level of Kubernetes
itself, such as cryptographically measured boot, or authenticated distribution
of time (which helps ensure the fidelity of logs and audit records).
For a high assurance environment, deploy cryptographic protections to ensure that
logs are both tamper-proof and confidential.
## {{% heading "whatsnext" %}}
### Cloud native security {#further-reading-cloud-native}
* CNCF [white paper](https://github.com/cncf/tag-security/tree/main/security-whitepaper)
on cloud native security.
* CNCF [white paper](https://github.com/cncf/tag-security/blob/f80844baaea22a358f5b20dca52cd6f72a32b066/supply-chain-security/supply-chain-security-paper/CNCF_SSCP_v1.pdf)
on good practices for securing a software supply chain.
* [Fixing the Kubernetes clusterf\*\*k: Understanding security from the kernel up](https://archive.fosdem.org/2020/schedule/event/kubernetes/) (FOSDEM 2020)
* [Kubernetes Security Best Practices](https://www.youtube.com/watch?v=wqsUfvRyYpw) (Kubernetes Forum Seoul 2019)
* [Towards Measured Boot Out of the Box](https://www.youtube.com/watch?v=EzSkU3Oecuw) (Linux Security Summit 2016)
### Kubernetes and information security {#further-reading-k8s}
* [Kubernetes security](/docs/concepts/security/)
* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
* [Network policies](/docs/concepts/services-networking/network-policies/) for Pods
* [Pod security standards](/docs/concepts/security/pod-security-standards/)
* [RuntimeClasses](/docs/concepts/containers/runtime-class)

View File

@ -1,160 +0,0 @@
---
reviewers:
- zparnold
title: Overview of Cloud Native Security
description: >
A model for thinking about Kubernetes security in the context of Cloud Native security.
content_type: concept
weight: 1
---
<!-- overview -->
This overview defines a model for thinking about Kubernetes security in the context of Cloud Native security.
{{< warning >}}
This container security model provides suggestions, not proven information security policies.
{{< /warning >}}
<!-- body -->
## The 4C's of Cloud Native security
You can think about security in layers. The 4C's of Cloud Native security are Cloud,
Clusters, Containers, and Code.
{{< note >}}
This layered approach augments the [defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))
computing approach to security, which is widely regarded as a best practice for securing
software systems.
{{< /note >}}
{{< figure src="/images/docs/4c.png" title="The 4C's of Cloud Native Security" class="diagram-large" >}}
Each layer of the Cloud Native security model builds upon the next outermost layer.
The Code layer benefits from strong base (Cloud, Cluster, Container) security layers.
You cannot safeguard against poor security standards in the base layers by addressing
security at the Code level.
## Cloud
In many ways, the Cloud (or co-located servers, or the corporate datacenter) is the
[trusted computing base](https://en.wikipedia.org/wiki/Trusted_computing_base)
of a Kubernetes cluster. If the Cloud layer is vulnerable (or
configured in a vulnerable way) then there is no guarantee that the components built
on top of this base are secure. Each cloud provider makes security recommendations
for running workloads securely in their environment.
### Cloud provider security
If you are running a Kubernetes cluster on your own hardware or a different cloud provider,
consult your documentation for security best practices.
Here are links to some of the popular cloud providers' security documentation:
{{< table caption="Cloud provider security" >}}
IaaS Provider | Link |
-------------------- | ------------ |
Alibaba Cloud | https://www.alibabacloud.com/trust-center |
Amazon Web Services | https://aws.amazon.com/security |
Google Cloud Platform | https://cloud.google.com/security |
Huawei Cloud | https://www.huaweicloud.com/intl/en-us/securecenter/overallsafety |
IBM Cloud | https://www.ibm.com/cloud/security |
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
Oracle Cloud Infrastructure | https://www.oracle.com/security |
VMware vSphere | https://www.vmware.com/security/hardening-guides |
{{< /table >}}
### Infrastructure security {#infrastructure-security}
Suggestions for securing your infrastructure in a Kubernetes cluster:
{{< table caption="Infrastructure security" >}}
Area of Concern for Kubernetes Infrastructure | Recommendation |
--------------------------------------------- | -------------- |
Network access to API Server (Control plane) | Access to the Kubernetes control plane should not be allowed publicly on the internet, and should be controlled by network access control lists restricted to the set of IP addresses needed to administer the cluster.|
Network access to Nodes (nodes) | Nodes should be configured to _only_ accept connections (via network access control lists) from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely.
Kubernetes access to Cloud Provider API | Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is best to provide the cluster with cloud provider access that follows the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) for the resources it needs to administer. The [Kops documentation](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles) provides information about IAM policies and roles.
Access to etcd | Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your configuration, you should attempt to use etcd over TLS. More information can be found in the [etcd documentation](https://github.com/etcd-io/etcd/tree/master/Documentation).
etcd Encryption | Wherever possible it's a good practice to encrypt all storage at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest.
{{< /table >}}
## Cluster
There are two areas of concern for securing Kubernetes:
* Securing the cluster components that are configurable
* Securing the applications which run in the cluster
### Components of the Cluster {#cluster-components}
If you want to protect your cluster from accidental or malicious access and adopt
good information practices, read and follow the advice about
[securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/).
### Components in the cluster (your application) {#cluster-applications}
Depending on the attack surface of your application, you may want to focus on specific
aspects of security. For example: If you are running a service (Service A) that is critical
in a chain of other resources and a separate workload (Service B) which is
vulnerable to a resource exhaustion attack, then the risk of compromising Service A
is high if you do not limit the resources of Service B. The following table lists
areas of security concerns and recommendations for securing workloads running in Kubernetes:
Area of Concern for Workload Security | Recommendation |
------------------------------ | --------------------- |
RBAC Authorization (Access to the Kubernetes API) | https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Authentication | https://kubernetes.io/docs/concepts/security/controlling-access/
Application secrets management (and encrypting them in etcd at rest) | https://kubernetes.io/docs/concepts/configuration/secret/ <br> https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
Ensuring that pods meet defined Pod Security Standards | https://kubernetes.io/docs/concepts/security/pod-security-standards/#policy-instantiation
Quality of Service (and Cluster resource management) | https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
Network Policies | https://kubernetes.io/docs/concepts/services-networking/network-policies/
TLS for Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## Container
Container security is outside the scope of this guide. Here are general recommendations and
links to explore this topic:
Area of Concern for Containers | Recommendation |
------------------------------ | -------------- |
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation.
## Code
Application code is one of the primary attack surfaces over which you have the most control.
While securing application code is outside of the Kubernetes security topic, here
are recommendations to protect application code:
### Code security
{{< table caption="Code security" >}}
Area of Concern for Code | Recommendation |
-------------------------| -------------- |
Access over TLS only | If your code needs to communicate by TCP, perform a TLS handshake with the client ahead of time. With the exception of a few cases, encrypt everything in transit. Going one step further, it's a good idea to encrypt network traffic between services. This can be done through a process known as mutual TLS authentication or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) which performs a two sided verification of communication between two certificate holding services. |
Limiting port ranges of communication | This recommendation may be a bit self-explanatory, but wherever possible you should only expose the ports on your service that are absolutely essential for communication or metric gathering. |
3rd Party Dependency Security | It is a good practice to regularly scan your application's third party libraries for known security vulnerabilities. Each programming language has a tool for performing this check automatically. |
Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found at: https://owasp.org/www-community/Source_Code_Analysis_Tools |
Dynamic probing attacks | There are a few automated tools that you can run against your service to try some of the well known service attacks. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the [OWASP Zed Attack proxy](https://www.zaproxy.org/) tool. |
{{< /table >}}
## {{% heading "whatsnext" %}}
Learn about related Kubernetes security topics:
* [Pod security standards](/docs/concepts/security/pod-security-standards/)
* [Network policies for Pods](/docs/concepts/services-networking/network-policies/)
* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
* [Runtime class](/docs/concepts/containers/runtime-class)

View File

@ -5,7 +5,7 @@ title: Pod Security Standards
description: >
A detailed look at the different policy levels defined in the Pod Security Standards.
content_type: concept
weight: 10
weight: 15
---
<!-- overview -->

View File

@ -3,7 +3,7 @@ title: Service Accounts
description: >
Learn about ServiceAccount objects in Kubernetes.
content_type: concept
weight: 10
weight: 25
---
<!-- overview -->

View File

@ -11,33 +11,32 @@ description: >-
NetworkPolicies allow you to specify rules for traffic flow within your cluster, and
also between Pods and the outside world.
Your cluster must use a network plugin that supports NetworkPolicy enforcement.
---
<!-- overview -->
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols,
then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
NetworkPolicies are an application-centric construct which allow you to specify how a {{<
glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
NetworkPolicies are an application-centric construct which allow you to specify how a
{{< glossary_tooltip text="pod" term_id="pod">}} is allowed to communicate with various network
"entities" (we use the word "entity" here to avoid overloading the more common terms such as
"endpoints" and "services", which have specific Kubernetes connotations) over the network.
NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to
other connections.
The entities that a Pod can communicate with are identified through a combination of the following
3 identifiers:
three identifiers:
1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed,
1. Namespaces that are allowed
1. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed,
regardless of the IP address of the Pod or the node)
When defining a pod- or namespace- based NetworkPolicy, you use a
When defining a pod- or namespace-based NetworkPolicy, you use a
{{< glossary_tooltip text="selector" term_id="selector">}} to specify what traffic is allowed to
and from the Pod(s) that match the selector.
Meanwhile, when IP based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
<!-- body -->
## Prerequisites
@ -46,12 +45,12 @@ Network policies are implemented by the [network plugin](/docs/concepts/extend-k
To use network policies, you must be using a networking solution which supports NetworkPolicy.
Creating a NetworkPolicy resource without a controller that implements it will have no effect.
## The Two Sorts of Pod Isolation
## The two sorts of pod isolation
There are two sorts of isolation for a pod: isolation for egress, and isolation for ingress.
They concern what connections may be established. "Isolation" here is not absolute, rather it
means "some restrictions apply". The alternative, "non-isolated for $direction", means that no
restrictions apply in the stated direction. The two sorts of isolation (or not) are declared
restrictions apply in the stated direction. The two sorts of isolation (or not) are declared
independently, and are both relevant for a connection from one pod to another.
By default, a pod is non-isolated for egress; all outbound connections are allowed.
@ -93,7 +92,7 @@ solution supports network policy.
{{< /note >}}
__Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`,
`kind`, and `metadata` fields. For general information about working with config files, see
`kind`, and `metadata` fields. For general information about working with config files, see
[Configure a Pod to Use a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/),
and [Object Management](/docs/concepts/overview/working-with-objects/object-management).
@ -227,7 +226,7 @@ that explicitly allows that.
{{% code_sample file="service/networking/network-policy-allow-all-ingress.yaml" %}}
With this policy in place, no additional policy or policies can cause any incoming connection to
those pods to be denied. This policy has no effect on isolation for egress from any pod.
those pods to be denied. This policy has no effect on isolation for egress from any pod.
### Default deny all egress traffic
@ -247,7 +246,7 @@ explicitly allows all outgoing connections from pods in that namespace.
{{% code_sample file="service/networking/network-policy-allow-all-egress.yaml" %}}
With this policy in place, no additional policy or policies can cause any outgoing connection from
those pods to be denied. This policy has no effect on isolation for ingress to any pod.
those pods to be denied. This policy has no effect on isolation for ingress to any pod.
### Default deny all ingress and all egress traffic
@ -261,8 +260,8 @@ ingress or egress traffic.
## Network traffic filtering
NetworkPolicy is defined for [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer)
connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
NetworkPolicy is defined for [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer)
connections (TCP, UDP, and optionally SCTP). For all the other protocols, the behaviour may vary
across network plugins.
{{< note >}}
@ -273,7 +272,7 @@ protocol NetworkPolicies.
When a `deny all` network policy is defined, it is only guaranteed to deny TCP, UDP and SCTP
connections. For other protocols, such as ARP or ICMP, the behaviour is undefined.
The same applies to allow rules: when a specific pod is allowed as ingress source or egress destination,
it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP may be allowed by some
it is undefined what happens with (for example) ICMP packets. Protocols such as ICMP may be allowed by some
network plugins and denied by others.
## Targeting a range of ports
@ -286,8 +285,8 @@ This is achievable with the usage of the `endPort` field, as the following examp
{{% code_sample file="service/networking/networkpolicy-multiport-egress.yaml" %}}
The above rule allows any Pod with label `role=db` on the namespace `default` to communicate
with any IP within the range `10.0.0.0/24` over TCP, provided that the target
The above rule allows any Pod with label `role=db` on the namespace `default` to communicate
with any IP within the range `10.0.0.0/24` over TCP, provided that the target
port is between the range 32000 and 32768.
The following restrictions apply when using this field:
@ -299,7 +298,7 @@ The following restrictions apply when using this field:
{{< note >}}
Your cluster must be using a {{< glossary_tooltip text="CNI" term_id="cni" >}} plugin that
supports the `endPort` field in NetworkPolicy specifications.
If your [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
If your [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
does not support the `endPort` field and you specify a NetworkPolicy with that,
the policy will be applied only for the single `port` field.
{{< /note >}}
@ -310,8 +309,8 @@ In this scenario, your `Egress` NetworkPolicy targets more than one namespace us
label names. For this to work, you need to label the target namespaces. For example:
```shell
kubectl label namespace frontend namespace=frontend
kubectl label namespace backend namespace=backend
kubectl label namespace frontend namespace=frontend
kubectl label namespace backend namespace=backend
```
Add the labels under `namespaceSelector` in your NetworkPolicy document. For example:
@ -360,32 +359,31 @@ NetworkPolicy.
When a new NetworkPolicy object is created, it may take some time for a network plugin
to handle the new object. If a pod that is affected by a NetworkPolicy
is created before the network plugin has completed NetworkPolicy handling,
that pod may be started unprotected, and isolation rules will be applied when
that pod may be started unprotected, and isolation rules will be applied when
the NetworkPolicy handling is completed.
Once the NetworkPolicy is handled by a network plugin,
1. All newly created pods affected by a given NetworkPolicy will be isolated before
they are started.
Implementations of NetworkPolicy must ensure that filtering is effective throughout
the Pod lifecycle, even from the very first instant that any container in that Pod is started.
Because they are applied at Pod level, NetworkPolicies apply equally to init containers,
sidecar containers, and regular containers.
1. All newly created pods affected by a given NetworkPolicy will be isolated before they are started.
Implementations of NetworkPolicy must ensure that filtering is effective throughout
the Pod lifecycle, even from the very first instant that any container in that Pod is started.
Because they are applied at Pod level, NetworkPolicies apply equally to init containers,
sidecar containers, and regular containers.
2. Allow rules will be applied eventually after the isolation rules (or may be applied at the same time).
In the worst case, a newly created pod may have no network connectivity at all when it is first started, if
isolation rules were already applied, but no allow rules were applied yet.
1. Allow rules will be applied eventually after the isolation rules (or may be applied at the same time).
In the worst case, a newly created pod may have no network connectivity at all when it is first started, if
isolation rules were already applied, but no allow rules were applied yet.
Every created NetworkPolicy will be handled by a network plugin eventually, but there is no
way to tell from the Kubernetes API when exactly that happens.
Therefore, pods must be resilient against being started up with different network
connectivity than expected. If you need to make sure the pod can reach certain destinations
connectivity than expected. If you need to make sure the pod can reach certain destinations
before being started, you can use an [init container](/docs/concepts/workloads/pods/init-containers/)
to wait for those destinations to be reachable before kubelet starts the app containers.
Every NetworkPolicy will be applied to all selected pods eventually.
Because the network plugin may implement NetworkPolicy in a distributed manner,
Because the network plugin may implement NetworkPolicy in a distributed manner,
it is possible that pods may see a slightly inconsistent view of network policies
when the pod is first created, or when pods or policies change.
For example, a newly-created pod that is supposed to be able to reach both Pod A
@ -395,16 +393,18 @@ but cannot reach Pod B until a few seconds later.
## NetworkPolicy and `hostNetwork` pods
NetworkPolicy behaviour for `hostNetwork` pods is undefined, but it should be limited to two possibilities:
- The network plugin can distinguish `hostNetwork` pod traffic from all other traffic
(including being able to distinguish traffic from different `hostNetwork` pods on
the same node), and will apply NetworkPolicy to `hostNetwork` pods just like it does
to pod-network pods.
- The network plugin cannot properly distinguish `hostNetwork` pod traffic,
and so it ignores `hostNetwork` pods when matching `podSelector` and `namespaceSelector`.
Traffic to/from `hostNetwork` pods is treated the same as all other traffic to/from the node IP.
- The network plugin cannot properly distinguish `hostNetwork` pod traffic,
and so it ignores `hostNetwork` pods when matching `podSelector` and `namespaceSelector`.
Traffic to/from `hostNetwork` pods is treated the same as all other traffic to/from the node IP.
(This is the most common implementation.)
This applies when
1. a `hostNetwork` pod is selected by `spec.podSelector`.
```yaml
@ -416,7 +416,7 @@ This applies when
...
```
2. a `hostNetwork` pod is selected by a `podSelector` or `namespaceSelector` in an `ingress` or `egress` rule.
1. a `hostNetwork` pod is selected by a `podSelector` or `namespaceSelector` in an `ingress` or `egress` rule.
```yaml
...
@ -437,7 +437,7 @@ from a `hostNetwork` Pod using an `ipBlock` rule.
As of Kubernetes {{< skew currentVersion >}}, the following functionality does not exist in the
NetworkPolicy API, but you might be able to implement workarounds using Operating System
components (such as SELinux, OpenVSwitch, IPTables, and so on) or Layer 7 technologies (Ingress
controllers, Service Mesh implementations) or admission controllers. In case you are new to
controllers, Service Mesh implementations) or admission controllers. In case you are new to
network security in Kubernetes, it's worth noting that the following User Stories cannot (yet) be
implemented using the NetworkPolicy API.

View File

@ -621,7 +621,7 @@ can define your own (provider specific) annotations on the Service that specify
#### Load balancers with mixed protocol types
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
{{< feature-state feature_gate_name="MixedProtocolLBService" >}}
By default, for LoadBalancer type of Services, when there is more than one port defined, all
ports must have the same protocol, and the protocol must be one which is supported
@ -670,9 +670,9 @@ Unprefixed names are reserved for end-users.
#### Specifying IPMode of load balancer status {#load-balancer-ip-mode}
{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
{{< feature-state feature_gate_name="LoadBalancerIPMode" >}}
Starting as Alpha in Kubernetes 1.29,
As a Beta feature in Kubernetes 1.30,
a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
named `LoadBalancerIPMode` allows you to set the `.status.loadBalancer.ingress.ipMode`
for a Service with `type` set to `LoadBalancer`.

View File

@ -119,9 +119,10 @@ When a default `StorageClass` exists in a cluster and a user creates a
`DefaultStorageClass` admission controller automatically adds the
`storageClassName` field pointing to the default storage class.
Note that there can be at most one *default* storage class on a cluster, or
a `PersistentVolumeClaim` without `storageClassName` explicitly specified cannot
be created.
Note that if you set the `storageclass.kubernetes.io/is-default-class`
annotation to true on more than one StorageClass in your cluster, and you then
create a `PersistentVolumeClaim` with no `storageClassName` set, Kubernetes
uses the most recently created default StorageClass.
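For example (the StorageClass name is illustrative), you can mark a StorageClass as the default
by setting that annotation:
```shell
# Mark the "standard" StorageClass as the cluster default
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```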
## Topology Awareness

View File

@ -18,7 +18,7 @@ particular PersistentVolumeClaim and PersistentVolume.
<!-- body -->
Some application need additional storage but don't care whether that
Some applications need additional storage but don't care whether that
data is stored persistently across restarts. For example, caching
services are often limited by memory size and can move infrequently
used data into storage that is slower than memory with little impact

View File

@ -506,30 +506,33 @@ PersistentVolume types are implemented as plugins. Kubernetes currently supports
mounted on nodes.
* [`nfs`](/docs/concepts/storage/volumes/#nfs) - Network File System (NFS) storage
The following types of PersistentVolume are deprecated.
This means that support is still available but will be removed in a future Kubernetes release.
The following types of PersistentVolume are deprecated but still available.
If you are using any of these volume types, except for `flexVolume`, `cephfs`, and `rbd`,
install the corresponding CSI driver.
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
(**migration on by default** starting v1.23)
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
(**migration on by default** starting v1.23)
* [`azureFile`](/docs/concepts/storage/volumes/#azurefile) - Azure File
(**deprecated** in v1.21)
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
(**deprecated** in v1.23)
* [`portworxVolume`](/docs/concepts/storage/volumes/#portworxvolume) - Portworx volume
(**deprecated** in v1.25)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume
(**deprecated** in v1.19)
(**migration on by default** starting v1.24)
* [`cephfs`](/docs/concepts/storage/volumes/#cephfs) - CephFS volume
(**deprecated** in v1.28)
(**deprecated** starting v1.28, no migration plan, support will be removed in a future release)
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
(**migration on by default** starting v1.21)
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
(**deprecated** starting v1.23, no migration plan and no plan to remove support)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcePersistentDisk) - GCE Persistent Disk
(**migration on by default** starting v1.23)
* [`portworxVolume`](/docs/concepts/storage/volumes/#portworxvolume) - Portworx volume
(**deprecated** starting v1.25)
* [`rbd`](/docs/concepts/storage/volumes/#rbd) - Rados Block Device (RBD) volume
(**deprecated** in v1.28)
(**deprecated** starting v1.28, no migration plan, support will be removed in a future release)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume
(**migration on by default** starting v1.25)
Older versions of Kubernetes also supported the following in-tree PersistentVolume types:
* [`awsElasticBlockStore`](/docs/concepts/storage/volumes/#awselasticblockstore) - AWS Elastic Block Store (EBS)
(**not available** in v1.27)
* [`azureDisk`](/docs/concepts/storage/volumes/#azuredisk) - Azure Disk
(**not available** in v1.27)
* [`cinder`](/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
(**not available** in v1.26)
* `photonPersistentDisk` - Photon controller persistent disk.
(**not available** starting v1.15)
* `scaleIO` - ScaleIO volume.

View File

@ -62,12 +62,14 @@ a different volume.
Kubernetes supports several types of volumes.
### awsElasticBlockStore (removed) {#awselasticblockstore}
### awsElasticBlockStore (deprecated) {#awselasticblockstore}
<!-- maintenance note: OK to remove all mention of awsElasticBlockStore once the v1.27 release of
Kubernetes has gone out of support -->
Kubernetes {{< skew currentVersion >}} does not include a `awsElasticBlockStore` volume type.
In Kubernetes {{< skew currentVersion >}}, all operations for the in-tree `awsElasticBlockStore` type
are redirected to the `ebs.csi.aws.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} driver.
The AWSElasticBlockStore in-tree storage driver was deprecated in the Kubernetes v1.19 release
and then removed entirely in the v1.27 release.
@ -75,12 +77,13 @@ and then removed entirely in the v1.27 release.
The Kubernetes project suggests that you use the [AWS EBS](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) third party
storage driver instead.
### azureDisk (removed) {#azuredisk}
### azureDisk (deprecated) {#azuredisk}
<!-- maintenance note: OK to remove all mention of azureDisk once the v1.27 release of
Kubernetes has gone out of support -->
Kubernetes {{< skew currentVersion >}} does not include a `azureDisk` volume type.
In Kubernetes {{< skew currentVersion >}}, all operations for the in-tree `azureDisk` type
are redirected to the `disk.csi.azure.com` {{< glossary_tooltip text="CSI" term_id="csi" >}} driver.
The AzureDisk in-tree storage driver was deprecated in the Kubernetes v1.19 release
and then removed entirely in the v1.27 release.
@ -118,7 +121,7 @@ Azure File CSI driver does not support using same volume with different fsgroups
To disable the `azureFile` storage plugin from being loaded by the controller manager
and the kubelet, set the `InTreePluginAzureFileUnregister` flag to `true`.
### cephfs
### cephfs (deprecated) {#cephfs}
{{< feature-state for_k8s_version="v1.28" state="deprecated" >}}
{{< note >}}
@ -139,12 +142,13 @@ You must have your own Ceph server running with the share exported before you ca
See the [CephFS example](https://github.com/kubernetes/examples/tree/master/volumes/cephfs/) for more details.
### cinder (removed) {#cinder}
### cinder (deprecated) {#cinder}
<!-- maintenance note: OK to remove all mention of cinder once the v1.26 release of
Kubernetes has gone out of support -->
Kubernetes {{< skew currentVersion >}} does not include a `cinder` volume type.
In Kubernetes {{< skew currentVersion >}}, all operations for the in-tree `cinder` type
are redirected to the `cinder.csi.openstack.org` {{< glossary_tooltip text="CSI" term_id="csi" >}} driver.
The OpenStack Cinder in-tree storage driver was deprecated in the Kubernetes v1.11 release
and then removed entirely in the v1.26 release.
@ -194,7 +198,7 @@ keyed with `log_level`.
{{< note >}}
* You must create a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)
* You must [create a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap)
before you can use it.
* A ConfigMap is always mounted as `readOnly`.
@ -295,9 +299,10 @@ beforehand so that Kubernetes hosts can access them.
See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel)
for more details.
### gcePersistentDisk (removed) {#gcepersistentdisk}
### gcePersistentDisk (deprecated) {#gcepersistentdisk}
Kubernetes {{< skew currentVersion >}} does not include a `gcePersistentDisk` volume type.
In Kubernetes {{< skew currentVersion >}}, all operations for the in-tree `gcePersistentDisk` type
are redirected to the `pd.csi.storage.gke.io` {{< glossary_tooltip text="CSI" term_id="csi" >}} driver.
The `gcePersistentDisk` in-tree storage driver was deprecated in the Kubernetes v1.17 release
and then removed entirely in the v1.28 release.
@ -859,7 +864,7 @@ To turn off the `vsphereVolume` plugin from being loaded by the controller manag
## Using subPath {#using-subpath}
Sometimes, it is useful to share one volume for multiple uses in a single pod.
The `volumeMounts.subPath` property specifies a sub-path inside the referenced volume
The `volumeMounts[*].subPath` property specifies a sub-path inside the referenced volume
instead of its root.
The following example shows how to configure a Pod with a LAMP stack (Linux Apache MySQL PHP)
@ -1162,7 +1167,7 @@ Mount propagation allows for sharing volumes mounted by a container to
other containers in the same pod, or even to other pods on the same node.
Mount propagation of a volume is controlled by the `mountPropagation` field
in `Container.volumeMounts`. Its values are:
in `containers[*].volumeMounts`. Its values are:
* `None` - This volume mount will not receive any subsequent mounts
that are mounted to this volume or any of its subdirectories by the host.

View File

@ -352,6 +352,40 @@ Windows Server SAC release
The Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) also applies.
## Hardware recommendations and considerations {#windows-hardware-recommendations}
{{% thirdparty-content %}}
{{< note >}}
The hardware specifications outlined here should be regarded as sensible default values.
They are not intended to represent minimum requirements or specific recommendations for production environments.
Depending on the requirements for your workload, these values may need to be adjusted.
{{< /note >}}
- 64-bit processor with 4 CPU cores or more, capable of supporting virtualization
- 8GB or more of RAM
- 50GB or more of free disk space
Refer to
[Hardware requirements for Windows Server](https://learn.microsoft.com/en-us/windows-server/get-started/hardware-requirements)
in the Microsoft documentation for the most up-to-date information on minimum hardware requirements.
For guidance on deciding on resources for production worker nodes, refer to
[Production worker nodes](https://kubernetes.io/docs/setup/production-environment/#production-worker-nodes)
in the Kubernetes documentation.
If a graphical user interface is not required, consider using a Windows Server OS
installation that excludes the
[Windows Desktop Experience](https://learn.microsoft.com/en-us/windows-server/get-started/install-options-server-core-desktop-experience)
installation option, as this configuration typically frees up more system
resources.
When assessing disk space for Windows worker nodes, take note that Windows container images are typically larger than
Linux container images, with container image sizes ranging
from [300MB to over 10GB](https://techcommunity.microsoft.com/t5/containers/nano-server-x-server-core-x-server-which-base-image-is-the-right/ba-p/2835785)
for a single image. Also be aware that the `C:` drive in Windows containers reports a virtual free size of
20GB by default; this is not the actual space consumed, but rather the disk size to which a single container can grow
when using local storage on the host.
See [Containers on Windows - Container Storage Documentation](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-storage#storage-limits)
for more detail.
## Getting help and troubleshooting {#troubleshooting}
Your main source of help for troubleshooting your Kubernetes cluster should start

View File

@ -0,0 +1,146 @@
---
title: Autoscaling Workloads
description: >-
With autoscaling, you can automatically update your workloads, either by adjusting the number of replicas or the resources assigned to them. This allows your cluster to react to changes in resource demand more elastically and efficiently.
content_type: concept
weight: 40
---
<!-- overview -->
In Kubernetes, you can _scale_ a workload depending on the current demand of resources.
This allows your cluster to react to changes in resource demand more elastically and efficiently.
When you scale a workload, you can either increase or decrease the number of replicas managed by
the workload, or adjust the resources available to the replicas in-place.
The first approach is referred to as _horizontal scaling_, while the second is referred to as
_vertical scaling_.
There are manual and automatic ways to scale your workloads, depending on your use case.
<!-- body -->
## Scaling workloads manually
Kubernetes supports _manual scaling_ of workloads. Horizontal scaling can be done
using the `kubectl` CLI.
For vertical scaling, you need to _patch_ the resource definition of your workload.
See below for examples of both strategies.
- **Horizontal scaling**: [Running multiple instances of your app](/docs/tutorials/kubernetes-basics/scale/scale-intro/)
- **Vertical scaling**: [Resizing CPU and memory resources assigned to containers](/docs/tasks/configure-pod-container/resize-container-resources)
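As an illustration, here is a minimal sketch of both manual approaches, assuming an existing
Deployment with the hypothetical name `my-app` and a container also named `my-app`:

```bash
# Horizontal scaling: change the number of replicas managed by the Deployment
kubectl scale deployment/my-app --replicas=5

# Vertical scaling: patch the resource requests of the container in the Pod template
# (this triggers a rollout, replacing the existing Pods)
kubectl patch deployment my-app --patch \
  '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","resources":{"requests":{"cpu":"500m","memory":"256Mi"}}}]}}}}'
```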
## Scaling workloads automatically
Kubernetes also supports _automatic scaling_ of workloads, which is the focus of this page.
The concept of _Autoscaling_ in Kubernetes refers to the ability to automatically update an
object that manages a set of Pods (for example a
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}.
### Scaling workloads horizontally
In Kubernetes, you can automatically scale a workload horizontally using a _HorizontalPodAutoscaler_ (HPA).
It is implemented as a Kubernetes API resource and a {{< glossary_tooltip text="controller" term_id="controller" >}}
and periodically adjusts the number of {{< glossary_tooltip text="replicas" term_id="replica" >}}
in a workload to match observed resource utilization such as CPU or memory usage.
There is a [walkthrough tutorial](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough) of configuring a HorizontalPodAutoscaler for a Deployment.
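As a minimal sketch (assuming the Metrics Server is installed and using a hypothetical
Deployment named `my-app`), you can also create an HPA with `kubectl`:

```bash
# Keep average CPU utilization around 50%, scaling between 2 and 10 replicas
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa my-app
```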
### Scaling workloads vertically
{{< feature-state for_k8s_version="v1.25" state="stable" >}}
You can automatically scale a workload vertically using a _VerticalPodAutoscaler_ (VPA).
Unlike the HPA, the VPA doesn't come with Kubernetes by default, but is a separate project
that can be found [on GitHub](https://github.com/kubernetes/autoscaler/tree/9f87b78df0f1d6e142234bb32e8acbd71295585a/vertical-pod-autoscaler).
Once installed, it allows you to create custom resources (defined through
{{< glossary_tooltip text="CustomResourceDefinitions" term_id="customresourcedefinition" >}}, CRDs)
for your workloads, which define _how_ and _when_ to scale the resources of the managed replicas.
{{< note >}}
You will need to have the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
installed in your cluster for the HPA to work.
{{< /note >}}
At the moment, the VPA can operate in four different modes:
{{< table caption="Different modes of the VPA" >}}
Mode | Description
:----|:-----------
`Auto` | Currently `Recreate`, might change to in-place updates in the future
`Recreate` | The VPA assigns resource requests on pod creation as well as updates them on existing pods by evicting them when the requested resources differ significantly from the new recommendation
`Initial` | The VPA only assigns resource requests on pod creation and never changes them later.
`Off` | The VPA does not automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
{{< /table >}}
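As a sketch only (this assumes the VPA components are installed in the cluster, and uses a
hypothetical Deployment named `my-app`), a VerticalPodAutoscaler object selecting one of the
modes above might look like this:

```bash
# Create a VerticalPodAutoscaler for a Deployment; updateMode selects one of the
# modes from the table above ("Auto", "Recreate", "Initial" or "Off")
kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
EOF
```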
#### Requirements for in-place resizing
{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
Resizing a workload in-place **without** restarting the {{< glossary_tooltip text="Pods" term_id="pod" >}}
or their {{< glossary_tooltip text="Containers" term_id="container" >}} requires Kubernetes version 1.27 or later.<br />
Additionally, the `InPlacePodVerticalScaling` feature gate needs to be enabled.
{{< feature-gate-description name="InPlacePodVerticalScaling" >}}
### Autoscaling based on cluster size
For workloads that need to be scaled based on the size of the cluster (for example
`cluster-dns` or other system components), you can use the
[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).<br />
Just like the VPA, it is not part of the Kubernetes core, but hosted as its
own project on GitHub.
The Cluster Proportional Autoscaler watches the number of schedulable {{< glossary_tooltip text="nodes" term_id="node" >}}
and cores and scales the number of replicas of the target workload accordingly.
If the number of replicas should stay the same, you can scale your workloads vertically according to the cluster size using
the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
The project is **currently in beta** and can be found on GitHub.
While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
in the cluster.
### Event-driven autoscaling
It is also possible to scale workloads based on events, for example using the
[_Kubernetes Event Driven Autoscaler_ (**KEDA**)](https://keda.sh/).
KEDA is a CNCF graduated project that enables you to scale your workloads based on the number
of events to be processed, for example the number of messages in a queue. There is
a wide range of adapters for different event sources to choose from.
### Autoscaling based on schedules
Another strategy for scaling your workloads is to **schedule** the scaling operations, for example in order to
reduce resource consumption during off-peak hours.
Similar to event-driven autoscaling, such behavior can be achieved using KEDA in conjunction with
its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler allows you to define schedules
(and time zones) for scaling your workloads in or out.
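As a rough sketch (field names here follow the KEDA `Cron` scaler and should be checked against
the KEDA documentation for your installed version; `my-app` is a hypothetical Deployment), a
schedule-based ScaledObject could look like this:

```bash
# Scale my-app out to 10 replicas between 08:00 and 18:00 (Europe/Berlin time);
# outside that window KEDA scales back to the configured minimum
kubectl apply -f - <<EOF
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scheduled
spec:
  scaleTargetRef:
    name: my-app
  triggers:
  - type: cron
    metadata:
      timezone: Europe/Berlin
      start: 0 8 * * *
      end: 0 18 * * *
      desiredReplicas: "10"
EOF
```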
## Scaling cluster infrastructure
If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself.
Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text="nodes" term_id="node" >}}.
This can be done using one of two available autoscalers:
- [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
- [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file)
Both scalers work by watching for pods marked as _unschedulable_ and for _underutilized_ nodes,
and then adding or removing nodes as needed.
## {{% heading "whatsnext" %}}
- Learn more about scaling horizontally
- [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/)
- [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
- [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/)
- [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)

View File

@ -1006,6 +1006,50 @@ status:
terminating: 3 # three Pods are terminating and have not yet reached the Failed phase
```
### Delegation of managing a Job object to external controller
{{< feature-state feature_gate_name="JobManagedBy" >}}
{{< note >}}
You can only set the `managedBy` field on Jobs if you enable the `JobManagedBy`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
(disabled by default).
{{< /note >}}
This feature allows you to disable the built-in Job controller for a specific
Job and delegate reconciliation of that Job to an external controller.
You indicate the controller that reconciles the Job by setting a custom value
for the `spec.managedBy` field - any value
other than `kubernetes.io/job-controller`. The value of the field is immutable.
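For example, a Job delegated to an external controller might look like the following sketch
(the `spec.managedBy` value and image are hypothetical, and both the `JobManagedBy` feature gate
and the external controller must be in place for this Job to be reconciled):

```bash
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: example-managed-job
spec:
  # Any value other than kubernetes.io/job-controller disables the built-in controller
  # for this Job; the controller named here is hypothetical
  managedBy: example.com/custom-job-controller
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.36
        command: ["sh", "-c", "echo hello"]
EOF
```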
{{< note >}}
When using this feature, make sure the controller indicated by the field is
installed, otherwise the Job may not be reconciled at all.
{{< /note >}}
{{< note >}}
When developing an external Job controller be aware that your controller needs
to operate in a fashion conformant with the definitions of the API spec and
status fields of the Job object.
Please review these in detail in the [Job API](/docs/reference/kubernetes-api/workload-resources/job-v1/).
We also recommend that you run the e2e conformance tests for the Job object to
verify your implementation.
Finally, when developing an external Job controller make sure it does not use the
`batch.kubernetes.io/job-tracking` finalizer, reserved for the built-in controller.
{{< /note >}}
{{< warning >}}
If you are considering disabling the `JobManagedBy` feature gate, or
downgrading the cluster to a version without the feature gate enabled, check if
there are jobs with a custom value of the `spec.managedBy` field. If there
are such jobs, there is a risk that they might be reconciled by two controllers
after the operation: the built-in Job controller and the external controller
indicated by the field value.
{{< /warning >}}
## Alternatives
### Bare Pods

View File

@ -19,10 +19,10 @@ containers which are relatively tightly coupled.
In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
As well as application containers, a Pod can contain
[init containers](/docs/concepts/workloads/pods/init-containers/) that run
{{< glossary_tooltip text="init containers" term_id="init-container" >}} that run
during Pod startup. You can also inject
[ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/)
for debugging if your cluster offers this.
{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
for debugging a running Pod.
<!-- body -->
@ -39,6 +39,26 @@ further sub-isolations applied.
A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.
Pods in a Kubernetes cluster are used in two main ways:
* **Pods that run a single container**. The "one-container-per-Pod" model is the
most common Kubernetes use case; in this case, you can think of a Pod as a
wrapper around a single container; Kubernetes manages Pods rather than managing
the containers directly.
* **Pods that run multiple containers that need to work together**. A Pod can
encapsulate an application composed of
[multiple co-located containers](#how-pods-manage-multiple-containers) that are
tightly coupled and need to share resources. These co-located containers
form a single cohesive unit.
Grouping multiple co-located and co-managed containers in a single Pod is a
relatively advanced use case. You should use this pattern only in specific
instances in which your containers are tightly coupled.
You don't need to run multiple containers to provide replication (for resilience
or capacity); if you need multiple replicas, see
[Workload management](/docs/concepts/workloads/controllers/).
## Using Pods
The following is an example of a Pod which consists of a container running the image `nginx:1.14.2`.
@ -61,26 +81,6 @@ term_id="deployment" >}} or {{< glossary_tooltip text="Job" term_id="job" >}}.
If your Pods need to track state, consider the
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} resource.
Pods in a Kubernetes cluster are used in two main ways:
* **Pods that run a single container**. The "one-container-per-Pod" model is the
most common Kubernetes use case; in this case, you can think of a Pod as a
wrapper around a single container; Kubernetes manages Pods rather than managing
the containers directly.
* **Pods that run multiple containers that need to work together**. A Pod can
encapsulate an application composed of multiple co-located containers that are
tightly coupled and need to share resources. These co-located containers
form a single cohesive unit of service—for example, one container serving data
stored in a shared volume to the public, while a separate _sidecar_ container
refreshes or updates those files.
The Pod wraps these containers, storage resources, and an ephemeral network
identity together as a single unit.
{{< note >}}
Grouping multiple co-located and co-managed containers in a single Pod is a
relatively advanced use case. You should use this pattern only in specific
instances in which your containers are tightly coupled.
{{< /note >}}
Each Pod is meant to run a single instance of a given application. If you want to
scale your application horizontally (to provide more overall resources by running
@ -93,36 +93,10 @@ See [Pods and controllers](#pods-and-controllers) for more information on how
Kubernetes uses workload resources, and their controllers, to implement application
scaling and auto-healing.
### How Pods manage multiple containers
Pods are designed to support multiple cooperating processes (as containers) that form
a cohesive unit of service. The containers in a Pod are automatically co-located and
co-scheduled on the same physical or virtual machine in the cluster. The containers
can share resources and dependencies, communicate with one another, and coordinate
when and how they are terminated.
For example, you might have a container that
acts as a web server for files in a shared volume, and a separate "sidecar" container
that updates those files from a remote source, as in the following diagram:
{{< figure src="/images/docs/pod.svg" alt="Pod creation diagram" class="diagram-medium" >}}
Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}}
as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
By default, init containers run and complete before the app containers are started.
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
Enabled by default, the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
allows you to specify `restartPolicy: Always` for init containers.
Setting the `Always` restart policy ensures that the init containers where you set it are
kept running during the entire lifetime of the Pod.
See [Sidecar containers and restartPolicy](/docs/concepts/workloads/pods/init-containers/#sidecar-containers-and-restartpolicy)
for more details.
Pods natively provide two kinds of shared resources for their constituent containers:
[networking](#pod-networking) and [storage](#pod-storage).
## Working with Pods
You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This
@ -343,6 +317,57 @@ The `spec` of a static Pod cannot refer to other API objects
{{< glossary_tooltip text="Secret" term_id="secret" >}}, etc).
{{< /note >}}
## Pods with multiple containers {#how-pods-manage-multiple-containers}
Pods are designed to support multiple cooperating processes (as containers) that form
a cohesive unit of service. The containers in a Pod are automatically co-located and
co-scheduled on the same physical or virtual machine in the cluster. The containers
can share resources and dependencies, communicate with one another, and coordinate
when and how they are terminated.
<!--intentionally repeats some text from earlier in the page, with more detail -->
Pods in a Kubernetes cluster are used in two main ways:
* **Pods that run a single container**. The "one-container-per-Pod" model is the
most common Kubernetes use case; in this case, you can think of a Pod as a
wrapper around a single container; Kubernetes manages Pods rather than managing
the containers directly.
* **Pods that run multiple containers that need to work together**. A Pod can
encapsulate an application composed of
multiple co-located containers that are
tightly coupled and need to share resources. These co-located containers
form a single cohesive unit of service—for example, one container serving data
stored in a shared volume to the public, while a separate
{{< glossary_tooltip text="sidecar container" term_id="sidecar-container" >}}
refreshes or updates those files.
The Pod wraps these containers, storage resources, and an ephemeral network
identity together as a single unit.
For example, you might have a container that
acts as a web server for files in a shared volume, and a separate
[sidecar container](/docs/concepts/workloads/pods/sidecar-containers/)
that updates those files from a remote source, as in the following diagram:
{{< figure src="/images/docs/pod.svg" alt="Pod creation diagram" class="diagram-medium" >}}
Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}}
as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
By default, init containers run and complete before the app containers are started.
You can also have [sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/)
that provide auxiliary services to the main application Pod (for example: a service mesh).
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
Enabled by default, the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
allows you to specify `restartPolicy: Always` for init containers.
Setting the `Always` restart policy ensures that the containers where you set it are
treated as _sidecars_ that are kept running during the entire lifetime of the Pod.
Containers that you explicitly define as sidecar containers
start up before the main application containers and remain running until the Pod is
shut down.
## Container probes
A _probe_ is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:

View File

@ -5,7 +5,7 @@ reviewers:
- davidopp
title: Disruptions
content_type: concept
weight: 60
weight: 70
---
<!-- overview -->

View File

@ -77,7 +77,6 @@ The following information is available through environment variables
`status.hostIPs`
: the IP addresses are a dual-stack version of `status.hostIP`; the first entry is always the same as `status.hostIP`.
The field is available if you enable the `PodHostIPs` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
`status.podIP`
: the pod's primary IP address (usually, its IPv4 address)

View File

@ -4,7 +4,7 @@ reviewers:
- yujuhong
title: Ephemeral Containers
content_type: concept
weight: 80
weight: 60
---
<!-- overview -->

View File

@ -161,7 +161,7 @@ the Pod level `restartPolicy` is either `OnFailure` or `Always`.
When the kubelet is handling container restarts according to the configured restart
policy, that only applies to restarts that make replacement containers inside the
same Pod and running on the same node. After containers in a Pod exit, the kubelet
restarts them with an exponential back-off delay (10s, 20s,40s, …), that is capped at
restarts them with an exponential back-off delay (10s, 20s, 40s, …), that is capped at
five minutes. Once a container has executed for 10 minutes without any problems, the
kubelet resets the restart backoff timer for that container.
[Sidecar containers and Pod lifecycle](/docs/concepts/workloads/pods/sidecar-containers/#sidecar-containers-and-pod-lifecycle)

View File

@ -87,7 +87,7 @@ Containers in a Pod can request other resources (not CPU or memory) and still be
## Memory QoS with cgroup v2
{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
{{< feature-state feature_gate_name="MemoryQoS" >}}
Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes.
Memory requests and limits of containers in a pod are used to set specific interfaces `memory.min`

View File

@ -49,6 +49,21 @@ Renders to:
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
### Feature state retrieval from description file
To dynamically determine the state of the feature, make use of the `feature_gate_name`
shortcode parameter. The feature state details will be extracted from the corresponding feature gate
description file located in `content/en/docs/reference/command-line-tools-reference/feature-gates/`.
For example:
```
{{</* feature-state feature_gate_name="NodeSwap" */>}}
```
Renders to:
{{< feature-state feature_gate_name="NodeSwap" >}}
## Feature gate description
In a Markdown page (`.md` file) on this site, you can add a shortcode to

View File

@ -116,17 +116,20 @@ The copy is called a "fork". | The copy is called a "fork."
## Inline code formatting
### Use code style for inline code, commands, and API objects {#code-style-inline-code}
### Use code style for inline code and commands {#code-style-inline-code}
For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`` ` ``).
document, use the backtick (`` ` ``). However, API kinds such as StatefulSet
or ConfigMap are written verbatim (no backticks); this allows using possessive
apostrophes.
{{< table caption = "Do and Don't - Use code style for inline code, commands, and API objects" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a `Pod`. | The "kubectl run" command creates a pod.
The kubelet on each node acquires a `Lease`… | The kubelet on each node acquires a lease…
A `PersistentVolume` represents durable storage… | A Persistent Volume represents durable storage…
The `kubectl run` command creates a Pod. | The "kubectl run" command creates a Pod.
The kubelet on each node acquires a Lease… | The kubelet on each node acquires a `Lease`
A PersistentVolume represents durable storage… | A `PersistentVolume` represents durable storage…
The CustomResourceDefinition's `.spec.group` field… | The `CustomResourceDefinition.spec.group` field…
For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply".
Enclose code samples with triple backticks. (\`\`\`)| Enclose code samples with any other syntax.
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.
@ -191,37 +194,60 @@ Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.1
Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
{{< /table >}}
However, consider quoting values where there is a risk that readers might confuse the value
with an API kind.
## Referring to Kubernetes API resources
This section talks about how we reference API resources in the documentation.
### Clarification about "resource"
Kubernetes uses the word "resource" to refer to API resources, such as `pod`,
`deployment`, and so on. We also use "resource" to talk about CPU and memory
requests and limits. Always refer to API resources as "API resources" to avoid
confusion with CPU and memory resources.
Kubernetes uses the word _resource_ to refer to API resources. For example,
the URL path `/apis/apps/v1/namespaces/default/deployments/my-app` represents a
Deployment named "my-app" in the "default"
{{< glossary_tooltip text="namespace" term_id="namespace" >}}. In HTTP jargon,
{{< glossary_tooltip text="namespace" term_id="namespace" >}} is a resource -
the same way that all web URLs identify a resource.
Kubernetes documentation also uses "resource" to talk about CPU and memory
requests and limits. It's very often a good idea to refer to API resources
as "API resources"; that helps to avoid confusion with CPU and memory resources,
or with other kinds of resource.
If you are using the lowercase plural form of a resource name, such as
`deployments` or `configmaps`, provide extra written context to help readers
understand what you mean. If you are using the term in a context where the
UpperCamelCase name could work too, and there is a risk of ambiguity,
consider using the API kind in UpperCamelCase.
### When to use Kubernetes API terminologies
The different Kubernetes API terminologies are:
- Resource type: the name used in the API URL (such as `pods`, `namespaces`)
- Resource: a single instance of a resource type (such as `pod`, `secret`)
- Object: a resource that serves as a "record of intent". An object is a desired
- _API kinds_: the name used in the API URL (such as `pods`, `namespaces`).
API kinds are sometimes also called _resource types_.
- _API resource_: a single instance of an API kind (such as `pod`, `secret`).
- _Object_: a resource that serves as a "record of intent". An object is a desired
state for a specific part of your cluster, which the Kubernetes control plane tries to maintain.
All objects in the Kubernetes API are also resources.
Always use "resource" or "object" when referring to an API resource in docs.
For example, use "a `Secret` object" over just "a `Secret`".
For clarity, you can add "resource" or "object" when referring to an API resource in Kubernetes
documentation.
An example: write "a Secret object" instead of "a Secret".
If it is clear just from the capitalization, you don't need to add the extra word.
Consider rephrasing when that change helps avoid misunderstandings. A common situation is
when you want to start a sentence with an API kind, such as “Secret”; because English
and other languages capitalize at the start of sentences, readers cannot tell whether you
mean the API kind or the general concept. Rewording can help.
### API resource names
Always format API resource names using [UpperCamelCase](https://en.wikipedia.org/wiki/Camel_case),
also known as PascalCase, and code formatting.
also known as PascalCase. Do not write API kinds with code formatting.
For inline code in an HTML document, use the `<code>` tag. In a Markdown document, use the backtick (`` ` ``).
Don't split an API object name into separate words. For example, use `PodTemplateList`, not Pod Template List.
Don't split an API object name into separate words. For example, use PodTemplateList, not Pod Template List.
For more information about PascalCase and code formatting, please review the related guidance on
[Use upper camel case for API objects](/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects)
@ -237,7 +263,7 @@ guidance on [Kubernetes API terminology](/docs/reference/using-api/api-concepts/
{{< table caption = "Do and Don't - Don't include the command prompt" >}}
Do | Don't
:--| :-----
kubectl get pods | $ kubectl get pods
`kubectl get pods` | `$ kubectl get pods`
{{< /table >}}
### Separate commands from output

View File

@ -211,33 +211,31 @@ so an earlier module has higher priority to allow or deny a request.
## Configuring the API Server using an Authorization Config File
{{< feature-state state="alpha" for_k8s_version="v1.29" >}}
{{< feature-state state="beta" for_k8s_version="v1.30" >}}
The Kubernetes API server's authorizer chain can be configured using a
configuration file.
You specify the path to that authorization configuration using the
`--authorization-config` command line argument. This feature enables
creation of authorization chains with multiple webhooks with well-defined
parameters that validate requests in a certain order and enables fine grained
control - such as explicit Deny on failures. An example configuration with
all possible values is provided below.
This feature enables the creation of authorization chains with multiple webhooks with well-defined parameters that validate requests in a particular order and allows fine-grained control such as explicit Deny on failures. The configuration file approach even allows you to specify [CEL](/docs/reference/using-api/cel/) rules to pre-filter requests before they are dispatched to webhooks, helping you to prevent unnecessary invocations. The API server also automatically reloads the authorizer chain when the configuration file is modified. An example configuration with all possible values is provided below.
In order to customise the authorizer chain, you need to enable the
`StructuredAuthorizationConfiguration` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
You must specify the path to the authorization configuration using the `--authorization-config` command line argument. If you want to keep using command line flags instead of a configuration file, those will continue to work as-is. To gain access to new authorization webhook capabilities like multiple webhooks, failure policy, and pre-filter rules, switch to putting options in an `--authorization-config` file.
Note: When the feature is enabled, setting both `--authorization-config` and
Starting with Kubernetes 1.30, the configuration file format is
beta-level, and you only need to specify the `--authorization-config` flag, since the `StructuredAuthorizationConfiguration` feature gate is enabled by default.
{{< caution >}}
If you want to keep using command line flags to configure authorization instead of a configuration file, those will continue to work as-is.
When the feature is enabled, setting both `--authorization-config` and
configuring an authorization webhook using the `--authorization-mode` and
`--authorization-webhook-*` command line flags is not allowed. If done, there
will be an error and API Server would exit right away.
{{< caution >}}
While the feature is in Alpha/Beta, there is no change if you want to keep on
using command line flags. When the feature goes Beta, the feature flag would
be turned on by default. The feature flag would be removed when feature goes GA.
The authorization configuration file is reloaded when a change to the file is observed, or on a 1 minute polling interval. All non-webhook authorizer types are required to remain unchanged in the file on reload. A reload must not add or remove Node or RBAC
authorizers; they can be reordered, but cannot be added or removed.
When configuring the authorizer chain using a config file, make sure all the
apiserver nodes have the file. Also, take a note of the apiserver configuration
apiserver nodes have the file. Take a note of the apiserver configuration
when upgrading/downgrading the clusters. For example, if upgrading to v1.29+
clusters and using the config file, you would need to make sure the config file
exists before upgrading the cluster. When downgrading to v1.28, you would need
@ -248,9 +246,8 @@ to add the flags back to their bootstrap mechanism.
#
# DO NOT USE THE CONFIG AS IS. THIS IS AN EXAMPLE.
#
apiVersion: apiserver.config.k8s.io/v1alpha1
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
# authorizers are defined in order of precedence
authorizers:
- type: Webhook
# Name used to describe the authorizer
@ -283,7 +280,7 @@ authorizers:
# MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview
# version the CEL expressions are evaluated against
# Valid values: v1
# Required only if matchConditions are specified, no default value
# Required, no default value
matchConditionSubjectAccessReviewVersion: v1
# Controls the authorization decision when a webhook request fails to
# complete or returns a malformed response or errors evaluating

View File

@ -721,7 +721,7 @@ The `matchPolicy` for an admission webhooks defaults to `Equivalent`.
### Matching requests: `matchConditions`
{{< feature-state state="beta" for_k8s_version="v1.28" >}}
{{< feature-state feature_gate_name="AdmissionWebhookMatchConditions" >}}
You can define _match conditions_ for webhooks if you need fine-grained request filtering. These
conditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still

View File

@ -62,7 +62,7 @@ for a number of reasons:
## Bound service account token volume mechanism {#bound-service-account-token-volume}
{{< feature-state for_k8s_version="v1.22" state="stable" >}}
{{< feature-state feature_gate_name="BoundServiceAccountTokenVolume" >}}
By default, the Kubernetes control plane (specifically, the
[ServiceAccount admission controller](#serviceaccount-admission-controller))
@ -249,7 +249,7 @@ it does the following when a Pod is created:
### Legacy ServiceAccount token tracking controller
{{< feature-state for_k8s_version="v1.28" state="stable" >}}
{{< feature-state feature_gate_name="LegacyServiceAccountTokenTracking" >}}
This controller generates a ConfigMap called
`kube-system/kube-apiserver-legacy-service-account-token-tracking` in the
@ -258,7 +258,7 @@ account tokens began to be monitored by the system.
### Legacy ServiceAccount token cleaner
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
{{< feature-state feature_gate_name="LegacyServiceAccountTokenCleanUp" >}}
The legacy ServiceAccount token cleaner runs as part of the
`kube-controller-manager` and checks every 24 hours to see if any auto-generated

View File

@ -9,7 +9,7 @@ content_type: concept
<!-- overview -->
{{< feature-state state="beta" for_k8s_version="v1.28" >}}
{{< feature-state state="stable" for_k8s_version="v1.30" >}}
This page provides an overview of Validating Admission Policy.

View File

@ -12,7 +12,11 @@ stages:
toVersion: "1.27"
- stage: beta
defaultValue: true
fromVersion: "1.28"
fromVersion: "1.28"
toVersion: "1.29"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enable [match conditions](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchconditions)
on mutating & validating admission webhooks.

View File

@ -13,6 +13,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.27"
toVersion: "1.29"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enable the `HorizontalPodAutoscaler` to scale based on
metrics from individual containers in target pods.
Allow {{< glossary_tooltip text="HorizontalPodAutoscalers" term_id="horizontal-pod-autoscaler" >}}
to scale based on metrics from individual containers within target pods.

View File

@ -0,0 +1,14 @@
---
title: JobManagedBy
content_type: feature_gate
_build:
list: never
render: false
stages:
- stage: alpha
defaultValue: false
fromVersion: "1.30"
---
Allows delegating reconciliation of a Job object to an external controller.

View File

@ -13,6 +13,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.29"
toVersion: "1.29"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enable cleaning up Secret-based
[service account tokens](/docs/concepts/security/service-accounts/#get-a-token)

View File

@ -9,6 +9,10 @@ stages:
- stage: alpha
defaultValue: false
fromVersion: "1.29"
toVersion: "1.30"
- stage: beta
defaultValue: true
fromVersion: "1.30"
---
Allows setting `ipMode` for Services where `type` is set to `LoadBalancer`.
See [Specifying IPMode of load balancer status](/docs/concepts/services-networking/service/#load-balancer-ip-mode)

View File

@ -17,6 +17,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.27"
toVersion: "1.29"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enable `minDomains` in
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).

View File

@ -13,6 +13,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.29"
toVersion: "1.30"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enable the `status.hostIPs` field for pods and the {{< glossary_tooltip term_id="downward-api" text="downward API" >}}.
The field lets you expose host IP addresses to workloads.

View File

@ -0,0 +1,27 @@
---
# Removed from Kubernetes
title: ReadOnlyAPIDataVolumes
content_type: feature_gate
_build:
list: never
render: false
stages:
- stage: beta
defaultValue: true
fromVersion: "1.8"
toVersion: "1.9"
- stage: stable
fromVersion: "1.10"
toVersion: "1.10"
removed: true
---
Set [`configMap`](/docs/concepts/storage/volumes/#configmap),
[`secret`](/docs/concepts/storage/volumes/#secret),
[`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi) and
[`projected`](/docs/concepts/storage/volumes/#projected)
{{< glossary_tooltip term_id="volume" text="volumes" >}} to be mounted read-only.
Since Kubernetes v1.10, these volume types are always read-only and you cannot opt out.

View File

@ -9,6 +9,10 @@ stages:
- stage: beta
defaultValue: true
fromVersion: "1.27"
toVersion: "1.29"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enables fewer load balancer re-configurations by
the service controller (KCCM) as an effect of changing node state.

View File

@ -13,5 +13,9 @@ stages:
- stage: beta
defaultValue: false
fromVersion: "1.28"
toVersion: "1.29"
- stage: stable
defaultValue: true
fromVersion: "1.30"
---
Enable [ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/) support for CEL validations to be used in Admission Control.

View File

@ -416,7 +416,7 @@ KubeletPodResourcesGet=true|false (ALPHA - default=false)<br/>
KubeletSeparateDiskGC=true|false (ALPHA - default=false)<br/>
KubeletTracing=true|false (BETA - default=true)<br/>
LegacyServiceAccountTokenCleanUp=true|false (BETA - default=true)<br/>
LoadBalancerIPMode=true|false (ALPHA - default=false)<br/>
LoadBalancerIPMode=true|false (BETA - default=true)<br/>
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)<br/>
LogarithmicScaleDown=true|false (BETA - default=true)<br/>
LoggingAlphaOptions=true|false (ALPHA - default=false)<br/>

View File

@ -5,7 +5,7 @@ date: 2018-04-12
full_link:
short_description: >
One or more initialization containers that must run to completion before any app containers run.
full_link: /docs/concepts/workloads/pods/init-containers/
aka:
tags:
- fundamental
@ -15,3 +15,7 @@ tags:
<!--more-->
Initialization (init) containers are like regular app containers, with one difference: init containers must run to completion before any app containers can start. Init containers run in series: each init container must run to completion before the next init container begins.
Unlike {{< glossary_tooltip text="sidecar containers" term_id="sidecar-container" >}}, init containers do not remain running after Pod startup.
For more information, read [init containers](/docs/concepts/workloads/pods/init-containers/).

View File

@ -0,0 +1,20 @@
---
title: Sidecar Container
id: sidecar-container
date: 2018-04-12
full_link:
short_description: >
An auxiliary container that stays running throughout the lifecycle of a Pod.
full_link: /docs/concepts/workloads/pods/sidecar-containers/
tags:
- fundamental
---
One or more {{< glossary_tooltip text="containers" term_id="container" >}} that are typically started before any app containers run.
<!--more-->
Sidecar containers are like regular app containers, but with a different purpose: the sidecar provides a Pod-local service to the main app container.
Unlike {{< glossary_tooltip text="init containers" term_id="init-container" >}}, sidecar containers
continue running after Pod startup.
Read [Sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/) for more information.

View File

@ -6,12 +6,11 @@ weight: 30
## {{% heading "synopsis" %}}
kubectl controls the Kubernetes cluster manager.
Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/
Find more information in [Command line tool](/docs/reference/kubectl/) (`kubectl`).
```
```shell
kubectl [flags]
```
@ -210,7 +209,7 @@ kubectl [flags]
<td colspan="2">--one-output</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, only write logs to their native severity level (vs also writing to each lower severity level</td>
<td></td><td style="line-height: 130%; word-wrap: break-word;">If true, only write logs to their native severity level (vs also writing to each lower severity level)</td>
</tr>
<tr>
@ -388,47 +387,50 @@ kubectl [flags]
## {{% heading "seealso" %}}
* [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Update the annotations on a resource
* [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - Print the supported API resources on the server
* [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - Print the supported API versions on the server, in the form of "group/version"
* [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](/docs/reference/generated/kubectl/kubectl-commands#attach) - Attach to a running container
* [kubectl auth](/docs/reference/generated/kubectl/kubectl-commands#auth) - Inspect authorization
* [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale) - Auto-scale a Deployment, ReplicaSet, or ReplicationController
* [kubectl certificate](/docs/reference/generated/kubectl/kubectl-commands#certificate) - Modify certificate resources.
* [kubectl cluster-info](/docs/reference/generated/kubectl/kubectl-commands#cluster-info) - Display cluster info
* [kubectl completion](/docs/reference/generated/kubectl/kubectl-commands#completion) - Output shell completion code for the specified shell (bash or zsh)
* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) - Modify kubeconfig files
* [kubectl cordon](/docs/reference/generated/kubectl/kubectl-commands#cordon) - Mark node as unschedulable
* [kubectl cp](/docs/reference/generated/kubectl/kubectl-commands#cp) - Copy files and directories to and from containers.
* [kubectl create](/docs/reference/generated/kubectl/kubectl-commands#create) - Create a resource from a file or from stdin.
* [kubectl debug](/docs/reference/generated/kubectl/kubectl-commands#debug) - Create debugging sessions for troubleshooting workloads and nodes
* [kubectl delete](/docs/reference/generated/kubectl/kubectl-commands#delete) - Delete resources by filenames, stdin, resources and names, or by resources and label selector
* [kubectl describe](/docs/reference/generated/kubectl/kubectl-commands#describe) - Show details of a specific resource or group of resources
* [kubectl diff](/docs/reference/generated/kubectl/kubectl-commands#diff) - Diff live version against would-be applied version
* [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands#drain) - Drain node in preparation for maintenance
* [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands#edit) - Edit a resource on the server
* [kubectl events](/docs/reference/generated/kubectl/kubectl-commands#events) - List events
* [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands#exec) - Execute a command in a container
* [kubectl explain](/docs/reference/generated/kubectl/kubectl-commands#explain) - Documentation of resources
* [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands#expose) - Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
* [kubectl get](/docs/reference/generated/kubectl/kubectl-commands#get) - Display one or many resources
* [kubectl kustomize](/docs/reference/generated/kubectl/kubectl-commands#kustomize) - Build a kustomization target from a directory or a remote url.
* [kubectl label](/docs/reference/generated/kubectl/kubectl-commands#label) - Update the labels on a resource
* [kubectl logs](/docs/reference/generated/kubectl/kubectl-commands#logs) - Print the logs for a container in a pod
* [kubectl options](/docs/reference/generated/kubectl/kubectl-commands#options) - Print the list of flags inherited by all commands
* [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands#patch) - Update field(s) of a resource
* [kubectl plugin](/docs/reference/generated/kubectl/kubectl-commands#plugin) - Provides utilities for interacting with plugins.
* [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands#port-forward) - Forward one or more local ports to a pod
* [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands#proxy) - Run a proxy to the Kubernetes API server
* [kubectl replace](/docs/reference/generated/kubectl/kubectl-commands#replace) - Replace a resource by filename or stdin
* [kubectl rollout](/docs/reference/generated/kubectl/kubectl-commands#rollout) - Manage the rollout of a resource
* [kubectl run](/docs/reference/generated/kubectl/kubectl-commands#run) - Run a particular image on the cluster
* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Set a new size for a Deployment, ReplicaSet or Replication Controller
* [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - Set specific features on objects
* [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - Update the taints on one or more nodes
* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - Display Resource (CPU/Memory/Storage) usage.
* [kubectl uncordon](/docs/reference/generated/kubectl/kubectl-commands#uncordon) - Mark node as schedulable
* [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - Print the client and server version information
* [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - Experimental: Wait for a specific condition on one or many resources.
* [kubectl annotate](/docs/reference/kubectl/generated/kubectl_annotate/) - Update the annotations on a resource
* [kubectl api-resources](/docs/reference/kubectl/generated/kubectl_api-resources/) - Print the supported API resources on the server
* [kubectl api-versions](/docs/reference/kubectl/generated/kubectl_api-versions/) - Print the supported API versions on the server,
in the form of "group/version"
* [kubectl apply](/docs/reference/kubectl/generated/kubectl_apply/) - Apply a configuration to a resource by filename or stdin
* [kubectl attach](/docs/reference/kubectl/generated/kubectl_attach/) - Attach to a running container
* [kubectl auth](/docs/reference/kubectl/generated/kubectl_auth/) - Inspect authorization
* [kubectl autoscale](/docs/reference/kubectl/generated/kubectl_autoscale/) - Auto-scale a Deployment, ReplicaSet, or ReplicationController
* [kubectl certificate](/docs/reference/kubectl/generated/kubectl_certificate/) - Modify certificate resources.
* [kubectl cluster-info](/docs/reference/kubectl/generated/kubectl_cluster-info/) - Display cluster info
* [kubectl completion](/docs/reference/kubectl/generated/kubectl_completion/) - Output shell completion code for the specified shell (bash or zsh)
* [kubectl config](/docs/reference/kubectl/generated/kubectl_config/) - Modify kubeconfig files
* [kubectl cordon](/docs/reference/kubectl/generated/kubectl_cordon/) - Mark node as unschedulable
* [kubectl cp](/docs/reference/kubectl/generated/kubectl_cp/) - Copy files and directories to and from containers.
* [kubectl create](/docs/reference/kubectl/generated/kubectl_create/) - Create a resource from a file or from stdin.
* [kubectl debug](/docs/reference/kubectl/generated/kubectl_debug/) - Create debugging sessions for troubleshooting workloads and nodes
* [kubectl delete](/docs/reference/kubectl/generated/kubectl_delete/) - Delete resources by filenames,
stdin, resources and names, or by resources and label selector
* [kubectl describe](/docs/reference/kubectl/generated/kubectl_describe/) - Show details of a specific resource or group of resources
* [kubectl diff](/docs/reference/kubectl/generated/kubectl_diff/) - Diff live version against would-be applied version
* [kubectl drain](/docs/reference/kubectl/generated/kubectl_drain/) - Drain node in preparation for maintenance
* [kubectl edit](/docs/reference/kubectl/generated/kubectl_edit/) - Edit a resource on the server
* [kubectl events](/docs/reference/kubectl/generated/kubectl_events/) - List events
* [kubectl exec](/docs/reference/kubectl/generated/kubectl_exec/) - Execute a command in a container
* [kubectl explain](/docs/reference/kubectl/generated/kubectl_explain/) - Documentation of resources
* [kubectl expose](/docs/reference/kubectl/generated/kubectl_expose/) - Take a replication controller,
service, deployment or pod and expose it as a new Kubernetes Service
* [kubectl get](/docs/reference/kubectl/generated/kubectl_get/) - Display one or many resources
* [kubectl kustomize](/docs/reference/kubectl/generated/kubectl_kustomize/) - Build a kustomization
target from a directory or a remote url.
* [kubectl label](/docs/reference/kubectl/generated/kubectl_label/) - Update the labels on a resource
* [kubectl logs](/docs/reference/kubectl/generated/kubectl_logs/) - Print the logs for a container in a pod
* [kubectl options](/docs/reference/kubectl/generated/kubectl_options/) - Print the list of flags inherited by all commands
* [kubectl patch](/docs/reference/kubectl/generated/kubectl_patch/) - Update field(s) of a resource
* [kubectl plugin](/docs/reference/kubectl/generated/kubectl_plugin/) - Provides utilities for interacting with plugins.
* [kubectl port-forward](/docs/reference/kubectl/generated/kubectl_port-forward/) - Forward one or more local ports to a pod
* [kubectl proxy](/docs/reference/kubectl/generated/kubectl_proxy/) - Run a proxy to the Kubernetes API server
* [kubectl replace](/docs/reference/kubectl/generated/kubectl_replace/) - Replace a resource by filename or stdin
* [kubectl rollout](/docs/reference/kubectl/generated/kubectl_rollout/) - Manage the rollout of a resource
* [kubectl run](/docs/reference/kubectl/generated/kubectl_run/) - Run a particular image on the cluster
* [kubectl scale](/docs/reference/kubectl/generated/kubectl_scale/) - Set a new size for a Deployment, ReplicaSet or Replication Controller
* [kubectl set](/docs/reference/kubectl/generated/kubectl_set/) - Set specific features on objects
* [kubectl taint](/docs/reference/kubectl/generated/kubectl_taint/) - Update the taints on one or more nodes
* [kubectl top](/docs/reference/kubectl/generated/kubectl_top/) - Display Resource (CPU/Memory/Storage) usage.
* [kubectl uncordon](/docs/reference/kubectl/generated/kubectl_uncordon/) - Mark node as schedulable
* [kubectl version](/docs/reference/kubectl/generated/kubectl_version/) - Print the client and server version information
* [kubectl wait](/docs/reference/kubectl/generated/kubectl_wait/) - Experimental: Wait for a specific condition on one or many resources.

View File

@ -287,7 +287,7 @@ kubectl label pods my-pod new-label=awesome # Add a Label
kubectl label pods my-pod new-label- # Remove a label
kubectl label pods my-pod new-label=new-value --overwrite # Overwrite an existing value
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation
kubectl annotate pods my-pod icon- # Remove annotation
kubectl annotate pods my-pod icon-url- # Remove annotation
kubectl autoscale deployment foo --min=2 --max=10 # Auto scale a deployment "foo"
```

View File

@ -1105,13 +1105,11 @@ Example: `kubernetes.io/legacy-token-invalid-since: 2023-10-27`
Used on: Secret
The control plane automatically adds this label to auto-generated Secrets that
have the type `kubernetes.io/service-account-token`, provided that you have the
`LegacyServiceAccountTokenCleanUp` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled. Kubernetes {{< skew currentVersion >}} enables that behavior by default.
This label marks the Secret-based token as invalid for authentication. The value
of this label records the date (ISO 8601 format, UTC time zone) when the control
plane detects that the auto-generated Secret has not been used for a specified
duration (defaults to one year).
have the type `kubernetes.io/service-account-token`. This label marks the
Secret-based token as invalid for authentication. The value of this label
records the date (ISO 8601 format, UTC time zone) when the control plane detects
that the auto-generated Secret has not been used for a specified duration
(defaults to one year).
### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}
@ -2255,7 +2253,8 @@ Starting in v1.16, this annotation was removed in favor of
- [`pod-security.kubernetes.io/audit-violations`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)
- [`pod-security.kubernetes.io/enforce-policy`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
- [`pod-security.kubernetes.io/exempt`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
- [`validation.policy.admission.k8s.io/validation_failure`](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure)
See more details on [Audit Annotations](/docs/reference/labels-annotations-taints/audit-annotations/).
## kubeadm

View File

@ -109,8 +109,6 @@ The user can skip specific preflight checks or all of them with the `--ignore-pr
- [warning] if firewalld is active
- [error] if API server bindPort or ports 10250/10251/10252 are used
- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
- [Error] if swap is on
- [Error] if `conntrack`, `ip`, `iptables`, `mount`, `nsenter` commands are not present in the command path
- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path

View File

@ -1,7 +1,4 @@
---
reviewers:
- luxas
- jbeda
title: kubeadm init
content_type: concept
weight: 20
@ -161,6 +158,7 @@ Feature | Default | Alpha | Beta | GA
`EtcdLearnerMode` | `true` | 1.27 | 1.29 | -
`PublicKeysECDSA` | `false` | 1.19 | - | -
`RootlessControlPlane` | `false` | 1.22 | - | -
`WaitForAllControlPlaneComponents` | `false` | 1.30 | - | -
{{< /table >}}
{{< note >}}
@ -184,6 +182,16 @@ for `kube-apiserver`, `kube-controller-manager`, `kube-scheduler` and `etcd` to
If the flag is not set, those components run as root. You can change the value of this feature gate before
you upgrade to a newer version of Kubernetes.
`WaitForAllControlPlaneComponents`
: With this feature gate enabled, kubeadm will wait for all control plane components (kube-apiserver,
kube-controller-manager, kube-scheduler) on a control plane node to report status 200 on their `/healthz`
endpoints. These checks are performed on `https://127.0.0.1:PORT/healthz`, where `PORT` is taken from
`--secure-port` of a component. If you specify custom `--secure-port` values in the kubeadm configuration
they will be respected. Without the feature gate enabled, kubeadm will only wait for the kube-apiserver
on a control plane node to become ready. The wait process starts right after the kubelet on the host
is started by kubeadm. You are advised to enable this feature gate if you want to observe a ready
state from all control plane components during the `kubeadm init` or `kubeadm join` command execution.
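As a hedged illustration (assuming the default `--secure-port` values of 6443, 10257 and 10259),
you can query the same endpoints yourself from a control plane node:

```bash
# kube-apiserver: 6443, kube-controller-manager: 10257, kube-scheduler: 10259;
# adjust the ports if you configured custom --secure-port values
for port in 6443 10257 10259; do
  echo -n "port ${port}: "
  curl -sk "https://127.0.0.1:${port}/healthz"
  echo
done
```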
List of deprecated feature gates:
{{< table caption="kubeadm deprecated feature gates" >}}

View File

@ -34,6 +34,17 @@ etcdctl del "" --prefix
See the [etcd documentation](https://github.com/coreos/etcd/tree/master/etcdctl) for more information.
### Graceful kube-apiserver shutdown
If you have your `kube-apiserver` configured with the `--shutdown-delay-duration` flag,
you can run the following commands to attempt a graceful shutdown for the running API server Pod,
before you run `kubeadm reset`:
```bash
# Clear the kube-apiserver command in its static Pod manifest; the kubelet notices the
# change to the manifest and stops the currently running API server container
yq eval -i '.spec.containers[0].command = []' /etc/kubernetes/manifests/kube-apiserver.yaml
# Wait up to 60 seconds for the kube-apiserver process to exit
timeout 60 sh -c 'while pgrep kube-apiserver >/dev/null; do sleep 1; done' || true
```
## {{% heading "whatsnext" %}}
* [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node

View File

@ -55,10 +55,12 @@ Example CEL expressions:
| `has(self.expired) && self.created + self.ttl < self.expired` | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration |
| `self.health.startsWith('ok')` | Validate a 'health' string field has the prefix 'ok' |
| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |
| `type(self) == string ? self == '99%' : self == 42` | Validate an int-or-string field for both the int and string cases |
| `self.metadata.name == 'singleton'` | Validate that an object's name matches a specific value (making it a singleton) |
| `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint |
| `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet |
| `self.details.all(key, key.matches('^[a-zA-Z]*$'))` | Validate the keys of the 'details' map |
| `self.details.all(key, self.details[key].matches('^[a-zA-Z]*$'))` | Validate the values of the 'details' map |
{{< /table >}}
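As a hedged illustration (not part of this page's diff), expressions like the ones above are commonly declared in a CustomResourceDefinition schema through `x-kubernetes-validations`; the field names in this sketch are assumptions:

```yaml
# Hedged sketch of a CRD schema fragment using a CEL validation rule
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        - rule: "self.health.startsWith('ok')"
          message: "health must start with 'ok'"
      properties:
        health:
          type: string
```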
## CEL options, language features, and libraries

View File

@ -95,13 +95,16 @@ such as Deployment, StatefulSet, or Job.
## Storage access for zones
When persistent volumes are created, the `PersistentVolumeLabel`
[admission controller](/docs/reference/access-authn-authz/admission-controllers/)
automatically adds zone labels to any PersistentVolumes that are linked to a specific
zone. The {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} then ensures,
When persistent volumes are created, Kubernetes automatically adds zone labels
to any PersistentVolumes that are linked to a specific zone.
The {{< glossary_tooltip text="scheduler" term_id="kube-scheduler" >}} then ensures,
through its `NoVolumeZoneConflict` predicate, that pods which claim a given PersistentVolume
are only placed into the same zone as that volume.
Please note that the method of adding zone labels can depend on your
cloud provider and the storage provisioner you're using. Always refer to the specific
documentation for your environment to ensure correct configuration.
You can specify a {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}
for PersistentVolumeClaims that specifies the failure domains (zones) that the
storage in that class may use.
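For illustration only, a StorageClass that restricts volumes to specific zones might look like the sketch below; the provisioner name and zone values are placeholder assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned-standard
provisioner: example.com/example-provisioner  # assumption: replace with your provisioner
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - zone-a
    - zone-b
```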

View File

@ -47,50 +47,33 @@ check the documentation for that version.
<!-- body -->
## Install and configure prerequisites
The following steps apply common settings for Kubernetes nodes on Linux.
### Network configuration
You can skip a particular setting if you're certain you don't need it.
By default, the Linux kernel does not allow IPv4 packets to be routed
between interfaces. Most Kubernetes cluster networking implementations
will change this setting (if needed), but some might expect the
administrator to do it for them. (Some might also expect other sysctl
parameters to be set, kernel modules to be loaded, etc; consult the
documentation for your specific network implementation.)
For more information, see
[Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements)
or the documentation for your specific container runtime.
### Enable IPv4 packet forwarding {#prerequisite-ipv4-forwarding-optional}
### Forwarding IPv4 and letting iptables see bridged traffic
Execute the below mentioned instructions:
To manually enable IPv4 packet forwarding:
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
```
Verify that the `br_netfilter`, `overlay` modules are loaded by running the following commands:
Verify that `net.ipv4.ip_forward` is set to 1 with:
```bash
lsmod | grep br_netfilter
lsmod | grep overlay
```
Verify that the `net.bridge.bridge-nf-call-iptables`, `net.bridge.bridge-nf-call-ip6tables`, and
`net.ipv4.ip_forward` system variables are set to `1` in your `sysctl` config by running the following command:
```bash
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
sysctl net.ipv4.ip_forward
```
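With forwarding enabled, that command prints output like:

```
net.ipv4.ip_forward = 1
```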
## cgroup drivers

View File

@ -568,37 +568,6 @@ reference documentation for more information about this subcommand and its
options.
<!-- discussion -->
## What's next {#whats-next}
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
explore other add-ons, including tools for logging, monitoring, network policy, visualization &amp;
control of your Kubernetes cluster.
* Configure how your cluster handles logs for cluster events and from
applications running in Pods.
See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
an overview of what is involved.
### Feedback {#feedback}
* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit the
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
* General SIG Cluster Lifecycle development Slack channel:
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle mailing list:
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
## Version skew policy {#version-skew-policy}
While kubeadm allows version skew against some components that it manages, it is recommended that you
@ -619,8 +588,8 @@ Example:
### kubeadm's skew against the kubelet
Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is the same
version as kubeadm or one version older.
Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is
the same version as kubeadm or three versions older.
Example:
* kubeadm is at {{< skew currentVersion >}}
@ -686,3 +655,33 @@ supports your chosen platform.
If you are running into difficulties with kubeadm, please consult our
[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
<!-- discussion -->
## What's next {#whats-next}
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
explore other add-ons, including tools for logging, monitoring, network policy, visualization &amp;
control of your Kubernetes cluster.
* Configure how your cluster handles logs for cluster events and from
applications running in Pods.
See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
an overview of what is involved.
### Feedback {#feedback}
* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit the
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
* General SIG Cluster Lifecycle development Slack channel:
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle mailing list:
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)

View File

@ -213,7 +213,7 @@ in kube-apiserver logs. To fix the issue you must follow these steps:
`kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
`$NODE` must be set to the name of the existing failed node in the cluster.
Modify the resulting `kubelet.conf` manually to adjust the cluster name and server endpoint,
or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have
or pass `kubeconfig user --config` (see [Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users)). If your cluster does not have
the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
1. Copy this resulting `kubelet.conf` to `/etc/kubernetes/kubelet.conf` on the failed node.
1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for

View File

@ -110,7 +110,7 @@ for database debugging.
27017
```
27017 is the TCP port allocated to MongoDB on the internet.
27017 is the official TCP port for MongoDB.
## Forward a local port to a port on the Pod

View File

@ -87,7 +87,7 @@ Here is the configuration file for the application Deployment:
Events: <none>
```
Make a note of the NodePort value for the service. For example,
Make a note of the NodePort value for the Service. For example,
in the preceding output, the NodePort value is 31496.
1. List the pods that are running the Hello World application:

View File

@ -15,9 +15,9 @@ weight: 270
## {{% heading "prerequisites" %}}
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to communicate with your cluster. It is recommended to run this
task on a cluster with at least two nodes that are not acting as control plane
nodes . If you do not already have a cluster, you can create one by using
be configured to communicate with your cluster. It is recommended to follow this
guide on a cluster with at least two nodes that are not acting as control plane
nodes. If you do not already have a cluster, you can create one by using
[minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/).
<!-- steps -->
@ -57,7 +57,7 @@ This section covers starting a single-node and multi-node etcd cluster.
### Single-node etcd cluster
Use a single-node etcd cluster only for testing purpose.
Use a single-node etcd cluster only for testing purposes.
1. Run the following:
@ -144,8 +144,8 @@ ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 \
### Limiting access of etcd clusters
After configuring secure communication, restrict the access of etcd cluster to
only the Kubernetes API servers. Use TLS authentication to do so.
After configuring secure communication, restrict the access of the etcd cluster to
only the Kubernetes API servers using TLS authentication.
For example, consider key pairs `k8sclient.key` and `k8sclient.cert` that are
trusted by the CA `etcd.ca`. When etcd is configured with `--client-cert-auth`
@ -160,9 +160,7 @@ flags `--etcd-certfile=k8sclient.cert`, `--etcd-keyfile=k8sclient.key` and
`--etcd-cafile=ca.cert`.
{{< note >}}
etcd authentication is not currently supported by Kubernetes. For more
information, see the related issue
[Support Basic Auth for Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398).
etcd authentication is not planned for Kubernetes.
{{< /note >}}
## Replacing a failed etcd member
@ -203,9 +201,9 @@ replace it with `member4=http://10.0.0.4`.
etcd.
1. Stop the etcd server on the broken node. It is possible that other
clients besides the Kubernetes API server is causing traffic to etcd
clients besides the Kubernetes API server are causing traffic to etcd
and it is desirable to stop all traffic to prevent writes to the data
dir.
directory.
1. Remove the failed member:
@ -256,10 +254,10 @@ For more information on cluster reconfiguration, see
## Backing up an etcd cluster
All Kubernetes objects are stored on etcd. Periodically backing up the etcd
All Kubernetes objects are stored in etcd. Periodically backing up the etcd
cluster data is important to recover Kubernetes clusters under disaster
scenarios, such as losing all control plane nodes. The snapshot file contains
all the Kubernetes states and critical information. In order to keep the
all the Kubernetes state and critical information. In order to keep the
sensitive Kubernetes data safe, encrypt the snapshot files.
Backing up an etcd cluster can be accomplished in two ways: etcd built-in
@ -267,14 +265,14 @@ snapshot and volume snapshot.
### Built-in snapshot
etcd supports built-in snapshot. A snapshot may either be taken from a live
etcd supports built-in snapshot. A snapshot may either be created from a live
member with the `etcdctl snapshot save` command or by copying the
`member/snap/db` file from an etcd
[data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir)
that is not currently used by an etcd process. Taking the snapshot will
that is not currently used by an etcd process. Creating the snapshot will
not affect the performance of the member.
Below is an example for taking a snapshot of the keyspace served by
Below is an example for creating a snapshot of the keyspace served by
`$ENDPOINT` to the file `snapshot.db`:
```shell
@ -298,19 +296,19 @@ ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db
### Volume snapshot
If etcd is running on a storage volume that supports backup, such as Amazon
Elastic Block Store, back up etcd data by taking a snapshot of the storage
Elastic Block Store, back up etcd data by creating a snapshot of the storage
volume.
### Snapshot using etcdctl options
We can also take the snapshot using various options given by etcdctl. For example
We can also create the snapshot using various options given by etcdctl. For example:
```shell
ETCDCTL_API=3 etcdctl -h
```
will list various options available from etcdctl. For example, you can take a snapshot by specifying
the endpoint, certificates etc as shown below:
will list various options available from etcdctl. For example, you can create a snapshot by specifying
the endpoint, certificates and key as shown below:
```shell
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
@ -324,7 +322,7 @@ where `trusted-ca-file`, `cert-file` and `key-file` can be obtained from the des
Scaling out etcd clusters increases availability by trading off performance.
Scaling does not increase cluster performance nor capability. A general rule
is not to scale out or in etcd clusters. Do not configure any auto scaling
groups for etcd clusters. It is highly recommended to always run a static
groups for etcd clusters. It is strongly recommended to always run a static
five-member etcd cluster for production Kubernetes clusters at any officially
supported scale.
@ -337,7 +335,7 @@ for information on how to add members into an existing cluster.
etcd supports restoring from snapshots that are taken from an etcd process of
the [major.minor](http://semver.org/) version. Restoring a version from a
different patch version of etcd also is supported. A restore operation is
different patch version of etcd is also supported. A restore operation is
employed to recover the data of a failed cluster.
Before starting the restore operation, a snapshot file must be present. It can
@ -358,12 +356,12 @@ export ETCDCTL_API=3
etcdctl --data-dir <data-dir-location> snapshot restore snapshot.db
```
If `<data-dir-location>` is the same folder as before, delete it and stop etcd process before restoring the cluster. Else change etcd configuration and restart the etcd process after restoration to make it use the new data directory.
If `<data-dir-location>` is the same folder as before, delete it and stop the etcd process before restoring the cluster. Otherwise, change etcd configuration and restart the etcd process after restoration to have it use the new data directory.
For more information and examples on restoring a cluster from a snapshot file, see
[etcd disaster recovery documentation](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster).
If the access URLs of the restored cluster is changed from the previous
If the access URLs of the restored cluster are changed from the previous
cluster, the Kubernetes API server must be reconfigured accordingly. In this
case, restart Kubernetes API servers with the flag
`--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag
@ -408,9 +406,9 @@ For more details on etcd maintenance, please refer to the [etcd maintenance](htt
{{% thirdparty-content single="true" %}}
{{< note >}}
Defragmentation is an expensive operation, so it should be executed as infrequent
Defragmentation is an expensive operation, so it should be executed as infrequently
as possible. On the other hand, it's also necessary to make sure any etcd member
will not run out of the storage quota. The Kubernetes project recommends that when
will not exceed the storage quota. The Kubernetes project recommends that when
you perform defragmentation, you use a tool such as [etcd-defrag](https://github.com/ahrtr/etcd-defrag).
You can also run the defragmentation tool as a Kubernetes CronJob, to make sure that

View File

@ -168,19 +168,31 @@ encrypt all resources, even custom resources that are added after API server sta
since part of the configuration would be ineffective. The `resources` list's processing order and precedence
are determined by the order it's listed in the configuration. {{< /note >}}
Opting out of encryption for specific resources while wildcard is enabled can be achieved by adding a new
`resources` array item with the resource name, followed by the `providers` array item with the `identity` provider.
For example, if '`*.*`' is enabled and you want to opt-out encryption for the `events` resource, add a new item
to the `resources` array with `events` as the resource name, followed by the providers array item with `identity`.
The new item should look like this:
If you have a wildcard covering resources and want to opt out of at-rest encryption for a particular kind
of resource, you achieve that by adding a separate `resources` array item with the name of the resource that
you want to exempt, followed by a `providers` array item where you specify the `identity` provider. You add
this item to the list so that it appears earlier than the configuration where you do specify encryption
(a provider that is not `identity`).
For example, if '`*.*`' is enabled and you want to opt out of encryption for Events and ConfigMaps, add a
new **earlier** item to the `resources`, followed by the providers array item with `identity` as the
provider. The more specific entry must come before the wildcard entry.
The new item would look similar to:
```yaml
- resources:
    - events
  providers:
    - identity: {}
...
- resources:
    - configmaps. # specifically from the core API group,
                  # because of trailing "."
    - events
  providers:
    - identity: {}
# and then other entries in resources
```
Ensure that the new item is listed before the wildcard '`*.*`' item in the resources array to give it precedence.
Ensure that the exemption is listed _before_ the wildcard '`*.*`' item in the resources array
to give it precedence.
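For illustration only, a complete `EncryptionConfiguration` that follows this ordering might look like the sketch below; the key name and secret are placeholder assumptions:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  # exemption: ConfigMaps (core API group) and Events stay unencrypted
  - resources:
      - configmaps.
      - events
    providers:
      - identity: {}
  # wildcard entry: everything else is encrypted at rest
  - resources:
      - '*.*'
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-SECRET>  # assumption: supply your own key material
      - identity: {}
```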
For more detailed information about the `EncryptionConfiguration` struct, please refer to the
[encryption configuration API](/docs/reference/config-api/apiserver-encryption.v1/).

View File

@ -46,8 +46,46 @@ CA key on disk.
Instead, run the controller-manager standalone with `--controllers=csrsigner` and
point to the CA certificate and key.
[PKI certificates and requirements](/docs/setup/best-practices/certificates/) includes guidance on
setting up a cluster to use an external CA.
There are various ways to prepare the component credentials when using external CA mode.
### Manual preparation of component credentials
[PKI certificates and requirements](/docs/setup/best-practices/certificates/) includes information
on how to manually prepare all of the component credentials required by kubeadm.
### Preparation of credentials by signing CSRs generated by kubeadm
kubeadm can [generate CSR files](#signing-csr) that you can sign manually with tools like
`openssl` and your external CA. These CSR files will include all the specification for credentials
that components deployed by kubeadm require.
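As a hedged example (the file names are illustrative assumptions, and `-copy_extensions` requires OpenSSL 3.0 or newer; older versions need `-extfile` instead), signing one of those CSR files with an external CA could look like:

```bash
# Sign a kubeadm-generated CSR with an external CA (illustrative paths)
openssl x509 -req -days 365 \
  -in /etc/kubernetes/pki/apiserver.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -copy_extensions copyall \
  -out /etc/kubernetes/pki/apiserver.crt
```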
### Automated preparation of component credentials by using kubeadm phases
Alternatively, it is possible to use kubeadm phase commands to automate this process.
- Go to a host that you want to prepare as a kubeadm control plane node with external CA.
- Copy the external CA files `ca.crt` and `ca.key` that you have into `/etc/kubernetes/pki` on the node.
- Prepare a temporary [kubeadm configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file)
called `config.yaml` that can be used with `kubeadm init`. Make sure that this file includes
any relevant cluster-wide or host-specific information that could be included in certificates, such as
`ClusterConfiguration.controlPlaneEndpoint`, `ClusterConfiguration.certSANs` and `InitConfiguration.APIEndpoint` (see the sketch after this list).
- On the same host, execute the commands `kubeadm init phase kubeconfig all --config config.yaml` and
`kubeadm init phase certs all --config config.yaml`. This will generate all of the required kubeconfig
files and certificates under `/etc/kubernetes/` and its `pki` subdirectory.
- Inspect the generated files. Delete `/etc/kubernetes/pki/ca.key`, and delete or move the file
`/etc/kubernetes/super-admin.conf` to a safe location.
- On nodes where `kubeadm join` will be called, also delete `/etc/kubernetes/kubelet.conf`.
This file is only required on the first node where `kubeadm init` will be called.
- Note that some files, such as `pki/sa.*`, `pki/front-proxy-ca.*` and `pki/etcd/ca.*`, are
shared between control plane nodes. You can generate them once and
[distribute them manually](/docs/setup/production-environment/tools/kubeadm/high-availability/#manual-certs)
to nodes where `kubeadm join` will be called, or you can use the
[`--upload-certs`](/docs/setup/production-environment/tools/kubeadm/high-availability/#stacked-control-plane-and-etcd-nodes)
functionality of `kubeadm init` and `--certificate-key` of `kubeadm join` to automate this distribution.
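For illustration, the temporary `config.yaml` mentioned in the list above might look like the following sketch; the endpoint, SANs and advertise address are placeholder assumptions, and note that in the `v1beta3` API the certificate SANs live under `apiServer.certSANs`:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.0.0.10"   # assumption: this host's address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "cluster-endpoint.example.com:6443"
apiServer:
  certSANs:
    - "cluster-endpoint.example.com"
```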
Once the credentials are prepared on all nodes, call `kubeadm init` and `kubeadm join` for these nodes to
join the cluster. kubeadm will use the existing kubeconfig and certificate files under `/etc/kubernetes/`
and its `pki` subdirectory.
## Check certificate expiration

View File

@ -43,7 +43,7 @@ The upgrade workflow at high level is the following:
they could be running CoreDNS Pods or other critical workloads. For more information see
[Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
- The Kubernetes project recommends that you match your kubelet and kubeadm versions.
You can instead use an a version of kubelet that is older than kubeadm, provided it is within the
You can instead use a version of kubelet that is older than kubeadm, provided it is within the
range of supported versions.
For more details, please visit [kubeadm's skew against the kubelet](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#kubeadm-s-skew-against-the-kubelet).
- All containers are restarted after upgrade, because the container spec hash value is changed.
@ -54,11 +54,28 @@ The upgrade workflow at high level is the following:
with the purpose of reconfiguring the cluster is not recommended and can have unexpected results. Follow the steps in
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure) instead.
### Considerations when upgrading etcd
Because the `kube-apiserver` static pod is running at all times (even if you
have drained the node), when you perform a kubeadm upgrade which includes an
etcd upgrade, in-flight requests to the server will stall while the new etcd
static pod is restarting. As a workaround, it is possible to actively stop the
`kube-apiserver` process a few seconds before starting the `kubeadm upgrade
apply` command. This allows in-flight requests to complete and existing connections to close,
and it minimizes the consequences of the etcd downtime. This can be
done as follows on control plane nodes:
```shell
killall -s SIGTERM kube-apiserver # trigger a graceful kube-apiserver shutdown
sleep 20 # wait a little bit to permit completing in-flight requests
kubeadm upgrade ... # execute a kubeadm upgrade command
```
<!-- steps -->
## Changing the package repository
If you're using the community-owned package repositories (`pkgs.k8s.io`), you need to
enable the package repository for the desired Kubernetes minor release. This is explained in
[Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/)
document.
@ -75,8 +92,8 @@ Find the latest patch release for Kubernetes {{< skew currentVersion >}} using t
```shell
# Find the latest {{< skew currentVersion >}} version in the list.
# It should look like {{< skew currentVersion >}}.x-*, where x is the latest patch.
apt update
apt-cache madison kubeadm
sudo apt update
sudo apt-cache madison kubeadm
```
{{% /tab %}}
@ -85,7 +102,7 @@ apt-cache madison kubeadm
```shell
# Find the latest {{< skew currentVersion >}} version in the list.
# It should look like {{< skew currentVersion >}}.x-*, where x is the latest patch.
yum list --showduplicates kubeadm --disableexcludes=kubernetes
sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes
```
{{% /tab %}}
@ -107,9 +124,9 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
apt-mark hold kubeadm
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
sudo apt-mark hold kubeadm
```
{{% /tab %}}
@ -117,7 +134,7 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
sudo yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
@ -132,7 +149,7 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
1. Verify the upgrade plan:
```shell
kubeadm upgrade plan
sudo kubeadm upgrade plan
```
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
@ -221,9 +238,9 @@ kubectl drain <node-to-drain> --ignore-daemonsets
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
apt-mark hold kubelet kubectl
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
sudo apt-mark hold kubelet kubectl
```
{{% /tab %}}
@ -231,7 +248,7 @@ kubectl drain <node-to-drain> --ignore-daemonsets
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
sudo yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
@ -279,7 +296,7 @@ The `STATUS` column should show `Ready` for all your nodes, and the version numb
If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again.
This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run `kubeadm upgrade apply --force` without changing the version that your cluster is running.
To recover from a bad state, you can also run `sudo kubeadm upgrade apply --force` without changing the version that your cluster is running.
During upgrade kubeadm writes the following backup folders under `/etc/kubernetes/tmp`:
@ -320,4 +337,4 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
`kubeadm upgrade node` does the following on worker nodes:
- Fetches the kubeadm `ClusterConfiguration` from the cluster.
- Upgrades the kubelet configuration for this node.

View File

@ -36,15 +36,15 @@ Upgrade kubeadm:
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
apt-mark hold kubeadm
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
sudo apt-mark hold kubeadm
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
sudo yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@ -75,15 +75,15 @@ kubectl drain <node-to-drain> --ignore-daemonsets
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
apt-mark hold kubelet kubectl
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
sudo apt-mark hold kubelet kubectl
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
sudo yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}

View File

@ -19,10 +19,10 @@ You need to have a Kubernetes cluster. Follow the
## Install the Weave Net addon
Follow the [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) guide.
Follow the [Integrating Kubernetes via the Addon](https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#-installation) guide.
The Weave Net addon for Kubernetes comes with a
[Network Policy Controller](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#npc)
[Network Policy Controller](https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#network-policy)
that automatically monitors Kubernetes for any NetworkPolicy annotations on all
namespaces and configures `iptables` rules to allow or block traffic as directed by the policies.

View File

@ -294,7 +294,6 @@ For example:
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
@ -318,7 +317,6 @@ So, the previous example would become:
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:
@ -542,7 +540,6 @@ spec:
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080

livenessProbe:
  httpGet:

View File

@ -39,7 +39,7 @@ Figure 1 represents what you're going to achieve in this task.
graph LR;
subgraph local[Local client machine]
client([client])-- local <br> traffic .-> local_ssh[Local SSH <br> SOCKS5 proxy];
client([client])-. local <br> traffic .-> local_ssh[Local SSH <br> SOCKS5 proxy];
end
local_ssh[SSH <br>SOCKS5 <br> proxy]-- SSH Tunnel -->sshd

View File

@ -48,14 +48,14 @@ Start RabbitMQ as follows:
```shell
# make a Service for the StatefulSet to use
kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq-service.yaml
kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-service.yaml
```
```
service "rabbitmq-service" created
```
```shell
kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq-statefulset.yaml
kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-statefulset.yaml
```
```
statefulset "rabbitmq" created

View File

@ -278,7 +278,7 @@ pod usage is still within acceptable limits.
### Container resource metrics
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
{{< feature-state feature_gate_name="HPAContainerMetrics" >}}
The HorizontalPodAutoscaler API also supports a container metric source where the HPA can track the
resource usage of individual containers across a set of Pods, in order to scale the target resource.
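For illustration only (the workload and container names are assumptions), an HPA using such a container metric source might look like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: application   # assumption: scale on this container's CPU only
        target:
          type: Utilization
          averageUtilization: 60
```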

View File

@ -17,23 +17,19 @@ description: |-
<div class="row">
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Scale an app using kubectl.</li>
</ul>
</div>
<div class="col-md-8">
<h3>Objectives</h3>
<ul>
<li>Scale an app using kubectl.</li>
</ul>
<h3>Scaling an application</h3>
<p>Previously we created a <a href="/docs/concepts/workloads/controllers/deployment/"> Deployment</a>, and then exposed it publicly via a <a href="/docs/concepts/services-networking/service/">Service</a>. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.</p>
<p>If you haven't worked through the earlier sections, start from <a href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/">Using minikube to create a cluster</a>.</p>
<p><em>Scaling</em> is accomplished by changing the number of replicas in a Deployment</p>
{{< note >}}
<p>If you are trying this after <a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/">the previous section</a>, you may have deleted the Service exposing the Deployment. In that case, please expose the Deployment again using the following command:</p><p><code><b>kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080</b></code></p>
{{< /note >}}
<p><em>Scaling</em> is accomplished by changing the number of replicas in a Deployment.</p>
</div>
<div class="col-md-4">
<div class="content__box content__box_lined">
@ -47,7 +43,14 @@ description: |-
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-md-12">
{{< note >}}
<p>If you are trying this after <a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/">the previous section</a>, you may have deleted the Service exposing the Deployment. In that case, please expose the Deployment again using the following command:</p><p><code><b>kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080</b></code></p>
{{< /note >}}
</div>
</div>
<div class="row">
<div class="col-md-8">

View File

@ -133,7 +133,7 @@ description: |-
and look for the <code>Image</code> field:</p>
<p><code><b>kubectl describe pods</b></code></p>
<p>To update the image of the application to version 2, use the <code>set image</code> subcommand, followed by the deployment name and the new image version:</p>
<p><code><b>kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2</b></code></p>
<p><code><b>kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2</b></code></p>
<p>The command notified the Deployment to use a different image for your app and initiated a rolling update. Check the status of the new Pods, and view the old one terminating with the <code>get pods</code> subcommand:</p>
<p><code><b>kubectl get pods</b></code></p>
</div>

View File

@ -11,6 +11,8 @@ spec:
operations: ["CREATE", "UPDATE"]
resources: ["deployments"]
validations:
- key: "high-replica-count"
expression: "object.spec.replicas > 50"
- expression: "object.spec.replicas > 50"
messageExpression: "'Deployment spec.replicas set to ' + string(object.spec.replicas)"
auditAnnotations:
- key: "high-replica-count"
valueExpression: "'Deployment spec.replicas set to ' + string(object.spec.replicas)"

View File

@ -78,9 +78,9 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
| February 2024 | 2024-02-09 | 2024-02-14 |
| March 2024 | 2024-03-08 | 2024-03-13 |
| April 2024 | 2024-04-12 | 2024-04-17 |
| May 2024 | 2024-05-10 | 2024-05-15 |
## Detailed Release History for Active Branches

View File

@ -0,0 +1,87 @@
---
title: Controladores Ingress
description: >-
Para que un [Ingress](/docs/concepts/services-networking/ingress/) funcione en tu clúster,
debe haber un _ingress controller_ en ejecución.
Debes seleccionar al menos un controlador Ingress y asegurarte de que está configurado en tu clúster.
En esta página se enumeran los controladores Ingress más comunes que se pueden implementar.
content_type: concept
weight: 50
---
<!-- overview -->
Para que el recurso Ingress funcione, el clúster necesita tener un controlador Ingress corriendo.
A diferencia de otros tipos de controladores que se ejecutan como parte del binario `kube-controller-manager`, los controladores Ingress no se inician automáticamente con el clúster. Usa esta página para elegir la implementación de controlador Ingress que mejor se adapte a tu clúster.
Kubernetes es un proyecto que soporta y mantiene los controladores Ingress de [AWS](https://github.com/kubernetes-sigs/aws-load-balancer-controller#readme), [GCE](https://git.k8s.io/ingress-gce/README.md#readme) y
[nginx](https://git.k8s.io/ingress-nginx/README.md#readme).
<!-- body -->
## Controladores adicionales
{{% thirdparty-content %}}
* [AKS Application Gateway Ingress Controller](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) es un controlador Ingress que controla la configuración de [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Alibaba Cloud MSE Ingress](https://www.alibabacloud.com/help/en/mse/user-guide/overview-of-mse-ingress-gateways) es un controlador Ingress que controla la configuración de [Alibaba Cloud Native Gateway](https://www.alibabacloud.com/help/en/mse/product-overview/cloud-native-gateway-overview?spm=a2c63.p38356.0.0.20563003HJK9is), el cual es una versión comercial de [Higress](https://github.com/alibaba/higress).
* [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) es un controlador Ingress basado en [Apache APISIX](https://github.com/apache/apisix).
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provee un balanceador de cargas L4-L7 usando [VMware NSX Advanced Load Balancer](https://avinetworks.com/).
* [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) es un controlador Ingress basado en [BFE](https://www.bfe-networks.net).
* [Cilium Ingress Controller](https://docs.cilium.io/en/stable/network/servicemesh/ingress/) es un controlador Ingress potenciado por [Cilium](https://cilium.io/).
* El [Citrix ingress controller](https://github.com/citrix/citrix-k8s-ingress-controller#readme) funciona con Citrix Application Delivery Controller.
* [Contour](https://projectcontour.io/) es un controlador Ingress basado en [Envoy](https://www.envoyproxy.io/).
* [Emissary-Ingress](https://www.getambassador.io/products/api-gateway) es un API Gateway y controlador Ingress basado en [Envoy](https://www.envoyproxy.io).
* [EnRoute](https://getenroute.io/) es un API Gateway basado en [Envoy](https://www.envoyproxy.io) que puede correr como un controlador Ingress.
* [Easegress IngressController](https://github.com/megaease/easegress/blob/main/doc/reference/ingresscontroller.md) es una API Gateway basada en [Easegress](https://megaease.com/easegress/) que puede correr como un controlador Ingress.
* F5 BIG-IP [Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/)
te permite usar un Ingress para configurar los servidores virtuales de F5 BIG-IP.
* [FortiADC Ingress Controller](https://docs.fortinet.com/document/fortiadc/7.0.0/fortiadc-ingress-controller/742835/fortiadc-ingress-controller-overview) soporta el recurso Ingress de Kubernetes y te permite manejar los objetos FortiADC desde Kubernetes.
* [Gloo](https://gloo.solo.io) es un controlador Ingress de código abierto basado en [Envoy](https://www.envoyproxy.io),
el cual ofrece la funcionalidad de API gateway.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) es un controlador Ingress para
[HAProxy](https://www.haproxy.org/#desc).
* [Higress](https://github.com/alibaba/higress) es una API Gateway basada en [Envoy](https://www.envoyproxy.io) que puede funcionar como un controlador Ingress.
* El [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
es también un controlador Ingress para [HAProxy](https://www.haproxy.org/#desc).
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
es un controlador Ingress basado en [Istio](https://istio.io/).
* El [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller#readme)
es un controlador Ingress que controla [Kong Gateway](https://konghq.com/kong/).
* [Kusk Gateway](https://kusk.kubeshop.io/) es un controlador Ingress OpenAPI-driven basado en [Envoy](https://www.envoyproxy.io).
* El [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx-ingress-controller/)
trabaja con el servidor web (como un proxy) [NGINX](https://www.nginx.com/resources/glossary/nginx/).
* El [ngrok Kubernetes Ingress Controller](https://github.com/ngrok/kubernetes-ingress-controller) es un controlador de código abierto para añadir acceso público seguro a sus servicios K8s utilizando la [plataforma ngrok](https://ngrok.com).
* El [OCI Native Ingress Controller](https://github.com/oracle/oci-native-ingress-controller#readme) es un controlador Ingress para Oracle Cloud Infrastructure el cual te permite manejar el [balanceador de cargas OCI](https://docs.oracle.com/en-us/iaas/Content/Balance/home.htm).
* El [Pomerium Ingress Controller](https://www.pomerium.com/docs/k8s/ingress.html) esta basado en [Pomerium](https://pomerium.com/), que ofrece una política de acceso sensible al contexto.
* [Skipper](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/) es un enrutador HTTP y proxy inverso para la composición de servicios, incluyendo casos de uso como Kubernetes Ingress, diseñado como una biblioteca para construir su proxy personalizado.
* El [Traefik Kubernetes Ingress provider](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) es un controlador Ingress para el [Traefik](https://traefik.io/traefik/) proxy.
* El [Tyk Operator](https://github.com/TykTechnologies/tyk-operator) amplía Ingress con recursos personalizados para aportar capacidades de gestión de API a Ingress. Tyk Operator funciona con el plano de control de código abierto Tyk Gateway y Tyk Cloud.
* [Voyager](https://voyagermesh.com) es un controlador Ingress para
[HAProxy](https://www.haproxy.org/#desc).
* [Wallarm Ingress Controller](https://www.wallarm.com/solutions/waf-for-kubernetes) es un controlador Ingress que proporciona capacidades de seguridad WAAP (WAF) y API.
## Uso de varios controladores Ingress
Puedes desplegar cualquier número de controladores Ingress utilizando [clase ingress](/docs/concepts/services-networking/ingress/#ingress-class)
dentro de un clúster. Ten en cuenta el `.metadata.name` de tu recurso de clase Ingress. Cuando creas un Ingress, necesitarás ese nombre para especificar el campo `ingressClassName` de su objeto Ingress (consulta [referencia IngressSpec v1](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec)). `ingressClassName` sustituye el antiguo [método de anotación](/docs/concepts/services-networking/ingress/#deprecated-annotation).
Si no especificas una IngressClass para un Ingress, y tu clúster tiene exactamente una IngressClass marcada como predeterminada, Kubernetes [aplica](/docs/concepts/services-networking/ingress/#default-ingress-class) la IngressClass predeterminada del clúster al Ingress.
Se marca una IngressClass como predeterminada estableciendo la [anotación `ingressclass.kubernetes.io/is-default-class`](/docs/reference/labels-annotations-taints/#ingressclass-kubernetes-io-is-default-class) en esa IngressClass, con el valor de cadena `"true"`.
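Como ejemplo ilustrativo (no forma parte del contenido original; el nombre y el controlador son suposiciones), una IngressClass marcada como predeterminada podría verse así:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ejemplo-predeterminada
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: example.com/ingress-controller   # suposición: reemplaza por tu controlador
```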
Lo ideal sería que todos los controladores Ingress cumplieran esta especificación, pero los distintos controladores Ingress funcionan de forma ligeramente diferente.
{{< note >}}
Asegúrate de revisar la documentación de tu controlador Ingress para entender las advertencias de tu elección.
{{< /note >}}
## {{% heading "whatsnext" %}}
* Más información [Ingress](/docs/concepts/services-networking/ingress/).
* [Configurar Ingress en Minikube con el controlador NGINX](/docs/tasks/access-application-cluster/ingress-minikube).

View File

@ -1022,7 +1022,7 @@ Para apagar el complemento `vsphereVolume` y no cargarlo por el administrador de
## Uso de subPath {#using-subpath}
Algunas veces es útil compartir un volumen para múltiples usos en un único Pod.
La propiedad `volumeMounts.subPath` especifica una sub-ruta dentro del volumen referenciado en lugar de su raíz.
La propiedad `volumeMounts[*].subPath` especifica una sub-ruta dentro del volumen referenciado en lugar de su raíz.
El siguiente ejemplo muestra cómo configurar un Pod con la pila LAMP (Linux Apache MySQL PHP) usando un único volumen compartido. Esta configuración de ejemplo usando `subPath` no se recomienda para su uso en producción.
@ -1198,7 +1198,7 @@ For more details, see the [FlexVolume](https://github.com/kubernetes/community/b
La propagación del montaje permite compartir volúmenes montados por un contenedor para otros contenedores en el mismo Pod, o aun para otros pods en el mismo nodo.
La propagación del montaje de un volumen es controlada por el campo `mountPropagation` en `Container.volumeMounts`. Sus valores son:
La propagación del montaje de un volumen es controlada por el campo `mountPropagation` en `containers[*].volumeMounts`. Sus valores son:
- `None` - Este montaje de volumen no recibirá ningún montaje posterior que el host haya montado en este volumen o en cualquiera de sus subdirectorios. De manera similar, los montajes creados por el contenedor no serán visibles en el host. Este es el modo por defecto.

View File

@ -0,0 +1,109 @@
---
reviewers:
- ramrodo
title: Eliminación Forzosa de Pods de StatefulSet
content_type: task
weight: 70
---
<!-- overview -->
Esta página muestra cómo eliminar Pods que son parte de un
{{< glossary_tooltip text="StatefulSet" term_id="StatefulSet" >}},
y explica las consideraciones a tener en cuenta al hacerlo.
## {{% heading "prerequisites" %}}
- Esta es una tarea bastante avanzada y tiene el potencial de violar algunas de las propiedades
inherentes de StatefulSet.
- Antes de proceder, familiarízate con las consideraciones enumeradas a continuación.
<!-- steps -->
## Consideraciones de StatefulSet
En la operación normal de un StatefulSet, **nunca** hay necesidad de eliminar forzosamente un Pod de StatefulSet.
El [controlador de StatefulSet](/es/docs/concepts/workloads/controllers/statefulset/) es responsable de
crear, escalar y eliminar miembros del StatefulSet. Intenta asegurar que el número especificado
de Pods, desde el ordinal 0 hasta N-1, estén vivos y listos. StatefulSet asegura que, en cualquier momento,
exista como máximo un Pod con una identidad dada, corriendo en un clúster. Esto se refiere a la semántica de
*como máximo uno* proporcionada por un StatefulSet.
La eliminación manual forzada debe realizarse con precaución, ya que tiene el potencial de violar la
semántica de como máximo uno, inherente a StatefulSet. Los StatefulSets pueden usarse para ejecutar aplicaciones distribuidas y
agrupadas que necesitan una identidad de red estable y almacenamiento estable.
Estas aplicaciones a menudo tienen configuraciones que dependen de un conjunto de un número fijo de
miembros con identidades fijas. Tener múltiples miembros con la misma identidad puede ser desastroso
y puede llevar a pérdida de datos (por ejemplo, escenario de cerebro dividido en sistemas basados en quórum).
## Eliminar Pods
Puedes realizar una eliminación de Pod paulatina con el siguiente comando:
```shell
kubectl delete pods <pod>
```
Para que lo anterior conduzca a una terminación paulatina, el Pod no debe especificar un
`pod.Spec.TerminationGracePeriodSeconds` de 0. La práctica de establecer un
`pod.Spec.TerminationGracePeriodSeconds` de 0 segundos es insegura y se desaconseja rotundamente
para los Pods de StatefulSet. La eliminación paulatina es segura y garantizará que el Pod
se apague de [manera paulatina](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination), antes de que kubelet elimine el nombre en el apiserver.
Un Pod no se elimina automáticamente cuando un nodo no es accesible.
Los Pods que se ejecutan en un Nodo inaccesible entran en el estado 'Terminating' o 'Unknown' después de un
[tiempo de espera](/es/docs/concepts/architecture/nodes/#estados).
Los Pods también pueden entrar en estos estados cuando el usuario intenta la eliminación paulatina de un Pod
en un nodo inaccesible.
Las únicas formas en que un Pod en tal estado puede ser eliminado del apiserver son las siguientes:
- El objeto Node es eliminado (ya sea por ti, o por el [Controlador de Nodo](/es/docs/concepts/architecture/nodes/#controlador-de-nodos)).
- Kubelet, en el nodo no responsivo, comienza a responder, mata el Pod y elimina la entrada del apiserver.
- Eliminación forzada del Pod por el usuario.
La mejor práctica recomendada es usar el primer o segundo enfoque. Si un nodo está confirmado
como muerto (por ejemplo, desconectado permanentemente de la red, apagado, etc.), entonces elimina
el objeto Node. Si el nodo es afectado de una partición de red, entonces trata de resolver esto
o espera a que se resuelva. Cuando la partición se solucione, kubelet completará la eliminación
del Pod y liberará su nombre en el apiserver.
Normalmente, el sistema completa la eliminación una vez que el Pod ya no se está ejecutando en un nodo, o
el nodo es eliminado por un administrador. Puedes anular esto forzando la eliminación del Pod.
### Eliminación Forzosa
Las eliminaciones forzosas **no** esperan confirmación de kubelet de que el Pod ha sido terminado.
Independientemente de si una eliminación forzosa tiene éxito en matar un Pod, inmediatamente
liberará el nombre del apiserver. Esto permitiría que el controlador de StatefulSet cree un Pod de reemplazo
con esa misma identidad; esto puede llevar a la duplicación de un Pod que aún está en ejecución,
y si dicho Pod todavía puede comunicarse con los otros miembros del StatefulSet,
violará la semántica de como máximo uno que StatefulSet está diseñado para garantizar.
Cuando eliminas forzosamente un Pod de StatefulSet, estás afirmando que el Pod en cuestión nunca
volverá a hacer contacto con otros Pods en el StatefulSet y su nombre puede ser liberado de forma segura para que
se cree un reemplazo.
Si quieres eliminar un Pod de forma forzosa usando la versión de kubectl >= 1.5, haz lo siguiente:
```shell
kubectl delete pods <pod> --grace-period=0 --force
```
Si estás usando cualquier versión de kubectl <= 1.4, deberías omitir la opción `--force` y usar:
```shell
kubectl delete pods <pod> --grace-period=0
```
Si incluso después de estos comandos el pod está atascado en el estado `Unknown`, usa el siguiente comando para
eliminar el Pod del clúster:
```shell
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
```
Siempre realiza la eliminación forzosa de Pods de StatefulSet con cuidado y con pleno conocimiento de los riesgos involucrados.
## {{% heading "whatsnext" %}}
Aprende más sobre [depurar un StatefulSet](/docs/tasks/debug/debug-application/debug-statefulset/).

View File

@ -9,8 +9,7 @@ weight: 20
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">

View File

@ -9,8 +9,6 @@ weight: 20
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>
<div class="layout" id="top">

View File

@ -8,14 +8,13 @@ card:
title: Pas à pas des bases
---
<!DOCTYPE html>
<html lang="fr">
<body>
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<div class="layout" id="top">
<main class="content">
@ -67,9 +66,9 @@ card:
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
<a href="/fr/docs/tutorials/kubernetes-basics/explore/explore-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_03.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Explorez votre application</h5></a>
<a href="/fr/docs/tutorials/kubernetes-basics/explore/explore-intro/"><h5>3. Explorez votre application</h5></a>
</div>
</div>
</div>
@ -79,25 +78,25 @@ card:
<div class="row">
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<a href="/fr/docs/tutorials/kubernetes-basics/expose/expose-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_04.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Exposez votre application publiquement</h5></a>
<a href="/fr/docs/tutorials/kubernetes-basics/expose/expose-intro/"><h5>4. Exposez votre application publiquement</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
<a href="/fr/docs/tutorials/kubernetes-basics/scale/scale-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_05.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Mettre à l'échelle votre application</h5></a>
<a href="/fr/docs/tutorials/kubernetes-basics/scale/scale-intro/"><h5>5. Mettre à l'échelle votre application</h5></a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="thumbnail">
<a href="/docs/tutorials/kubernetes-basics/update/update-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347" alt=""></a>
<a href="/fr/docs/tutorials/kubernetes-basics/update/update-intro/"><img src="/docs/tutorials/kubernetes-basics/public/images/module_06.svg?v=1469803628347" alt=""></a>
<div class="caption">
<a href="/docs/tutorials/kubernetes-basics/update/update-intro/"><h5>6. Mettre à jour votre application</h5></a>
<a href="/fr/docs/tutorials/kubernetes-basics/update/update-intro/"><h5>6. Mettre à jour votre application</h5></a>
</div>
</div>
</div>
@ -108,7 +107,7 @@ card:
<div class="row">
<div class="col-md-12">
<a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button">Lancer le tutoriel<span class="btn__next"></span></a>
<a class="btn btn-lg btn-success" href="/fr/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" role="button">Lancer le tutoriel<span class="btn__next"></span></a>
</div>
</div>

Some files were not shown because too many files have changed in this diff.