Merge branch 'kubernetes:main' into main
commit 115950ea75
.editorconfig
@@ -16,5 +16,8 @@ indent_size = 2
indent_style = space
indent_size = 4

[*.{yaml}]
insert_final_newline = true

[Makefile]
indent_style = tab
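The hunk above follows the standard EditorConfig INI layout: each bracketed header is a file glob, and the keys below it apply to matching files. As a sketch (using a hypothetical throwaway copy of the excerpt, not the repository's real file), a script can pull out the effective keys for one section:

```shell
# Write a throwaway copy of the .editorconfig excerpt shown in the diff.
printf '[*.{yaml}]\ninsert_final_newline = true\n\n[Makefile]\nindent_style = tab\n' > /tmp/editorconfig.demo

# Print only the keys inside the [Makefile] section (stop at the next section header).
awk '/^\[Makefile\]/{f=1; next} /^\[/{f=0} f && NF' /tmp/editorconfig.demo
```

This mirrors why the diff adds `[Makefile]` with `indent_style = tab`: make recipes require literal tabs, so the Makefile section must override the global space-indentation default.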
security.txt (new file)
@@ -0,0 +1,5 @@
Contact: mailto:security@kubernetes.io
Expires: 2031-01-11T06:30:00.000Z
Preferred-Languages: en
Canonical: https://kubernetes.io/.well-known/security.txt
Policy: https://github.com/kubernetes/website/blob/main/SECURITY.md
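The file added here follows the `security.txt` convention (RFC 9116), which requires at least one `Contact` field and recommends `Expires`. A minimal sketch, using a temporary copy of the exact content from the diff, checks those fields before publishing:

```shell
# Recreate the security.txt added in the diff, then check the fields RFC 9116 cares about.
cat > /tmp/security.txt <<'EOF'
Contact: mailto:security@kubernetes.io
Expires: 2031-01-11T06:30:00.000Z
Preferred-Languages: en
Canonical: https://kubernetes.io/.well-known/security.txt
Policy: https://github.com/kubernetes/website/blob/main/SECURITY.md
EOF

grep -q '^Contact: ' /tmp/security.txt && echo "Contact present"
grep -q '^Expires: ' /tmp/security.txt && echo "Expires present"
```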
OWNERS (2 changes)
@@ -7,12 +7,12 @@ approvers:
- sig-docs-en-owners # Defined in OWNERS_ALIASES

emeritus_approvers:
# - celestehorgan, commented out to disable PR assignments
# - chenopis, commented out to disable PR assignments
# - irvifa, commented out to disable PR assignments
# - jaredbhatti, commented out to disable PR assignments
# - jimangel, commented out to disable PR assignments
# - kbarnard10, commented out to disable PR assignments
# - kbhawkey, commented out to disable PR assignments
# - steveperry-53, commented out to disable PR assignments
- stewart-yu
# - zacharysarah, commented out to disable PR assignments
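OWNERS files like the one above are plain YAML consumed by Kubernetes' Prow automation: entries under `approvers:` are active, while `emeritus_approvers:` entries are retired. A small sketch (against a hypothetical minimal OWNERS file, not the repository's full one) extracts only the active approvers:

```shell
# Hypothetical minimal OWNERS file mirroring the hunk above.
cat > /tmp/OWNERS <<'EOF'
approvers:
  - sig-docs-en-owners # Defined in OWNERS_ALIASES
emeritus_approvers:
  - stewart-yu
EOF

# List entries under `approvers:` only (stop at the next top-level key).
awk '/^approvers:/{f=1; next} /^[^ ]/{f=0} f' /tmp/OWNERS
```

Note how emeritus members are excluded, which is exactly the effect of the commented-out lines in the diff: they stay documented without receiving PR assignments.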
OWNERS_ALIASES
@@ -2,58 +2,50 @@ aliases:
sig-docs-blog-owners: # Approvers for blog content
- mrbobbytables
- nate-double-u
- onlydole
- sftim
sig-docs-blog-reviewers: # Reviewers for blog content
- Gauravpadam
- mrbobbytables
- nate-double-u
- onlydole
- sftim
sig-docs-localization-owners: # Admins for localization content
- a-mccarthy
- divya-mohan0209
- jimangel
- kbhawkey
- natalisucks
- onlydole
- reylejano
- sftim
- seokho-son
- tengqm
sig-docs-de-owners: # Admins for German content
- bene2k1
- mkorbi
- rlenferink
sig-docs-de-reviews: # PR reviews for German content
- bene2k1
- mkorbi
- rlenferink
sig-docs-en-owners: # Admins for English content
- annajung
- bradtopol
- celestehorgan
- divya-mohan0209
- kbhawkey
- drewhagen # RT 1.30 Docs Lead
- katcosgrove # RT 1.30 Lead
- natalisucks
- nate-double-u
- onlydole
- reylejano
- Rishit-dagli # 1.28 Release Team Docs Lead
- sftim
- tengqm
sig-docs-en-reviews: # PR reviews for English content
- bradtopol
- dipesh-rawat
- divya-mohan0209
- kbhawkey
- mehabhalodiya
- mengjiao-liu
- natalisucks
- nate-double-u
- onlydole
- reylejano
- sftim
- shannonxtreme
- tengqm
- windsonsea
sig-docs-es-owners: # Admins for Spanish content
- 92nqb
- krol3
@@ -111,28 +103,24 @@ aliases:
- atoato88
- bells17
- kakts
- ptux
- t-inu
sig-docs-ko-owners: # Admins for Korean content
- gochist
- ianychoi
- jihoon-seo
- seokho-son
- yoonian
- ysyukr
sig-docs-ko-reviews: # PR reviews for Korean content
- gochist
- ianychoi
- jihoon-seo
- jmyung
- jongwooo
- seokho-son
- yoonian
- ysyukr
sig-docs-leads: # Website chairs and tech leads
- divya-mohan0209
- kbhawkey
- natalisucks
- onlydole
- reylejano
- sftim
- tengqm
@@ -151,7 +139,6 @@ aliases:
sig-docs-zh-reviews: # PR reviews for Chinese content
- asa3311
- chenrui333
- chenxuc
- howieyuen
# idealhack
- kinzhi
@@ -206,16 +193,15 @@ aliases:
- Arhell
- idvoretskyi
- MaxymVlasov
- Potapy4
# authoritative source: git.k8s.io/community/OWNERS_ALIASES
committee-steering: # provide PR approvals for announcements
- cblecker
- cpanato
- bentheelder
- justaugustus
- mrbobbytables
- pacoxu
- palnabarun
- tpepper
- pohly
- soltysh
# authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES
sig-release-leads:
- cpanato # SIG Technical Lead
@@ -239,7 +225,7 @@ aliases:
- jimangel # Release Manager Associate
- jrsapi # Release Manager Associate
- salaxander # Release Manager Associate
# authoritative source: https://github.com/kubernetes/committee-security-response/blob/main/OWNERS_ALIASES
committee-security-response:
- cjcullen
- cji
@@ -249,7 +235,7 @@ aliases:
- ritazh
- SaranBalaji90
- tabbysable
# authoritative source: https://github.com/kubernetes/sig-security/blob/main/OWNERS_ALIASES
sig-security-leads:
- IanColdwater
- tabbysable
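Part of the tidying in these OWNERS_ALIASES hunks is removing a duplicated reviewer entry. A quick duplicate check over an alias's member list (here a hypothetical excerpt containing a deliberate duplicate, analogous to what the diff removed) is:

```shell
# Hypothetical alias member list with one duplicated entry.
printf -- '- bradtopol\n- dipesh-rawat\n- dipesh-rawat\n- divya-mohan0209\n' > /tmp/alias-members.txt

# Print each value that occurs more than once.
sort /tmp/alias-members.txt | uniq -d
```

Running this over each alias block before committing would catch the kind of duplicate this PR cleans up.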
README-hi.md
@@ -3,11 +3,11 @@
[](https://travis-ci.org/kubernetes/website)
[](https://github.com/kubernetes/website/releases/latest)

स्वागत हे! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियां हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं!
स्वागत है! इस रिपॉजिटरी में [कुबरनेट्स वेबसाइट और दस्तावेज](https://kubernetes.io/) बनाने के लिए आवश्यक सभी संपत्तियाँ हैं। हम बहुत खुश हैं कि आप योगदान करना चाहते हैं!

## डॉक्स में योगदान देना

आप अपने GitHub खाते में इस रिपॉजिटरी की एक copy बनाने के लिए स्क्रीन के ऊपरी-दाएँ क्षेत्र में **Fork** बटन पर क्लिक करें। इस copy को *Fork* कहा जाता है। अपने fork में परिवर्तन करने के बाद जब आप उनको हमारे पास भेजने के लिए तैयार हों, तो अपने fork पर जाएं और हमें इसके बारे में बताने के लिए एक नया pull request बनाएं।
आप अपने GitHub खाते में इस रिपॉजिटरी की एक copy बनाने के लिए स्क्रीन के ऊपरी-दाएँ क्षेत्र में **Fork** बटन पर क्लिक करें। इस copy को *Fork* कहा जाता है। अपने fork में परिवर्तन करने के बाद जब आप उनको हमारे पास भेजने के लिए तैयार हों, तो अपने fork पर जाएँ और हमें इसके बारे में बताने के लिए एक नया pull request बनाएं।

एक बार जब आपका pull request बन जाता है, तो एक कुबरनेट्स समीक्षक स्पष्ट, कार्रवाई योग्य प्रतिक्रिया प्रदान करने की जिम्मेदारी लेगा। pull request के मालिक के रूप में, **यह आपकी जिम्मेदारी है कि आप कुबरनेट्स समीक्षक द्वारा प्रदान की गई प्रतिक्रिया को संबोधित करने के लिए अपने pull request को संशोधित करें।**

@@ -37,8 +37,6 @@
> यदि आप विंडोज पर हैं, तो आपको कुछ और टूल्स की आवश्यकता होगी जिन्हें आप [Chocolatey](https://chocolatey.org) के साथ इंस्टॉल कर सकते हैं।

> यदि आप डॉकर के बिना स्थानीय रूप से वेबसाइट चलाना पसंद करते हैं, तो नीचे Hugo का उपयोग करके स्थानीय रूप से साइट चलाना देखें।
यदि आप डॉकर के बिना स्थानीय रूप से वेबसाइट चलाना पसंद करते हैं, तो नीचे दिए गए Hugo का उपयोग करके स्थानीय रूप से [साइट को चलाने](#hugo-का-उपयोग-करते-हुए-स्थानीय-रूप-से-साइट-चलाना) का तरीका देखें।

यदि आप [डॉकर](https://www.docker.com/get-started) चला रहे हैं, तो स्थानीय रूप से `कुबेरनेट्स-ह्यूगो` Docker image बनाएँ:
README-pl.md (32 changes)
@@ -9,7 +9,7 @@ W tym repozytorium znajdziesz wszystko, czego potrzebujesz do zbudowania [strony

## Jak używać tego repozytorium

Możesz uruchomić serwis lokalnie poprzez Hugo (Extended version) lub ze środowiska kontenerowego. Zdecydowanie zalecamy korzystanie z kontenerów, bo dzięki temu lokalna wersja będzie spójna z tym, co jest na oficjalnej stronie.
Możesz uruchomić serwis lokalnie poprzez [Hugo (Extended version)](https://gohugo.io/) lub ze środowiska kontenerowego. Zdecydowanie zalecamy korzystanie z kontenerów, bo dzięki temu lokalna wersja będzie spójna z tym, co jest na oficjalnej stronie.

## Wymagania wstępne
@@ -29,21 +29,28 @@ cd website

Strona Kubernetesa używa [Docsy Hugo theme](https://github.com/google/docsy#readme). Nawet jeśli planujesz uruchomić serwis w środowisku kontenerowym, zalecamy pobranie podmodułów i innych zależności za pomocą polecenia:

```bash
# pull in the Docsy submodule
### Windows
```powershell
# aktualizuj podrzędne moduły
git submodule update --init --recursive --depth 1
```

### Linux / inne systemy Unix
```bash
# aktualizuj podrzędne moduły
make module-init
```

## Uruchomienie serwisu w kontenerze

Aby zbudować i uruchomić serwis wewnątrz środowiska kontenerowego, wykonaj następujące polecenia:

```bash
make container-image
# Możesz ustawić zmienną $CONTAINER_ENGINE wskazującą na dowolne narzędzie obsługujące kontenery podobnie jak Docker
make container-serve
```

Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) i [Windows](https://docs.docker.com/docker-for-windows/#resources)).
Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOS](https://docs.docker.com/desktop/settings/mac/) i [Windows](https://docs.docker.com/desktop/settings/windows/)).

Aby obejrzeć zawartość serwisu, otwórz w przeglądarce adres <http://localhost:1313>. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
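The `$CONTAINER_ENGINE` comment in the hunk above relies on a common Makefile pattern: the variable is given a default that an environment variable can override. A stand-in sketch (the Makefile here is hypothetical, not the repository's real one) shows the mechanism:

```shell
# A `?=` default yields to an environment override, which is how a comment like
# "you can set $CONTAINER_ENGINE to any Docker-like tool" typically works.
printf 'CONTAINER_ENGINE ?= docker\nshow:\n\t@echo $(CONTAINER_ENGINE)\n' > /tmp/Makefile.demo

make -f /tmp/Makefile.demo show                           # prints: docker
CONTAINER_ENGINE=podman make -f /tmp/Makefile.demo show   # prints: podman
```

So `CONTAINER_ENGINE=podman make container-serve` would swap the container tool without editing the Makefile.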
@@ -53,11 +60,16 @@ Upewnij się, że zainstalowałeś odpowiednią wersję Hugo "extended", określ

Aby uruchomić i przetestować serwis lokalnie, wykonaj:

```bash
# install dependencies
npm ci
make serve
```
- macOS i Linux
```bash
npm ci
make serve
```
- Windows (PowerShell)
```powershell
npm ci
hugo.exe server --buildFuture --environment development
```

Zostanie uruchomiony lokalny serwer Hugo na porcie 1313. Otwórz w przeglądarce adres <http://localhost:1313>, aby obejrzeć zawartość serwisu. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
README-pt.md (87 changes)
@@ -2,11 +2,11 @@

[](https://app.netlify.com/sites/kubernetes-io-main-staging/deploys) [](https://github.com/kubernetes/website/releases/latest)

Bem-vindos! Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Estamos muito satisfeitos por você querer contribuir!
Este repositório contém todos os recursos necessários para criar o [website e documentação do Kubernetes](https://kubernetes.io/). Ficamos felizes por você querer contribuir!

# Utilizando este repositório
## Utilizando este repositório

Você pode executar o website localmente utilizando o Hugo (versão Extended), ou você pode executa-ló em um container runtime. É altamente recomendável utilizar um container runtime, pois garante a consistência na implantação do website real.
Você pode executar o website localmente utilizando o [Hugo (versão Extended)](https://gohugo.io/), ou você pode executá-lo em um agente de execução de contêiner. É altamente recomendável utilizar um agente de execução de contêiner, pois este fornece consistência de implantação em relação ao website real.

## Pré-requisitos
@@ -24,22 +24,33 @@ git clone https://github.com/kubernetes/website.git
cd website
```

O website do Kubernetes utiliza o [tema Docsy Hugo](https://github.com/google/docsy#readme). Mesmo se você planeje executar o website em um container, é altamente recomendado baixar os submódulos e outras dependências executando o seguinte comando:
O website do Kubernetes utiliza o [tema Docsy Hugo](https://github.com/google/docsy#readme). Mesmo que você planeje executar o website em um contêiner, é altamente recomendado baixar os submódulos e outras dependências executando o seguinte comando:

```
# Baixar o submódulo Docsy
## Windows

```powershell
# Obter dependências e outros submódulos
git submodule update --init --recursive --depth 1
```

## Linux / outros Unix

```bash
# Obter dependências e outros submódulos
make module-init
```

## Executando o website usando um container

Para executar o build do website em um container, execute o comando abaixo para criar a imagem do container e executa-lá:
Para executar o build do website em um contêiner, execute o comando abaixo:

```
make container-image
```bash
# Você pode definir a variável $CONTAINER_ENGINE com o nome do agente de execução de contêiner utilizado.
make container-serve
```

Caso ocorram erros, é provável que o contêiner que está executando o Hugo não tenha recursos suficientes. A solução é aumentar a quantidade de CPU e memória disponível para o Docker ([MacOS](https://docs.docker.com/desktop/settings/mac/) e [Windows](https://docs.docker.com/desktop/settings/windows/)).

Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força a atualização do navegador.

## Executando o website localmente utilizando o Hugo
@@ -54,7 +65,7 @@ npm ci
make serve
```

Isso iniciará localmente o Hugo na porta 1313. Abra o seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força uma atualização no navegador.
O Hugo iniciará localmente na porta 1313. Abra o seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força uma atualização no navegador.

## Construindo a página de referência da API
@@ -62,31 +73,21 @@ A página de referência da API localizada em `content/en/docs/reference/kuberne

Siga os passos abaixo para atualizar a página de referência para uma nova versão do Kubernetes:

OBS: modifique o "v1.20" no exemplo a seguir pela versão a ser atualizada

1. Obter o submódulo `kubernetes-resources-reference`:
1. Obter o submódulo `api-ref-generator`:

```
git submodule update --init --recursive --depth 1
```

2. Criar a nova versão da API no submódulo e adicionar à especificação do Swagger:
2. Atualizar a especificação do Swagger:

```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/gen-resourcesdocs/api/v1.20/swagger.json
```bash
curl 'https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json' > api-ref-generator/
```

3. Copiar o sumário e os campos de configuração para a nova versão a partir da versão anterior:
3. Ajustar os arquivos `toc.yaml` e `fields.yaml` para refletir as mudanças entre as duas versões.

```
mkdir api-ref-generator/gen-resourcesdocs/api/v1.20
cp api-ref-generator/gen-resourcesdocs/api/v1.19/* api-ref-generator/gen-resourcesdocs/api/v1.20/
```

4. Ajustar os arquivos `toc.yaml` e `fields.yaml` para refletir as mudanças entre as duas versões.

5. Em seguida, gerar as páginas:
4. Em seguida, gerar as páginas:

```
make api-reference
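The update flow above downloads the Kubernetes OpenAPI (Swagger) spec with `curl` and feeds it to the generator. A sanity check that the downloaded file is well-formed JSON catches truncated downloads before page generation; the sketch below uses an offline stub in place of the real download (the stub's content is hypothetical):

```shell
# Offline stand-in for the swagger.json fetched by the curl command above;
# the point is validating the file before running the generator.
printf '{"swagger":"2.0","info":{"title":"Kubernetes","version":"v1.28.0"}}' > /tmp/swagger.json

python3 -c "import json; spec = json.load(open('/tmp/swagger.json')); print(spec['info']['title'])"
```

If the file were truncated, `json.load` would raise an error, failing the step early rather than producing broken reference pages.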
@@ -101,7 +102,7 @@ make container-serve

Abra o seu navegador em http://localhost:1313/docs/reference/kubernetes-api/ para visualizar a página de referência da API.

6. Quando todas as mudanças forem refletidas nos arquivos de configuração `toc.yaml` e `fields.yaml`, crie um pull request com a nova página de referência de API.
5. Quando todas as mudanças forem refletidas nos arquivos de configuração `toc.yaml` e `fields.yaml`, crie um pull request com a nova página de referência de API.

## Troubleshooting
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version

@@ -153,16 +154,17 @@ make: *** [container-serve] Error 137

Verifique a quantidade de memória disponível para o agente de execução de contêiner. No caso do Docker Desktop para macOS, abra o menu "Preferences..." -> "Resources..." e tente disponibilizar mais memória.

# Comunidade, discussão, contribuição e apoio
## Comunidade, discussão, contribuição e apoio

Saiba mais sobre a comunidade Kubernetes SIG Docs e reuniões na [página da comunidade](http://kubernetes.io/community/).

Você também pode entrar em contato com os mantenedores deste projeto em:
Você também pode entrar em contato com os mantenedores deste projeto utilizando:

- [Slack](https://kubernetes.slack.com/messages/sig-docs) ([Obter o convide para o este slack](https://slack.k8s.io/))
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Obter o convide para este slack](https://slack.k8s.io/)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)

# Contribuindo com os documentos
## Contribuindo com a documentação

Você pode clicar no botão **Fork** na área superior direita da tela para criar uma cópia desse repositório na sua conta do GitHub. Esta cópia é chamada de *fork*. Faça as alterações desejadas no seu fork e, quando estiver pronto para enviar as alterações para nós, vá até o fork e crie um novo **pull request** para nos informar sobre isso.
@@ -179,10 +181,27 @@ Para mais informações sobre como contribuir com a documentação do Kubernetes
* [Guia de Estilo da Documentação](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizando documentação do Kubernetes](https://kubernetes.io/docs/contribute/localization/)

Você pode contatar os mantenedores da localização em Português em:
### Embaixadores para novos colaboradores

* Felipe ([GitHub - @femrtnz](https://github.com/femrtnz))
* [Slack channel](https://kubernetes.slack.com/messages/kubernetes-docs-pt)
Caso você precise de ajuda em algum momento ao contribuir, os [Embaixadores para novos colaboradores](https://kubernetes.io/docs/contribute/advanced/#serve-as-a-new-contributor-ambassador) são pontos de contato. São aprovadores do SIG Docs cujas responsabilidades incluem orientar e ajudar novos colaboradores em seus primeiros pull requests. O melhor canal para contato com embaixadores é o [Slack do Kubernetes](https://slack.k8s.io/). Atuais Embaixadores do SIG Docs:

| Nome | Slack | GitHub |
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |

## Traduções do `README.md`

| Idioma | Idioma |
| ------------------------- | -------------------------- |
| [Alemão](README-de.md) | [Italiano](README-it.md) |
| [Chinês](README-zh.md) | [Japonês](README-ja.md) |
| [Coreano](README-ko.md) | [Polonês](README-pl.md) |
| [Espanhol](README-es.md) | [Português](README-pt.md) |
| [Francês](README-fr.md) | [Russo](README-ru.md) |
| [Hindi](README-hi.md) | [Ucraniano](README-uk.md) |
| [Indonésio](README-id.md) | [Vietnamita](README-vi.md) |

Você pode contatar os mantenedores da localização em Português no canal do Slack [#kubernetes-docs-pt](https://kubernetes.slack.com/messages/kubernetes-docs-pt).

# Código de conduta
README-zh.md (83 changes)
@@ -14,10 +14,10 @@ This repository contains the assets required to build the [Kubernetes website an

<!--
- [Contributing to the docs](#contributing-to-the-docs)
- [Localization ReadMes](#localization-readmemds)
- [Localization READMEs](#localization-readmemds)
-->
- [为文档做贡献](#为文档做贡献)
- [README.md 本地化](#readmemd-本地化)
- [README 本地化](#readme-本地化)

<!--
## Using this repository
@@ -26,7 +26,8 @@ You can run the website locally using [Hugo (Extended version)](https://gohugo.i
-->
## 使用这个仓库

可以使用 [Hugo(扩展版)](https://gohugo.io/)在本地运行网站,也可以在容器中运行它。强烈建议使用容器,因为这样可以和在线网站的部署保持一致。
可以使用 [Hugo(扩展版)](https://gohugo.io/)在本地运行网站,也可以在容器中运行它。
强烈建议使用容器,因为这样可以和在线网站的部署保持一致。

<!--
## Prerequisites
@@ -71,10 +72,11 @@ git submodule update --init --recursive --depth 1
```
-->
### Windows

```powershell
# 获取子模块依赖
git submodule update --init --recursive --depth 1
```

<!--
### Linux / other Unix
@@ -84,10 +86,11 @@ make module-init
```
-->
### Linux / 其它 Unix

```bash
# 获取子模块依赖
make module-init
```

<!--
## Running the website using a container
@@ -98,17 +101,23 @@ To build the site in a container, run the following:

要在容器中构建网站,请运行以下命令:

<!--
```bash
# You can set $CONTAINER_ENGINE to the name of any Docker-like container tool
make container-serve
```
-->
```bash
# 你可以将 $CONTAINER_ENGINE 设置为任何 Docker 类容器工具的名称
make container-serve
```

<!--
If you see errors, it probably means that the hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) and [Windows](https://docs.docker.com/docker-for-windows/#resources)).
If you see errors, it probably means that the hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOS](https://docs.docker.com/desktop/settings/mac/) and [Windows](https://docs.docker.com/desktop/settings/windows/)).
-->
如果你看到错误,这可能意味着 Hugo 容器没有足够的可用计算资源。
要解决这个问题,请增加机器([MacOSX](https://docs.docker.com/docker-for-mac/#resources)
和 [Windows](https://docs.docker.com/docker-for-windows/#resources))上
要解决这个问题,请增加机器([MacOS](https://docs.docker.com/desktop/settings/mac/)
和 [Windows](https://docs.docker.com/desktop/settings/windows/))上
Docker 允许的 CPU 和内存使用量。

<!--
@@ -120,22 +129,36 @@ Open up your browser to <http://localhost:1313> to view the website. As you make

<!--
## Running the website locally using Hugo

Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L11) file.

To build and test the site locally, run:
To install dependencies, deploy and test the site locally, run:
-->
## 在本地使用 Hugo 来运行网站

请确保安装的是 [`netlify.toml`](netlify.toml#L10) 文件中环境变量 `HUGO_VERSION` 所指定的
请确保安装的是 [`netlify.toml`](netlify.toml#L11) 文件中环境变量 `HUGO_VERSION` 所指定的
Hugo Extended 版本。

若要在本地构造和测试网站,请运行:
若要在本地安装依赖,构建和测试网站,运行以下命令:

```bash
# 安装依赖
npm ci
make serve
```
<!--
- For macOS and Linux
-->
- 对于 macOS 和 Linux

```bash
npm ci
make serve
```

<!--
- For Windows (PowerShell)
-->
- 对于 Windows (PowerShell)

```powershell
npm ci
hugo.exe server --buildFuture --environment development
```

<!--
This will start the local Hugo server on port 1313. Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
@@ -154,7 +177,9 @@ The API reference pages located in `content/en/docs/reference/kubernetes-api` ar

To update the reference pages for a new Kubernetes release follow these steps:
-->
位于 `content/en/docs/reference/kubernetes-api` 的 API 参考页面是使用 <https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs> 根据 Swagger 规范(也称为 OpenAPI 规范)构建的。
位于 `content/en/docs/reference/kubernetes-api` 的 API 参考页面是使用
<https://github.com/kubernetes-sigs/reference-docs/tree/master/gen-resourcesdocs>
根据 Swagger 规范(也称为 OpenAPI 规范)构建的。

要更新 Kubernetes 新版本的参考页面,请执行以下步骤:

@@ -193,7 +218,7 @@ To update the reference pages for a new Kubernetes release follow these steps:
<!--
You can test the results locally by making and serving the site from a container image:
-->
你可以通过从容器映像创建和提供站点来在本地测试结果:
你可以通过从容器镜像创建和提供站点来在本地测试结果:

```bash
make container-image
@@ -203,12 +228,13 @@ To update the reference pages for a new Kubernetes release follow these steps:

<!--
In a web browser, go to <http://localhost:1313/docs/reference/kubernetes-api/> to view the API reference.
-->
在 Web 浏览器中,打开 <http://localhost:1313/docs/reference/kubernetes-api/> 查看 API 参考。
在 Web 浏览器中,打开 <http://localhost:1313/docs/reference/kubernetes-api/> 查看 API 参考页面。

<!--
5. When all changes of the new contract are reflected into the configuration files `toc.yaml` and `fields.yaml`, create a Pull Request with the newly generated API reference pages.
-->
5. 当所有新的更改都反映到配置文件 `toc.yaml` 和 `fields.yaml` 中时,使用新生成的 API 参考页面创建一个 Pull Request。
5. 当所有新的更改都反映到配置文件 `toc.yaml` 和 `fields.yaml` 中时,使用新生成的 API
参考页面创建一个 Pull Request。

<!--
## Troubleshooting
@@ -252,10 +278,13 @@ Then run the following commands (adapted from <https://gist.github.com/tombigel/
-->
然后运行以下命令(参考 <https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c>):

<!--
# These are the original gist links, linking to my gists now.
-->
```shell
#!/bin/sh

# 这些是原始的 gist 链接,立即链接到我的 gist。
# 这些是原始的 gist 链接,立即链接到我的 gist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
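The launchd plists referenced in that gist raise macOS's open-file limits, which Hugo's live-reload file watcher can exhaust on large sites. Before installing them, it is worth checking what the current limits are:

```shell
# Current per-process limits on open file descriptors; the plists above are
# meant to raise these on macOS.
ulimit -n    # soft limit
ulimit -Hn   # hard limit
```

If the soft limit already exceeds the number of files Hugo watches, the plist step can be skipped.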
@@ -319,7 +348,7 @@ ARG HUGO_VERSION

将 "https://proxy.golang.org" 替换为本地可以使用的代理地址。

**注意:** 此部分仅适用于中国大陆
**注意:** 此部分仅适用于中国大陆。

<!--
## Get involved with SIG Docs
@@ -404,14 +433,14 @@ SIG Docs 的当前新贡献者大使:
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |
-->
| 姓名 | Slack | GitHub |
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma | @arsh | @RinkiyaKeDad |

<!--
## Localization `README.md`'s
## Localization READMEs
-->
## `README.md` 本地化
## README 本地化

<!--
| Language | Language |

@@ -434,7 +463,7 @@ SIG Docs 的当前新贡献者大使:
| [意大利语](README-it.md) | [乌克兰语](README-uk.md) |
| [日语](README-ja.md) | [越南语](README-vi.md) |

# 中文本地化
## 中文本地化

可以通过以下方式联系中文本地化的维护人员:
README.md (24 changes)
@@ -5,7 +5,7 @@
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!

- [Contributing to the docs](#contributing-to-the-docs)
- [Localization ReadMes](#localization-readmemds)
- [Localization READMEs](#localization-readmes)

## Using this repository
@@ -50,7 +50,7 @@ To build the site in a container, run the following:
make container-serve
```

If you see errors, it probably means that the hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) and [Windows](https://docs.docker.com/docker-for-windows/#resources)).
If you see errors, it probably means that the hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOS](https://docs.docker.com/desktop/settings/mac/) and [Windows](https://docs.docker.com/desktop/settings/windows/)).

Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
@@ -58,13 +58,18 @@ Open up your browser to <http://localhost:1313> to view the website. As you make

Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L11) file.

To build and test the site locally, run:
To install dependencies, deploy and test the site locally, run:

```bash
# install dependencies
npm ci
make serve
```
- For macOS and Linux
```bash
npm ci
make serve
```
- For Windows (PowerShell)
```powershell
npm ci
hugo.exe server --buildFuture --environment development
```

This will start the local Hugo server on port 1313. Open up your browser to <http://localhost:1313> to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
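The hunk above ties the local Hugo install to the `HUGO_VERSION` pinned in `netlify.toml` (note the anchor moving from `#L10` to `#L11`). A sketch of reading that pin out of a TOML file, using a stub with a hypothetical version string rather than the repository's real file:

```shell
# Stub netlify.toml; the version value here is hypothetical.
printf '[build.environment]\nHUGO_VERSION = "0.110.0"\n' > /tmp/netlify.toml

# Extract the pinned version string.
want=$(sed -n 's/^HUGO_VERSION = "\(.*\)"/\1/p' /tmp/netlify.toml)
echo "netlify.toml pins Hugo $want"
```

Comparing `$want` against `hugo version` output before running `make serve` avoids the class of "works on Netlify, fails locally" build errors the troubleshooting sections describe.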
|
@ -173,6 +178,7 @@ For more information about contributing to the Kubernetes documentation, see:
|
|||
- [Page Content Types](https://kubernetes.io/docs/contribute/style/page-content-types/)
|
||||
- [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
|
||||
- [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
|
||||
- [Introduction to Kubernetes Docs](https://www.youtube.com/watch?v=pprMgmNzDcw)
|
||||
|
||||
### New contributor ambassadors
|
||||
|
||||
|
@ -182,7 +188,7 @@ If you need help at any point when contributing, the [New Contributor Ambassador
| -------------------------- | -------------------------- | -------------------------- |
| Arsh Sharma                | @arsh                      | @RinkiyaKeDad              |

## Localization `README.md`'s

## Localization READMEs

| Language                   | Language                   |
| -------------------------- | -------------------------- |
File diff suppressed because it is too large
@ -88,6 +88,7 @@
- fields:
  - nominatedNodeName
  - hostIP
  - hostIPs
  - startTime
  - phase
  - message

@ -99,6 +100,7 @@
  - initContainerStatuses
  - containerStatuses
  - ephemeralContainerStatuses
  - resourceClaimStatuses
  - resize

- definition: io.k8s.api.core.v1.Container

@ -137,6 +139,7 @@
  - livenessProbe
  - readinessProbe
  - startupProbe
  - restartPolicy
- name: Security Context
  fields:
  - securityContext

@ -228,6 +231,7 @@
  fields:
  - terminationMessagePath
  - terminationMessagePolicy
  - restartPolicy
- name: Debugging
  fields:
  - stdin

@ -393,9 +397,14 @@
  fields:
  - selector
  - manualSelector
- name: Alpha level
- name: Beta level
  fields:
  - podFailurePolicy
- name: Alpha level
  fields:
  - backoffLimitPerIndex
  - maxFailedIndexes
  - podReplacementPolicy

- definition: io.k8s.api.batch.v1.JobStatus
  field_categories:

@ -411,6 +420,10 @@
- name: Beta level
  fields:
  - ready
- name: Alpha level
  fields:
  - failedIndexes
  - terminating

- definition: io.k8s.api.batch.v1.CronJobSpec
  field_categories:
@ -153,7 +153,7 @@ parts:
  version: v1alpha1
- name: SelfSubjectReview
  group: authentication.k8s.io
  version: v1beta1
  version: v1
- name: Authorization Resources
  chapters:
  - name: LocalSubjectAccessReview

@ -168,9 +168,6 @@ parts:
- name: SubjectAccessReview
  group: authorization.k8s.io
  version: v1
- name: SelfSubjectReview
  group: authentication.k8s.io
  version: v1alpha1
- name: ClusterRole
  group: rbac.authorization.k8s.io
  version: v1

@ -218,7 +215,7 @@ parts:
  version: v1
- name: ValidatingAdmissionPolicy
  group: admissionregistration.k8s.io
  version: v1alpha1
  version: v1beta1
  otherDefinitions:
  - ValidatingAdmissionPolicyList
  - ValidatingAdmissionPolicyBinding
@ -76,6 +76,11 @@ footer {
  text-decoration: none;
  font-size: 1rem;
  border: 0px;
}

.button:hover {
  background-color: darken($blue, 10%);
}

#cellophane {

@ -547,6 +552,12 @@ section#cncf {
  padding: 20px 10px 20px 10px;
}

#desktopKCButton:hover {
  background-color: #ffffff;
  color: #3371e3;
  transition: 150ms;
}

#desktopShowVideoButton {
  position: relative;
  font-size: 24px;

@ -566,6 +577,15 @@ section#cncf {
  border-width: 10px 0 10px 20px;
  border-color: transparent transparent transparent $blue;
}

&:hover::before {
  border-color: transparent transparent transparent $dark-grey;
}
}

#desktopShowVideoButton:hover {
  color: $dark-grey;
  transition: 150ms;
}

#mobileShowVideoButton {

@ -882,9 +902,16 @@ section#cncf {
  margin: 0;
}

// Table Content
.tab-content table {
  border-collapse: separate;
  border-spacing: 6px;
}

.tab-pane {
  border-radius: 0.25rem;
  padding: 0 16px 16px;
  overflow: auto;

  border: 1px solid #dee2e6;
  &:first-of-type.active {
@ -1,7 +1,7 @@
// SASS for Case Studies pages go here:

hr {
  background-color: #999999;
  background-color: #303030;
  margin-top: 0;
}
@ -317,20 +317,63 @@ footer {
/* DOCS */

.launch-cards {
  button {
    cursor: pointer;
    box-sizing: border-box;
    background: none;
    margin: 0;
    border: 0;
  }
  padding: 0;
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  row-gap: 1em;
  .launch-card {
    display: flex;
    padding: 0 30px 0 0;
    .card-content {
      width: fit-content;
      display: flex;
      flex-direction: column;
      margin: 0;
      row-gap: 1em;
      h2 {
        font-size: 1.75em;
        padding: 0.5em 0;
        margin: 0;
        a {
          display: none;
        }
      }

      p {
        margin: 0;
      }

      ul {
        list-style: none;
        height: fit-content;
        line-height: 1.6;
        padding: 0;
        margin-block-end: auto;
      }

      br {
        display: none;
      }

      button {
        height: min-content;
        width: auto;
        padding: .5em 1em;
        cursor: pointer;
        box-sizing: border-box;
      }
    }
  }

  ul,
  li {
    list-style: none;
    padding-left: 0;
  }
  @media only screen and (max-width: 1000px) {
    grid-template-columns: 1fr;

    .launch-card {
      width: 100%;
    }
  }
}

// table of contents
.td-toc {
@ -349,52 +392,63 @@ footer {
}

main {
.td-content table code,
.td-content>table td {
word-break: break-word;

/* SCSS Related to the Metrics list */

div.metric:nth-of-type(odd) { // Look & Feel , Aesthetics
background-color: $light-grey;
}

/* SCSS Related to the Metrics Table */
div.metrics {

@media (max-width: 767px) { // for mobile devices, Display the names, Stability levels & types

table.metrics {
th:nth-child(n + 4),
td:nth-child(n + 4) {
.metric {
div:empty{
display: none;
}

td.metric_type{
min-width: 7em;
display: flex;
flex-direction: column;
flex-wrap: wrap;
gap: .75em;
padding: .75em .75em .75em .75em;

.metric_name{
font-size: large;
font-weight: bold;
word-break: break-word;
}
td.metric_stability_level{
min-width: 6em;

label{
font-weight: bold;
margin-right: .5em;
}
}
ul {
li:empty{
display: none;
}
display: flex;
flex-direction: column;
gap: .75em;
flex-wrap: wrap;
li.metric_labels_varying{
span{
display: inline-block;
background-color: rgb(240, 239, 239);
padding: 0 0.5em;
margin-right: .35em;
font-family: monospace;
border: 1px solid rgb(230 , 230 , 230);
border-radius: 5%;
margin-bottom: .35em;
}
}

}

}

table.metrics tbody{ // Tested dimensions to improve overall aesthetic of the table
tr {
td {
font-size: smaller;
}
td.metric_labels_varying{
min-width: 9em;
}
td.metric_type{
min-width: 9em;
}
td.metric_description{
min-width: 10em;
}

}
}

table.no-word-break td,
table.no-word-break code {
word-break: normal;
}
}
}

// blockquotes and callouts
@ -626,17 +680,9 @@ main.content {
}
}

/* COMMUNITY legacy styles */
/* Leave these in place until localizations are caught up */

.newcommunitywrapper {
.news {
margin-left: 0;

@media screen and (min-width: 768px) {
margin-left: 10%;
}
}
.td-blog .header-hero h1, .td-blog .header-hero h2 {
font-size: 2.25rem; // match rest of site, even if it is actually h2
margin-bottom: 20px;
}

/* CASE-STUDIES */
@ -778,7 +824,8 @@ body.td-documentation {
}
}

#announcement + .header-hero.filler {
/* don't display the hero header for some pages when there is a banner active */
#announcement + .header-hero.filler, .td-page.td-blog #announcement + .header-hero {
  display: none;
}
@ -943,6 +990,16 @@ div.alert > em.javascript-required {
#bing-results-container {
  padding: 1em;
}
.bing-result {
  margin-bottom: 1em;
}
.bing-result-url {
  font-size: 14px;
}
.bing-result-snippet {
  color: #666666;
  font-size: 14px;
}
#bing-pagination-container {
  padding: 1em;
  margin-bottom: 1em;

@ -952,3 +1009,32 @@ div.alert > em.javascript-required {
  margin: 0.25em;
}
}

// Adjust Search-bar search-icon
.search-bar {
  display: flex;
  align-items: center;
  background-color: #fff;
  border: 1px solid #4c4c4c;
  border-radius: 20px;
  vertical-align: middle;
  flex-grow: 1;
  overflow-x: hidden;
  width: auto;
}

.search-bar:focus-within {
  border: 2.5px solid rgba(47, 135, 223, 0.7);
}

.search-bar i.search-icon {
  padding: .5em .5em .5em .75em;
  opacity: .75;
}

.search-input {
  flex: 1;
  border: none;
  outline: none;
  padding: .5em 0 .5em 0;
}
@ -4,6 +4,7 @@ abstract: "Automatisierte Bereitstellung, Skalierung und Verwaltung von Containe
cid: home
---

{{< site-searchbar >}}

{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}

@ -42,12 +43,12 @@ Kubernetes ist Open Source und bietet Dir die Freiheit, die Infrastruktur vor Or
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Video ansehen</button>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Besuche die KubeCon Europe vom 18. bis 21. April 2023</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Besuche die KubeCon + CloudNativeCon Europe vom 19. bis 22. März 2024</a>
<br>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Besuche die KubeCon North America vom 6. bis 9. November 2023</a>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Besuche die KubeCon + CloudNativeCon North America vom 12. bis 15. November 2024</a>
</div>
<div id="videoPlayer">
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
@ -246,7 +246,7 @@ VISIT SITE

<br>
<div class="twittercol1">
<a class="twitter-timeline" data-tweet-limit="1" href="https://twitter.com/kubernetesio?ref_src=twsrc%5Etfw">Tweets von kubernetesio</a> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<a class="twitter-timeline" data-tweet-limit="1" data-width="600" data-height="800" href="https://twitter.com/kubernetesio?ref_src=twsrc%5Etfw">Tweets von kubernetesio</a> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div>

<br>
@ -0,0 +1,105 @@
---
title: Über cgroup v2
content_type: concept
weight: 50
---

<!-- overview -->

Auf Linux beschränken {{< glossary_tooltip text="control groups" term_id="cgroup" >}} die Ressourcen, die einem Prozess zugeteilt werden.

Das {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} und die zugrundeliegende Container Runtime müssen mit cgroups interagieren, um [Ressourcen-Verwaltung für Pods und Container](/docs/concepts/configuration/manage-resources-containers/) durchzusetzen. Das schließt CPU-/Speicher-Anfragen und -Limits für containerisierte Arbeitslasten ein.

Es gibt zwei Versionen von cgroups in Linux: cgroup v1 und cgroup v2. cgroup v2 ist die neue Generation der `cgroup` API.

<!-- body -->

## Was ist cgroup v2? {#cgroup-v2}

{{< feature-state for_k8s_version="v1.25" state="stable" >}}

cgroup v2 ist die nächste Version der Linux `cgroup` API. cgroup v2 stellt ein einheitliches Kontrollsystem mit erweiterten Ressourcenmanagement-Fähigkeiten bereit.

cgroup v2 bietet einige Verbesserungen gegenüber cgroup v1, zum Beispiel folgende:

- Einzelnes, vereinheitlichtes Hierarchiedesign in der API
- Erhöhte Sicherheit bei sub-tree-Delegierung an Container
- Neuere Features, wie [Pressure Stall Information](https://www.kernel.org/doc/html/latest/accounting/psi.html)
- Erweitertes Ressourcen-Zuteilungsmanagement und Isolierung über mehrere Ressourcen
- Einheitliche Erfassung für verschiedene Arten der Speicherzuteilung (Netzwerkspeicher, Kernelspeicher, usw.)
- Erfassung nicht-unmittelbarer Ressourcenänderungen wie "page cache write backs"

Manche Kubernetes-Funktionen verwenden ausschließlich cgroup v2 für erweitertes Ressourcenmanagement und Isolierung. Die [MemoryQoS](/blog/2021/11/26/qos-memory-resources/)-Funktion zum Beispiel verbessert Speicher-QoS und setzt dabei auf cgroup-v2-Primitives.

## cgroup v2 verwenden {#cgroupv2-verwenden}

Die empfohlene Methode, cgroup v2 zu verwenden, ist eine Linux-Distribution einzusetzen, die cgroup v2 standardmäßig aktiviert und verwendet.

Um zu kontrollieren, ob Ihre Distribution cgroup v2 verwendet, siehe [Identifizieren der cgroup Version auf Linux Knoten](#cgroup-version-identifizieren).

### Voraussetzungen {#Voraussetzungen}

cgroup v2 hat folgende Voraussetzungen:

* Die Betriebssystem-Distribution aktiviert cgroup v2
* Die Linux-Kernel-Version ist 5.8 oder neuer
* Die Container Runtime unterstützt cgroup v2. Zum Beispiel:
  * [containerd](https://containerd.io/) v1.4 und neuer
  * [cri-o](https://cri-o.io/) v1.20 und neuer
* Das kubelet und die Container Runtime sind konfiguriert, den [systemd cgroup Treiber](/docs/setup/production-environment/container-runtimes#systemd-cgroup-driver) zu verwenden

### Linux Distribution cgroup v2 Support

Für eine Liste der Linux-Distributionen, die cgroup v2 verwenden, siehe die [cgroup v2 Dokumentation](https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md).

<!-- the list should be kept in sync with https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md -->
* Container Optimized OS (seit M97)
* Ubuntu (seit 21.10, 22.04+ empfohlen)
* Debian GNU/Linux (seit Debian 11 bullseye)
* Fedora (seit 31)
* Arch Linux (seit April 2021)
* RHEL und RHEL-basierte Distributionen (seit 9)

Um zu überprüfen, ob Ihre Distribution cgroup v2 verwendet, siehe die Dokumentation Ihrer Distribution oder folgen Sie den Anweisungen in [Identifizieren der cgroup Version auf Linux Knoten](#cgroup-version-identifizieren).

Man kann cgroup v2 auch manuell aktivieren, indem man die Kernel-Boot-Argumente anpasst. Wenn Ihre Distribution GRUB verwendet, muss `systemd.unified_cgroup_hierarchy=1` in `GRUB_CMDLINE_LINUX` unter `/etc/default/grub` hinzugefügt werden. Danach muss `sudo update-grub` ausgeführt werden. Die empfohlene Methode ist aber das Verwenden einer Distribution, die cgroup v2 bereits standardmäßig aktiviert.
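Der beschriebene GRUB-Weg lässt sich etwa so skizzieren (ungeprüfte Skizze: zur Demonstration wird hier auf einer Kopie unter `/tmp` gearbeitet; real bearbeitet man `/etc/default/grub` und führt anschließend `sudo update-grub` aus):

```shell
# Demo auf einer Kopie; die echte Datei ist /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX=""\n' > /tmp/grub-demo

# systemd.unified_cgroup_hierarchy=1 an GRUB_CMDLINE_LINUX anhängen
# (GNU sed; & steht für den gefundenen Text):
sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1/' /tmp/grub-demo
grep GRUB_CMDLINE_LINUX /tmp/grub-demo
```

Nach einem Neustart greift die Einstellung; prüfen lässt sie sich wie unten unter [Identifizieren der cgroup Version auf Linux Knoten](#cgroup-version-identifizieren) beschrieben.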
### Migrieren zu cgroup v2 {#cgroupv2-migrieren}

Um zu cgroup v2 zu migrieren, müssen Sie erst sicherstellen, dass die [Voraussetzungen](#Voraussetzungen) erfüllt sind. Dann müssen Sie auf eine Kernel-Version aktualisieren, die cgroup v2 standardmäßig aktiviert.

Das kubelet erkennt automatisch, dass das Betriebssystem auf cgroup v2 läuft, und verhält sich entsprechend, ohne weitere Konfiguration.

Nach dem Umschalten auf cgroup v2 sollte es keinen erkennbaren Unterschied in der Benutzererfahrung geben, es sei denn, die Benutzer greifen direkt auf das cgroup-Dateisystem zu, entweder auf dem Knoten oder in den Containern.

cgroup v2 verwendet eine andere API als cgroup v1. Wenn es also Anwendungen gibt, die direkt auf das cgroup-Dateisystem zugreifen, müssen sie aktualisiert werden, um cgroup v2 zu unterstützen. Zum Beispiel:

* Manche Überwachungs- und Sicherheitsagenten von Drittanbietern können vom cgroup-Dateisystem abhängig sein.
  Diese müssen aktualisiert werden, um cgroup v2 zu unterstützen.
* Wenn Sie [cAdvisor](https://github.com/google/cadvisor) als eigenständiges DaemonSet zum Überwachen von Pods und Containern verwenden, muss es auf v0.43.0 oder neuer aktualisiert werden.
* Wenn Sie Java-Applikationen bereitstellen, sollten Sie bevorzugt Versionen verwenden, die cgroup v2 vollständig unterstützen:
  * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 und neuer
  * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0 und neuer
  * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 und neuer
* Wenn Sie das [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs)-Paket verwenden, vergewissern Sie sich, dass Sie v1.5.1 oder höher verwenden.

## Identifizieren der cgroup Version auf Linux Knoten {#cgroup-version-identifizieren}

Die cgroup-Version hängt von der verwendeten Linux-Distribution und der standardmäßig auf dem Betriebssystem konfigurierten cgroup-Version ab. Zum Überprüfen der cgroup-Version, die Ihre Distribution verwendet, führen Sie den folgenden Befehl auf dem Knoten aus:

```shell
stat -fc %T /sys/fs/cgroup/
```

Für cgroup v2 ist das Ergebnis `cgroup2fs`.

Für cgroup v1 ist das Ergebnis `tmpfs`.
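Die beiden Fälle lassen sich in einem kleinen Skript zusammenfassen (Skizze; auf Nicht-Linux-Systemen existiert `/sys/fs/cgroup/` nicht, dann greift der dritte Zweig):

```shell
# Gibt die erkannte cgroup-Version anhand des Dateisystemtyps aus.
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)
case "$fstype" in
  cgroup2fs) echo "cgroup v2" ;;
  tmpfs)     echo "cgroup v1" ;;
  *)         echo "unbekannt: $fstype" ;;
esac
```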
## {{% heading "whatsnext" %}}

- Erfahre mehr über [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html)
- Erfahre mehr über [container runtime](/docs/concepts/architecture/cri)
- Erfahre mehr über [cgroup drivers](/docs/setup/production-environment/container-runtimes#cgroup-drivers)
@ -1,12 +1,12 @@
---
title: Master-Node Kommunikation
title: Control-Plane-Node Kommunikation
content_type: concept
weight: 20
---

<!-- overview -->

Dieses Dokument katalogisiert die Kommunikationspfade zwischen dem Master (eigentlich dem Apiserver) und des Kubernetes-Clusters.
Dieses Dokument katalogisiert die Kommunikationspfade zwischen dem Control Plane (eigentlich dem Apiserver) und dem Kubernetes-Cluster.
Die Absicht besteht darin, Benutzern die Möglichkeit zu geben, ihre Installation so anzupassen, dass die Netzwerkkonfiguration so abgesichert wird, dass der Cluster in einem nicht vertrauenswürdigen Netzwerk (oder mit vollständig öffentlichen IP-Adressen eines Cloud-Providers) ausgeführt werden kann.

@ -14,28 +14,28 @@ Die Absicht besteht darin, Benutzern die Möglichkeit zu geben, ihre Installatio

<!-- body -->

## Cluster zum Master
## Cluster zum Control Plane

Alle Kommunikationspfade vom Cluster zum Master enden beim Apiserver (keine der anderen Master-Komponenten ist dafür ausgelegt, Remote-Services verfügbar zu machen).
Alle Kommunikationspfade vom Cluster zum Control Plane enden beim Apiserver (keine der anderen Control-Plane-Komponenten ist dafür ausgelegt, Remote-Services verfügbar zu machen).
In einem typischen Setup ist der Apiserver so konfiguriert, dass er Remote-Verbindungen an einem sicheren HTTPS-Port (443) mit einer oder mehreren Formen der [Clientauthentifizierung](/docs/reference/access-authn-authz/authentication/) überwacht.
Eine oder mehrere Formene von [Autorisierung](/docs/reference/access-authn-authz/authorization/) sollte aktiviert sein, insbesondere wenn [anonyme Anfragen](/docs/reference/access-authn-authz/authentication/#anonymous-requests) oder [Service Account Tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) aktiviert sind.
Eine oder mehrere Formen von [Autorisierung](/docs/reference/access-authn-authz/authorization/) sollte aktiviert sein, insbesondere wenn [anonyme Anfragen](/docs/reference/access-authn-authz/authentication/#anonymous-requests) oder [Service Account Tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) aktiviert sind.

Nodes sollten mit dem öffentlichen Stammzertifikat für den Cluster konfiguriert werden, sodass sie eine sichere Verbindung zum Apiserver mit gültigen Client-Anmeldeinformationen herstellen können.
Knoten sollten mit dem öffentlichen Stammzertifikat für den Cluster konfiguriert werden, sodass sie eine sichere Verbindung zum Apiserver mit gültigen Client-Anmeldeinformationen herstellen können.
Beispielsweise entsprechen bei einer gewöhnlichen GKE-Konfiguration die dem kubelet zur Verfügung gestellten Client-Anmeldeinformationen einem Client-Zertifikat.
Lesen Sie über [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) zur automatisierten Bereitstellung von kubelet-Client-Zertifikaten.
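Zur Veranschaulichung, wie ein Stammzertifikat als Vertrauensanker dient (Skizze mit einem hypothetischen, selbstsignierten Demo-Zertifikat, nicht dem echten Cluster-Zertifikat; setzt `openssl` voraus):

```shell
# Hypothetisches Demo-Stammzertifikat erzeugen:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -subj "/CN=demo-cluster-ca"

# Ein Client, der dieses Zertifikat als CA kennt, kann damit signierte
# Endpunkte verifizieren; hier verifiziert sich die Wurzel selbst:
openssl verify -CAfile /tmp/demo-ca.crt /tmp/demo-ca.crt
```

Genau so verifizieren Knoten mit dem konfigurierten Stammzertifikat die Identität des Apiservers, bevor sie ihre Client-Anmeldeinformationen vorlegen.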
Pods, die eine Verbindung zum Apiserver herstellen möchten, können dies auf sichere Weise tun, indem sie ein Dienstkonto verwenden, sodass Kubernetes das öffentliche Stammzertifikat und ein gültiges Trägertoken automatisch in den Pod einfügt, wenn er instanziiert wird.
Der `kubernetes`-Dienst (in allen Namespaces) ist mit einer virtuellen IP-Adresse konfiguriert, die (über den Kube-Proxy) an den HTTPS-Endpunkt auf dem Apiserver umgeleitet wird.

Die Master-Komponenten kommunizieren auch über den sicheren Port mit dem Cluster-Apiserver.
Die Control-Plane-Komponenten kommunizieren auch über den sicheren Port mit dem Cluster-Apiserver.

Der Standardbetriebsmodus für Verbindungen vom Cluster (Knoten und Pods, die auf den Knoten ausgeführt werden) zum Master ist daher standardmäßig gesichert und kann über nicht vertrauenswürdige und/oder öffentliche Netzwerke laufen.
Der Standardbetriebsmodus für Verbindungen vom Cluster (Knoten und Pods, die auf den Knoten ausgeführt werden) zum Control Plane ist daher standardmäßig gesichert und kann über nicht vertrauenswürdige und/oder öffentliche Netzwerke laufen.

## Master zum Cluster
## Control Plane zum Cluster

Es gibt zwei primäre Kommunikationspfade vom Master (Apiserver) zum Cluster.
Der Erste ist vom Apiserver hin zum Kubelet-Prozess, der auf jedem Node im Cluster ausgeführt wird.
Der Zweite ist vom Apiserver zu einem beliebigen Node, Pod oder Dienst über die Proxy-Funktionalität des Apiservers.
Es gibt zwei primäre Kommunikationspfade vom Control Plane (Apiserver) zum Cluster.
Der Erste ist vom Apiserver hin zum Kubelet-Prozess, der auf jedem Knoten im Cluster ausgeführt wird.
Der Zweite ist vom Apiserver zu einem beliebigen Knoten, Pod oder Dienst über die Proxy-Funktionalität des Apiservers.

### Apiserver zum kubelet

@ -55,16 +55,16 @@ zwischen dem Apiserver und dem kubelet, falls es erforderlich ist eine Verbindun

Außerdem sollte [Kubelet Authentifizierung und/oder Autorisierung](/docs/admin/kubelet-authentication-authorization/) aktiviert sein, um die kubelet-API abzusichern.

### Apiserver zu Nodes, Pods und Services
### Apiserver zu Knoten, Pods und Services

Die Verbindungen vom Apiserver zu einem Node, Pod oder Dienst verwenden standardmäßig einfache HTTP-Verbindungen und werden daher weder authentifiziert noch verschlüsselt.
Die Verbindungen vom Apiserver zu einem Knoten, Pod oder Dienst verwenden standardmäßig einfache HTTP-Verbindungen und werden daher weder authentifiziert noch verschlüsselt.
Sie können über eine sichere HTTPS-Verbindung ausgeführt werden, indem dem Knoten, dem Pod oder dem Servicenamen in der API-URL "https:" vorangestellt wird. Das vom HTTPS-Endpunkt bereitgestellte Zertifikat wird jedoch nicht überprüft, und es werden keine Client-Anmeldeinformationen bereitgestellt. Die Verbindung wird zwar verschlüsselt, garantiert jedoch keine Integrität.
Diese Verbindungen **sind derzeit nicht sicher** innerhalb von nicht vertrauenswürdigen und/oder öffentlichen Netzen.

### SSH Tunnels
### SSH-Tunnel

Kubernetes unterstützt SSH-Tunnel zum Schutz der Master -> Cluster Kommunikationspfade.
In dieser Konfiguration initiiert der Apiserver einen SSH-Tunnel zu jedem Node im Cluster (Verbindung mit dem SSH-Server, der mit Port 22 läuft), und leitet den gesamten Datenverkehr für ein kubelet, einen Node, einen Pod oder einen Dienst durch den Tunnel.
Kubernetes unterstützt SSH-Tunnel zum Schutz der Control Plane -> Cluster Kommunikationspfade.
In dieser Konfiguration initiiert der Apiserver einen SSH-Tunnel zu jedem Knoten im Cluster (Verbindung mit dem SSH-Server, der mit Port 22 läuft) und leitet den gesamten Datenverkehr für ein kubelet, einen Knoten, einen Pod oder einen Dienst durch den Tunnel.
Dieser Tunnel stellt sicher, dass der Datenverkehr nicht außerhalb des Netzwerks sichtbar ist, in dem die Knoten ausgeführt werden.

SSH-Tunnel werden zurzeit nicht unterstützt. Sie sollten also nicht verwendet werden, es sei denn, man weiß, was man tut. Ein Ersatz für diesen Kommunikationskanal wird entwickelt.
@ -0,0 +1,103 @@
---
title: Controller
content_type: concept
weight: 30
---

<!-- overview -->

In der Robotik und der Automatisierung ist eine _Kontrollschleife_ eine endlose Schleife, die den Zustand eines Systems regelt.

Hier ist ein Beispiel einer Kontrollschleife: ein Thermostat in einem Zimmer.

Wenn Sie die Temperatur einstellen, sagen Sie dem Thermostaten, was der *Wunschzustand* ist. Die tatsächliche Raumtemperatur ist der *Istzustand*. Der Thermostat agiert, um den Istzustand dem Wunschzustand anzunähern, indem er Geräte ein- oder ausschaltet.
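Die Thermostat-Analogie lässt sich als winzige Kontrollschleife skizzieren (reine Demonstration, keine Kubernetes-API):

```shell
# Wunschzustand vs. Istzustand: die Schleife nähert den Istzustand
# schrittweise dem Wunschzustand an.
wunsch=22
ist=17
while [ "$ist" -ne "$wunsch" ]; do
  if [ "$ist" -lt "$wunsch" ]; then ist=$((ist + 1)); else ist=$((ist - 1)); fi
  echo "Istzustand: $ist (Wunschzustand: $wunsch)"
done
echo "Wunschzustand erreicht"
```

Genau nach diesem Muster arbeiten auch die unten beschriebenen Controller: beobachten, vergleichen, anpassen.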
{{< glossary_definition text="Controller" term_id="controller" length="short">}}
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Controller Muster
|
||||
|
||||
Ein Controller überwacht mindestens einen Kubernetes Ressourcentyp.
|
||||
Diese {{< glossary_tooltip text="Objekte" term_id="object" >}}
|
||||
haben ein Spezifikationsfeld, das den Wunschzustand darstellt. Der oder die Controller für diese Ressource sind dafür verantwortlich, dass sich der Istzustand dem Wunschzustand annähert.
|
||||
|
||||
Der Controller könnte die Aktionen selbst ausführen; meistens jedoch sendet der Controller Nachrichten an den {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}, die nützliche Effekte haben. Unten sehen Sie Beispiele dafür.
|
||||
|
||||
{{< comment >}}
|
||||
Manche eingebaute Controller, zum Beispiel der Namespace Controller, agieren auf Objekte die keine Spezifikation haben. Zur Vereinfachung lässt diese Seite die Erklärung zu diesem Detail aus.
|
||||
{{< /comment >}}
|
||||
|
||||
### Kontrolle via API Server
|
||||
|
||||
Der {{< glossary_tooltip text="Job" term_id="job" >}} Controller ist ein Beispiel eines eingebauten Kubernetes Controllers. Eingebaute Controller verwalten den Zustand durch Interaktion mit dem Cluster API Server.
|
||||
|
||||
Ein Job ist eine Kubernetes Ressource, die einen {{< glossary_tooltip text="Pod" term_id="pod" >}}, oder vielleicht mehrere Pods, erstellt, um eine Tätigkeit auszuführen und dann zu beenden.
|
||||
|
||||
(Sobald [geplant](/docs/concepts/scheduling-eviction/), werden Pod Objekte Teil des Wunschzustands eines Kubelets).
|
||||
|
||||
Wenn der Job Controller eine neue Tätigkeit erkennt, versichert er, dass irgendwo in Ihrem Cluster, die Kubelets auf einem Satz Knoten, die korrekte Anzahl Pods laufen lässt, um die Tätigkeit auszuführen. Der Job Controller selbst lässt keine Pods oder Container laufen. Stattdessen sagt der Job Controller dem API Server, dass er Pods erstellen oder entfernen soll.
|
||||
Andere Komponenten in der {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} reagieren auf die neue Information (neue Pods müssen geplant werden und müssen laufen), und irgendwann ist die Arbeit beendet.
|
||||
|
||||
Nachdem Sie einen neuen Job erstellen, ist der Wunschzustand, dass dieser Job beendet ist. Der Job Controller sorgt dafür, dass der Istzustand sich dem Wunschzustand annähert: Pods, die die Arbeit ausführen, werden erstellt, sodass der Job näher an seine Vollendung kommt.
|
||||
|
||||
Controller aktualisieren auch die Objekte die sie konfigurieren. Zum Beispiel: sobald die Arbeit eines Jobs beendet ist, aktualisiert der Job Controller das Job Objekt und markiert es als `beendet`.
|
||||
|
||||
(Das ist ungefähr wie ein Thermostat, der ein Licht ausschaltet, um anzuzeigen dass der Raum nun die Wunschtemperatur hat).
|
||||
|
||||
### Direkte Kontrolle
|
||||
|
||||
Im Gegensatz zum Job Controller, müssen manche Controller auch Sachen außerhalb Ihres Clusters ändern.
|
||||
|
||||
Zum Beispiel, wenn Sie eine Kontrollschleife verwenden, um sicherzustellen dass es genug {{< glossary_tooltip text="Knoten" term_id="node" >}} in ihrem Cluster gibt, dann benötigt dieser Controller etwas außerhalb des jetztigen Clusters, um neue Knoten bei Bedarf zu erstellen.
|
||||
|
||||
Controller die mit dem externen Status interagieren, erhalten den Wunschzustand vom API Server, und kommunizieren dann direkt mit einem externen System, um den Istzustand näher an den Wunschzustand zu bringen.
|
||||
|
||||
(Es gibt tatsächlich einen [Controller](https://github.com/kubernetes/autoscaler/), der die Knoten in Ihrem Cluster horizontal skaliert.)
|
||||
|
||||
Wichtig ist hier, dass der Controller Veränderungen vornimmt, um den Wunschzustand zu erreichen, und dann den Istzustand an den API Server Ihres Clusters meldet. Andere Kontrollschleifen können diese Daten beobachten und eigene Aktionen unternehmen.
|
||||
|
||||
Im Beispiel des Thermostaten, wenn der Raum sehr kalt ist, könnte ein anderer Controller eine Frostschutzheizung einschalten. Bei Kubernetes Cluster arbeitet die Contol Plane indirekt mit IP Adressen Verwaltungstools, Speicherdienste, Cloud Provider APIs, und andere Dienste, um [Kubernetes zu erweitern](/docs/concepts/extend-kubernetes/) und das zu implementieren.
|
||||
|
||||
## Wunschzustand gegen Istzustand {#desired-vs-current}
|
||||
|
||||
Kubernetes hat eine Cloud-Native Sicht auf Systeme, und kann ständige Veränderungen verarbeiten.
|
||||
|
||||
Ihr Cluster kann sich jederzeit verändern, während Arbeit erledigt wird und Kontrollschleifen automatisch Fehler reparieren. Das bedeutet, dass Ihr Cluster eventuell nie einen stabilen Zustand erreicht.
|
||||
|
||||
Solange die Controller Ihres Clusters laufen, und sinnvolle Veränderungen vornehmen können, ist es egal ob der Gesamtzustand stabil ist oder nicht.
## Design

Als Grundlage seines Designs verwendet Kubernetes viele Controller, die jeweils einen bestimmten Aspekt des Clusterzustands verwalten. Meistens verwendet eine bestimmte Kontrollschleife (Controller) eine Art von Ressource als ihren Wunschzustand und verwaltet eine andere Art von Ressource, um diesen Wunschzustand zu erreichen. Zum Beispiel überwacht ein Controller für Jobs Job Objekte (um neue Arbeit zu finden) und Pod Objekte (um die Arbeit auszuführen und um zu erkennen, wann die Arbeit beendet ist). In diesem Fall erstellt etwas anderes die Jobs, während der Job Controller Pods erstellt.
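Zur Veranschaulichung eine minimale Skizze eines Job Manifests (Namen und Image sind frei gewählt); der Job Controller beobachtet solche Job Objekte und erstellt daraus Pods:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: beispiel-job        # hypothetischer Name
spec:
  template:
    spec:
      containers:
      - name: arbeit
        image: busybox:1.36
        command: ["sh", "-c", "echo Arbeit erledigt"]
      restartPolicy: Never  # Jobs sollen beendet werden, nicht neu starten
```

Wird dieses Objekt erstellt, legt der Job Controller einen passenden Pod an und beobachtet ihn, bis die Arbeit abgeschlossen ist.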
Es ist sinnvoll, einfache Controller zu haben, statt eines monolithischen Satzes von Kontrollschleifen, die miteinander verbunden sind. Controller können scheitern, also ist Kubernetes so entworfen, dass es das verkraftet.

{{< note >}}
Es kann mehrere Controller geben, die die gleiche Art von Objekten erstellen oder aktualisieren. Im Hintergrund sorgen Kubernetes Controller dafür, dass sie nur auf die Ressourcen achten, die mit der von ihnen kontrollierten Ressource verbunden sind.

Man kann zum Beispiel Deployments und Jobs haben; beide erstellen Pods.
Der Job Controller löscht nicht die Pods, die Ihr Deployment erstellt hat, weil es Informationen ({{< glossary_tooltip term_id="label" text="Bezeichnungen" >}}) gibt, die der Controller verwenden kann, um die Pods voneinander zu unterscheiden.
{{< /note >}}
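Die erwähnte Unterscheidung über Bezeichnungen lässt sich skizzieren (die Namen sind hypothetisch): Ein Deployment wählt seine Pods über einen Label-Selektor aus, während der Job Controller seinen Pods das Label `job-name` gibt:

```yaml
# Ausschnitt aus einem Deployment: der Selektor bindet nur Pods mit app=web
spec:
  selector:
    matchLabels:
      app: web
---
# Pod eines Jobs: der Job Controller setzt das Label job-name
metadata:
  labels:
    job-name: beispiel-job
```

So "sieht" jeder Controller nur die Pods, die zu seiner eigenen Ressource gehören.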
## Wege, um Controller auszuführen {#running-controllers}

Kubernetes enthält eingebaute Controller, die innerhalb des {{< glossary_tooltip text="Kube Controller Manager" term_id="kube-controller-manager" >}} laufen. Diese eingebauten Controller liefern wichtige grundlegende Verhaltensweisen.

Der Deployment Controller und der Job Controller sind Beispiele für Controller, die Teil von Kubernetes selbst sind ("eingebaute" Controller).
Kubernetes erlaubt den Einsatz einer resilienten Control Plane, sodass bei Versagen eines eingebauten Controllers ein anderer Teil der Control Plane die Arbeit übernimmt.

Sie finden auch Controller, die außerhalb der Control Plane laufen und Kubernetes erweitern. Oder, wenn Sie möchten, können Sie auch selbst einen neuen Controller entwickeln.
Sie können Ihren Controller als einen Satz von Pods oder außerhalb von Kubernetes laufen lassen. Was am besten passt, hängt davon ab, was der jeweilige Controller tut.
## {{% heading "whatsnext" %}}

* Lesen Sie über die [Kubernetes Control Plane](/docs/concepts/overview/components/#control-plane-components)
* Entdecken Sie einige der grundlegenden [Kubernetes Objekte](/docs/concepts/overview/working-with-objects/)
* Lernen Sie mehr über die [Kubernetes API](/docs/concepts/overview/kubernetes-api/)
* Wenn Sie Ihren eigenen Controller entwickeln wollen, siehe [Kubernetes extension patterns](/docs/concepts/extend-kubernetes/#extension-patterns) und das [sample-controller](https://github.com/kubernetes/sample-controller) Repository.
@@ -76,7 +76,7 @@ Diese Container bilden eine einzelne zusammenhängende
Serviceeinheit, z. B. ein Container, der Daten in einem gemeinsam genutzten
Volume öffentlich verfügbar macht, während ein separater _Sidecar_-Container
die Daten aktualisiert. Der Pod fasst die Container, die Speicherressourcen
und eine kurzlebiges Netzwerk-Identität als eine Einheit zusammen.
und eine kurzlebige Netzwerk-Identität als eine Einheit zusammen.

{{< note >}}
Das Gruppieren mehrerer gemeinsam lokalisierter und gemeinsam verwalteter
@@ -4,7 +4,7 @@ noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage gridPageHome
linkTitle: "Home"
linkTitle: "Dokumentation"
main_menu: true
weight: 10
hide_feedback: true
@@ -0,0 +1,16 @@
---
title: Add-ons
id: addons
date: 2019-12-15
full_link: /docs/concepts/cluster-administration/addons/
short_description: >
  Ressourcen, die die Funktionalität von Kubernetes erweitern.

aka:
tags:
- tool
---
Ressourcen, die die Funktionalität von Kubernetes erweitern.

<!--more-->
[Add-Ons installieren](/docs/concepts/cluster-administration/addons/) erklärt mehr über die Verwendung von Add-Ons in Ihrem Cluster und listet einige populäre Add-Ons auf.
@@ -0,0 +1,20 @@
---
title: Zugangscontroller
id: admission-controller
date: 2019-06-28
full_link: /docs/reference/access-authn-authz/admission-controllers/
short_description: >
  Ein Stück Code, das Anfragen an den Kubernetes API Server vor der Persistenz eines Objekts abfängt.

aka:
tags:
- extension
- security
---
Ein Stück Code, das Anfragen an den Kubernetes API Server vor der Persistenz eines Objekts abfängt.

<!--more-->

Zugangscontroller für den Kubernetes API Server sind konfigurierbar und können "validierend", "verändernd" oder beides sein. Jeder Zugangscontroller kann die Anfrage ablehnen. Verändernde Controller können die Objekte ändern, die sie zulassen; validierende Controller dürfen das nicht.

* [Zugangscontroller in der Kubernetes Dokumentation](/docs/reference/access-authn-authz/admission-controllers/)
@@ -0,0 +1,20 @@
---
title: Affinität
id: affinity
date: 2019-01-11
full_link: /docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
short_description: >
  Regeln, die vom Scheduler verwendet werden, um festzulegen, wo Pods platziert werden.
aka:
tags:
- fundamental
---

In Kubernetes ist _Affinität_ ein Satz von Regeln, die dem Scheduler Hinweise geben, wo er Pods platzieren soll.

<!--more-->
Es gibt zwei Arten von Affinität:
* [Knoten Affinität](/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)
* [Pod-zu-Pod Affinität](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)

Die Regeln werden mithilfe der in {{< glossary_tooltip term_id="pod" text="Pods" >}} angegebenen {{< glossary_tooltip term_id="label" text="Label">}} und {{< glossary_tooltip term_id="selector" text="Selektoren">}} definiert; sie können entweder erforderlich oder bevorzugt sein, je nachdem, wie streng der Scheduler sie durchsetzen soll.
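Eine erforderliche Knoten-Affinitätsregel lässt sich zum Beispiel so skizzieren (Pod-Name, Zone und Image sind frei gewählt):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: beispiel-pod
spec:
  affinity:
    nodeAffinity:
      # "required...": harte Regel, der Scheduler MUSS sie erfüllen
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a
  containers:
  - name: app
    image: nginx:1.25
```

Mit `preferredDuringSchedulingIgnoredDuringExecution` würde daraus eine weiche, bevorzugte Regel.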
@@ -0,0 +1,19 @@
---
title: Aggregationsschicht
id: aggregation-layer
date: 2018-10-08
full_link: /docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
short_description: >
  Die Aggregationsschicht erlaubt Ihnen die Installation zusätzlicher Kubernetes-artiger APIs in Ihrem Cluster.

aka:
tags:
- architecture
- extension
- operation
---
Die Aggregationsschicht erlaubt Ihnen die Installation zusätzlicher Kubernetes-artiger APIs in Ihrem Cluster.

<!--more-->

Wenn Sie den {{< glossary_tooltip text="Kubernetes API Server" term_id="kube-apiserver" >}} konfiguriert haben, [zusätzliche APIs zu unterstützen](/docs/tasks/extend-kubernetes/configure-aggregation-layer/), können Sie `APIService` Objekte hinzufügen, um einen URL Pfad in der Kubernetes API zu "belegen".
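Ein solches `APIService` Objekt könnte in etwa so aussehen (Gruppe, Namespace und Service-Name sind hypothetisch; in der Praxis sind zusätzlich TLS-Angaben wie `caBundle` nötig):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.beispiel.example.com   # Konvention: <version>.<gruppe>
spec:
  group: beispiel.example.com          # belegt den Pfad /apis/beispiel.example.com/v1beta1
  version: v1beta1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:                             # der Erweiterungs-API-Server im Cluster
    name: beispiel-api
    namespace: beispiel
```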
@@ -0,0 +1,18 @@
---
title: Annotation
id: annotation
date: 2018-04-12
full_link: /docs/concepts/overview/working-with-objects/annotations
short_description: >
  Ein Key-Value-Paar, das verwendet wird, um beliebige, nicht-identifizierende Metadaten an Objekte zu binden.

aka:
tags:
- fundamental
---
Ein Key-Value-Paar, das verwendet wird, um beliebige, nicht-identifizierende Metadaten an Objekte zu binden.

<!--more-->

Die Metadaten in einer Annotation können klein oder groß sein, strukturiert oder unstrukturiert, und können Zeichen enthalten, die in {{< glossary_tooltip text="Label" term_id="label" >}} nicht erlaubt sind. Clients wie Tools oder Libraries können diese Metadaten abfragen.
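Zur Veranschaulichung ein Ausschnitt mit einer frei erfundenen Annotation; anders als ein Label identifiziert sie das Objekt nicht, sondern transportiert nur Zusatzinformationen:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: beispiel-pod
  annotations:
    # Werte dürfen Zeichen enthalten, die in Labels nicht zulässig sind
    example.com/build-info: "commit=abc123; url=https://ci.example.com/run/42"
```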
@@ -0,0 +1,24 @@
---
title: API-initiierte Räumung
id: api-eviction
date: 2021-04-27
full_link: /docs/concepts/scheduling-eviction/api-eviction/
short_description: >
  API-initiierte Räumung ist der Prozess, bei dem Sie über die Räumungs API ein Räumungsobjekt erstellen, das eine geordnete Beendigung des Pods auslöst.
aka:
tags:
- operation
---
API-initiierte Räumung ist der Prozess, bei dem Sie über die [Räumungs API](/docs/reference/generated/kubernetes-api/{{<param "version">}}/#create-eviction-pod-v1-core) ein Räumungsobjekt erstellen, das eine geordnete Beendigung des Pods auslöst.

<!--more-->

Sie können eine Räumung anfragen, indem Sie die Räumungs API direkt aufrufen, oder über einen Client des kube-apiservers, wie den `kubectl drain` Befehl. Wenn ein `Eviction` Objekt erstellt wird, beendet der API Server den Pod.

API-initiierte Räumungen respektieren Ihre konfigurierten [`PodDisruptionBudgets`](/docs/tasks/run-application/configure-pdb/)
und [`terminationGracePeriodSeconds`](/docs/concepts/workloads/pods/pod-lifecycle#pod-termination).

API-initiierte Räumung ist nicht das Gleiche wie [Knotendruck Räumung](/docs/concepts/scheduling-eviction/node-pressure-eviction/).

* Siehe [API-initiierte Räumung](/docs/concepts/scheduling-eviction/api-eviction/) für mehr Informationen.
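Ein PodDisruptionBudget, das solche Räumungen begrenzt, könnte zum Beispiel so aussehen (Name und Label sind frei gewählt):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: beispiel-pdb
spec:
  minAvailable: 2        # mindestens 2 passende Pods müssen verfügbar bleiben
  selector:
    matchLabels:
      app: web
```

Eine API-initiierte Räumung (etwa durch `kubectl drain`) wird abgelehnt, solange sie dieses Budget verletzen würde.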
@@ -0,0 +1,19 @@
---
title: API Gruppe
id: api-group
date: 2019-09-02
full_link: /docs/concepts/overview/kubernetes-api/#api-groups-and-versioning
short_description: >
  Ein Satz zugehöriger Pfade in der Kubernetes API.

aka:
tags:
- fundamental
- architecture
---
Ein Satz zugehöriger Pfade in der Kubernetes API.

<!--more-->
Sie können jede API Gruppe durch Änderung der Konfiguration Ihres API Servers ein- oder ausschalten. Sie können auch Pfade zu spezifischen Ressourcen ein- oder ausschalten. API Gruppen vereinfachen die Erweiterung der Kubernetes API. Die API Gruppe ist durch einen REST Pfad und durch das `apiVersion` Feld eines serialisierten Objekts festgelegt.

* Siehe [API Gruppe](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) für mehr Informationen.
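Wie sich Gruppe und Version im `apiVersion` Feld widerspiegeln, zeigt diese Skizze:

```yaml
# apiVersion = <Gruppe>/<Version>; REST Pfad /apis/apps/v1/...
apiVersion: apps/v1
kind: Deployment
---
# Die Kern-Gruppe ("core") hat keinen Gruppennamen; REST Pfad /api/v1/...
apiVersion: v1
kind: Pod
```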
@@ -0,0 +1,17 @@
---
title: App Container
id: app-container
date: 2019-02-12
full_link:
short_description: >
  Ein Container, der verwendet wird, um einen Teil einer Arbeitslast auszuführen. Vergleiche mit Init Container.

aka:
tags:
- workload
---
Anwendungscontainer (oder App Container) sind die {{< glossary_tooltip text="Container" term_id="container" >}} in einem {{< glossary_tooltip text="Pod" term_id="pod" >}}, die gestartet werden, nachdem alle {{< glossary_tooltip text="Init Container" term_id="init-container" >}} abgeschlossen sind.

<!--more-->

Ein Init Container erlaubt es Ihnen, Initialisierungsschritte, die für die gesamte {{< glossary_tooltip text="Arbeitslast" term_id="workload" >}} wichtig sind und die nicht mehr weiterlaufen müssen, sobald die Anwendungscontainer starten, sauber abzutrennen. Wenn ein Pod keine Init Container konfiguriert hat, sind alle Container in diesem Pod App Container.
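Die Reihenfolge lässt sich an einem minimalen Pod Manifest skizzieren (Namen, Images und der Dienstname `mein-dienst` sind hypothetisch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: beispiel-pod
spec:
  initContainers:          # laufen vollständig VOR den App Containern
  - name: warte-auf-dienst
    image: busybox:1.36
    command: ["sh", "-c", "until nslookup mein-dienst; do sleep 2; done"]
  containers:              # App Container: starten erst danach
  - name: app
    image: nginx:1.25
```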
@@ -0,0 +1,18 @@
---
title: Anwendungsarchitekt
id: application-architect
date: 2018-04-12
full_link:
short_description: >
  Eine Person, die für das High-Level-Design einer Anwendung verantwortlich ist.

aka:
tags:
- user-type
---
Eine Person, die für das High-Level-Design einer Anwendung verantwortlich ist.

<!--more-->

Ein Architekt sorgt dafür, dass die Implementierung einer Anwendung eine skalierbare und verwaltbare Interaktion mit den umgebenden Komponenten ermöglicht. Umgebende Komponenten können Datenbanken, Logging-Infrastruktur und andere Microservices sein.
@@ -0,0 +1,18 @@
---
title: Anwendungsentwickler
id: application-developer
date: 2018-04-12
full_link:
short_description: >
  Eine Person, die eine Anwendung entwickelt, die in einem Kubernetes Cluster läuft.

aka:
tags:
- user-type
---
Eine Person, die eine Anwendung entwickelt, die in einem Kubernetes Cluster läuft.

<!--more-->

Ein Anwendungsentwickler konzentriert sich auf einen Teil der Anwendung. Die Größe dieses Fokus kann erheblich variieren.
@@ -0,0 +1,12 @@
---
title: Anwendungen
id: applications
date: 2019-05-12
full_link:
short_description: >
  Die Schicht, in der verschiedene containerisierte Anwendungen laufen.
aka:
tags:
- fundamental
---
Die Schicht, in der verschiedene containerisierte Anwendungen laufen.
@@ -0,0 +1,18 @@
---
title: Approver
id: approver
date: 2018-04-12
full_link:
short_description: >
  Eine Person, die Kubernetes Code Beiträge überprüfen und zulassen kann.

aka:
tags:
- community
---
Eine Person, die Kubernetes Code Beiträge überprüfen und zulassen kann.

<!--more-->

Während sich Code Review auf Qualität und Korrektheit des Codes konzentriert, ist das Genehmigen auf die ganzheitliche Akzeptanz eines Beitrags fokussiert. Ganzheitliche Akzeptanz achtet unter anderem auf Rückwärts- und Vorwärtskompatibilität, Einhaltung der API- und Flag-Konventionen, subtile Performance- und Korrektheitsprobleme sowie Interaktionen mit anderen Teilen des Systems. Der Approver Status ist auf einen Teil der Codebase begrenzt. Approver wurden früher Maintainer genannt.
@@ -0,0 +1,16 @@
---
title: cAdvisor
id: cadvisor
date: 2021-12-09
full_link: https://github.com/google/cadvisor/
short_description: >
  Werkzeug, um Ressourcenverbrauch und Performance Charakteristiken von Containern besser zu verstehen
aka:
tags:
- tool
---
cAdvisor (Container Advisor) ermöglicht Benutzern von Containern ein besseres Verständnis des Ressourcenverbrauchs und der Performance Charakteristiken ihrer laufenden {{< glossary_tooltip text="Container" term_id="container" >}}.

<!--more-->

Es ist ein Daemon, der Informationen über laufende Container sammelt, aggregiert, verarbeitet und exportiert. Genauer gesagt speichert er für jeden Container die Ressourcenisolationsparameter, den historischen Ressourcenverbrauch, Histogramme des kompletten historischen Ressourcenverbrauchs und die Netzwerkstatistiken. Diese Daten werden pro Container und maschinenweit exportiert.
@@ -0,0 +1,18 @@
---
title: Zertifikat
id: certificate
date: 2018-04-12
full_link: /docs/tasks/tls/managing-tls-in-a-cluster/
short_description: >
  Eine kryptographisch sichere Datei, die verwendet wird, um den Zugriff auf das Kubernetes Cluster zu bestätigen.

aka:
tags:
- security
---
Eine kryptographisch sichere Datei, die verwendet wird, um den Zugriff auf das Kubernetes Cluster zu bestätigen.

<!--more-->

Zertifikate ermöglichen es Anwendungen in einem Kubernetes Cluster, sicher auf die Kubernetes API zuzugreifen. Zertifikate bestätigen, dass Clients die Erlaubnis haben, auf die API zuzugreifen.
@@ -0,0 +1,17 @@
---
title: cgroup (control group)
id: cgroup
date: 2019-06-25
full_link:
short_description: >
  Eine Gruppe von Linux Prozessen mit optionaler Isolation, Erfassung und Begrenzung der Ressourcen

aka:
tags:
- fundamental
---
Eine Gruppe von Linux Prozessen mit optionaler Isolation, Erfassung und Begrenzung der Ressourcen.

<!--more-->

cgroup ist eine Funktion des Linux Kernels, die die Ressourcennutzung (CPU, Speicher, Platten-I/O, Netzwerk) einer Sammlung von Prozessen begrenzt, erfasst und isoliert.
@@ -0,0 +1,18 @@
---
title: CIDR
id: cidr
date: 2019-11-12
full_link:
short_description: >
  CIDR ist eine Notation, um Blöcke von IP Adressen zu beschreiben, und wird in verschiedenen Netzwerkkonfigurationen viel verwendet.

aka:
tags:
- networking
---
CIDR (Classless Inter-Domain Routing) ist eine Notation, um Blöcke von IP Adressen zu beschreiben, und wird in verschiedenen Netzwerkkonfigurationen viel verwendet.

<!--more-->

Im Kubernetes Kontext erhält jeder {{< glossary_tooltip text="Knoten" term_id="node" >}} über eine Startadresse und eine Subnetzmaske einen Bereich von IP Adressen in CIDR-Notation. Dies erlaubt es Knoten, jedem {{< glossary_tooltip text="Pod" term_id="pod" >}} eine eigene IP Adresse zuzuweisen. Obwohl ursprünglich ein Konzept für IPv4, wurde CIDR erweitert, um auch IPv6 einzubeziehen.
@@ -0,0 +1,18 @@
---
title: CLA (Contributor License Agreement)
id: cla
date: 2018-04-12
full_link: https://github.com/kubernetes/community/blob/master/CLA.md
short_description: >
  Bedingungen, unter denen ein Mitwirkender für seine Beiträge eine Lizenz an ein Open Source Projekt erteilt.

aka:
tags:
- community
---
Bedingungen, unter denen ein {{< glossary_tooltip text="Mitwirkender" term_id="contributor" >}} für seine Beiträge eine Lizenz an ein Open Source Projekt erteilt.

<!--more-->

CLAs helfen dabei, rechtliche Streitigkeiten rund um Beiträge und geistiges Eigentum (IP) zu lösen.
@@ -0,0 +1,19 @@
---
title: Cloud Controller Manager
id: cloud-controller-manager
date: 2018-04-12
full_link: /docs/concepts/architecture/cloud-controller/
short_description: >
  Control Plane Komponente, die Kubernetes mit Drittanbieter Cloud Providern integriert.
aka:
tags:
- core-object
- architecture
- operation
---
Eine Kubernetes {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} Komponente, die Cloud-spezifische Kontrolllogik einbettet. Der [Cloud Controller Manager](/docs/concepts/architecture/cloud-controller/) lässt Sie Ihr Cluster in die API Ihres Cloud Providers einbinden und trennt die Komponenten, die mit der Cloud Plattform interagieren, von Komponenten, die nur mit Ihrem Cluster interagieren.

<!--more-->

Durch die Entkopplung der Interoperabilitätslogik zwischen Kubernetes und der darunterliegenden Cloud Infrastruktur ermöglicht der Cloud Controller Manager es Cloud Providern, neue Features in einem anderen Tempo freizugeben als das Kubernetes Projekt.
@@ -0,0 +1,17 @@
---
title: Cluster
id: cluster
date: 2019-06-15
full_link:
short_description: >
  Ein Satz von Arbeitermaschinen, genannt Knoten, die containerisierte Anwendungen ausführen. Jedes Cluster hat mindestens einen Arbeiterknoten.

aka:
tags:
- fundamental
- operation
---
Ein Satz von Arbeitermaschinen, genannt {{< glossary_tooltip text="Knoten" term_id="node" >}}, die containerisierte Anwendungen ausführen. Jedes Cluster hat mindestens einen Arbeiterknoten.

<!--more-->
Die Arbeiterknoten beherbergen die {{< glossary_tooltip text="Pods" term_id="pod" >}}, die die Komponenten der Anwendungslast sind. Die {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} verwaltet die Arbeiterknoten und Pods im Cluster. In Produktionsumgebungen läuft die Control Plane meistens über mehrere Computer verteilt, und ein Cluster hat meistens mehrere Knoten, um Fehlertoleranz und Hochverfügbarkeit zu ermöglichen.
@@ -0,0 +1,19 @@
---
title: Container
id: container
date: 2018-04-12
full_link: /docs/concepts/containers/
short_description: >
  Ein kleines und portierbares ausführbares Image, das eine Software und alle ihre Abhängigkeiten enthält.

aka:
tags:
- fundamental
- workload
---
Ein kleines und portierbares ausführbares Image, das eine Software und alle ihre Abhängigkeiten enthält.

<!--more-->

Container entkoppeln Anwendungen von der darunterliegenden Rechnerinfrastruktur, um den Einsatz in verschiedenen Cloud- oder Betriebssystemumgebungen zu vereinfachen.
Die Anwendungen in Containern nennt man containerisierte Anwendungen. Den Prozess des Bündelns dieser Anwendungen und ihrer Abhängigkeiten in einem Container Image nennt man Containerisierung.
@@ -0,0 +1,25 @@
---
title: Control Plane
id: control-plane
date: 2019-05-12
full_link:
short_description: >
  Die Container Orchestrierungsschicht, die die API und Schnittstellen exponiert, um den Lebenszyklus von Containern zu definieren, bereitzustellen und zu verwalten.

aka:
tags:
- fundamental
---
Die Container Orchestrierungsschicht, die die API und Schnittstellen exponiert, um den Lebenszyklus von Containern zu definieren, bereitzustellen und zu verwalten.

<!--more-->

Diese Schicht besteht aus vielen verschiedenen Komponenten, zum Beispiel (aber nicht begrenzt auf):

* {{< glossary_tooltip text="etcd" term_id="etcd" >}}
* {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}
* {{< glossary_tooltip text="Scheduler" term_id="kube-scheduler" >}}
* {{< glossary_tooltip text="Controller Manager" term_id="kube-controller-manager" >}}
* {{< glossary_tooltip text="Cloud Controller Manager" term_id="cloud-controller-manager" >}}

Diese Komponenten können als traditionelle Betriebssystemdienste (Daemons) oder als Container laufen. Die Hosts, auf denen diese Komponenten laufen, hießen früher {{< glossary_tooltip text="Master" term_id="master" >}}.
@@ -0,0 +1,21 @@
---
title: Controller
id: controller
date: 2018-04-12
full_link: /docs/concepts/architecture/controller/
short_description: >
  Eine Kontrollschleife, die den geteilten Zustand des Clusters über den API Server beobachtet und Änderungen ausführt, um den aktuellen Zustand in Richtung des Wunschzustands zu bewegen.

aka:
tags:
- architecture
- fundamental
---
In Kubernetes sind Controller Kontrollschleifen, die den Zustand des {{< glossary_tooltip term_id="cluster" text="Clusters">}} beobachten und bei Bedarf Änderungen ausführen oder anfragen.
Jeder Controller versucht, den aktuellen Clusterzustand in Richtung des Wunschzustands zu bewegen.

<!--more-->

Controller beobachten den geteilten Zustand des Clusters durch den {{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}} (Teil der {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}}).

Manche Controller laufen auch in der Control Plane und stellen Kontrollschleifen zur Verfügung, die für die grundlegende Kubernetes Funktionalität essentiell sind. Zum Beispiel laufen der Deployment Controller, der Daemonset Controller, der Namespace Controller und der Persistent Volume Controller (unter anderem) alle innerhalb des {{< glossary_tooltip text="Kube Controller Managers" term_id="kube-controller-manager" >}}.
@@ -0,0 +1,19 @@
---
title: Deployment
id: deployment
date: 2018-04-12
full_link: /docs/concepts/workloads/controllers/deployment/
short_description: >
  Verwaltet eine replizierte Anwendung in Ihrem Cluster.

aka:
tags:
- fundamental
- core-object
- workload
---
Ein API Objekt, das eine replizierte Anwendung verwaltet, typischerweise durch das Ausführen von Pods ohne lokalen Zustand.

<!--more-->

Jedes Replikat wird durch einen {{< glossary_tooltip text="Pod" term_id="pod" >}} repräsentiert, und die Pods werden auf die {{< glossary_tooltip text="Knoten" term_id="node" >}} eines Clusters verteilt. Für Arbeitslasten, die einen lokalen Zustand benötigen, sollten Sie ein {{< glossary_tooltip term_id="StatefulSet" >}} verwenden.
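Ein minimales Deployment Manifest könnte zum Beispiel so aussehen (Name, Label und Image sind frei gewählt):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: beispiel-deployment
spec:
  replicas: 3                # drei Replikate, jeweils als eigener Pod
  selector:
    matchLabels:
      app: web
  template:                  # Vorlage für die vom Controller erstellten Pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```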
@@ -0,0 +1,17 @@
---
title: Docker
id: docker
date: 2018-04-12
full_link: https://docs.docker.com/engine/
short_description: >
  Docker ist eine Software Technologie, die Virtualisierung auf Betriebssystemebene (auch bekannt als Container) bereitstellt.

aka:
tags:
- fundamental
---
Docker (genauer gesagt, Docker Engine) ist eine Software Technologie, die Virtualisierung auf Betriebssystemebene (auch bekannt als {{< glossary_tooltip text="Container" term_id="container" >}}) bereitstellt.

<!--more-->

Docker verwendet die Ressourcenisolierungsfunktionen des Linux Kernels, wie cgroups und Kernel Namespaces, und ein Union-fähiges Dateisystem wie OverlayFS (unter anderem), um unabhängige Container auf einer einzigen Linux Instanz auszuführen. Dies vermeidet den Mehraufwand des Startens und Verwaltens virtueller Maschinen (VMs).
@@ -4,16 +4,16 @@ id: kube-apiserver
date: 2018-04-12
full_link: /docs/reference/generated/kube-apiserver/
short_description: >
Komponente auf dem Master, der die Kubernetes-API verfügbar macht. Es ist das Frontend für die Kubernetes-Steuerebene.
Komponente auf der Control Plane, die die Kubernetes-API verfügbar macht. Sie ist das Frontend für die Kubernetes-Steuerebene.

aka:
tags:
- architecture
- fundamental
---
Komponente auf dem Master, der die Kubernetes-API verfügbar macht. Es ist das Frontend für die Kubernetes-Steuerebene.
Komponente auf der Control Plane, die die Kubernetes-API verfügbar macht. Sie ist das Frontend für die Kubernetes-Steuerebene.

<!--more-->

Es ist für die horizontale Skalierung konzipiert, d. H. Es skaliert durch die Bereitstellung von mehr Instanzen. Mehr informationen finden Sie unter [Cluster mit hoher Verfügbarkeit erstellen](/docs/admin/high-availability/).
Der API Server ist für die horizontale Skalierung konzipiert, d.h. er skaliert durch die Bereitstellung von mehr Instanzen. Mehr Informationen finden Sie unter [Cluster mit hoher Verfügbarkeit erstellen](/docs/admin/high-availability/).
@@ -4,16 +4,16 @@ id: kube-controller-manager
date: 2018-04-12
full_link: /docs/reference/generated/kube-controller-manager/
short_description: >
Komponente auf dem Master, auf dem Controller ausgeführt werden.
Komponente auf der Control Plane, auf der Controller ausgeführt werden.

aka:
tags:
- architecture
- fundamental
---
Komponente auf dem Master, auf dem {{< glossary_tooltip text="controllers" term_id="controller" >}} ausgeführt werden.
Komponente auf der Control Plane, auf der {{< glossary_tooltip text="Controller" term_id="controller" >}} ausgeführt werden.

<!--more-->

Logisch gesehen ist jeder {{< glossary_tooltip text="controller" term_id="controller" >}} ein separater Prozess, aber zur Vereinfachung der Komplexität werden sie alle zu einer einzigen Binärdatei zusammengefasst und in einem einzigen Prozess ausgeführt.
Logisch gesehen ist jeder {{< glossary_tooltip text="Controller" term_id="controller" >}} ein separater Prozess, aber zur Vereinfachung der Komplexität werden sie alle zu einer einzigen Binärdatei zusammengefasst und in einem einzigen Prozess ausgeführt.
@@ -4,13 +4,13 @@ id: kube-scheduler
date: 2018-04-12
full_link: /docs/reference/generated/kube-scheduler/
short_description: >
Komponente auf dem Master, die neu erstellte Pods überwacht, denen kein Node zugewiesen ist. Sie wählt den Node aus, auf dem sie ausgeführt werden sollen.
Komponente auf der Control Plane, die neu erstellte Pods überwacht, denen kein Knoten zugewiesen ist. Sie wählt den Knoten aus, auf dem sie ausgeführt werden sollen.

aka:
tags:
- architecture
---
Komponente auf dem Master, die neu erstellte Pods überwacht, denen kein Node zugewiesen ist. Sie wählt den Node aus, auf dem sie ausgeführt werden sollen.
Komponente auf der Control Plane, die neu erstellte Pods überwacht, denen kein Knoten zugewiesen ist. Sie wählt den Knoten aus, auf dem sie ausgeführt werden sollen.

<!--more-->
@@ -0,0 +1,19 @@
---
title: Kubeadm
id: kubeadm
date: 2018-04-12
full_link: /docs/reference/setup-tools/kubeadm/
short_description: >
  Ein Werkzeug, um schnell Kubernetes zu installieren und ein sicheres Cluster zu erstellen.

aka:
tags:
- tool
- operation
---
Ein Werkzeug, um schnell Kubernetes zu installieren und ein sicheres Cluster zu erstellen.

<!--more-->

Man kann kubeadm verwenden, um sowohl die Control Plane als auch die {{< glossary_tooltip text="Worker Node" term_id="node" >}} Komponenten zu installieren.
@@ -4,14 +4,14 @@ id: kubelet
date: 2018-04-12
full_link: /docs/reference/generated/kubelet
short_description: >
Ein Agent, der auf jedem Node im Cluster ausgeführt wird. Er stellt sicher, dass Container in einem Pod ausgeführt werden.
Ein Agent, der auf jedem Knoten im Cluster ausgeführt wird. Er stellt sicher, dass Container in einem Pod ausgeführt werden.

aka:
tags:
- fundamental
- core-object
---
Ein Agent, der auf jedem Node im Cluster ausgeführt wird. Er stellt sicher, dass Container in einem Pod ausgeführt werden.
Ein Agent, der auf jedem Knoten im Cluster ausgeführt wird. Er stellt sicher, dass Container in einem Pod ausgeführt werden.

<!--more-->
@@ -0,0 +1,18 @@
---
title: Label
id: label
date: 2018-04-12
full_link: /docs/concepts/overview/working-with-objects/labels
short_description: >
  Kennzeichnet Objekte mit identifizierenden Attributen, die sinnvoll und relevant für die Nutzer sind.

aka:
tags:
- fundamental
---
Kennzeichnet Objekte mit identifizierenden Attributen, die sinnvoll und relevant für die Nutzer sind.

<!--more-->

Labels sind Key-Value-Paare, die an Objekte gebunden sind, zum Beispiel an {{< glossary_tooltip text="Pods" term_id="pod" >}}. Sie werden verwendet, um Untermengen von Objekten zu organisieren und auszuwählen.
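Eine kleine Skizze (die Label-Werte sind frei gewählt): Labels werden unter `metadata` gesetzt und können anschließend zur Auswahl verwendet werden:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: beispiel-pod
  labels:
    environment: production   # identifizierende Attribute
    app: nginx
```

Mit `kubectl get pods -l environment=production` lässt sich dann die entsprechende Untermenge von Pods auswählen.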
@ -0,0 +1,15 @@
|
|||
---
|
||||
title: Master
|
||||
id: master
|
||||
date: 2020-04-16
|
||||
short_description: >
|
||||
Veralteter Begriff, verwendet als Synonym für die Knoten auf denen die Control Plane läuft.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
Veralteter Begriff, verwendet als Synonym für die {{< glossary_tooltip text="Knoten" term_id="node" >}} auf denen die {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} läuft.
|
||||
|
||||
<!--more-->
|
||||
Dieser Begriff wird noch von einigen Provisionierungswerkzeugen wie {{< glossary_tooltip text="kubeadm" term_id="kubeadm" >}} und von gemanagten Diensten verwendet, um {{< glossary_tooltip text="Knoten" term_id="node" >}} mit dem {{< glossary_tooltip text="Label" term_id="label" >}} `kubernetes.io/role` zu kennzeichnen und {{< glossary_tooltip text="Pods" term_id="pod" >}} auf der {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} zu platzieren.
|
|
@ -0,0 +1,19 @@
|
|||
---
|
||||
title: Knoten
|
||||
id: node
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/architecture/nodes/
|
||||
short_description: >
|
||||
Ein Knoten ist eine Arbeitermaschine in Kubernetes.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
Ein Knoten ist eine Arbeitermaschine in Kubernetes.
|
||||
|
||||
<!--more-->
|
||||
|
||||
Ein Arbeiterknoten kann, abhängig vom Cluster, eine virtuelle oder eine physische Maschine sein. Er verfügt über die lokalen Daemons und Dienste, die nötig sind, um {{< glossary_tooltip text="Pods" term_id="pod" >}} auszuführen, und wird von der Control Plane verwaltet. Zu den Daemons auf einem Knoten gehören das {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} und eine Container-Runtime, die ein {{< glossary_tooltip text="CRI" term_id="cri" >}} implementiert, wie zum Beispiel {{< glossary_tooltip term_id="docker" >}}.
|
||||
|
||||
In älteren Kubernetes-Versionen wurden Knoten "Minions" genannt.
|
|
@ -0,0 +1,15 @@
|
|||
---
|
||||
title: Objekt
|
||||
id: object
|
||||
date: 2020-10-12
|
||||
full_link: /docs/concepts/overview/working-with-objects/#kubernetes-objects
|
||||
short_description: >
|
||||
Eine Einheit im Kubernetes-System, die einen Teil des Zustands Ihres Clusters darstellt.
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
Eine Einheit im Kubernetes-System. Die Kubernetes-API verwendet diese Einheiten, um den Zustand Ihres Clusters darzustellen.
|
||||
<!--more-->
|
||||
Ein Kubernetes-Objekt ist typischerweise ein "Datensatz der Absicht": Sobald Sie das Objekt erstellt haben, arbeitet die Kubernetes {{< glossary_tooltip text="Control Plane" term_id="control-plane" >}} ständig daran, sicherzustellen, dass das Element, das es darstellt, auch existiert.
|
||||
Indem Sie ein Objekt erstellen, teilen Sie dem Kubernetes-System mit, wie dieser Teil der Arbeitslast Ihres Clusters aussehen soll; das ist der gewünschte Zustand Ihres Clusters.
|
|
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: Pod
|
||||
id: pod
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/workloads/pods/
|
||||
short_description: >
|
||||
Ein Pod stellt einen Satz laufender Container in Ihrem Cluster dar.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- core-object
|
||||
- fundamental
|
||||
---
|
||||
Das kleinste und einfachste Kubernetes-Objekt. Ein Pod stellt einen Satz laufender {{< glossary_tooltip text="Container" term_id="container" >}} in Ihrem Cluster dar.
|
||||
|
||||
<!--more-->
|
||||
|
||||
Ein Pod wird typischerweise verwendet, um einen einzelnen primären Container auszuführen. Er kann optional auch "Sidecar"-Container ausführen, die zusätzliche Funktionen wie Logging hinzufügen. Pods werden normalerweise durch ein {{< glossary_tooltip text="Deployment" term_id="deployment" >}} verwaltet.
|
|
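Eine Skizze (Namen und Images sind frei gewählte Annahmen) eines Pods mit einem primären Container und einem "Sidecar"-Container für Logging:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-mit-sidecar     # Beispielname, frei gewählt
spec:
  containers:
  - name: web               # primärer Container
    image: nginx:1.25
  - name: log-shipper       # "Sidecar"-Container für zusätzliches Logging (Beispiel)
    image: busybox:1.36
    command: ["sh", "-c", "tail -n+1 -F /var/log/app.log"]
```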
@ -0,0 +1,18 @@
|
|||
---
|
||||
title: Selector
|
||||
id: selector
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/overview/working-with-objects/labels/
|
||||
short_description: >
|
||||
Erlaubt Benutzern das Filtern einer Liste von Ressourcen basierend auf Labels.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
---
|
||||
Erlaubt Benutzern das Filtern einer Liste von Ressourcen basierend auf {{< glossary_tooltip text="Labels" term_id="label" >}}.
|
||||
|
||||
<!--more-->
|
||||
|
||||
Selektoren werden beim Abfragen einer Liste von Ressourcen verwendet, um sie nach Labels zu filtern.
|
||||
|
|
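Eine Skizze (Namen und Labelwerte sind frei gewählte Annahmen), wie ein Selector in einem Deployment Pods anhand von Labels auswählt:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # Beispielname, frei gewählt
spec:
  replicas: 2
  selector:
    matchLabels:            # wählt alle Pods mit diesem Label aus
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend      # muss zum Selector passen
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```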
@ -0,0 +1,20 @@
|
|||
---
|
||||
title: Service
|
||||
id: service
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/services-networking/service/
|
||||
short_description: >
|
||||
Eine Methode, um Anwendungen, die auf einem Satz von Pods laufen, als Netzwerkdienst freizugeben.
|
||||
tags:
|
||||
- fundamental
|
||||
- core-object
|
||||
---
|
||||
Eine Methode, um Netzwerkanwendungen freizugeben, die als ein oder mehrere {{< glossary_tooltip text="Pods" term_id="pod" >}} in Ihrem Cluster laufen.
|
||||
|
||||
<!--more-->
|
||||
|
||||
Der Satz von Pods, auf den ein Service abzielt, wird durch einen {{< glossary_tooltip text="Selector" term_id="selector" >}} bestimmt. Wenn Pods hinzugefügt oder entfernt werden, ändert sich der Satz der Pods, die zum Selector passen. Der Service stellt sicher, dass Netzwerkverkehr an den aktuellen Satz von Pods für die Arbeitslast geleitet werden kann.
|
||||
|
||||
Kubernetes-Services verwenden entweder IP-Netzwerke (IPv4, IPv6 oder beide) oder referenzieren einen externen Namen im Domain Name System (DNS).
|
||||
|
||||
Die Service-Abstraktion ermöglicht auch andere Mechanismen wie Ingress und Gateway.
|
|
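Eine minimale Skizze eines Service (Namen, Labels und Ports sind frei gewählte Annahmen), der Verkehr über einen Selector an Pods leitet:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service    # Beispielname, frei gewählt
spec:
  selector:
    tier: frontend          # leitet Verkehr an alle Pods mit diesem Label
  ports:
  - protocol: TCP
    port: 80                # Port, unter dem der Service erreichbar ist
    targetPort: 8080        # Port des Containers im Pod (Annahme)
```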
@ -0,0 +1,22 @@
|
|||
---
|
||||
title: StatefulSet
|
||||
id: statefulset
|
||||
date: 2018-04-12
|
||||
full_link: /docs/concepts/workloads/controllers/statefulset/
|
||||
short_description: >
|
||||
Ein StatefulSet verwaltet die Bereitstellung und die Skalierung eines Satzes von Pods, mit langlebigem Speicher und persistenter Identifizierung für jeden Pod.
|
||||
|
||||
aka:
|
||||
tags:
|
||||
- fundamental
|
||||
- core-object
|
||||
- workload
|
||||
- storage
|
||||
---
|
||||
Verwaltet die Bereitstellung und Skalierung eines Satzes von {{< glossary_tooltip text="Pods" term_id="pod" >}} *und stellt Garantien zur Reihenfolge und Einzigartigkeit dieser Pods bereit*.
|
||||
|
||||
<!--more-->
|
||||
|
||||
Wie ein {{< glossary_tooltip text="Deployment" term_id="deployment" >}} verwaltet ein StatefulSet Pods, die auf einer identischen Container-Spezifikation basieren. Anders als ein Deployment behält ein StatefulSet für jeden seiner Pods eine persistente Identität bei. Diese Pods werden anhand derselben Spezifikation erstellt, sind aber nicht austauschbar: Jeder hat eine persistente Identifizierung, die über jede Neuplanung hinweg erhalten bleibt.
|
||||
|
||||
Wenn Sie Speichervolumes verwenden wollen, um Ihrer Arbeitslast Persistenz zu ermöglichen, können Sie ein StatefulSet als Teil der Lösung verwenden. Obwohl einzelne Pods in einem StatefulSet anfällig für Fehler sind, machen die persistenten Pod-Identifizierungen es einfacher, bestehende Volumes mit den neuen Pods zu verbinden, die die fehlerhaften ersetzen.
|
|
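Eine Skizze eines StatefulSet (Namen, Images und Größen sind frei gewählte Annahmen), die stabile Pod-Namen und langlebigen Speicher pro Pod zeigt:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                 # Beispielname, frei gewählt
spec:
  serviceName: web          # zugehöriger Headless-Service (Annahme)
  replicas: 3               # Pods erhalten stabile Namen: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:     # langlebiger Speicher, der jedem Pod fest zugeordnet bleibt
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```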
@ -51,7 +51,7 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten:
|
|||
Download der kubectl Checksum-Datei:
|
||||
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
|
||||
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
|
||||
```
|
||||
|
||||
Kubectl Binary mit der Checksum-Datei validieren:
|
||||
|
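Eine Möglichkeit, die Validierung durchzuführen (als Skizze, unter der Annahme, dass `kubectl` und `kubectl.sha256` im aktuellen Verzeichnis liegen):

```shell
# Die Checksum-Datei enthält nur den Hash; hier wird daraus eine Zeile
# im Format "<hash>  <datei>" erzeugt und mit sha256sum geprüft.
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# Bei Erfolg: kubectl: OK
```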
@ -236,7 +236,7 @@ Untenstehend ist beschrieben, wie die Autovervollständigungen für Fish und Zsh
|
|||
Download der kubectl-convert Checksum-Datei:
|
||||
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
|
||||
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
|
||||
```
|
||||
|
||||
Kubectl-convert Binary mit der Checksum-Datei validieren:
|
||||
|
|
|
@ -12,18 +12,13 @@ Ein Tutorial zeigt, wie Sie ein Ziel erreichen, das größer ist als eine einzel
|
|||
Ein Tutorial besteht normalerweise aus mehreren Abschnitten, die jeweils eine Abfolge von Schritten haben.
|
||||
Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise ein Lesezeichen zur Seite mit dem [Standardisierten Glossar](/docs/reference/glossary/) setzen, um später Informationen nachzuschlagen.
|
||||
|
||||
|
||||
|
||||
<!-- body -->
|
||||
|
||||
## Grundlagen
|
||||
|
||||
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) ist ein ausführliches interaktives Lernprogramm, das Ihnen hilft, das Kubernetes-System zu verstehen und einige grundlegende Kubernetes-Funktionen auszuprobieren.
|
||||
|
||||
* [Scalable Microservices mit Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) (Englisch)
|
||||
|
||||
* [Einführung in Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#) (Englisch)
|
||||
|
||||
* [Hello Minikube](/docs/tutorials/hello-minikube/)
|
||||
|
||||
## Konfiguration
|
||||
|
@ -33,36 +28,26 @@ Bevor Sie die einzelnen Lernprogramme durchgehen, möchten Sie möglicherweise e
|
|||
## Stateless Anwendungen
|
||||
|
||||
* [Freigeben einer externen IP-Adresse für den Zugriff auf eine Anwendung in einem Cluster](/docs/tutorials/stateless-application/expose-external-ip-address/)
|
||||
|
||||
* [Beispiel: Bereitstellung der PHP-Gästebuchanwendung mit Redis](/docs/tutorials/stateless-application/guestbook/)
|
||||
|
||||
## Stateful Anwendungen
|
||||
|
||||
* [StatefulSet Grundlagen](/docs/tutorials/stateful-application/basic-stateful-set/)
|
||||
|
||||
* [Beispiel: WordPress und MySQL mit persistenten Volumes](/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
|
||||
|
||||
* [Beispiel: Bereitstellen von Cassandra mit Stateful-Sets](/docs/tutorials/stateful-application/cassandra/)
|
||||
|
||||
* [ZooKeeper, ein verteiltes CP-System](/docs/tutorials/stateful-application/zookeeper/)
|
||||
|
||||
## Clusters
|
||||
|
||||
* [AppArmor](/docs/tutorials/clusters/apparmor/)
|
||||
|
||||
* [seccomp](/docs/tutorials/clusters/seccomp/)
|
||||
* [Seccomp](/docs/tutorials/clusters/seccomp/)
|
||||
|
||||
## Services
|
||||
|
||||
* [Source IP verwenden](/docs/tutorials/services/source-ip/)
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
Wenn Sie ein Tutorial schreiben möchten, lesen Sie
|
||||
[Seitenvorlagen verwenden](/docs/home/contribute/page-templates/)
|
||||
für weitere Informationen zum Typ der Tutorial-Seite und zur Tutorial-Vorlage.
|
||||
|
||||
|
||||
|
|
|
@ -14,8 +14,6 @@ card:
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 20
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 10
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 20
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 10
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 20
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
|
|
@ -9,9 +9,6 @@ weight: 10
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 20
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 10
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 20
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 10
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -9,8 +9,6 @@ weight: 20
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
|
||||
<script src="https://katacoda.com/embed.js"></script>
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
|
|
@ -9,9 +9,6 @@ weight: 10
|
|||
|
||||
<body>
|
||||
|
||||
<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
|
||||
<link href="https://fonts.googleapis.com/css?family=Roboto+Slab:300,400,700" rel="stylesheet">
|
||||
|
||||
<div class="layout" id="top">
|
||||
|
||||
<main class="content">
|
||||
|
|
|
@ -16,7 +16,7 @@ cid: partners
|
|||
</h5>
|
||||
<br>Geprüfte Dienstleister mit umfassender Erfahrung bei der erfolgreichen Einführung von Kubernetes in Unternehmen.
|
||||
<br><br><br>
|
||||
<button class="button landscape-trigger landscape-default" data-landscape-types="kubernetes-certified-service-provider" id="kcsp">KCSP Partner anzeigen</button>
|
||||
<button class="button landscape-trigger landscape-default" data-landscape-types="special--kubernetes-certified-service-provider" id="kcsp">KCSP Partner anzeigen</button>
|
||||
<br><br>Interessiert daran, ein
|
||||
<a href="https://www.cncf.io/certification/kcsp/">KCSP</a> zu werden?
|
||||
</center>
|
||||
|
@ -27,7 +27,7 @@ cid: partners
|
|||
<b>Zertifizierte Kubernetes-Distributionen, gehostete Plattformen und Installationssysteme</b>
|
||||
</h5>Die Softwarekonformität stellt sicher, dass die Kubernetes-Version eines jeden Anbieters die erforderlichen APIs unterstützt.
|
||||
<br><br><br>
|
||||
<button class="button landscape-trigger" data-landscape-types="certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer" id="conformance">Konforme Partner anzeigen</button>
|
||||
<button class="button landscape-trigger" data-landscape-types="platform" id="conformance">Konforme Partner anzeigen</button>
|
||||
<br><br>Interessiert daran,
|
||||
<a href="https://www.cncf.io/certification/software-conformance/">Kubernetes Zertifiziert</a> zu werden?
|
||||
</center>
|
||||
|
@ -39,7 +39,7 @@ cid: partners
|
|||
</h5>
|
||||
<br>Geprüfte Schulungsanbieter mit umfassender Erfahrung in der Weiterbildung im Bereich Cloud Native Technology.
|
||||
<br><br><br>
|
||||
<button class="button landscape-trigger" data-landscape-types="kubernetes-training-partner" id="ktp">KTP Partner anzeigen</button>
|
||||
<button class="button landscape-trigger" data-landscape-types="special--kubernetes-training-partner" id="ktp">KTP Partner anzeigen</button>
|
||||
<br><br>Interessiert daran, ein
|
||||
<a href="https://www.cncf.io/certification/training/">KTP</a> zu werden?
|
||||
</center>
|
||||
|
@ -50,4 +50,4 @@ cid: partners
|
|||
|
||||
<style>
|
||||
{{< include "partner-style.css" >}}
|
||||
</style>
|
||||
</style>
|
||||
|
|
|
@ -132,6 +132,6 @@ class: training
|
|||
</center>
|
||||
</div>
|
||||
<div class="main-section landscape-section">
|
||||
{{< cncf-landscape helpers=false category="kubernetes-training-partner" >}}
|
||||
{{< cncf-landscape helpers=false category="special--kubernetes-training-partner" >}}
|
||||
</div>
|
||||
</div>
|
||||
|
|
|
@ -47,12 +47,12 @@ To download Kubernetes, visit the [download](/releases/download/) section.
|
|||
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on April 18-21, 2023</a>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024</a>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 6-9, 2023</a>
|
||||
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/" button id="desktopKCButton">Attend KubeCon + CloudNativeCon North America on November 12-15, 2024</a>
|
||||
</div>
|
||||
<div id="videoPlayer">
|
||||
<iframe data-url="https://www.youtube.com/embed/H06qrNmGqyE?autoplay=1" frameborder="0" allowfullscreen></iframe>
|
||||
|
|
|
@ -20,7 +20,7 @@ Real time visualization is a strength that UI’s have over CLI’s, and with 1.
|
|||
Based on user research with Kubernetes’ predecessor [Borg](http://research.google.com/pubs/pub43438.html) and continued community feedback, we know logs are tremendously important to users. For this reason we’re constantly looking for ways to improve these features in Dashboard. This release includes a fix for an issue wherein large numbers of logs would crash the system, as well as the introduction of the ability to view logs by date.
|
||||
|
||||
**Showing More Resources**
|
||||
The previous release brought all workloads to Dashboard: Pods, Pet Sets, Daemon Sets, Replication Controllers, Replica Set, Services, & Deployments. With 1.4, we expand upon that set of objects by including Services, Ingresses, Persistent Volume Claims, Secrets, & Config Maps. We’ve also introduced an “Admin” section with the Namespace-independent global objects of Namespaces, Nodes, and Persistent Volumes. With the addition of roles, these will be shown only to cluster operators, and developers’ side nav will begin with the Namespace dropdown.
|
||||
The previous release brought all workloads to Dashboard: Pods, Pet Sets, Daemon Sets, Replication Controllers, Replica Set, Services, & Deployments. With 1.4, we expand upon that set of objects by including Services, Ingresses, Persistent Volume Claims, Secrets, & ConfigMaps. We’ve also introduced an “Admin” section with the Namespace-independent global objects of Namespaces, Nodes, and Persistent Volumes. With the addition of roles, these will be shown only to cluster operators, and developers’ side nav will begin with the Namespace dropdown.
|
||||
|
||||
Like glue binding together a loose stack of papers into a book, we needed some way to impose order on these resources for their value to be realized, so one of the features we’re most excited to announce in 1.4 is navigation.
|
||||
|
||||
|
|
|
@ -8,12 +8,11 @@ date: 2018-05-29
|
|||
|
||||
[**kustomize**]: https://github.com/kubernetes-sigs/kustomize
|
||||
[hello world]: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/helloWorld
|
||||
[kustomization]: https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#kustomization
|
||||
[mailing list]: https://groups.google.com/forum/#!forum/kustomize
|
||||
[open an issue]: https://github.com/kubernetes-sigs/kustomize/issues/new
|
||||
[subproject]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/0008-kustomize.md
|
||||
[subproject]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/2377-Kustomize/README.md
|
||||
[SIG-CLI]: https://github.com/kubernetes/community/tree/master/sig-cli
|
||||
[workflow]: https://github.com/kubernetes-sigs/kustomize/blob/master/docs/workflows.md
|
||||
[workflow]: https://github.com/kubernetes-sigs/kustomize/blob/1dd448e65c81aab9d09308b695691175ca6459cd/docs/workflows.md
|
||||
|
||||
If you run a Kubernetes environment, chances are you’ve
|
||||
customized a Kubernetes configuration — you've copied
|
||||
|
|
|
@ -37,7 +37,7 @@ Prow automatically applies language labels based on file path. Thanks to SIG Doc
|
|||
/language ko
|
||||
```
|
||||
|
||||
These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the k/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh).
|
||||
These repo labels let reviewers filter for PRs and issues by language. For example, you can now filter the kubernetes/website dashboard for [PRs with Chinese content](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+label%3Alanguage%2Fzh).
|
||||
|
||||
### Team review
|
||||
|
||||
|
|
|
@ -3,6 +3,13 @@ layout: blog
|
|||
title: "Moving Forward From Beta"
|
||||
date: 2020-08-21
|
||||
slug: moving-forward-from-beta
|
||||
|
||||
# note to localizers: including this means you are marking
|
||||
# the article as maintained. That should be fine, but if
|
||||
# there is ever an update, you're committing to also updating
|
||||
# the localized version.
|
||||
# If unsure: omit this next field.
|
||||
evergreen: true
|
||||
---
|
||||
|
||||
**Author**: Tim Bannister, The Scale Factory
|
||||
|
@ -12,7 +19,7 @@ In Kubernetes, features follow a defined
|
|||
First, as the twinkle of an eye in an interested developer. Maybe, then,
|
||||
sketched in online discussions, drawn on the online equivalent of a cafe
|
||||
napkin. This rough work typically becomes a
|
||||
[Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/0001-kubernetes-enhancement-proposal-process.md#kubernetes-enhancement-proposal-process) (KEP), and
|
||||
[Kubernetes Enhancement Proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/0000-kep-process/README.md#kubernetes-enhancement-proposal-process) (KEP), and
|
||||
from there it usually turns into code.
|
||||
|
||||
For Kubernetes v1.20 and onwards, we're focusing on helping that code
|
||||
|
|
|
@ -77,7 +77,7 @@ Check out the full details of the Kubernetes 1.19 release in our [release notes]
|
|||
|
||||
## Availability
|
||||
|
||||
Kubernetes 1.19 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/) or run local Kubernetes clusters using Docker container “nodes” with [KinD](https://kind.sigs.k8s.io/) (Kubernetes in Docker). You can also easily install 1.19 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
|
||||
Kubernetes 1.19 is available for download on [GitHub](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.0). To get started with Kubernetes, check out these [interactive tutorials](https://kubernetes.io/docs/tutorials/) or run local Kubernetes clusters using Docker container “nodes” with [kind](https://kind.sigs.k8s.io/) (Kubernetes in Docker). You can also easily install 1.19 using [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
|
||||
|
||||
## Release Team
|
||||
This release is made possible through the efforts of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the [release team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.19/release_team.md) led by Taylor Dolezal, Senior Developer Advocate at HashiCorp. The 34 release team members coordinated many aspects of the release, from documentation to testing, validation, and feature completeness.
|
||||
|
|
|
@ -63,7 +63,7 @@ This metric has labels for the API `group`, `version`, `resource`, and `subresou
|
|||
and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served.
|
||||
|
||||
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json),
|
||||
and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested
|
||||
and [jq](https://jqlang.github.io/jq/) to determine which deprecated APIs have been requested
|
||||
from the current instance of the API server:
|
||||
|
||||
```sh
|
||||
|
|
|
@ -129,7 +129,7 @@ will have strictly better performance and less overhead. However, we encourage y
|
|||
to explore all the options from the [CNCF landscape] in case another would be an
|
||||
even better fit for your environment.
|
||||
|
||||
[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category
|
||||
[CNCF landscape]: https://landscape.cncf.io/?group=projects-and-products&view-mode=card#runtime--container-runtime
|
||||
|
||||
|
||||
### What should I look out for when changing CRI implementations?
|
||||
|
|
|
@ -5,7 +5,18 @@ date: 2021-07-15
|
|||
slug: sig-usability-spotlight-2021
|
||||
---
|
||||
|
||||
**Author:** Kunal Kushwaha, Civo
|
||||
**Author:** Kunal Kushwaha (Civo)
|
||||
|
||||
{{< note >}}
|
||||
SIG Usability, which is featured in this Spotlight blog, has been deprecated and is no longer active.
|
||||
As a result, the links and information provided in this blog post may no longer be valid or relevant.
|
||||
Should there be renewed interest and increased participation in the future, the SIG may be revived.
|
||||
However, as of August 2023 the SIG is inactive per the Kubernetes community policy.
|
||||
The Kubernetes project encourages you to explore other
|
||||
[SIGs](https://github.com/kubernetes/community/blob/master/sig-list.md#special-interest-groups)
|
||||
and resources available on the Kubernetes website to stay up-to-date with the latest developments
|
||||
and enhancements in Kubernetes.
|
||||
{{< /note >}}
|
||||
|
||||
## Introduction
|
||||
|
||||
|
|
|
@ -17,7 +17,7 @@ and are in no way a direct recommendation from the Kubernetes community or autho
|
|||
|
||||
USA's National Security Agency (NSA) and the Cybersecurity and Infrastructure
|
||||
Security Agency (CISA)
|
||||
released, "[Kubernetes Hardening Guidance](https://media.defense.gov/2021/Aug/03/2002820425/-1/-1/1/CTR_KUBERNETES%20HARDENING%20GUIDANCE.PDF)"
|
||||
released Kubernetes Hardening Guidance
|
||||
on August 3rd, 2021. The guidance details threats to Kubernetes environments
|
||||
and provides secure configuration guidance to minimize risk.
|
||||
|
||||
|
@ -29,6 +29,14 @@ _Note_: This blog post is not a substitute for reading the guide. Reading the pu
|
|||
guidance is recommended before proceeding as the following content is
|
||||
complementary.
|
||||
|
||||
{{% pageinfo color="primary" %}}
|
||||
**Update, November 2023:**
|
||||
|
||||
The National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) released the 1.0 version of the Kubernetes hardening guide in August 2021 and updated it based on industry feedback in March 2022 (version 1.1).
|
||||
|
||||
The most recent version of the Kubernetes hardening guidance was released in August 2022 with corrections and clarifications. Version 1.2 outlines a number of recommendations for [hardening Kubernetes clusters](https://media.defense.gov/2022/Aug/29/2003066362/-1/-1/0/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF).
|
||||
{{% /pageinfo %}}
|
||||
|
||||
## Introduction and Threat Model
|
||||
|
||||
Note that the threats identified as important by the NSA/CISA, or the intended audience of this guidance, may be different from the threats that other enterprise users of Kubernetes consider important. This section
|
||||
|
|
|
@ -210,7 +210,7 @@ podip=$(cat /tmp/out | jq -r '.Endpoints[]|select(.Local == true)|select(.IPs.V6
|
|||
ip6tables -t nat -A PREROUTING -d $xip/128 -j DNAT --to-destination $podip
|
||||
```
|
||||
|
||||
Assuming the JSON output above is stored in `/tmp/out` ([jq](https://stedolan.github.io/jq/) is an *awesome* program!).
|
||||
Assuming the JSON output above is stored in `/tmp/out` ([jq](https://jqlang.github.io/jq/) is an *awesome* program!).
|
||||
|
||||
|
||||
As this is an example we make it really simple for ourselves by using
|
||||
|
|
|
@ -65,7 +65,7 @@ and consult your Kubernetes hosting vendor (if you have one) what container runt
|
|||
Read up [container runtime documentation with instructions on how to use containerd and CRI-O](/docs/setup/production-environment/container-runtimes/#container-runtimes)
|
||||
to help prepare you when you're ready to upgrade to 1.24. CRI-O, containerd, and
|
||||
Docker with [Mirantis cri-dockerd](https://github.com/Mirantis/cri-dockerd) are
|
||||
not the only container runtime options, we encourage you to explore the [CNCF landscape on container runtimes](https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category)
|
||||
not the only container runtime options, we encourage you to explore the [CNCF landscape on container runtimes](https://landscape.cncf.io/?group=projects-and-products&view-mode=card#runtime--container-runtime)
|
||||
in case another suits you better.
|
||||
|
||||
Thank you!
|
||||
|
|
|
@ -99,7 +99,7 @@ will have strictly better performance and less overhead. However, we encourage y
|
|||
to explore all the options from the [CNCF landscape] in case another would be an
|
||||
even better fit for your environment.
|
||||
|
||||
[CNCF landscape]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category
|
||||
[CNCF landscape]: https://landscape.cncf.io/?group=projects-and-products&view-mode=card#runtime--container-runtime
|
||||
|
||||
#### Can I still use Docker Engine as my container runtime?
|
||||
|
||||
|
|
|
@ -42,7 +42,7 @@ controller container.
|
|||
|
||||
While this is not strictly true, to understand what was done here, it's good to understand how
|
||||
Linux containers (and underlying mechanisms such as kernel namespaces) work.
|
||||
You can read about cgroups in the Kubernetes glossary: [`cgroup`](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-cgroup) and learn more about cgroups interact with namespaces in the NGINX project article
|
||||
You can read about cgroups in the Kubernetes glossary: [`cgroup`](/docs/reference/glossary/?fundamental=true#term-cgroup) and learn more about how cgroups interact with namespaces in the NGINX project article
|
||||
[What Are Namespaces and cgroups, and How Do They Work?](https://www.nginx.com/blog/what-are-namespaces-cgroups-how-do-they-work/).
|
||||
(As you read that, bear in mind that Linux kernel namespaces are a different thing from
|
||||
[Kubernetes namespaces](/docs/concepts/overview/working-with-objects/namespaces/)).
|
||||
|
|
|
@@ -41,7 +41,7 @@ gateways and service meshes and guides are available to start exploring quickly.
 ### Getting started
 
 Gateway API is an official Kubernetes API like
-[Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/).
+[Ingress](/docs/concepts/services-networking/ingress/).
 Gateway API represents a superset of Ingress functionality, enabling more
 advanced concepts. Similar to Ingress, there is no default implementation of
 Gateway API built into Kubernetes. Instead, there are many different
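To make the Gateway API hunk above concrete, here is a minimal sketch of the two resources it alludes to. This is illustrative only: the `example` GatewayClass, resource names, and backend Service are placeholders for whatever your chosen Gateway API implementation provides.

```yaml
# Hypothetical Gateway plus HTTPRoute pair; all names are placeholders.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example   # provided by your implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway     # attach the route to the Gateway above
  rules:
  - backendRefs:
    - name: example-svc       # assumed backend Service
      port: 8080
```

Unlike Ingress, routing rules and infrastructure attachment are split across separate resources, which is part of the "superset of Ingress functionality" the text mentions.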
@@ -169,7 +169,7 @@ If there's something on this list you want to get involved in, or there's
 something not on this list that you want to advocate for to get on the roadmap
 please join us in the #sig-network-gateway-api channel on Kubernetes Slack or our weekly [community calls](https://gateway-api.sigs.k8s.io/contributing/community/#meetings).
 
-[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/master/site-src/geps/gep-1016.md
+[gep1016]:https://github.com/kubernetes-sigs/gateway-api/blob/main/geps/gep-1016.md
 [grpc]:https://grpc.io/
 [pr1085]:https://github.com/kubernetes-sigs/gateway-api/pull/1085
 [tcpr]:https://github.com/kubernetes-sigs/gateway-api/blob/main/apis/v1alpha2/tcproute_types.go
@@ -47,7 +47,7 @@ API.
 Kubernetes 1.0 was released on 10 July 2015 without any mechanism to restrict the
 security context and sensitive options of workloads, other than an alpha-quality
 SecurityContextDeny admission plugin (then known as `scdeny`).
-The [SecurityContextDeny plugin](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#securitycontextdeny)
+The [SecurityContextDeny plugin](/docs/reference/access-authn-authz/admission-controllers/#securitycontextdeny)
 is still in Kubernetes today (as an alpha feature) and creates an admission controller that
 prevents the usage of some fields in the security context.
 
@@ -169,7 +169,7 @@ JAMES LAVERACK: Not really. The cornerstone of a Kubernetes organization is the
 
 **CRAIG BOX: Let's talk about some of the new features in 1.24. We have been hearing for many releases now about the impending doom which is the removal of Dockershim. [It is gone in 1.24](https://github.com/kubernetes/enhancements/issues/2221). Do we worry?**
 
-JAMES LAVERACK: I don't think we worry. This is something that the community has been preparing for for a long time. [We've](https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) [published](https://kubernetes.io/blog/2022/02/17/dockershim-faq/) a [lot](https://kubernetes.io/blog/2021/11/12/are-you-ready-for-dockershim-removal/) of [documentation](https://kubernetes.io/blog/2022/03/31/ready-for-dockershim-removal/) [about](https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) [how](https://kubernetes.io/blog/2022/05/03/dockershim-historical-context/) you need to approach this. The honest truth is that most users, most application developers in Kubernetes, will simply not notice a difference or have to worry about it.
+JAMES LAVERACK: I don't think we worry. This is something that the community has been preparing for for a long time. [We've](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/) [published](/blog/2022/02/17/dockershim-faq/) a [lot](/blog/2021/11/12/are-you-ready-for-dockershim-removal/) of [documentation](/blog/2022/03/31/ready-for-dockershim-removal/) [about](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you/) [how](/blog/2022/05/03/dockershim-historical-context/) you need to approach this. The honest truth is that most users, most application developers in Kubernetes, will simply not notice a difference or have to worry about it.
 
 It's only really platform teams that administer Kubernetes clusters and people in very specific circumstances that are using Docker directly, not through the Kubernetes API, that are going to experience any issue at all.
 
@@ -203,7 +203,7 @@ JAMES LAVERACK: This is really about encouraging the use of stable APIs. There w
 
 JAMES LAVERACK: That's correct. There's no breaking changes in beta APIs other than the ones we've documented this release. It's only new things.
 
-**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/). What needed to happen to make that process possible?**
+**CRAIG BOX: Now in this release, [the artifacts are signed](https://github.com/kubernetes/enhancements/issues/3031) using Cosign signatures, and there is [experimental support for verification of those signatures](/docs/tasks/administer-cluster/verify-signed-artifacts/). What needed to happen to make that process possible?**
 
 JAMES LAVERACK: This was a huge process from the other half of SIG Release. SIG Release has the release team, but it also has the release engineering team that handles the mechanics of actually pushing releases out. They have spent, and one of my friends over there, Adolfo, has spent a lot of time trying to bring us in line with [SLSA](https://slsa.dev/) compliance. I believe we're [looking now at Level 3 compliance](https://github.com/kubernetes/enhancements/issues/3027).
 
@@ -251,7 +251,7 @@ With Kubernetes 1.24, we're enabling a beta feature that allows them to use gRPC
 
 **CRAIG BOX: Are there any other enhancements that are particularly notable or relevant perhaps to the work you've been doing?**
 
-JAMES LAVERACK: There's a really interesting one from SIG Network which is about [avoiding collisions in IP allocations to services](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#avoiding-collisions-in-ip-allocation-to-services). In existing versions of Kubernetes, you can allocate a service to have a particular internal cluster IP, or you can leave it blank and it will generate its own IP.
+JAMES LAVERACK: There's a really interesting one from SIG Network which is about [avoiding collisions in IP allocations to services](/blog/2022/05/03/kubernetes-1-24-release-announcement/#avoiding-collisions-in-ip-allocation-to-services). In existing versions of Kubernetes, you can allocate a service to have a particular internal cluster IP, or you can leave it blank and it will generate its own IP.
 
 In Kubernetes 1.24, there's an opt-in feature, which allows you to specify a pool for dynamic IPs to be generated from. This means that you can statically allocate an IP to a service and know that IP can not be accidentally dynamically allocated. This is a problem I've actually had in my local Kubernetes cluster, where I use static IP addresses for a bunch of port forwarding rules. I've always worried that during server start-up, they're going to get dynamically allocated to one of the other services. Now, with 1.24, and this feature, I won't have to worry about it more.
 
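The static allocation the interview describes looks like this in practice. A hedged sketch: the Service name, selector, and the `10.96.0.100` address are placeholders, and the address must fall inside your cluster's service CIDR.

```yaml
# Hypothetical Service pinning a specific cluster IP; with the 1.24 opt-in
# feature discussed above, dynamically assigned IPs are drawn from a separate
# sub-range, so a static address like this won't collide with them.
apiVersion: v1
kind: Service
metadata:
  name: pinned-ip-service
spec:
  clusterIP: 10.96.0.100   # statically requested; must be free and inside the service CIDR
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```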
@@ -267,7 +267,7 @@ JAMES LAVERACK: That is a very deep question I don't think we have time for.
 
 JAMES LAVERACK: [LAUGHING]
 
-**CRAIG BOX: [The theme for Kubernetes 1.24 is Stargazer](https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo). How did you pick that as the theme?**
+**CRAIG BOX: [The theme for Kubernetes 1.24 is Stargazer](/blog/2022/05/03/kubernetes-1-24-release-announcement/#release-theme-and-logo). How did you pick that as the theme?**
 
 JAMES LAVERACK: Every release lead gets to pick their theme, pretty much by themselves. When I started, I asked Rey, the previous release lead, how he picked his theme, because he picked the Next Frontier for Kubernetes 1.23. And he told me that he'd actually picked it before the release even started, which meant for the first couple of weeks and months of the release, I was really worried about it, because I hadn't picked one yet, and I wasn't sure what to pick.
 
@@ -18,7 +18,7 @@ In this SIG Storage spotlight, [Frederico Muñoz](https://twitter.com/fredericom
 
 **Frederico (FSM)**: Hello, thank you for the opportunity of learning more about SIG Storage. Could you tell us a bit about yourself, your role, and how you got involved in SIG Storage.
 
-**Xing Yang (XY)**: I am a Tech Lead at VMware, working on Cloud Native Storage. I am also a Co-Chair of SIG Storage. I started to get involved in K8s SIG Storage at the end of 2017, starting with contributing to the [VolumeSnapshot](https://kubernetes.io/docs/concepts/storage/volume-snapshots/) project. At that time, the VolumeSnapshot project was still in an experimental, pre-alpha stage. It needed contributors. So I volunteered to help. Then I worked with other community members to bring VolumeSnapshot to Alpha in K8s 1.12 release in 2018, Beta in K8s 1.17 in 2019, and eventually GA in 1.20 in 2020.
+**Xing Yang (XY)**: I am a Tech Lead at VMware, working on Cloud Native Storage. I am also a Co-Chair of SIG Storage. I started to get involved in K8s SIG Storage at the end of 2017, starting with contributing to the [VolumeSnapshot](/docs/concepts/storage/volume-snapshots/) project. At that time, the VolumeSnapshot project was still in an experimental, pre-alpha stage. It needed contributors. So I volunteered to help. Then I worked with other community members to bring VolumeSnapshot to Alpha in K8s 1.12 release in 2018, Beta in K8s 1.17 in 2019, and eventually GA in 1.20 in 2020.
 
 **FSM**: Reading the [SIG Storage charter](https://github.com/kubernetes/community/blob/master/sig-storage/charter.md) alone it’s clear that SIG Storage covers a lot of ground, could you describe how the SIG is organised?
 
@@ -34,7 +34,7 @@ We also have other regular meetings, i.e., CSI Implementation meeting, Object Bu
 
 **XY**: In Kubernetes, there are multiple components involved for a volume operation. For example, creating a Pod to use a PVC has multiple components involved. There are the Attach Detach Controller and the external-attacher working on attaching the PVC to the pod. There’s the Kubelet that works on mounting the PVC to the pod. Of course the CSI driver is involved as well. There could be race conditions sometimes when coordinating between multiple components.
 
-Another challenge is regarding core vs [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRD), not really storage specific. CRD is a great way to extend Kubernetes capabilities while not adding too much code to the Kubernetes core itself. However, this also means there are many external components that are needed when running a Kubernetes cluster.
+Another challenge is regarding core vs [Custom Resource Definitions](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CRD), not really storage specific. CRD is a great way to extend Kubernetes capabilities while not adding too much code to the Kubernetes core itself. However, this also means there are many external components that are needed when running a Kubernetes cluster.
 
 From the SIG Storage side, one most notable example is Volume Snapshot. Volume Snapshot APIs are defined as CRDs. API definitions and controllers are out-of-tree. There is a common snapshot controller and a snapshot validation webhook that should be deployed on the control plane, similar to how kube-controller-manager is deployed. Although Volume Snapshot is a CRD, it is a core feature of SIG Storage. It is recommended for the K8s cluster distros to deploy Volume Snapshot CRDs, the snapshot controller, and the snapshot validation webhook, however, most of the time we don’t see distros deploy them. So this becomes a problem for the storage vendors: now it becomes their responsibility to deploy these non-driver specific common components. This could cause conflicts if a customer wants to use more than one storage system and deploy more than one CSI driver.
 
@@ -37,7 +37,7 @@ PodSecurityPolicy was initially [deprecated in v1.21](/blog/2021/04/06/podsecuri
 
 ### Support for cgroups v2 Graduates to Stable
 
-It has been more than two years since the Linux kernel cgroups v2 API was declared stable. With some distributions now defaulting to this API, Kubernetes must support it to continue operating on those distributions. cgroups v2 offers several improvements over cgroups v1, for more information see the [cgroups v2](https://kubernetes.io/docs/concepts/architecture/cgroups/) documentation. While cgroups v1 will continue to be supported, this enhancement puts us in a position to be ready for its eventual deprecation and replacement.
+It has been more than two years since the Linux kernel cgroups v2 API was declared stable. With some distributions now defaulting to this API, Kubernetes must support it to continue operating on those distributions. cgroups v2 offers several improvements over cgroups v1, for more information see the [cgroups v2](/docs/concepts/architecture/cgroups/) documentation. While cgroups v1 will continue to be supported, this enhancement puts us in a position to be ready for its eventual deprecation and replacement.
 
 
 ### Improved Windows support
@@ -53,11 +53,11 @@ It has been more than two years since the Linux kernel cgroups v2 API was declar
 
 ### Promoted SeccompDefault to Beta
 
-SeccompDefault promoted to beta, see the tutorial [Restrict a Container's Syscalls with seccomp](https://kubernetes.io/docs/tutorials/security/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads) for more details.
+SeccompDefault promoted to beta, see the tutorial [Restrict a Container's Syscalls with seccomp](/docs/tutorials/security/seccomp/#enable-the-use-of-runtimedefault-as-the-default-seccomp-profile-for-all-workloads) for more details.
 
 ### Promoted endPort in Network Policy to Stable
 
-Promoted `endPort` in [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/#targeting-a-range-of-ports) to GA. Network Policy providers that support `endPort` field now can use it to specify a range of ports to apply a Network Policy. Previously, each Network Policy could only target a single port.
+Promoted `endPort` in [Network Policy](/docs/concepts/services-networking/network-policies/#targeting-a-range-of-ports) to GA. Network Policy providers that support `endPort` field now can use it to specify a range of ports to apply a Network Policy. Previously, each Network Policy could only target a single port.
 
 Please be aware that `endPort` field **must be supported** by the Network Policy provider. If your provider does not support `endPort`, and this field is specified in a Network Policy, the Network Policy will be created covering only the port field (single port).
 
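The `endPort` feature promoted to GA in the hunk above can be sketched as follows. The policy name, pod selector, CIDR, and port range here are illustrative, not taken from the release notes.

```yaml
# Sketch of a NetworkPolicy using the now-GA endPort field to target a
# contiguous port range instead of a single port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000      # start of the range
      endPort: 32768   # end of the range (inclusive)
```

As the note above warns, a provider that does not support `endPort` will apply this policy to port 32000 only.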
@@ -75,7 +75,7 @@ The [CSI Ephemeral Volume](https://github.com/kubernetes/enhancements/tree/maste
 
 ### Promoted CRD Validation Expression Language to Beta
 
-[CRD Validation Expression Language](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2876-crd-validation-expression-language/README.md) is promoted to beta, which makes it possible to declare how custom resources are validated using the [Common Expression Language (CEL)](https://github.com/google/cel-spec). Please see the [validation rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) guide.
+[CRD Validation Expression Language](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2876-crd-validation-expression-language/README.md) is promoted to beta, which makes it possible to declare how custom resources are validated using the [Common Expression Language (CEL)](https://github.com/google/cel-spec). Please see the [validation rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) guide.
 
 ### Promoted Server Side Unknown Field Validation to Beta
 
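As a rough illustration of the CEL validation rules mentioned above, here is a schema fragment one might embed in a CustomResourceDefinition. The field names (`replicas`, `minReplicas`) are invented for the example.

```yaml
# Hypothetical fragment of a CRD's openAPIV3Schema using an
# x-kubernetes-validations CEL rule to relate two fields.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
      - rule: "self.minReplicas <= self.replicas"
        message: "minReplicas cannot be larger than replicas"
      properties:
        replicas:
          type: integer
        minReplicas:
          type: integer
```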
@@ -83,7 +83,7 @@ Promoted the `ServerSideFieldValidation` feature gate to beta (on by default). T
 
 ### Introduced KMS v2 API
 
-Introduce KMS v2alpha1 API to add performance, rotation, and observability improvements. Encrypt data at rest (ie Kubernetes `Secrets`) with DEK using AES-GCM instead of AES-CBC for kms data encryption. No user action is required. Reads with AES-GCM and AES-CBC will continue to be allowed. See the guide [Using a KMS provider for data encryption](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/) for more information.
+Introduce KMS v2alpha1 API to add performance, rotation, and observability improvements. Encrypt data at rest (ie Kubernetes `Secrets`) with DEK using AES-GCM instead of AES-CBC for kms data encryption. No user action is required. Reads with AES-GCM and AES-CBC will continue to be allowed. See the guide [Using a KMS provider for data encryption](/docs/tasks/administer-cluster/kms-provider/) for more information.
 
 ### Kube-proxy images are now based on distroless images
 
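For orientation, an API server encryption configuration that opts into the KMS v2 API described above might look like the sketch below. The provider name and Unix socket path are placeholders for whatever your KMS plugin uses.

```yaml
# Hypothetical EncryptionConfiguration selecting a KMS v2 provider for Secrets.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      apiVersion: v2                              # opt into the v2 KMS API
      name: my-kms-plugin                         # placeholder plugin name
      endpoint: unix:///var/run/kms-plugin.sock   # placeholder socket path
  - identity: {}                                  # fallback for reading unencrypted data
```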
@@ -10,11 +10,11 @@ slug: pod-security-admission-stable
 The release of Kubernetes v1.25 marks a major milestone for Kubernetes out-of-the-box pod security
 controls: Pod Security admission (PSA) graduated to stable, and Pod Security Policy (PSP) has been
 removed.
-[PSP was deprecated in Kubernetes v1.21](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
+[PSP was deprecated in Kubernetes v1.21](/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/),
 and no longer functions in Kubernetes v1.25 and later.
 
 The Pod Security admission controller replaces PodSecurityPolicy, making it easier to enforce predefined
-[Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/) by
+[Pod Security Standards](/docs/concepts/security/pod-security-standards/) by
 simply adding a label to a namespace. The Pod Security Standards are maintained by the K8s
 community, which means you automatically get updated security policies whenever new
 security-impacting Kubernetes features are introduced.
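"Simply adding a label to a namespace", as the post above puts it, looks like this. The namespace name is a placeholder; the `pod-security.kubernetes.io/*` label keys are the ones Pod Security admission actually reads.

```yaml
# Namespace opting into the Restricted Pod Security Standard.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # placeholder namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted          # reject non-conforming pods
    pod-security.kubernetes.io/enforce-version: v1.25       # pin the policy version
    pod-security.kubernetes.io/warn: restricted             # also surface warnings
```

Pinning `enforce-version` keeps behavior stable across cluster upgrades, which matters given that the Standards themselves evolve (as the Pod OS changes later in this post show).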
@@ -56,7 +56,7 @@ Warning: myjob-g342hj (and 6 other pods): host namespaces, allowPrivilegeEscalat
 ```
 
 Additionally, when you apply a non-privileged label to a namespace that has been
-[configured to be exempt](https://kubernetes.io/docs/concepts/security/pod-security-admission/#exemptions),
+[configured to be exempt](/docs/concepts/security/pod-security-admission/#exemptions),
 you will now get a warning alerting you to this fact:
 
 ```
@@ -65,7 +65,7 @@ Warning: namespace 'kube-system' is exempt from Pod Security, and the policy (en
 
 ### Changes to the Pod Security Standards
 
-The [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/),
+The [Pod Security Standards](/docs/concepts/security/pod-security-standards/),
 which Pod Security admission enforces, have been updated with support for the new Pod OS
 field. In v1.25 and later, if you use the Restricted policy, the following Linux-specific restrictions will no
 longer be required if you explicitly set the pod's `.spec.os.name` field to `windows`:
@@ -76,14 +76,14 @@ longer be required if you explicitly set the pod's `.spec.os.name` field to `win
 
 In Kubernetes v1.23 and earlier, the kubelet didn't enforce the Pod OS field.
 If your cluster includes nodes running a v1.23 or older kubelet, you should explicitly
-[pin Restricted policies](https://kubernetes.io/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces)
+[pin Restricted policies](/docs/concepts/security/pod-security-admission/#pod-security-admission-labels-for-namespaces)
 to a version prior to v1.25.
 
 ## Migrating from PodSecurityPolicy to the Pod Security admission controller
 
 For instructions to migrate from PodSecurityPolicy to the Pod Security admission controller, and
 for help choosing a migration strategy, refer to the
-[migration guide](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/).
+[migration guide](/docs/tasks/configure-pod-container/migrate-from-psp/).
 We're also developing a tool called
 [pspmigrator](https://github.com/kubernetes-sigs/pspmigrator) to automate parts
 of the migration process.
@@ -13,7 +13,7 @@ CSI Inline Volumes are similar to other ephemeral volume types, such as `configM
 
 ## What's new in 1.25?
 
-There are a couple of new bug fixes related to this feature in 1.25, and the [CSIInlineVolume feature gate](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/) has been locked to `True` with the graduation to GA. There are no new API changes, so users of this feature during beta should not notice any significant changes aside from these bug fixes.
+There are a couple of new bug fixes related to this feature in 1.25, and the [CSIInlineVolume feature gate](/docs/reference/command-line-tools-reference/feature-gates/) has been locked to `True` with the graduation to GA. There are no new API changes, so users of this feature during beta should not notice any significant changes aside from these bug fixes.
 
 - [#89290 - CSI inline volumes should support fsGroup](https://github.com/kubernetes/kubernetes/issues/89290)
 - [#79980 - CSI volume reconstruction does not work for ephemeral volumes](https://github.com/kubernetes/kubernetes/issues/79980)
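A CSI inline (ephemeral) volume of the kind this post covers is declared directly in the Pod spec rather than through a PersistentVolumeClaim. A minimal sketch, assuming a driver named `inline.storage.kubernetes.io` and made-up `volumeAttributes`; substitute whatever your CSI driver expects.

```yaml
# Hypothetical Pod using a CSI ephemeral inline volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: my-csi-inline-vol
      mountPath: /data
  volumes:
  - name: my-csi-inline-vol
    csi:                                    # inline: no PVC involved
      driver: inline.storage.kubernetes.io  # placeholder driver name
      volumeAttributes:
        foo: bar                            # driver-specific, illustrative only
```

The volume shares the Pod's lifecycle: it is created with the Pod and torn down when the Pod is deleted.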
@@ -95,8 +95,8 @@ Cluster administrators may choose to omit (or remove) `Ephemeral` from `volumeLi
 
 For more information on this feature, see:
 
-- [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes)
+- [Kubernetes documentation](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes)
 - [CSI documentation](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
 - [KEP-596](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/596-csi-inline-volumes/README.md)
-- [Beta blog post for CSI Inline Volumes](https://kubernetes.io/blog/2020/01/21/csi-ephemeral-inline-volumes/)
+- [Beta blog post for CSI Inline Volumes](/blog/2020/01/21/csi-ephemeral-inline-volumes/)
 
@@ -118,8 +118,8 @@ Scenarios in which you might need to update to cgroup v2 include the following:
 DaemonSet for monitoring pods and containers, update it to v0.43.0 or later.
 * If you deploy Java applications, prefer to use versions which fully support cgroup v2:
   * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later
-  * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later
-  * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later
+  * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later
+  * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later
 
 ## Learn more
@@ -23,6 +23,13 @@ per container characteristics like image size or payload) can utilize the
 the `PodHasNetwork` condition to optimize the set of actions performed when pods
 repeatedly fail to come up.
 
+### Updates for Kubernetes 1.28
+
+The `PodHasNetwork` condition has been renamed to `PodReadyToStartContainers`.
+Alongside that change, the feature gate `PodHasNetworkCondition` has been replaced by
+`PodReadyToStartContainersCondition`. You need to set `PodReadyToStartContainersCondition`
+to true in order to use the new feature in v1.28.0 and later.
+
 ### How is this different from the existing Initialized condition reported for pods?
 
 The kubelet sets the status of the existing `Initialized` condition reported in
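Enabling the renamed feature gate mentioned in the 1.28 update above could be done through the kubelet configuration file, for example as sketched here. Treat this as an illustration of the mechanism rather than a complete kubelet config.

```yaml
# Hypothetical KubeletConfiguration fragment enabling the renamed gate in v1.28.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodReadyToStartContainersCondition: true
```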
@@ -75,12 +75,10 @@ the CSI provisioner receives the credentials from the Secret as part of the Node
 CSI volumes that require secrets for online expansion will have NodeExpandSecretRef
 field set. If not set, the NodeExpandVolume CSI RPC call will be made without a secret.
 
-
-
 ## Trying it out
 
 1. Enable the `CSINodeExpandSecret` feature gate (please refer to
-   [Feature Gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)).
+   [Feature Gates](/docs/reference/command-line-tools-reference/feature-gates/)).
 
 1. Create a Secret, and then a StorageClass that uses that Secret.
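The "Create a Secret, and then a StorageClass" step above might look like the following. This is a hedged sketch: the Secret contents and the `mock.csi.k8s.io` driver name are placeholders, and the `csi.storage.k8s.io/node-expand-secret-*` parameter keys are the ones this feature is assumed to read — check your CSI driver's documentation for its exact requirements.

```yaml
# Placeholder credentials the CSI driver will receive during NodeExpandVolume.
apiVersion: v1
kind: Secret
metadata:
  name: expand-secret
  namespace: default
stringData:
  username: admin        # placeholder
  password: t0p-Secret   # placeholder
---
# StorageClass wiring the Secret above into node expansion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-node-expand-sc
provisioner: mock.csi.k8s.io   # placeholder driver name
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/node-expand-secret-name: expand-secret
  csi.storage.k8s.io/node-expand-secret-namespace: default
```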