Merge pull request #27870 from PI-Victor/merged-master-dev-1.22

Merge master into dev-1.22
pull/27988/head
Kubernetes Prow Robot 2021-05-11 16:51:39 -07:00 committed by GitHub
commit 347388843f
691 changed files with 19695 additions and 14876 deletions

@ -68,7 +68,7 @@ container-build: module-check
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 $(CONTAINER_IMAGE) sh -c "npm ci && hugo --minify"
container-serve: module-check ## Boot the development server using container. Run `make container-image` before this.
$(CONTAINER_RUN) --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
$(CONTAINER_RUN) --cap-drop=ALL --cap-add=AUDIT_WRITE --read-only --mount type=tmpfs,destination=/tmp,tmpfs-mode=01777 -p 1313:1313 $(CONTAINER_IMAGE) hugo server --buildFuture --bind 0.0.0.0 --destination /tmp/hugo --cleanDestinationDir
test-examples:
scripts/test_examples.sh install
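
For orientation, the local-preview workflow these Makefile targets support looks roughly like this (a sketch assuming Docker or another container runtime is installed and the repository is already cloned):

```bash
# Build the local preview image first (prerequisite noted in the container-serve help text)
make container-image

# Serve the site from the container; port 1313 is published to the host
make container-serve

# Preview at http://localhost:1313
```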
@ -91,4 +91,4 @@ clean-api-reference: ## Clean all directories in API reference directory, preser
api-reference: clean-api-reference ## Build the API reference pages. go needed
cd api-ref-generator/gen-resourcesdocs && \
go run cmd/main.go kwebsite --config-dir config/v1.20/ --file api/v1.20/swagger.json --output-dir ../../content/en/docs/reference/kubernetes-api --templates templates
go run cmd/main.go kwebsite --config-dir config/v1.21/ --file api/v1.21/swagger.json --output-dir ../../content/en/docs/reference/kubernetes-api --templates templates
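
After this version bump, regenerating the reference pages is a matter of invoking the target shown above (a sketch; it assumes Go and the api-ref-generator checkout are available locally, as the Makefile help text notes):

```bash
# Rebuilds content/en/docs/reference/kubernetes-api from the v1.21 swagger.json
make api-reference
```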

OWNERS

@ -11,7 +11,7 @@ emeritus_approvers:
# - jaredbhatti, commented out to disable PR assignments
# - steveperry-53, commented out to disable PR assignments
- stewart-yu
- zacharysarah
# - zacharysarah, commented out to disable PR assignments
labels:
- sig/docs

@ -3,7 +3,6 @@ aliases:
- castrojo
- kbarnard10
- onlydole
- zacharysarah
- mrbobbytables
sig-docs-blog-reviewers: # Reviewers for blog content
- castrojo
@ -33,7 +32,6 @@ aliases:
- sftim
- steveperry-53
- tengqm
- zacharysarah
- zparnold
sig-docs-en-reviews: # PR reviews for English content
- bradtopol
@ -50,11 +48,9 @@ aliases:
- zparnold
sig-docs-es-owners: # Admins for Spanish content
- raelga
- alexbrand
- electrocucaracha
sig-docs-es-reviews: # PR reviews for Spanish content
- raelga
- alexbrand
# glo-pena
- electrocucaracha
sig-docs-fr-owners: # Admins for French content
- remyleone

@ -9,7 +9,7 @@ Herzlich willkommen! Dieses Repository enthält alle Assets, die zur Erstellung
Sie können auf die Schaltfläche **Fork** im oberen rechten Bereich des Bildschirms klicken, um eine Kopie dieses Repositorys in Ihrem GitHub-Konto zu erstellen. Diese Kopie wird als *Fork* bezeichnet. Nehmen Sie die gewünschten Änderungen an Ihrem Fork vor. Wenn Sie bereit sind, diese Änderungen an uns zu senden, gehen Sie zu Ihrem Fork und erstellen Sie eine neue Pull-Anforderung, um uns darüber zu informieren.
Sobald Ihre Pull-Anfrage erstellt wurde, übernimmt ein Rezensent von Kubernetes die Verantwortung für klares, umsetzbares Feedback. Als Eigentümer des Pull-Request **liegt es in Ihrer Verantwortung Ihren Pull-Reqest enstsprechend des Feedbacks, dass Sie vom Kubernetes-Reviewer erhalten haben abzuändern.** Beachten Sie auch, dass Sie am Ende mehr als einen Rezensenten von Kubernetes erhalten, der Ihnen Feedback gibt, oder dass Sie Rückmeldungen von einem Rezensenten von Kubernetes erhalten, der sich von demjenigen unterscheidet, der ursprünglich für das Feedback zugewiesen wurde. In einigen Fällen kann es vorkommen, dass einer Ihrer Prüfer bei Bedarf eine technische Überprüfung von einem [Kubernetes Tech-Reviewer](https://github.com/kubernetes/website/wiki/tech-reviewers) anfordert. Reviewer geben ihr Bestes, um zeitnah Feedback zu geben, die Antwortzeiten können jedoch je nach den Umständen variieren.
Sobald Ihre Pull-Anfrage erstellt wurde, übernimmt ein Rezensent von Kubernetes die Verantwortung für klares, umsetzbares Feedback. Als Eigentümer des Pull-Request **liegt es in Ihrer Verantwortung Ihren Pull-Reqest entsprechend des Feedbacks, dass Sie vom Kubernetes-Reviewer erhalten haben abzuändern.** Beachten Sie auch, dass Sie am Ende mehr als einen Rezensenten von Kubernetes erhalten, der Ihnen Feedback gibt, oder dass Sie Rückmeldungen von einem Rezensenten von Kubernetes erhalten, der sich von demjenigen unterscheidet, der ursprünglich für das Feedback zugewiesen wurde. In einigen Fällen kann es vorkommen, dass einer Ihrer Prüfer bei Bedarf eine technische Überprüfung von einem [Kubernetes Tech-Reviewer](https://github.com/kubernetes/website/wiki/tech-reviewers) anfordert. Reviewer geben ihr Bestes, um zeitnah Feedback zu geben, die Antwortzeiten können jedoch je nach den Umständen variieren.
Weitere Informationen zum Beitrag zur Kubernetes-Dokumentation finden Sie unter:

@ -1,60 +1,45 @@
# Dokumentacja projektu Kubernetes
[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
Witamy!
[![Netlify Status](https://api.netlify.com/api/v1/badges/be93b718-a6df-402a-b4a4-855ba186c97d/deploy-status)](https://app.netlify.com/sites/kubernetes-io-master-staging/deploys) [![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
W tym repozytorium znajdziesz wszystko, czego potrzebujesz do zbudowania [strony internetowej Kubernetesa wraz z dokumentacją](https://kubernetes.io/). Bardzo nam miło, że chcesz wziąć udział w jej współtworzeniu!
## Twój wkład w dokumentację
+ [Twój wkład w dokumentację](#twój-wkład-w-dokumentację)
+ [Informacje o wersjach językowych](#informacje-o-wersjach-językowych)
Możesz kliknąć w przycisk **Fork** w prawym górnym rogu ekranu, aby stworzyć kopię tego repozytorium na swoim koncie GitHub. Taki rodzaj kopii (odgałęzienia) nazywa się *fork*. Zmieniaj w nim, co chcesz, a kiedy będziesz już gotowy/a przesłać te zmiany do nas, przejdź do swojej kopii i stwórz nowy *pull request*, abyśmy zostali o tym poinformowani.
# Jak używać tego repozytorium
Po stworzeniu *pull request*, jeden z recenzentów projektu Kubernetes podejmie się przekazania jasnych wskazówek pozwalających podjąć następne działania. Na Tobie, jako właścicielu *pull requesta*, **spoczywa odpowiedzialność za wprowadzenie poprawek zgodnie z uwagami recenzenta.** Może też się zdarzyć, że swoje uwagi zgłosi więcej niż jeden recenzent, lub że recenzję będzie robił ktoś inny, niż ten, kto został przydzielony na początku. W niektórych przypadkach, jeśli zajdzie taka potrzeba, recenzent może poprosić dodatkowo o recenzję jednego z [recenzentów technicznych](https://github.com/kubernetes/website/wiki/Tech-reviewers). Recenzenci zrobią wszystko, aby odpowiedzieć sprawnie, ale konkretny czas odpowiedzi zależy od wielu czynników.
Możesz uruchomić serwis lokalnie poprzez Hugo (Extended version) lub ze środowiska kontenerowego. Zdecydowanie zalecamy korzystanie z kontenerów, bo dzięki temu lokalna wersja będzie spójna z tym, co jest na oficjalnej stronie.
Więcej informacji na temat współpracy przy tworzeniu dokumentacji znajdziesz na stronach:
## Wymagania wstępne
* [Jak rozpocząć współpracę](https://kubernetes.io/docs/contribute/start/)
* [Podgląd wprowadzanych zmian w dokumentacji](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
* [Szablony stron](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Styl pisania dokumentacji](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Lokalizacja dokumentacji Kubernetes](https://kubernetes.io/docs/contribute/localization/)
Aby móc skorzystać z tego repozytorium, musisz lokalnie zainstalować:
## Różne wersje językowe `README.md`
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo (Extended version)](https://gohugo.io/)
- Środowisko obsługi kontenerów, np. [Docker-a](https://www.docker.com/).
| | |
|----------------------------------------|----------------------------------------|
| [README po angielsku](README.md) | [README po francusku](README-fr.md) |
| [README po koreańsku](README-ko.md) | [README po niemiecku](README-de.md) |
| [README po portugalsku](README-pt.md) | [README w hindi](README-hi.md) |
| [README po hiszpańsku](README-es.md) | [README po indonezyjsku](README-id.md) |
| [README po chińsku](README-zh.md) | [README po japońsku](README-ja.md) |
| [README po wietnamsku](README-vi.md) | [README po rosyjsku](README-ru.md) |
| [README po włosku](README-it.md) | [README po ukraińsku](README-uk.md) |
| | |
Przed rozpoczęciem zainstaluj niezbędne zależności. Sklonuj repozytorium i przejdź do odpowiedniego katalogu:
## Jak uruchomić lokalną kopię strony przy pomocy Dockera?
Zalecaną metodą uruchomienia serwisu internetowego Kubernetesa lokalnie jest użycie specjalnego obrazu [Dockera](https://docker.com), który zawiera generator stron statycznych [Hugo](https://gohugo.io).
> Użytkownicy Windows będą potrzebowali dodatkowych narzędzi, które mogą zainstalować przy pomocy [Chocolatey](https://chocolatey.org).
```bash
choco install make
```
git clone https://github.com/kubernetes/website.git
cd website
```
> Jeśli wolisz uruchomić serwis lokalnie bez Dockera, przeczytaj [jak uruchomić serwis lokalnie przy pomocy Hugo](#jak-uruchomić-lokalną-kopię-strony-przy-pomocy-hugo) poniżej.
Strona Kubernetesa używa [Docsy Hugo theme](https://github.com/google/docsy#readme). Nawet jeśli planujesz uruchomić serwis w środowisku kontenerowym, zalecamy pobranie podmodułów i innych zależności za pomocą polecenia:
Jeśli [zainstalowałeś i uruchomiłeś](https://www.docker.com/get-started) już Dockera, zbuduj obraz `kubernetes-hugo` lokalnie:
```
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
```bash
## Uruchomienie serwisu w kontenerze
Aby zbudować i uruchomić serwis wewnątrz środowiska kontenerowego, wykonaj następujące polecenia:
```
make container-image
```
Po zbudowaniu obrazu, możesz uruchomić serwis lokalnie:
```bash
make container-serve
```
@ -62,29 +47,106 @@ Aby obejrzeć zawartość serwisu otwórz w przeglądarce adres http://localhost
## Jak uruchomić lokalną kopię strony przy pomocy Hugo?
Zajrzyj do [oficjalnej dokumentacji Hugo](https://gohugo.io/getting-started/installing/) po instrukcję instalacji. Upewnij się, że instalujesz rozszerzoną wersję Hugo, określoną przez zmienną środowiskową `HUGO_VERSION` w pliku [`netlify.toml`](netlify.toml#L9).
Upewnij się, że zainstalowałeś odpowiednią wersję Hugo "extended", określoną przez zmienną środowiskową `HUGO_VERSION` w pliku [`netlify.toml`](netlify.toml#L10).
Aby uruchomić serwis lokalnie po instalacji Hugo, napisz:
Aby uruchomić i przetestować serwis lokalnie, wykonaj:
```bash
# install dependencies
npm ci
make serve
```
Zostanie uruchomiony lokalny serwer Hugo na porcie 1313. Otwórz w przeglądarce adres http://localhost:1313, aby obejrzeć zawartość serwisu. Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
## Społeczność, listy dyskusyjne, uczestnictwo i wsparcie
## Budowanie dokumentacji źródłowej API
Zajrzyj na stronę [społeczności](http://kubernetes.io/community/), aby dowiedzieć się, jak możesz zaangażować się w jej działania.
Budowanie dokumentacji źródłowej API zostało opisane w [angielskiej wersji pliku README.md](README.md#building-the-api-reference-pages).
## Rozwiązywanie problemów
### error: failed to transform resource: TOCSS: failed to transform "scss/main.scss" (text/x-scss): this feature is not available in your current Hugo version
Z przyczyn technicznych, Hugo jest rozprowadzany w dwóch wersjach. Aktualny serwis używa tylko wersji **Hugo Extended**. Na stronie z [wydaniami](https://github.com/gohugoio/hugo/releases) poszukaj archiwum z `extended` w nazwie. Dla potwierdzenia, uruchom `hugo version` i poszukaj słowa `extended`.
### Błąd w środowisku macOS: "too many open files"
Jeśli po uruchomieniu `make serve` na macOS widzisz następujący błąd:
```
ERROR 2020/08/01 19:09:18 Error: listen tcp 127.0.0.1:1313: socket: too many open files
make: *** [serve] Error 1
```
sprawdź aktualny limit otwartych plików:
`launchctl limit maxfiles`
Uruchom następujące polecenia: (na podstawie https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c):
```shell
#!/bin/sh
# These are the original gist links, linking to my gists now.
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxfiles.plist
# curl -O https://gist.githubusercontent.com/a2ikm/761c2ab02b7b3935679e55af5d81786a/raw/ab644cb92f216c019a2f032bbf25e258b01d87f9/limit.maxproc.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxfiles.plist
curl -O https://gist.githubusercontent.com/tombigel/d503800a282fcadbee14b537735d202c/raw/ed73cacf82906fdde59976a0c8248cce8b44f906/limit.maxproc.plist
sudo mv limit.maxfiles.plist /Library/LaunchDaemons
sudo mv limit.maxproc.plist /Library/LaunchDaemons
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
Przedstawiony sposób powinien działać dla MacOS w wersji Catalina i Mojave.
# Zaangażowanie w prace SIG Docs
O społeczności SIG Docs i terminach spotkań dowiesz się z [jej strony](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
Możesz kontaktować się z gospodarzami projektu za pomocą:
* [Komunikatora Slack](https://kubernetes.slack.com/messages/sig-docs)
* [List dyskusyjnych](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
- [Komunikatora Slack](https://kubernetes.slack.com/messages/sig-docs) [Tutaj możesz dostać zaproszenie do tej grupy Slack-a](https://slack.k8s.io/)
- [List dyskusyjnych](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
### Zasady postępowania
# Twój wkład w dokumentację
Udział w działaniach społeczności Kubernetes jest regulowany przez [Kodeks postępowania](code-of-conduct.md).
Możesz kliknąć w przycisk **Fork** w prawym górnym rogu ekranu, aby stworzyć kopię tego repozytorium na swoim koncie GitHub. Taki rodzaj kopii (odgałęzienia) nazywa się *fork*. Zmieniaj w nim, co chcesz, a kiedy będziesz już gotowy/a przesłać te zmiany do nas, przejdź do swojej kopii i stwórz nowy *pull request*, abyśmy zostali o tym poinformowani.
## Dziękujemy!
Po stworzeniu *pull request*, jeden z recenzentów projektu Kubernetes podejmie się przekazania jasnych wskazówek pozwalających podjąć następne działania. Na Tobie, jako właścicielu *pull requesta*, **spoczywa odpowiedzialność za wprowadzenie poprawek zgodnie z uwagami recenzenta.**
Może też się zdarzyć, że swoje uwagi zgłosi więcej niż jeden recenzent, lub że recenzję będzie robił ktoś inny, niż ten, kto został przydzielony na początku.
W niektórych przypadkach, jeśli zajdzie taka potrzeba, recenzent może poprosić dodatkowo o recenzję jednego z [recenzentów technicznych](https://github.com/kubernetes/website/wiki/Tech-reviewers). Recenzenci zrobią wszystko, aby odpowiedzieć sprawnie, ale konkretny czas odpowiedzi zależy od wielu czynników.
Więcej informacji na temat współpracy przy tworzeniu dokumentacji znajdziesz na stronach:
* [Udział w rozwijaniu dokumentacji](https://kubernetes.io/docs/contribute/)
* [Rodzaje stron](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Styl pisania dokumentacji](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Lokalizacja dokumentacji Kubernetes](https://kubernetes.io/docs/contribute/localization/)
# Różne wersje językowe `README.md`
| Język | Język |
|---|---|
| [angielski](README.md) | [francuski](README-fr.md) |
| [koreański](README-ko.md) | [niemiecki](README-de.md) |
| [portugalski](README-pt.md) | [hindi](README-hi.md) |
| [hiszpański](README-es.md) | [indonezyjski](README-id.md) |
| [chiński](README-zh.md) | [japoński](README-ja.md) |
| [wietnamski](README-vi.md) | [rosyjski](README-ru.md) |
| [włoski](README-it.md) | [ukraiński](README-uk.md) |
# Zasady postępowania
Udział w działaniach społeczności Kubernetesa jest regulowany przez [Kodeks postępowania CNCF](https://github.com/cncf/foundation/blob/master/code-of-conduct-languages/pl.md).
# Dziękujemy!
Kubernetes rozkwita dzięki zaangażowaniu społeczności — doceniamy twój wkład w tworzenie naszego serwisu i dokumentacji!

@ -43,6 +43,8 @@ make container-image
make container-serve
```
If you see errors, it probably means that the hugo container did not have enough computing resources available. To solve it, increase the amount of allowed CPU and memory usage for Docker on your machine ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) and [Windows](https://docs.docker.com/docker-for-windows/#resources)).
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo

@ -1 +1 @@
Subproject commit ce97454e557b2b164f77326cb06ef619ab623599
Subproject commit 78e64febda1b53cafc79979c5978b42162cea276
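
Contributors pulling this change pick up the new theme commit with the usual submodule refresh, the same command the READMEs in this PR recommend:

```bash
# Fetch the Docsy submodule at the newly pinned commit
git submodule update --init --recursive --depth 1
```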

@ -196,7 +196,7 @@ fullversion = "v1.20.7"
version = "v1.20"
githubbranch = "v1.20.7"
docsbranch = "release-1.20"
url = "https://v1-20.kubernetes.io"
url = "https://v1-20.docs.kubernetes.io"
[[params.versions]]
fullversion = "v1.19.11"
@ -391,15 +391,15 @@ time_format_blog = "02.01.2006"
# A list of language codes to look for untranslated content, ordered from left to right.
language_alternatives = ["en"]
[languages.pt]
[languages.pt-br]
title = "Kubernetes"
description = "Orquestração de contêineres em nível de produção"
languageName ="Português"
weight = 9
contentDir = "content/pt"
contentDir = "content/pt-br"
languagedirection = "ltr"
[languages.pt.params]
[languages.pt-br.params]
time_format_blog = "02.01.2006"
# A list of language codes to look for untranslated content, ordered from left to right.
language_alternatives = ["en"]

@ -147,7 +147,8 @@ Die zweite ist, die interne Node-Liste des Node Controllers mit der Liste der ve
Wenn ein Node in einer Cloud-Umgebung ausgeführt wird und sich in einem schlechten Zustand befindet, fragt der Node Controller den Cloud-Anbieter, ob die virtuelle Maschine für diesen Node noch verfügbar ist. Wenn nicht, löscht der Node Controller den Node aus seiner Node-Liste.
Der dritte ist die Überwachung des Zustands der Nodes. Der Node Controller ist dafür verantwortlich,
die NodeReady-Bedingung von NodeStatus auf ConditionUnknown zu aktualisieren, wenn ein wenn ein Node unerreichbar wird (der Node Controller empfängt aus irgendeinem Grund keine Herzschläge mehr, z.B. weil der Node heruntergefahren ist) und später alle Pods aus dem Node zu entfernen (und diese ordnungsgemäss zu beenden), wenn der Node weiterhin unzugänglich ist. (Die Standard-Timeouts sind 40s, um ConditionUnknown zu melden und 5 Minuten, um mit der Evakuierung der Pods zu beginnen).
die NodeReady-Bedingung von NodeStatus auf ConditionUnknown zu aktualisieren, wenn ein Node unerreichbar wird (der Node Controller empfängt aus irgendeinem Grund keine Herzschläge mehr, z.B. weil der Node heruntergefahren ist) und später alle Pods aus dem Node zu entfernen (und diese ordnungsgemäss zu beenden), wenn der Node weiterhin unzugänglich ist. (Die Standard-Timeouts sind 40s, um ConditionUnknown zu melden und 5 Minuten, um mit der Evakuierung der Pods zu beginnen).
Der Node Controller überprüft den Zustand jedes Nodes alle `--node-monitor-period` Sekunden.

@ -0,0 +1,369 @@
---
title: Pods
content_type: concept
weight: 10
no_list: true
card:
name: concepts
weight: 60
---
<!-- overview -->
_Pods_ sind die kleinsten einsetzbaren Einheiten, die in Kubernetes
erstellt und verwaltet werden können.
Ein _Pod_ (übersetzt Gruppe/Schote, wie z. B. eine Gruppe von Walen oder eine
Erbsenschote) ist eine Gruppe von einem oder mehreren
{{< glossary_tooltip text="Containern" term_id="container" >}} mit gemeinsam
genutzten Speicher- und Netzwerkressourcen und einer Spezifikation für die
Ausführung der Container. Die Ressourcen eines Pods befinden sich immer auf dem
gleichen (virtuellen) Server, werden gemeinsam geplant und in einem
gemeinsamen Kontext ausgeführt. Ein Pod modelliert einen anwendungsspezifischen
"logischen Server": Er enthält eine oder mehrere containerisierte Anwendungen,
die relativ stark voneinander abhängen.
In Nicht-Cloud-Kontexten sind Anwendungen, die auf
demselben physischen oder virtuellen Server ausgeführt werden, vergleichbar zu
Cloud-Anwendungen, die auf demselben logischen Server ausgeführt werden.
Ein Pod kann neben Anwendungs-Containern auch sogenannte
[Initialisierungs-Container](/docs/concepts/workloads/pods/init-containers/)
enthalten, die beim Starten des Pods ausgeführt werden.
Es können auch
kurzlebige/[ephemere Container](/docs/concepts/workloads/pods/ephemeral-containers/)
zum Debuggen gestartet werden, wenn dies der Cluster anbietet.
<!-- body -->
## Was ist ein Pod?
{{< note >}}
Obwohl Kubernetes abgesehen von [Docker](https://www.docker.com/) auch andere
{{<glossary_tooltip text="Container-Laufzeitumgebungen"
term_id="container-runtime">}} unterstützt, ist Docker am bekanntesten und
es ist hilfreich, Pods mit der Terminologie von Docker zu beschreiben.
{{< /note >}}
Der gemeinsame Kontext eines Pods besteht aus einer Reihe von Linux-Namespaces,
Cgroups und möglicherweise anderen Aspekten der Isolation, also die gleichen
Dinge, die einen Dockercontainer isolieren. Innerhalb des Kontexts eines Pods
können die einzelnen Anwendungen weitere Unterisolierungen haben.
Im Sinne von Docker-Konzepten ähnelt ein Pod einer Gruppe von Docker-Containern,
die gemeinsame Namespaces und Dateisystem-Volumes nutzen.
## Pods verwenden
Normalerweise müssen keine Pods erzeugt werden, auch keine Singleton-Pods.
Stattdessen werden sie mit Workload-Ressourcen wie {{<glossary_tooltip
text="Deployment" term_id="deployment">}} oder {{<glossary_tooltip
text="Job" term_id="job">}} erzeugt. Für Pods, die von einem Systemzustand
abhängen, ist die Nutzung von {{<glossary_tooltip text="StatefulSet"
term_id="statefulset">}}-Ressourcen zu erwägen.
Pods in einem Kubernetes-Cluster werden hauptsächlich auf zwei Arten verwendet:
* **Pods, die einen einzelnen Container ausführen**. Das
"Ein-Container-per-Pod"-Modell ist der häufigste Kubernetes-Anwendungsfall. In
diesem Fall kannst du dir einen Pod als einen Behälter vorstellen, der einen
einzelnen Container enthält; Kubernetes verwaltet die Pods anstatt die
Container direkt zu verwalten.
* **Pods, in denen mehrere Container ausgeführt werden, die zusammenarbeiten
müssen**. Wenn eine Softwareanwendung aus co-lokaliserten Containern besteht,
die sich gemeinsame Ressourcen teilen und stark voneinander abhängen, kann ein
Pod die Container verkapseln.
Diese Container bilden eine einzelne zusammenhängende
Serviceeinheit, z. B. ein Container, der Daten in einem gemeinsam genutzten
Volume öffentlich verfügbar macht, während ein separater _Sidecar_-Container
die Daten aktualisiert. Der Pod fasst die Container, die Speicherressourcen
und eine kurzlebige Netzwerk-Identität als eine Einheit zusammen.
{{< note >}}
Das Gruppieren mehrerer gemeinsam lokalisierter und gemeinsam verwalteter
Container in einem einzigen Pod ist ein relativ fortgeschrittener
Anwendungsfall. Du solltest diese Architektur nur in bestimmten Fällen
verwenden, wenn deine Container stark voneinander abhängen.
{{< /note >}}
Jeder Pod sollte eine einzelne Instanz einer gegebenen Anwendung ausführen. Wenn
du deine Anwendung horizontal skalieren willst (um mehr Instanzen auszuführen
und dadurch mehr Gesamtressourcen bereitstellen), solltest du mehrere Pods
verwenden, einen für jede Instanz.
In Kubernetes wird dies typischerweise als Replikation bezeichnet.
Replizierte Pods werden normalerweise als eine Gruppe durch eine
Workload-Ressource und deren
{{<glossary_tooltip text="Controller" term_id="controller">}} erstellt
und verwaltet.
Der Abschnitt [Pods und Controller](#pods-und-controller) beschreibt, wie
Kubernetes Workload-Ressourcen und deren Controller verwendet, um Anwendungen
zu skalieren und zu heilen.
### Wie Pods mehrere Container verwalten
Pods unterstützen mehrere kooperierende Prozesse (als Container), die eine
zusammenhängende Serviceeinheit bilden. Kubernetes plant und stellt automatisch
sicher, dass sich die Container in einem Pod auf demselben physischen oder
virtuellen Server im Cluster befinden. Die Container können Ressourcen und
Abhängigkeiten gemeinsam nutzen, miteinander kommunizieren und
ferner koordinieren wann und wie sie beendet werden.
Zum Beispiel könntest du einen Container haben, der als Webserver für Dateien in
einem gemeinsamen Volume arbeitet. Und ein separater "Sidecar" -Container
aktualisiert die Daten von einer externen Datenquelle, siehe folgenden
Abbildung:
{{< figure src="/images/docs/pod.svg" alt="Pod-Beispieldiagramm" width="50%" >}}
Einige Pods haben sowohl {{<glossary_tooltip text="Initialisierungs-Container"
term_id="init-container">}} als auch {{<glossary_tooltip
text="Anwendungs-Container" term_id="app-container">}}.
Initialisierungs-Container werden gestartet und beendet bevor die
Anwendungs-Container gestartet werden.
Pods stellen standardmäßig zwei Arten von gemeinsamen Ressourcen für die
enthaltenen Container bereit:
[Netzwerk](#pod-netzwerk) und [Speicher](#datenspeicherung-in-pods).
## Mit Pods arbeiten
Du wirst selten einzelne Pods direkt in Kubernetes erstellen, selbst
Singleton-Pods. Das liegt daran, dass Pods als relativ kurzlebige
Einweg-Einheiten konzipiert sind. Wenn ein Pod erstellt wird (entweder direkt
von Ihnen oder indirekt von einem
{{<glossary_tooltip text="Controller" term_id="controller">}}), wird die
Ausführung auf einem {{<glossary_tooltip term_id="node">}} in Ihrem Cluster
geplant. Der Pod bleibt auf diesem (virtuellen) Server, bis entweder der Pod die
Ausführung beendet hat, das Pod-Objekt gelöscht wird, der Pod aufgrund
mangelnder Ressourcen *evakuiert* wird oder der Node ausfällt.
{{< note >}}
Das Neustarten eines Containers in einem Pod sollte nicht mit dem Neustarten
eines Pods verwechselt werden. Ein Pod ist kein Prozess, sondern eine Umgebung
zur Ausführung von Containern. Ein Pod bleibt bestehen bis er gelöscht wird.
{{< /note >}}
Stelle beim Erstellen des Manifests für ein Pod-Objekt sicher, dass der
angegebene Name ein gültiger
[DNS-Subdomain-Name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
ist.
### Pods und Controller
Mit Workload-Ressourcen kannst du mehrere Pods erstellen und verwalten. Ein
Controller für die Ressource kümmert sich um Replikation, Roll-Out sowie
automatische Wiederherstellung im Fall von versagenden Pods. Wenn beispielsweise ein Node
ausfällt, bemerkt ein Controller, dass die Pods auf dem Node nicht mehr laufen
und plant die Ausführung eines Ersatzpods auf einem funktionierenden Node.
Hier sind einige Beispiele für Workload-Ressourcen, die einen oder mehrere Pods
verwalten:
* {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
* {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}
* {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}
### Pod Vorlagen
Controller für
{{<glossary_tooltip text="Workload" term_id="workload">}}-Ressourcen
erstellen Pods von einer _Pod Vorlage_ und verwalten diese Pods für dich.
Pod Vorlagen sind Spezifikationen zum Erstellen von Pods und sind in
Workload-Ressourcen enthalten wie z. B.
[Deployments](/docs/concepts/workloads/controllers/deployment/),
[Jobs](/docs/concepts/workloads/controllers/job/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/).
Jeder Controller für eine Workload-Ressource verwendet die Pod Vorlage innerhalb
des Workload-Objektes, um Pods zu erzeugen. Die Pod Vorlage ist Teil des
gewünschten Zustands der Workload-Ressource, mit der du deine Anwendung
ausgeführt hast.
Das folgende Beispiel ist ein Manifest für einen einfachen Job mit einer
`Vorlage`, die einen Container startet. Der Container in diesem Pod druckt
eine Nachricht und pausiert dann.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: hello
spec:
template:
# Dies ist die Pod Vorlage
spec:
containers:
- name: hello
image: busybox
command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
restartPolicy: OnFailure
# Die Pod Vorlage endet hier
```
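
To try the manifest above, something like the following works — a sketch assuming the YAML is saved as `hello-job.yaml` (a hypothetical file name) and a cluster is reachable via kubectl:

```bash
# Create the Job; its controller instantiates Pods from the template
kubectl apply -f hello-job.yaml

# The Job controller labels the Pods it creates with job-name=<job name>
kubectl get pods -l job-name=hello
```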
Das Ändern der Pod Vorlage oder der Wechsel zu einer neuen Pod Vorlage hat keine
direkten Auswirkungen auf bereits existierende Pods. Wenn du die Pod Vorlage für
eine Workload-Ressource änderst, dann muss diese Ressource die Ersatz-Pods
erstellen, welche die aktualisierte Vorlage verwenden.
Beispielsweise stellt der StatefulSet-Controller sicher, dass für jedes
StatefulSet-Objekt die ausgeführten Pods mit der aktuellen Pod Vorlage
übereinstimmen. Wenn du das StatefulSet bearbeitest und die Vorlage änderst,
beginnt das StatefulSet mit der Erstellung neuer Pods basierend auf der
aktualisierten Vorlage. Schließlich werden alle alten Pods durch neue Pods
ersetzt, und das Update ist abgeschlossen.
Jede Workload-Ressource implementiert eigene Regeln für die Umsetzung von
Änderungen der Pod Vorlage. Wenn du mehr über StatefulSet erfahren möchtest,
dann lese die Seite
[Update-Strategien](/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets)
im Tutorial StatefulSet Basics.
Auf Nodes beobachtet oder verwaltet das
{{< glossary_tooltip term_id="kubelet" text="Kubelet" >}}
nicht direkt die Details zu Pod Vorlagen und Updates. Diese Details sind
abstrahiert. Die Abstraktion und Trennung von Aufgaben vereinfacht die
Systemsemantik und ermöglicht so das Verhalten des Clusters zu ändern ohne
vorhandenen Code zu ändern.
## Pod Update und Austausch
Wie im vorherigen Abschnitt erwähnt, erstellt der Controller neue Pods basierend
auf der aktualisierten Vorlage, wenn die Pod Vorlage für eine Workload-Ressource
geändert wird anstatt die vorhandenen Pods zu aktualisieren oder zu patchen.
Kubernetes hindert dich nicht daran, Pods direkt zu verwalten. Es ist möglich,
einige Felder eines laufenden Pods zu aktualisieren. Allerdings haben
Pod-Aktualisierungsvorgänge wie zum Beispiel
[`patch`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#patch-pod-v1-core),
und
[`replace`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#replace-pod-v1-core)
einige Einschränkungen:
- Die meisten Metadaten zu einem Pod können nicht verändert werden. Zum Beispiel kannst
du nicht die Felder `namespace`, `name`, `uid`, oder `creationTimestamp`
ändern. Das `generation`-Feld muss eindeutig sein. Es werden nur Aktualisierungen
akzeptiert, die den Wert des Feldes inkrementieren.
- Wenn das Feld `metadata.deletionTimestamp` gesetzt ist, kann kein neuer
Eintrag zur Liste `metadata.finalizers` hinzugefügt werden.
- Pod-Updates dürfen keine Felder ändern, die Ausnahmen sind
`spec.containers[*].image`,
`spec.initContainers[*].image`,` spec.activeDeadlineSeconds` oder
`spec.tolerations`. Für `spec.tolerations` kannst du nur neue Einträge
hinzufügen.
- Für `spec.activeDeadlineSeconds` sind nur zwei Änderungen erlaubt:
1. ungesetztes Feld in eine positive Zahl
1. positive Zahl in eine kleinere positive Zahl, die nicht negativ ist
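
As an illustration of this narrow set of allowed in-place updates — a hedged sketch with hypothetical names — changing a container image on a running Pod is accepted, while most other spec and metadata fields are rejected:

```bash
# Allowed: strategic merge patch of an existing container's image (matched by container name)
kubectl patch pod my-pod -p '{"spec":{"containers":[{"name":"my-container","image":"busybox:1.33"}]}}'

# Not allowed: fields such as metadata.name or metadata.namespace cannot be changed on an existing Pod
```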
## Gemeinsame Nutzung von Ressourcen und Kommunikation
Pods ermöglichen den Datenaustausch und die Kommunikation zwischen den
Containern, die im Pod enthalten sind.
### Datenspeicherung in Pods
Ein Pod kann eine Reihe von gemeinsam genutzten Speicher-
{{<glossary_tooltip text="Volumes" term_id="volume">}} spezifizieren. Alle
Container im Pod können auf die gemeinsamen Volumes zugreifen und dadurch Daten
austauschen. Volumes ermöglichen auch, dass Daten ohne Verlust gespeichert
werden, falls einer der Container neu gestartet werden muss.
Im Kapitel [Datenspeicherung](/docs/concepts/storage/) findest du weitere
Informationen, wie Kubernetes gemeinsam genutzten Speicher implementiert und
Pods zur Verfügung stellt.
### Pod-Netzwerk
Jedem Pod wird für jede Adressenfamilie eine eindeutige IP-Adresse zugewiesen.
Jeder Container in einem Pod nutzt den gemeinsamen Netzwerk-Namespace,
einschließlich der IP-Adresse und der Ports. In einem Pod (und **nur** dann)
können die Container, die zum Pod gehören, über `localhost` miteinander
kommunizieren. Wenn Container in einem Pod mit Entitäten *außerhalb des Pods*
kommunizieren, müssen sie koordinieren, wie die gemeinsam genutzten
Netzwerkressourcen (z. B. Ports) verwendet werden. Innerhalb eines Pods teilen
sich Container eine IP-Adresse und eine Reihe von Ports und können sich
gegenseitig über `localhost` finden. Die Container in einem Pod können auch die
üblichen Kommunikationsverfahren zwischen Prozessen nutzen, wie z. B.
SystemV-Semaphoren oder "POSIX Shared Memory". Container in verschiedenen Pods
haben unterschiedliche IP-Adressen und können nicht per IPC ohne
[spezielle Konfiguration](/docs/concepts/policy/pod-security-policy/)
kommunizieren. Container, die mit einem Container in einem anderen Pod
interagieren möchten, müssen IP Netzwerke verwenden.
Für die Container innerhalb eines Pods stimmt der "hostname" mit dem
konfigurierten `Namen` des Pods überein. Mehr dazu im Kapitel
[Netzwerke](/docs/concepts/cluster-administration/networking/).
## Privilegierter Modus für Container
Jeder Container in einem Pod kann den privilegierten Modus aktivieren, indem
das Flag `privileged` im
[Sicherheitskontext](/docs/tasks/configure-pod-container/security-context/)
der Container-Spezifikation verwendet wird.
Dies ist nützlich für Container, die Verwaltungsfunktionen des Betriebssystems
verwenden möchten, z. B. das Manipulieren des Netzwerk-Stacks oder den Zugriff
auf Hardware. Prozesse innerhalb eines privilegierten Containers erhalten fast
die gleichen Rechte wie sie Prozessen außerhalb eines Containers zur Verfügung
stehen.
{{< note >}}
Ihre
{{<glossary_tooltip text="Container-Umgebung" term_id="container-runtime">}}
muss das Konzept eines privilegierten Containers unterstützen, damit diese
Einstellung relevant ist.
{{< /note >}}
## Statische Pods
_Statische Pods_ werden direkt vom Kubelet-Daemon auf einem bestimmten Node
verwaltet ohne dass sie vom
{{<glossary_tooltip text="API Server" term_id="kube-apiserver">}} überwacht
werden.
Die meisten Pods werden von der Kontrollebene verwaltet (z. B.
{{< glossary_tooltip text="Deployment" term_id="deployment" >}}). Aber für
statische Pods überwacht das Kubelet jeden statischen Pod direkt (und startet
ihn neu, wenn er ausfällt).
Statische Pods sind immer an ein {{<glossary_tooltip term_id="kubelet">}} auf
einem bestimmten Node gebunden. Der Hauptanwendungsfall für statische Pods
besteht darin, eine selbst gehostete Steuerebene auszuführen. Mit anderen
Worten: Das Kubelet dient zur Überwachung der einzelnen
[Komponenten der Kontrollebene](/docs/concepts/overview/components/#control-plane-components).
Das Kubelet versucht automatisch auf dem Kubernetes API-Server für jeden
statischen Pod einen spiegelbildlichen Pod
(im Englischen: {{<glossary_tooltip text="mirror pod" term_id="mirror-pod">}})
zu erstellen.
Das bedeutet, dass die auf einem Node ausgeführten Pods auf dem API-Server
sichtbar sind, jedoch von dort nicht gesteuert werden können.
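
As a hedged illustration: on kubeadm-style nodes the kubelet reads static Pod manifests from the directory configured as `staticPodPath` (commonly `/etc/kubernetes/manifests`), so the control-plane components mentioned above show up there:

```bash
# Run on the node itself; paths assume a kubeadm-provisioned kubelet
ls /etc/kubernetes/manifests/                      # static Pod manifests, e.g. kube-apiserver.yaml
grep staticPodPath /var/lib/kubelet/config.yaml    # where the kubelet is configured to look
```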
## {{% heading "whatsnext" %}}
* Verstehe den
[Lebenszyklus eines Pods](/docs/concepts/workloads/pods/pod-lifecycle/).
* Erfahre mehr über [RuntimeClass](/docs/concepts/containers/runtime-class/)
und wie du damit verschiedene Pods mit unterschiedlichen
Container-Laufzeitumgebungen konfigurieren kannst.
* Mehr zum Thema
[Restriktionen für die Verteilung von Pods](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
* Lese
[Pod-Disruption-Budget](/docs/concepts/workloads/pods/disruptions/)
und wie du es verwenden kannst, um die Verfügbarkeit von Anwendungen bei
Störungen zu verwalten. Die
[Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
-Objektdefinition beschreibt das Objekt im Detail.
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
erläutert allgemeine Layouts für Pods mit mehr als einem Container.
Um den Hintergrund zu verstehen, warum Kubernetes eine gemeinsame Pod-API in
andere Ressourcen, wie z. B.
{{< glossary_tooltip text="StatefulSets" term_id="statefulset" >}}
oder {{< glossary_tooltip text="Deployments" term_id="deployment" >}} einbindet,
kannst du Artikel zu früheren Technologien lesen, unter anderem:
* [Aurora](https://aurora.apache.org/documentation/latest/reference/configuration/#job-schema)
* [Borg](https://research.google.com/pubs/pub43438.html)
* [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html)
* [Omega](https://research.google/pubs/pub41684/)
* [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/).

@ -58,7 +58,7 @@ Take maven project as example, adding the following dependencies into your depen
Then we can make use of the provided builder libraries to write your own controller.
For example, the following one is a simple controller prints out node information
on watch notification, see complete example [here](https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/ControllerExample.java):
on watch notification, see complete example [here](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-13/src/main/java/io/kubernetes/client/examples/ControllerExample.java):
```java
...

@ -31,9 +31,9 @@ Standard labels are used by Kubernetes components to support some features. For
The labels are reaching general availability in this release. Kubernetes components have been updated to populate the GA and beta labels and to react to both. However, if you are using the beta labels in your pod specs for features such as node affinity, or in your custom controllers, we recommend that you start migrating them to the new GA labels. You can find the documentation for the new labels here:
- [node.kubernetes.io/instance-type](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
- [topology.kubernetes.io/region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
- [topology.kubernetes.io/zone](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
- [node.kubernetes.io/instance-type](/docs/reference/labels-annotations-taints/#nodekubernetesioinstance-type)
- [topology.kubernetes.io/region](/docs/reference/labels-annotations-taints/#topologykubernetesioregion)
- [topology.kubernetes.io/zone](/docs/reference/labels-annotations-taints/#topologykubernetesiozone)
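
To see what your nodes already report for the GA labels while you migrate affinity rules and custom controllers, standard label columns are enough (a sketch; it makes no changes to the cluster):

```bash
kubectl get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone,node.kubernetes.io/instance-type
```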
## Volume Snapshot Moves to Beta

@ -325,7 +325,7 @@ Now that we have a way to communicate helpful information to users in context,
we're already considering other ways we can use this to improve people's experience with Kubernetes.
A couple areas we're looking at next are warning about [known problematic values](http://issue.k8s.io/64841#issuecomment-395141013)
we cannot reject outright for compatibility reasons, and warning about use of deprecated fields or field values
(like selectors using beta os/arch node labels, [deprecated in v1.14](/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)).
(like selectors using beta os/arch node labels, [deprecated in v1.14](/docs/reference/labels-annotations-taints/#beta-kubernetes-io-arch-deprecated)).
I'm excited to see progress in this area, continuing to make it easier to use Kubernetes.
---

@ -64,7 +64,7 @@ The Kubernetes community has written a [detailed blog post about deprecation](ht
A longstanding bug regarding exec probe timeouts that may impact existing pod definitions has been fixed. Prior to this fix, the field `timeoutSeconds` was not respected for exec probes. Instead, probes would run indefinitely, even past their configured deadline, until a result was returned. With this change, the default value of `1 second` will be applied if a value is not specified and existing pod definitions may no longer be sufficient if a probe takes longer than one second. A feature gate, called `ExecProbeTimeout`, has been added with this fix that enables cluster operators to revert to the previous behavior, but this will be locked and removed in subsequent releases. In order to revert to the previous behavior, cluster operators should set this feature gate to `false`.
Please review the updated documentation regarding [configuring probes](docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) for more details.
Please review the updated documentation regarding [configuring probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) for more details.
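
If you need to audit existing probes, the field and its default are easy to inspect from kubectl; the gate mentioned in the comment below is the operator-side escape hatch described above (a sketch — how you pass kubelet flags depends on your deployment):

```bash
# Shows the field documentation, including the default of 1 second
kubectl explain pod.spec.containers.livenessProbe.timeoutSeconds

# To temporarily keep the old behaviour, operators can set the gate on the kubelet:
#   --feature-gates=ExecProbeTimeout=false
```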
## Other Updates

[Image file added — diff suppressed; size 53 KiB]

@ -0,0 +1,105 @@
---
layout: blog
title: 'Kubernetes 1.21: CronJob Reaches GA'
date: 2021-04-09
slug: kubernetes-release-1.21-cronjob-ga
---
**Authors:** Alay Patel (Red Hat), and Maciej Szulik (Red Hat)
In Kubernetes v1.21, the
[CronJob](/docs/concepts/workloads/controllers/cron-jobs/) resource
reached general availability (GA). We've also substantially improved the
performance of CronJobs since Kubernetes v1.19, by implementing a new
controller.
In Kubernetes v1.20 we launched a revised v2 controller for CronJobs,
initially as an alpha feature. Kubernetes 1.21 uses the newer controller by
default, and the CronJob resource itself is now GA (group version: `batch/v1`).
In this article, we'll take you through the driving forces behind this new
development, give you a brief description of controller design for core
Kubernetes, and we'll outline what you will gain from this improved controller.
The driving force behind promoting the API was Kubernetes' policy choice to
[ensure APIs move beyond beta](/blog/2020/08/21/moving-forward-from-beta/).
That policy aims to prevent APIs from being stuck in a “permanent beta” state.
Over the years the old CronJob controller implementation had received healthy
feedback from the community, with reports of several widely recognized
[issues](https://github.com/kubernetes/kubernetes/issues/82659).
If the beta API for CronJob was to be supported as GA, the existing controller
code would need substantial rework. Instead, the SIG Apps community decided
to introduce a new controller and gradually replace the old one.
## How do controllers work?
Kubernetes [controllers](/docs/concepts/architecture/controller/) are control
loops that watch the state of resource(s) in your cluster, then make or
request changes where needed. Each controller tries to move part of the
current cluster state closer to the desired state.
The v1 CronJob controller works by performing a periodic poll and sweep of all
the CronJob objects in your cluster, in order to act on them. It is a single
worker implementation that gets all CronJobs every 10 seconds, iterates over
each one of them, and syncs them to their desired state. This was the default
way of doing things almost 5 years ago when the controller was initially
written. In hindsight, we can certainly say that such an approach can
overload the API server at scale.
These days, every core controller in kubernetes must follow the guidelines
described in [Writing Controllers](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/controllers.md#readme).
Among many details, that document prescribes using
[shared informers](https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/)
to “receive notifications of adds, updates, and deletes for a particular
resource”. Upon any such events, the related object(s) is placed in a queue.
Workers pull items from the queue and process them one at a time. This
approach ensures consistency and scalability.
The picture below shows the flow of information from the Kubernetes API server,
through shared informers and a queue, to the main part of a controller: a
reconciliation loop, which is responsible for performing the core functionality.
![Controller flowchart](controller-flowchart.svg)
The CronJob controller V2 uses a queue that implements the DelayingInterface to
handle the scheduling aspect. This queue allows processing an element after a
specific time interval. Every time there is a change in a CronJob or its related
Jobs, the key that represents the CronJob is pushed to the queue. The main
handler pops the key, processes the CronJob, and after completion
pushes the key back into the queue for the next scheduled time interval. This is
immediately a more performant implementation, as it no longer requires a linear
scan of all the CronJobs. On top of that, this controller can be scaled by
increasing the number of workers processing the CronJobs in parallel.
## Performance impact of the new controller {#performance-impact}
In order to test the performance difference of the two controllers a VM instance
with 128 GiB RAM and 64 vCPUs was used to set up a single node Kubernetes cluster.
Initially, a sample workload was created with 20 CronJob instances with a schedule
to run every minute, and 2100 CronJobs running every 20 hours. Additionally,
over the next few minutes we added 1000 CronJobs with a schedule to run every
20 hours, until we reached a total of 5120 CronJobs.
![Visualization of performance](performance-impact-graph.svg)
We observed that for every 1000 CronJobs added, the old controller used
around 90 to 120 seconds more wall-clock time to schedule 20 Jobs every cycle.
That is, at 5120 CronJobs, the old controller took approximately 9 minutes
to create 20 Jobs. Hence, during each cycle, about 8 schedules were missed.
The new controller, implemented with the architectural change explained above,
created 20 Jobs without any delay, even when we created an additional batch
of 1000 CronJobs, reaching a total of 6120.
As a closing remark, the new controller exposes a histogram metric
`cronjob_controller_cronjob_job_creation_skew_duration_seconds` which helps
monitor the time difference between when a CronJob is meant to run and when
the actual Job is created.
Hopefully the above description is a sufficient argument to follow the
guidelines and standards set in the Kubernetes project, even for your own
controllers. As mentioned before, the new controller is on by default starting
from Kubernetes v1.21; if you want to check it out in the previous release (1.20),
you can enable the `CronJobControllerV2`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
for the kube-controller-manager: `--feature-gates="CronJobControllerV2=true"`.
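
If you simply want to exercise the GA API, creating a CronJob with kubectl is enough — a minimal sketch with an arbitrary name and schedule:

```bash
# Creates a batch/v1 CronJob that prints a message every minute
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "hello from CronJob GA"
kubectl get cronjob hello
```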

[Image file added — diff suppressed; size 104 KiB]

@ -9,7 +9,7 @@ slug: kubernetes-1-21-release-announcement
We're pleased to announce the release of Kubernetes 1.21, our first release of 2021! This release consists of 51 enhancements: 13 enhancements have graduated to stable, 16 enhancements are moving to beta, 20 enhancements are entering alpha, and 2 features have been deprecated.
This release cycle, we saw a major shift in ownership of processes around the release team. We moved from a synchronous mode of communcation, where we periodically asked the community for inputs, to a mode where the community opts-in features and/or blogs to the release. These changes have resulted in an increase in collaboration and teamwork across the community. The result of all that is reflected in Kubernetes 1.21 having the most number of features in the recent times.
This release cycle, we saw a major shift in ownership of processes around the release team. We moved from a synchronous mode of communication, where we periodically asked the community for inputs, to a mode where the community opts-in to contribute features and/or blogs to the release. These changes have resulted in an increase in collaboration and teamwork across the community. The result of all that is reflected in Kubernetes 1.21 having the most number of features in the recent times.
## Major Themes

@ -0,0 +1,110 @@
---
title: "Introducing Suspended Jobs"
date: 2021-04-12
slug: introducing-suspended-jobs
layout: blog
---
**Author:** Adhityaa Chandrasekar (Google)
[Jobs](/docs/concepts/workloads/controllers/job/) are a crucial part of
Kubernetes' API. While other kinds of workloads such as [Deployments](/docs/concepts/workloads/controllers/deployment/),
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/),
[StatefulSets](/docs/concepts/workloads/controllers/statefulset/), and
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/)
solve use-cases that require Pods to run forever, Jobs are useful when Pods need
to run to completion. Commonly used in parallel batch processing, Jobs can be
used in a variety of applications ranging from video rendering and database
maintenance to sending bulk emails and scientific computing.
While the amount of parallelism and the conditions for Job completion are
configurable, the Kubernetes API lacked the ability to suspend and resume Jobs.
This is often desired when cluster resources are limited and a higher priority
Job needs to execute in the place of another Job. Deleting the lower priority
Job is a poor workaround as Pod completion history and other metrics associated
with the Job will be lost.
With the recent Kubernetes 1.21 release, you will be able to suspend a Job by
updating its spec. The feature is currently in **alpha** and requires you to
enable the `SuspendJob` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/)
and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
in order to use it.
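
Concretely, enabling the alpha gate means passing the flag to both components — a hedged sketch, since how you set component flags depends on how your cluster is deployed (static Pod manifests, systemd units, a managed service, and so on):

```bash
# Add to the existing flags of both binaries
kube-apiserver          --feature-gates=SuspendJob=true   # ...plus your other flags
kube-controller-manager --feature-gates=SuspendJob=true   # ...plus your other flags
```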
## API changes
We introduced a new boolean field `suspend` into the `.spec` of Jobs. Let's say
I create the following Job:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: my-job
spec:
suspend: true
parallelism: 2
completions: 10
template:
spec:
containers:
- name: my-container
image: busybox
command: ["sleep", "5"]
restartPolicy: Never
```
Jobs are not suspended by default, so I'm explicitly setting the `suspend` field
to _true_ in the `.spec` of the above Job manifest. In the above example, the
Job controller will refrain from creating Pods until I'm ready to start the Job,
which I can do by updating `suspend` to false.
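
Resuming the Job later is a small patch — a sketch assuming the Job above exists in the current namespace:

```bash
# Flip .spec.suspend to false so the Job controller starts creating Pods again
kubectl patch job my-job --type=merge -p '{"spec":{"suspend":false}}'
```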
As another example, consider a Job that was created with the `suspend` field
omitted. The Job controller will happily create Pods to work towards Job
completion. However, before the Job completes, if I explicitly set the field to
true with a Job update, the Job controller will terminate all active Pods that
are running and will wait indefinitely for the flag to be flipped back to false.
Typically, Pod termination is done by sending a SIGTERM signal to all container
processes in the Pod; the [graceful termination period](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)
defined in the Pod spec will be honoured. Pods terminated this way will not be
counted as failures by the Job controller.
It is important to understand that succeeded and failed Pods from the past will
continue to exist after you suspend a Job. That is, they will count towards
Job completion once you resume it. You can verify this by looking at the Job's
status before and after suspension.
Read the [documentation](/docs/concepts/workloads/controllers/job#suspending-a-job)
for a full overview of this new feature.
## Where is this useful?
Let's say I'm the operator of a large cluster. I have many users submitting Jobs
to the cluster, but not all Jobs are created equal — some Jobs are more
important than others. Cluster resources aren't infinite either, so all users
must share resources. If all Jobs are created in the suspended state and placed
in a pending queue, I can achieve priority-based Job scheduling by resuming Jobs
in the right order.
As another motivational use-case, consider a cloud provider where compute
resources are cheaper at night than in the morning. If I have a long-running Job
that takes multiple days to complete, being able to suspend the Job in the
morning and then resume it in the evening every day can reduce costs.
Since this field is a part of the Job spec, [CronJobs](/docs/concepts/workloads/controllers/cron-jobs/)
automatically get this feature for free too.
## References and next steps
If you're interested in a deeper dive into the rationale behind this feature and
the decisions we have taken, consider reading the [enhancement proposal](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/2232-suspend-jobs).
There's more detail on suspending and resuming jobs in the documentation for [Job](/docs/concepts/workloads/controllers/job#suspending-a-job).
As previously mentioned, this feature is currently in alpha and is available
only if you explicitly opt-in through the `SuspendJob` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
If this is a feature you're interested in, please consider testing suspended
Jobs in your cluster and providing feedback. You can discuss this enhancement [on GitHub](https://github.com/kubernetes/enhancements/issues/2232).
The SIG Apps community also [meets regularly](https://github.com/kubernetes/community/tree/master/sig-apps#meetings)
and can be reached through [Slack or the mailing list](https://github.com/kubernetes/community/tree/master/sig-apps#contact).
Barring any unexpected changes to the API, we intend to graduate the feature to
beta in Kubernetes 1.22, so that the feature becomes available by default.

@ -0,0 +1,45 @@
---
layout: blog
title: "kube-state-metrics goes v2.0"
date: 2021-04-13
slug: kube-state-metrics-v-2-0
---
**Authors:** Lili Cosic (Red Hat), Frederic Branczyk (Polar Signals), Manuel Rüger (Sony Interactive Entertainment), Tariq Ibrahim (Salesforce)
## What?
[kube-state-metrics](https://github.com/kubernetes/kube-state-metrics), a project under the Kubernetes organization, generates Prometheus format metrics based on the current state of the Kubernetes native resources. It does this by listening to the Kubernetes API and gathering information about resources and objects, e.g. Deployments, Pods, Services, and StatefulSets. A full list of resources is available in the [documentation](https://github.com/kubernetes/kube-state-metrics/tree/master/docs) of kube-state-metrics.
## Why?
There are numerous useful metrics and insights provided by `kube-state-metrics` right out of the box! These metrics can be used to serve as an insight into your cluster: Either through metrics alone, in the form of dashboards, or through an alerting pipeline. To provide a few examples:
* `kube_pod_container_status_restarts_total` can be used to alert on a crashing pod.
* `kube_deployment_status_replicas` which together with `kube_deployment_status_replicas_available` can be used to alert on whether a deployment is rolled out successfully or stuck.
* `kube_pod_container_resource_requests` and `kube_pod_container_resource_limits` can be used in capacity planning dashboards.
And there are many more metrics available! To learn more about the other metrics and their details, please check out the [documentation](https://github.com/kubernetes/kube-state-metrics/tree/master/docs#readme).
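
If you want to eyeball these series directly, you can scrape the kube-state-metrics endpoint yourself — a sketch that assumes a Service named `kube-state-metrics` in the `kube-system` namespace exposing the default metrics port 8080 (adjust both to your deployment):

```bash
kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &
curl -s http://localhost:8080/metrics | grep '^kube_pod_container_status_restarts_total' | head
```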
## What is new in v2.0?
So now that we know what kube-state-metrics is, we are excited to announce the next release: kube-state-metrics v2.0! This release was long-awaited and started with an alpha release in September 2020. To ease maintenance we removed tech debt and also adjusted some confusing wording around user-facing flags and APIs. We also removed some metrics that caused unnecessarily high cardinality in Prometheus! For the 2.0 release, we took the time to set up scale and performance testing. This allows us to better understand if we hit any issues in large clusters and also to document resource request recommendations for your clusters. In this release (and v1.9.8) container builds providing support for multiple architectures were introduced allowing you to run kube-state-metrics on ARM, ARM64, PPC64 and S390x as well!
So without further ado, here is the list of the more noteworthy user-facing breaking changes. A full list of changes, features and bug fixes is available in the changelog at the end of this post.
* Flag `--namespace` was renamed to `--namespaces`. If you are using the former, please make sure to update the flag before deploying the latest release. A deployment snippet showing the renamed flags in context follows this list.
* Flag `--collectors` was renamed to `--resources`.
* Flags `--metric-blacklist` and `--metric-whitelist` were renamed to `--metric-denylist` and `--metric-allowlist`.
* Flag `--metric-labels-allowlist` allows you to specify a list of Kubernetes labels that get turned into the dimensions of the `kube_<resource-name>_labels` metrics. By default, the metric contains only name and namespace labels.
* All metrics with a prefix of `kube_hpa_*` were renamed to `kube_horizontalpodautoscaler_*`.
* Metric labels that relate to Kubernetes were converted to snake_case.
* If you are importing kube-state-metrics as a library, we have updated our Go module path to `k8s.io/kube-state-metrics/v2`.
* All deprecated stable metrics were removed as per the [notice in the v1.9 release](https://github.com/kubernetes/kube-state-metrics/tree/release-1.9/docs#metrics-deprecation).
* `quay.io/coreos/kube-state-metrics` images will no longer be updated. `k8s.gcr.io/kube-state-metrics/kube-state-metrics` is the new canonical location.
* The helm chart that is part of the kubernetes/kube-state-metrics repository is deprecated. https://github.com/prometheus-community/helm-charts will be its new location.
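To make the flag renames above concrete, here is a hedged sketch of a v2.0 Deployment manifest; the image tag, resource and namespace selections, allowlist values, and RBAC wiring are placeholders you would adapt to your setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics       # assumes RBAC is set up separately
      containers:
      - name: kube-state-metrics
        image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.0.0
        args:
        - --resources=pods,deployments             # formerly --collectors
        - --namespaces=default,kube-system         # formerly --namespace
        - --metric-allowlist=kube_pod_.*,kube_deployment_.*   # formerly --metric-whitelist; values illustrative
        - --metric-labels-allowlist=pods=[app]     # expose the "app" pod label as a metric dimension
```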
For the full list of v2.0 release changes, including features, bug fixes and other breaking changes, see the full [CHANGELOG](https://github.com/kubernetes/kube-state-metrics/blob/master/CHANGELOG.md).
## Found a problem?
Thanks to all our users for testing so far and thank you to all our contributors for your issue reports as well as code and documentation changes! If you find any problems, we the [maintainers](https://github.com/kubernetes/kube-state-metrics/blob/master/OWNERS) are more than happy to look into them, so please report them by opening a [GitHub issue](https://github.com/kubernetes/kube-state-metrics/issues/new/choose).

View File

@ -0,0 +1,216 @@
---
layout: blog
title: "Local Storage: Storage Capacity Tracking, Distributed Provisioning and Generic Ephemeral Volumes hit Beta"
date: 2021-04-14
slug: local-storage-features-go-beta
---
**Author:** Patrick Ohly (Intel)
The ["generic ephemeral
volumes"](/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes)
and ["storage capacity
tracking"](/docs/concepts/storage/storage-capacity/)
features in Kubernetes are getting promoted to beta in Kubernetes
1.21. Together with the [distributed provisioning
support](https://github.com/kubernetes-csi/external-provisioner#deployment-on-each-node)
in the CSI external-provisioner, development and deployment of
Container Storage Interface (CSI) drivers which manage storage locally
on a node become a lot easier.
This blog post explains how such drivers worked before and how these
features can be used to make drivers simpler.
## Problems we are solving
There are drivers for local storage, like
[TopoLVM](https://github.com/cybozu-go/topolvm) for traditional disks
and [PMEM-CSI](https://intel.github.io/pmem-csi/latest/README.html)
for [persistent memory](https://pmem.io/). They work and are ready for
use today, even on older Kubernetes releases, but making that possible
was not trivial.
### Central component required
The first problem is volume provisioning: it is handled through the
Kubernetes control plane. Some component must react to
[PersistentVolumeClaims](/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
(PVCs)
and create volumes. Usually, that is handled by a central deployment
of the [CSI
external-provisioner](https://kubernetes-csi.github.io/docs/external-provisioner.html)
and a CSI driver component that then connects to the storage
backplane. But for local storage, there is no such backplane.
TopoLVM solved this by having its different components communicate
with each other through the Kubernetes API server by creating and
reacting to custom resources. So although TopoLVM is based on CSI, a
standard that is independent of a particular container orchestrator,
TopoLVM only works on Kubernetes.
PMEM-CSI created its own storage backplane with communication through
gRPC calls. Securing that communication depends on TLS certificates,
which made driver deployment more complicated.
### Informing Pod scheduler about capacity
The next problem is scheduling. When volumes get created independently
of pods ("immediate binding"), the CSI driver must pick a node without
knowing anything about the pod(s) that are going to use it. Topology
information then forces those pods to run on the node where the volume
was created. If other resources like RAM or CPU are exhausted there,
the pod cannot start. This can be avoided by configuring in the
StorageClass that volume creation is meant to wait for the first pod
that uses a volume (`volumeBindingMode: WaitForFirstConsumer`). In that
mode, the Kubernetes scheduler tentatively picks a node based on other
constraints and then the external-provisioner is asked to create a
volume such that it is usable there. If local storage is exhausted,
the provisioner [can
ask](https://github.com/kubernetes-csi/external-provisioner/blob/master/doc/design.md)
for another scheduling round. But without information about available
capacity, the scheduler might always pick the same unsuitable node.
Both TopoLVM and PMEM-CSI solved this with scheduler extenders. This
works, but it is hard to configure when deploying the driver because
communication between kube-scheduler and the driver is very dependent
on how the cluster was set up.
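For reference, late binding is something the cluster administrator opts into per StorageClass. A minimal sketch, with a placeholder provisioner name standing in for the real CSI driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage-late-binding        # placeholder name
provisioner: local.csi.example.org        # placeholder; use your CSI driver's provisioner name
volumeBindingMode: WaitForFirstConsumer   # delay volume creation until a pod needs the volume
```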
### Rescheduling
A common use case for local storage is scratch space. A better fit for
that use case than persistent volumes are ephemeral volumes that get
created for a pod and destroyed together with it. The initial API for
supporting ephemeral volumes with CSI drivers (hence called ["*CSI*
ephemeral
volumes"](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes))
was [designed for light-weight
volumes](https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190122-csi-inline-volumes.md)
where volume creation is unlikely to fail. Volume creation happens
after pods have been permanently scheduled onto a node, in contrast to
the traditional provisioning where volume creation is tried before
scheduling a pod onto a node. CSI drivers must be modified to support
"CSI ephemeral volumes", which was done for TopoLVM and PMEM-CSI. But
due to the design of the feature in Kubernetes, pods can get stuck
permanently if storage capacity runs out on a node. The scheduler
extenders try to avoid that, but cannot be 100% reliable.
## Enhancements in Kubernetes 1.21
### Distributed provisioning
Starting with [external-provisioner
v2.1.0](https://github.com/kubernetes-csi/external-provisioner/releases/tag/v2.1.0),
released for Kubernetes 1.20, provisioning can be handled by
external-provisioner instances that get [deployed together with the
CSI driver on each
node](https://github.com/kubernetes-csi/external-provisioner#deployment-on-each-node)
and then cooperate to provision volumes ("distributed
provisioning"). There is no need any more to have a central component
and thus no need for communication between nodes, at least not for
provisioning.
### Storage capacity tracking
A scheduler extender still needs some way to find out about capacity
on each node. When PMEM-CSI switched to distributed provisioning in
v0.9.0, this was done by querying the metrics data exposed by the
local driver containers. But it is even better for users to eliminate
the need for a scheduler extender completely, because the driver
deployment becomes simpler. [Storage capacity
tracking](/docs/concepts/storage/storage-capacity/), [introduced in
1.19](/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/)
and promoted to beta in Kubernetes 1.21, achieves that. It works by
publishing information about capacity in `CSIStorageCapacity`
objects. The scheduler itself then uses that information to filter out
unsuitable nodes. Because the information might not be quite up-to-date,
pods may still get assigned to nodes with insufficient storage; it's
just less likely, and the next scheduling attempt for a pod should work
better once the information has been refreshed.
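The `CSIStorageCapacity` objects are normally created and kept up to date by the external-provisioner, not written by hand; the following sketch only illustrates the shape of the data the scheduler consumes (the names, topology key, and amount are placeholders):

```yaml
apiVersion: storage.k8s.io/v1beta1          # beta API in Kubernetes 1.21
kind: CSIStorageCapacity
metadata:
  name: example-capacity-node-1             # real objects get generated names
  namespace: default                        # CSIStorageCapacity is a namespaced object
storageClassName: local-storage-late-binding
nodeTopology:                               # which node(s) this capacity figure applies to;
  matchLabels:                              # the label key depends on the CSI driver's topology
    topology.example.org/node: node-1
capacity: 10Gi                              # remaining capacity reported by the driver
```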
### Generic ephemeral volumes
So CSI drivers still need the ability to recover from a bad scheduling
decision, something that turned out to be impossible to implement for
"CSI ephemeral volumes". ["*Generic* ephemeral
volumes"](/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes),
another feature that got promoted to beta in 1.21, don't have that
limitation. This feature adds a controller that will create and manage
PVCs with the lifetime of the Pod and therefore the normal recovery
mechanism also works for them. Existing storage drivers will be able
to process these PVCs without any new logic to handle this new
scenario.
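A hedged example of what such a generic ephemeral volume looks like in practice, following the documentation linked above (the pod name, image, and storage class are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example                     # placeholder name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:                              # the PVC is created with the Pod and deleted with it
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: local-storage-late-binding   # placeholder storage class
          resources:
            requests:
              storage: 1Gi
```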
## Known limitations
Both generic ephemeral volumes and storage capacity tracking increase
the load on the API server. Whether that is a problem depends a lot on
the kind of workload, in particular how many pods have volumes and how
often those need to be created and destroyed.
No attempt was made to model how scheduling decisions affect storage
capacity. That's because the effect can vary considerably depending on
how the storage system handles storage. The effect is that multiple
pods with unbound volumes might get assigned to the same node even
though there is only sufficient capacity for one pod. Scheduling
should recover, but it would be more efficient if the scheduler knew
more about storage.
Because storage capacity gets published by a running CSI driver and
the cluster autoscaler needs information about a node that hasn't been
created yet, it will currently not scale up a cluster for pods that
need volumes. There is an [idea how to provide that
information](https://github.com/kubernetes/autoscaler/pull/3887), but
more work is needed in that area.
Distributed snapshotting and resizing are not currently supported. It
should be doable to adapt the respective sidecars, and there are
tracking issues already open for external-snapshotter and
external-resizer; they just need a volunteer.
Recovery from a bad scheduling decision can fail for pods with
multiple volumes, in particular when those volumes are local to nodes:
if one volume can be created and then storage is insufficient for
another volume, the first volume continues to exist and forces the
scheduler to put the pod onto the node of that volume. There is an
idea for how to deal with this, [rolling back the provisioning of the
volume](https://github.com/kubernetes/enhancements/pull/1703), but
this is only in the very early stages of brainstorming and not even a
merged KEP yet. For now it is better to avoid creating pods with more
than one persistent volume.
## Enabling the new features and next steps
With these features entering beta in the 1.21 release, no additional actions are needed to enable them. Generic
ephemeral volumes also work without changes in CSI drivers. For more
information, see the
[documentation](/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes)
and the [previous blog
post](/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/)
about it. The API has not changed at all between alpha and beta.
For the other two features, the external-provisioner documentation
explains how CSI driver developers must change how their driver gets
deployed to support [storage capacity
tracking](https://github.com/kubernetes-csi/external-provisioner#capacity-support)
and [distributed
provisioning](https://github.com/kubernetes-csi/external-provisioner#deployment-on-each-node).
These two features are independent, therefore it is okay to enable
only one of them.
[SIG
Storage](https://github.com/kubernetes/community/tree/master/sig-storage)
would like to hear from you if you are using these new features. We
can be reached through
[email](https://groups.google.com/forum/#!forum/kubernetes-sig-storage),
[Slack](https://slack.k8s.io/) (channel [`#sig-storage`](https://kubernetes.slack.com/messages/sig-storage)) and in the
[regular SIG
meeting](https://github.com/kubernetes/community/tree/master/sig-storage#meeting).
A description of your workload would be very useful to validate design
decisions, set up performance tests and eventually promote these
features to GA.
## Acknowledgements
Thanks a lot to the members of the community who have contributed to these
features or given feedback including members of SIG Scheduling, SIG Auth,
and of course SIG Storage!

View File

@ -0,0 +1,80 @@
---
layout: blog
title: 'Three Tenancy Models For Kubernetes'
date: 2021-04-15
slug: three-tenancy-models-for-kubernetes
---
**Authors:** Ryan Bezdicek (Medtronic), Jim Bugwadia (Nirmata), Tasha Drew (VMware), Fei Guo (Alibaba), Adrian Ludwin (Google)
Kubernetes clusters are typically used by several teams in an organization. In other cases, Kubernetes may be used to deliver applications to end users requiring segmentation and isolation of resources across users from different organizations. Secure sharing of Kubernetes control plane and worker node resources allows maximizing productivity and saving costs in both cases.
The Kubernetes Multi-Tenancy Working Group is chartered with defining tenancy models for Kubernetes and making it easier to operationalize tenancy related use cases. This blog post, from the working group members, describes three common tenancy models and introduces related working group projects.
We will also be presenting on this content and discussing different use cases at our Kubecon EU 2021 panel session, [Multi-tenancy vs. Multi-cluster: When Should you Use What?](https://sched.co/iE66).
## Namespaces as a Service
With the *namespaces-as-a-service* model, tenants share a cluster and tenant workloads are restricted to a set of Namespaces assigned to the tenant. The cluster control plane resources like the API server and scheduler, and worker node resources like CPU, memory, etc. are available for use across all tenants.
To isolate tenant workloads, each namespace must also contain the following (a minimal sketch of such manifests appears after the list):
* **[role bindings](/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding):** for controlling access to the namespace
* **[network policies](/docs/concepts/services-networking/network-policies/):** to prevent network traffic across tenants
* **[resource quotas](/docs/concepts/policy/resource-quotas/):** to limit usage and ensure fairness across tenants
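As a hedged illustration, the per-namespace objects listed above could look roughly like this for a tenant namespace called `tenant-a` (all names, quotas, and group bindings are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a
spec:
  podSelector: {}                        # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]     # no rules listed, so all traffic is denied by default
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "50"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admins
  namespace: tenant-a
subjects:
- kind: Group
  name: tenant-a-team                    # placeholder group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                            # built-in aggregated role, scoped to this namespace by the binding
  apiGroup: rbac.authorization.k8s.io
```

The deny-all NetworkPolicy establishes a baseline; more permissive policies can then be added per application.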
With this model, tenants share cluster-wide resources like ClusterRoles and CustomResourceDefinitions (CRDs) and hence cannot create or update these cluster-wide resources.
The [Hierarchical Namespace Controller (HNC)](/blog/2020/08/14/introducing-hierarchical-namespaces/) project makes it easier to manage namespace based tenancy by allowing users to create additional namespaces under a namespace, and propagating resources within the namespace hierarchy. This allows self-service namespaces for tenants, without requiring cluster-wide permissions.
The [Multi-Tenancy Benchmarks (MTB)](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/benchmarks) project provides benchmarks and a command-line tool that performs several configuration and runtime checks to report if tenant namespaces are properly isolated and the necessary security controls are implemented.
## Clusters as a Service
With the *clusters-as-a-service* usage model, each tenant gets their own cluster. This model allows tenants to have different versions of cluster-wide resources such as CRDs, and provides full isolation of the Kubernetes control plane.
The tenant clusters may be provisioned using projects like [Cluster API (CAPI)](https://cluster-api.sigs.k8s.io/) where a management cluster is used to provision multiple workload clusters. A workload cluster is assigned to a tenant and tenants have full control over cluster resources. Note that in most enterprises a central platform team may be responsible for managing required add-on services such as security and monitoring services, and for providing cluster lifecycle management services such as patching and upgrades. A tenant administrator may be restricted from modifying the centrally managed services and other critical cluster information.
## Control planes as a Service
In a variation of the *clusters-as-a-service* model, the tenant cluster may be a **virtual cluster** where each tenant gets their own dedicated Kubernetes control plane but shares worker node resources. As with other forms of virtualization, users of a virtual cluster see no significant differences between a virtual cluster and other Kubernetes clusters. This is sometimes referred to as `Control Planes as a Service` (CPaaS).
A virtual cluster of this type shares worker node resources and workload-state-independent control plane components, like the scheduler. Other workload-aware control plane components, like the API server, are created on a per-tenant basis to allow overlaps, and additional components are used to synchronize and manage state across the per-tenant control plane and the underlying shared cluster resources. With this model users can manage their own cluster-wide resources.
The [Virtual Cluster](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/virtualcluster) project implements this model, where a `supercluster` is shared by multiple `virtual clusters`. The [Cluster API Nested](https://github.com/kubernetes-sigs/cluster-api-provider-nested) project is extending this work to conform to the CAPI model, allowing use of familiar API resources to provision and manage virtual clusters.
## Security considerations
Cloud native security involves different system layers and lifecycle phases as described in the [Cloud Native Security Whitepaper](/blog/2020/11/18/cloud-native-security-for-your-clusters) from CNCF SIG Security. Without proper security measures implemented across all layers and phases, Kubernetes tenant isolation can be compromised and a security breach with one tenant can threaten other tenants.
It is important for any user new to Kubernetes to realize that the default installation of a new upstream Kubernetes cluster is not secure, and you will need to invest in hardening it in order to avoid security issues.
At a minimum, the following security measures are required:
* image scanning: container image vulnerabilities can be exploited to execute commands and access additional resources.
* [RBAC](/docs/reference/access-authn-authz/rbac/): for *namespaces-as-a-service* user roles and permissions must be properly configured at a per-namespace level; for other models tenants may need to be restricted from accessing centrally managed add-on services and other cluster-wide resources.
* [network policies](/docs/concepts/services-networking/network-policies/): for *namespaces-as-a-service* default network policies that deny all ingress and egress traffic are recommended to prevent cross-tenant network traffic and may also be used as a best practice for other tenancy models.
* [Kubernetes Pod Security Standards](/docs/concepts/security/pod-security-standards/): to enforce Pod hardening best practices the `Restricted` policy is recommended as the default for tenant workloads with exclusions configured only as needed.
* [CIS Benchmarks for Kubernetes](https://www.cisecurity.org/benchmark/kubernetes/): the CIS Benchmarks for Kubernetes guidelines should be used to properly configure Kubernetes control-plane and worker node components.
Additional recommendations include using:
* policy engines: for configuration security best practices, such as only allowing trusted registries.
* runtime scanners: to detect and report runtime security events.
* VM-based container sandboxing: for stronger data plane isolation.
While proper security is required independently of tenancy models, not having essential security controls like [pod security](/docs/concepts/security/pod-security-standards/) in a shared cluster provides attackers with means to compromise tenancy models and possibly access sensitive information across tenants, increasing the overall risk profile.
## Summary
A 2020 CNCF survey showed that production Kubernetes usage has increased by over 300% since 2016. As an increasing number of Kubernetes workloads move to production, organizations are looking for ways to share Kubernetes resources across teams for agility and cost savings.
The **namespaces as a service** tenancy model allows sharing clusters and hence enables resource efficiencies. However, it requires proper security configurations and has limitations as all tenants share the same cluster-wide resources.
The **clusters as a service** tenancy model addresses these limitations, but with higher management and resource overhead.
The **control planes as a service** model provides a way to share resources of a single Kubernetes cluster and also lets tenants manage their own cluster-wide resources. Sharing worker node resources increases resource efficiencies, but also exposes cross-tenant security and isolation concerns that exist for shared clusters.
In many cases, organizations will use multiple tenancy models to address different use cases, as different product and development teams will have varying needs. Following security and management best practices, such as applying [Pod Security Standards](/docs/concepts/security/pod-security-standards/) and not using the `default` namespace, makes it easier to switch from one model to another.
The [Kubernetes Multi-Tenancy Working Group](https://github.com/kubernetes-sigs/multi-tenancy) has created several projects like [Hierarchical Namespaces Controller](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/hnc), [Virtual Cluster](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/incubator/virtualcluster) / [CAPI Nested](https://github.com/kubernetes-sigs/cluster-api-provider-nested), and [Multi-Tenancy Benchmarks](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/benchmarks) to make it easier to provision and manage multi-tenancy models.
If you are interested in multi-tenancy topics, or would like to share your use cases, please join us in an upcoming [community meeting](https://github.com/kubernetes/community/blob/master/wg-multitenancy/README.md) or reach out on the *wg-multitenancy channel* on the [Kubernetes slack](https://slack.k8s.io/).

View File

@ -0,0 +1,95 @@
---
layout: blog
title: "Volume Health Monitoring Alpha Update"
date: 2021-04-16
slug: volume-health-monitoring-alpha-update
---
**Author:** Xing Yang (VMware)
The CSI Volume Health Monitoring feature, originally introduced in 1.19, has undergone a large update for the 1.21 release.
## Why add Volume Health Monitoring to Kubernetes?
Without Volume Health Monitoring, Kubernetes has no knowledge of the state of the underlying volumes of a storage system after a PVC is provisioned and used by a Pod. Many things could happen to the underlying storage system after a volume is provisioned in Kubernetes. For example, the volume could be deleted by accident outside of Kubernetes, the disk that the volume resides on could fail, it could be out of capacity, the disk may be degraded, which affects its performance, and so on. Even when the volume is mounted on a pod and used by an application, there could be problems later on such as read/write I/O errors, file system corruption, accidental unmounting of the volume outside of Kubernetes, etc. It is very hard to detect the root cause and debug such issues when something like this happens.
Volume health monitoring can be very beneficial to Kubernetes users. It can communicate with the CSI driver to retrieve errors detected by the underlying storage system. Events are then reported on PVCs so that the user can take action. For example, if the volume is out of capacity, they could request a volume expansion to get more space.
## What is Volume Health Monitoring?
CSI Volume Health Monitoring allows CSI Drivers to detect abnormal volume conditions from the underlying storage systems and report them as events on PVCs or Pods.
The Kubernetes components that monitor the volumes and report events with volume health information include the following:
* Kubelet, in addition to gathering the existing volume stats, will watch the volume health of the PVCs on that node. If a PVC has an abnormal health condition, an event will be reported on the pod object using the PVC. If multiple pods are using the same PVC, events will be reported on all pods using that PVC.
* An [External Volume Health Monitor Controller](https://github.com/kubernetes-csi/external-health-monitor) watches volume health of the PVCs and reports events on the PVCs.
Note that the node side volume health monitoring logic was an external agent when this feature was first introduced in the Kubernetes 1.19 release. In Kubernetes 1.21, the node side volume health monitoring logic was moved from the external agent into the Kubelet, to avoid making duplicate CSI function calls. With this change in 1.21, a new alpha [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `CSIVolumeHealth` was introduced for the volume health monitoring logic in Kubelet.
Currently the Volume Health Monitoring feature is informational only as it only reports abnormal volume health events on PVCs or Pods. Users will need to check these events and manually fix the problems. This feature serves as a stepping stone towards programmatic detection and resolution of volume health issues by Kubernetes in the future.
## How do I use Volume Health on my Kubernetes Cluster?
To use the Volume Health feature, first make sure the CSI driver you are using supports this feature. Refer to this [CSI drivers doc](https://kubernetes-csi.github.io/docs/drivers.html) to find out which CSI drivers support this feature.
To enable Volume Health Monitoring from the node side, the alpha feature gate `CSIVolumeHealth` needs to be enabled.
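One way to turn the gate on for the node side is through the kubelet configuration file. The sketch below assumes you manage kubelet configuration this way; how the file is distributed to nodes depends on your installer:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIVolumeHealth: true        # alpha gate for node-side volume health monitoring in 1.21
```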
If a CSI driver supports the Volume Health Monitoring feature from the controller side, events regarding abnormal volume conditions will be recorded on PVCs.
If a CSI driver supports the Volume Health Monitoring feature from the controller side, users can also get events regarding node failures if the `enable-node-watcher` flag is set to true when deploying the External Health Monitor Controller. When a node failure event is detected, an event will be reported on the PVC to indicate that pods using this PVC are on a failed node.
If a CSI driver supports the Volume Health Monitoring feature from the node side, events regarding abnormal volume conditions will be recorded on pods using the PVCs.
## As a storage vendor, how do I add support for volume health to my CSI driver?
Volume Health Monitoring includes two parts:
* An External Volume Health Monitoring Controller monitors volume health from the controller side.
* Kubelet monitors volume health from the node side.
For details, see the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) and the [Kubernetes-CSI Driver Developer Guide](https://kubernetes-csi.github.io/docs/volume-health-monitor.html).
There is a sample implementation for volume health in [CSI host path driver](https://github.com/kubernetes-csi/csi-driver-host-path).
### Controller Side Volume Health Monitoring
To learn how to deploy the External Volume Health Monitoring controller, see [CSI external-health-monitor-controller](https://kubernetes-csi.github.io/docs/external-health-monitor-controller.html) in the CSI documentation.
The External Health Monitor Controller calls either `ListVolumes` or `ControllerGetVolume` CSI RPC and reports VolumeConditionAbnormal events with messages on PVCs if abnormal volume conditions are detected. Only CSI drivers with `LIST_VOLUMES` and `VOLUME_CONDITION` controller capability or `GET_VOLUME` and `VOLUME_CONDITION` controller capability support Volume Health Monitoring in the external controller.
To implement the volume health feature from the controller side, a CSI driver **must** add support for the new controller capabilities.
If a CSI driver supports `LIST_VOLUMES` and `VOLUME_CONDITION` controller capabilities, it **must** implement controller RPC `ListVolumes` and report the volume condition in the response.
If a CSI driver supports `GET_VOLUME` and `VOLUME_CONDITION` controller capability, it **must** implement controller RPC `ControllerGetVolume` and report the volume condition in the response.
If a CSI driver supports `LIST_VOLUMES`, `GET_VOLUME`, and `VOLUME_CONDITION` controller capabilities, only `ListVolumes` CSI RPC will be invoked by the External Health Monitor Controller.
### Node Side Volume Health Monitoring
Kubelet calls `NodeGetVolumeStats` CSI RPC and reports VolumeConditionAbnormal events with messages on Pods if abnormal volume conditions are detected. Only CSI drivers with `VOLUME_CONDITION` node capability support Volume Health Monitoring in Kubelet.
To implement the volume health feature from the node side, a CSI driver **must** add support for the new node capabilities.
If a CSI driver supports `VOLUME_CONDITION` node capability, it **must** report the volume condition in node RPC `NodeGetVolumeStats`.
## What's next?
Depending on feedback and adoption, the Kubernetes team plans to push the CSI volume health implementation to beta in either 1.22 or 1.23.
We are also exploring how to use volume health information for programmatic detection and automatic reconcile in Kubernetes.
## How can I learn more?
To learn the design details for Volume Health Monitoring, read the [Volume Health Monitor](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor) enhancement proposal.
The Volume Health Monitor controller source code is at [https://github.com/kubernetes-csi/external-health-monitor](https://github.com/kubernetes-csi/external-health-monitor).
There are also more details about volume health checks in the [Container Storage Interface Documentation](https://kubernetes-csi.github.io/docs/).
## How do I get involved?
The [Kubernetes Slack channel #csi](https://kubernetes.slack.com/messages/csi) and any of the [standard SIG Storage communication channels](https://github.com/kubernetes/community/blob/master/sig-storage/README.md#contact) are great mediums to reach out to the SIG Storage and the CSI team.
We offer a huge thank you to the contributors who helped release this feature in 1.21. We want to thank Yuquan Ren ([NickrenREN](https://github.com/nickrenren)) who implemented the initial volume health monitor controller and agent in the external health monitor repo, thank Ran Xu ([fengzixu](https://github.com/fengzixu)) who moved the volume health monitoring logic from the external agent to Kubelet in 1.21, and we offer special thanks to the following people for their insightful reviews: David Ashpole ([dashpole](https://github.com/dashpole)), Michelle Au ([msau42](https://github.com/msau42)), David Eads ([deads2k](https://github.com/deads2k)), Elana Hashman ([ehashman](https://github.com/ehashman)), Seth Jennings ([sjenning](https://github.com/sjenning)), and Jiawei Wang ([Jiawei0227](https://github.com/Jiawei0227)).
If you are interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the [Kubernetes Storage Special Interest Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). We're rapidly growing and always welcome new contributors.

View File

@ -0,0 +1,95 @@
---
title: "Introducing Indexed Jobs"
date: 2021-04-19
slug: introducing-indexed-jobs
---
**Author:** Aldo Culquicondor (Google)
Once you have containerized a non-parallel [Job](/docs/concepts/workloads/controllers/job/),
it is quite easy to get it up and running on Kubernetes without modifications to
the binary. In most cases, when running parallel distributed Jobs, you had
to set up a separate system to partition the work among the workers. For
example, you could set up a task queue to [assign one work item to each
Pod](/docs/tasks/job/coarse-parallel-processing-work-queue/) or [multiple items
to each Pod until the queue is emptied](/docs/tasks/job/fine-parallel-processing-work-queue/).
The Kubernetes 1.21 release introduces a new field to control Job _completion mode_,
a configuration option that allows you to control how Pod completions affect the
overall progress of a Job, with two possible options (for now):
- `NonIndexed` (default): the Job is considered complete when there has been
a number of successfully completed Pods equal to the specified number in
`.spec.completions`. In other words, each Pod completion is interchangeable with
any other. Any Job you might have created before the introduction of
completion modes is implicitly NonIndexed.
- `Indexed`: the Job is considered complete when there is one successfully
completed Pod associated with each index from 0 to `.spec.completions-1`. The
index is exposed to each Pod in the `batch.kubernetes.io/job-completion-index`
annotation and the `JOB_COMPLETION_INDEX` environment variable.
You can start using Jobs with Indexed completion mode, or Indexed Jobs, for
short, to easily start parallel Jobs. Then, each worker Pod can have a statically
assigned partition of the data based on the index. This saves you from having to
set up a queuing system or even having to modify your binary!
## Creating an Indexed Job
To create an Indexed Job, you just have to add `completionMode: Indexed` to the
Job spec and make use of the `JOB_COMPLETION_INDEX` environment variable.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: 'sample-job'
spec:
completions: 3
parallelism: 3
completionMode: Indexed
template:
spec:
restartPolicy: Never
containers:
- command:
- 'bash'
- '-c'
- 'echo "My partition: ${JOB_COMPLETION_INDEX}"'
image: 'docker.io/library/bash'
name: 'sample-load'
```
Note that completion mode is an alpha feature in the 1.21 release. To be able to
use it in your cluster, make sure to enable the `IndexedJob` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/) on the
[API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and
the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
When you run the example, you will see that each of the three created Pods gets a
different completion index. For the user's convenience, the control plane sets the
`JOB_COMPLETION_INDEX` environment variable, but you can choose to [set your
own](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/)
or [expose the index as a file](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).
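For example, here is a hedged sketch of exposing the index as a file through the downward API; only the relevant parts of the Job's pod template are shown, and the volume name and mount path are placeholders:

```yaml
template:
  spec:
    restartPolicy: Never
    containers:
    - name: sample-load
      image: 'docker.io/library/bash'
      command: ['bash', '-c', 'echo "My partition: $(cat /input/index)"']
      volumeMounts:
      - name: job-index
        mountPath: /input
    volumes:
    - name: job-index
      downwardAPI:
        items:
        - path: index           # the index becomes readable at /input/index
          fieldRef:
            fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
```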
See [Indexed Job for parallel processing with static work
assignment](/docs/tasks/job/indexed-parallel-processing-static/) for a
step-by-step guide, and a few more examples.
## Future plans
SIG Apps envisions that there might be more completion modes that enable more
use cases for the Job API. We welcome you to open issues in
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) with your
suggestions.
In particular, we are considering an `IndexedAndUnique` mode where the indexes
are not just available as an annotation, but are also part of the Pod names,
similar to {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}.
This should facilitate inter-Pod communication for tightly coupled Pods.
You can join the discussion in the [open issue](https://github.com/kubernetes/kubernetes/issues/99497).
## Wrap-up
Indexed Jobs allow you to statically partition work among the workers of your
parallel Jobs. SIG Apps hopes that this feature facilitates the migration of
more batch workloads to Kubernetes.

View File

@ -0,0 +1,479 @@
---
layout: blog
title: "Defining Network Policy Conformance for Container Network Interface (CNI) providers"
date: 2021-04-20
slug: defining-networkpolicy-conformance-cni-providers
---
**Authors:** Matt Fenwick (Synopsys), Jay Vyas (VMWare), Ricardo Katz, Amim Knabben (Loadsmart), Douglas Schilling Landgraf (Red Hat), Christopher Tomkins (Tigera)
Special thanks to Tim Hockin and Bowie Du (Google), Dan Winship and Antonio Ojea (Red Hat),
Casey Davenport and Shaun Crampton (Tigera), and Abhishek Raut and Antonin Bas (VMware) for
being supportive of this work, and working with us to resolve issues in different Container Network Interfaces (CNIs) over time.
A brief conversation around "node local" Network Policies in April of 2020 inspired the creation of a NetworkPolicy subproject from SIG Network. It became clear that as a community,
we needed a rock-solid story around how to do pod network security on Kubernetes, and this story needed a community around it, so as to grow the cultural adoption of enterprise security patterns in K8s.
In this post we'll discuss:
- Why we created a subproject for [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
- How we changed the Kubernetes e2e framework to `visualize` NetworkPolicy implementation of your CNI provider
- The initial results of our comprehensive NetworkPolicy conformance validator, _Cyclonus_, built around these principles
- Improvements subproject contributors have made to the NetworkPolicy user experience
## Why we created a subproject for NetworkPolicies
In April of 2020 it was becoming clear that many CNIs were emerging, and many vendors
implement these CNIs in subtly different ways. Users were beginning to express a little bit
of confusion around how to implement policies for different scenarios, and asking for new features.
It was clear that we needed to begin unifying the way we think about Network Policies
in Kubernetes, to avoid API fragmentation and unnecessary complexity.
For example:
- In order to be flexible to the user's environment, Calico as a CNI provider can be run using IPIP or VXLAN mode, or without encapsulation overhead. CNIs such as Antrea
and Cilium offer similar configuration options as well.
- Some CNI plugins offer iptables for NetworkPolicies amongst other options, whereas other CNIs use a completely
different technology stack (for example, the Antrea project uses Open vSwitch rules).
- Some CNI plugins only implement a subset of the Kubernetes NetworkPolicy API, and some a superset. For example, certain plugins don't support the
ability to target a named port; others don't work with certain IP address types, and there are diverging semantics for similar policy types.
- Some CNI plugins combine with OTHER CNI plugins in order to implement NetworkPolicies (canal), some CNIs might mix implementations (multus), and some clouds do routing separately from NetworkPolicy implementation.
Although this complexity is to some extent necessary to support different environments, end-users find that they need to follow a multistep process to implement Network Policies to secure their applications:
- Confirm that their network plugin supports NetworkPolicies (some don't, such as Flannel)
- Confirm that their cluster's network plugin supports the specific NetworkPolicy features that they are interested in (again, the named port or port range examples come to mind here)
- Confirm that their application's Network Policy definitions are doing the right thing
- Find out the nuances of a vendor's implementation of policy, and check whether or not that implementation has a CNI neutral implementation (which is sometimes adequate for users)
The NetworkPolicy project in upstream Kubernetes aims at providing a community where
people can learn about, and contribute to, the Kubernetes NetworkPolicy API and the surrounding ecosystem.
## The First step: A validation framework for NetworkPolicies that was intuitive to use and understand
The Kubernetes end to end suite has always had NetworkPolicy tests, but these weren't
run in CI, and the way they were implemented didn't provide holistic, easily consumable
information about how a policy was working in a cluster.
This is because the original tests didn't provide any kind of visual summary of connectivity
across a cluster. We thus initially set out to make it easy to confirm CNI support for NetworkPolicies by
making the end to end tests (which are often used by administrators or users to diagnose cluster conformance) easy to interpret.
To solve the problem of confirming that CNIs support the basic features most users care about
for a policy, we built a new NetworkPolicy validation tool into the Kubernetes e2e
framework which allows for visual inspection of policies and their effect on a standard set of pods in a cluster.
For example, take the following test output. We found a bug in
[OVN Kubernetes](https://github.com/ovn-org/ovn-kubernetes/issues/1782). This bug has now been resolved. With this tool the bug was really
easy to characterize, wherein certain policies caused a state-modification that,
later on, caused traffic to incorrectly be blocked (even after all Network Policies were deleted from the cluster).
This is the network policy for the test in question:
```yaml
metadata:
creationTimestamp: null
name: allow-ingress-port-80
spec:
ingress:
- ports:
- port: serve-80-tcp
podSelector: {}
```
These are the expected connectivity results. The test setup is 9 pods (3 namespaces: x, y, and z;
and 3 pods in each namespace: a, b, and c); each pod runs a server on the same port and protocol
that can be reached through HTTP calls in the absence of network policies. Connectivity is verified
by using the [agnhost](https://github.com/kubernetes/kubernetes/tree/master/test/images/agnhost) network utility to issue HTTP calls on a port and protocol that other pods are
expected to be serving. A test scenario first
runs a connectivity check to ensure that each pod can reach each other pod, for 81 (= 9 x 9) data
points. This is the "control". Then perturbations are applied, depending on the test scenario:
policies are created, updated, and deleted; labels are added and removed from pods and namespaces,
and so on. After each change, the connectivity matrix is recollected and compared to the expected
connectivity.
These results give a visual indication of connectivity in a simple matrix. Going down the leftmost column is the "source"
pod, or the pod issuing the request; going across the topmost row is the "destination" pod, or the pod
receiving the request. A `.` means that the connection was allowed; an `X` means the connection was
blocked. For example:
```
Nov 4 16:58:43.449: INFO: expected:
- x/a x/b x/c y/a y/b y/c z/a z/b z/c
x/a . . . . . . . . .
x/b . . . . . . . . .
x/c . . . . . . . . .
y/a . . . . . . . . .
y/b . . . . . . . . .
y/c . . . . . . . . .
z/a . . . . . . . . .
z/b . . . . . . . . .
z/c . . . . . . . . .
```
Below are the observed connectivity results in the case of the OVN Kubernetes bug. Notice how the top three rows indicate that
all requests from namespace x regardless of pod and destination were blocked. Since these
experimental results do not match the expected results, a failure will be reported. Note
how the specific pattern of failure provides clear insight into the nature of the problem --
since all requests from a specific namespace fail, we have a clear clue to start our
investigation.
```
Nov 4 16:58:43.449: INFO: observed:
- x/a x/b x/c y/a y/b y/c z/a z/b z/c
x/a X X X X X X X X X
x/b X X X X X X X X X
x/c X X X X X X X X X
y/a . . . . . . . . .
y/b . . . . . . . . .
y/c . . . . . . . . .
z/a . . . . . . . . .
z/b . . . . . . . . .
z/c . . . . . . . . .
```
This was one of our earliest wins in the Network Policy group, as we were able to
identify and work with the OVN Kubernetes group to fix a bug in egress policy processing.
However, even though this tool has made it easy to validate roughly 30 common scenarios,
it doesn't validate *all* Network Policy scenarios - because there are an enormous number of possible
permutations that one might create (technically, we might say this number is
infinite given that there's an infinite number of possible namespace/pod/port/protocol variations one can create).
Once these tests were in play, we worked with the Upstream SIG Network and SIG Testing communities
(thanks to Antonio Ojea and Ben Elder) to put a testgrid Network Policy job in place. This job
continuously runs the entire suite of Network Policy tests against
[GCE with Calico as a Network Policy provider](https://testgrid.k8s.io/sig-network-gce#presubmit-network-policies,%20google-gce).
Part of our role as a subproject is to help make sure that, when these tests break, we can help triage them effectively.
## Cyclonus: The next step towards Network Policy conformance {#cyclonus}
Around the time that we were finishing the validation work, it became clear from the community that,
in general, we needed to solve the overall problem of testing ALL possible Network Policy implementations.
For example, a KEP was recently written which introduced the concept of micro versioning to
Network Policies to accommodate [describing this at the API level](https://github.com/kubernetes/enhancements/pull/2137/files), by Dan Winship.
In response to this increasingly obvious need to comprehensively evaluate Network
Policy implementations from all vendors, Matt Fenwick decided to evolve our approach to Network Policy validation again by creating Cyclonus.
Cyclonus is a comprehensive Network Policy fuzzing tool which verifies a CNI provider
against hundreds of different Network Policy scenarios, by defining similar truth table/policy
combinations as demonstrated in the end to end tests, while also providing a hierarchical
representation of policy "categories". We've found some interesting nuances and issues
in almost every CNI we've tested so far, and have even contributed some fixes back.
To perform a Cyclonus validation run, you create a Job manifest similar to:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: cyclonus
spec:
template:
spec:
restartPolicy: Never
containers:
- command:
- ./cyclonus
- generate
- --perturbation-wait-seconds=15
- --server-protocol=tcp,udp
name: cyclonus
imagePullPolicy: IfNotPresent
image: mfenwick100/cyclonus:latest
serviceAccount: cyclonus
```
Cyclonus outputs a report of all the test cases it will run:
```
test cases to run by tag:
- target: 6
- peer-ipblock: 4
- udp: 16
- delete-pod: 1
- conflict: 16
- multi-port/protocol: 14
- ingress: 51
- all-pods: 14
- egress: 51
- all-namespaces: 10
- sctp: 10
- port: 56
- miscellaneous: 22
- direction: 100
- multi-peer: 0
- any-port-protocol: 2
- set-namespace-labels: 1
- upstream-e2e: 0
- allow-all: 6
- namespaces-by-label: 6
- deny-all: 10
- pathological: 6
- action: 6
- rule: 30
- policy-namespace: 4
- example: 0
- tcp: 16
- target-namespace: 3
- named-port: 24
- update-policy: 1
- any-peer: 2
- target-pod-selector: 3
- IP-block-with-except: 2
- pods-by-label: 6
- numbered-port: 28
- protocol: 42
- peer-pods: 20
- create-policy: 2
- policy-stack: 0
- any-port: 14
- delete-namespace: 1
- delete-policy: 1
- create-pod: 1
- IP-block-no-except: 2
- create-namespace: 1
- set-pod-labels: 1
testing 112 cases
```
Note that Cyclonus tags its tests based on the type of policy being created, because
the policies themselves are auto-generated, and thus have no meaningful names by which to recognize them.
For each test, Cyclonus outputs a truth table, which is again similar to that of the
E2E tests, along with the policy being validated:
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: base
namespace: x
spec:
egress:
- ports:
- port: 81
to:
- namespaceSelector:
matchExpressions:
- key: ns
operator: In
values:
- "y"
- z
podSelector:
matchExpressions:
- key: pod
operator: In
values:
- a
- b
- ports:
- port: 53
protocol: UDP
ingress:
- from:
- namespaceSelector:
matchExpressions:
- key: ns
operator: In
values:
- x
- "y"
podSelector:
matchExpressions:
- key: pod
operator: In
values:
- b
- c
ports:
- port: 80
protocol: TCP
podSelector:
matchLabels:
pod: a
policyTypes:
- Ingress
- Egress
0 wrong, 0 ignored, 81 correct
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| TCP/80 | X/A | X/B | X/C | Y/A | Y/B | Y/C | Z/A | Z/B | Z/C |
| TCP/81 | | | | | | | | | |
| UDP/80 | | | | | | | | | |
| UDP/81 | | | | | | | | | |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| x/a | X | X | X | X | X | X | X | X | X |
| | X | X | X | . | . | X | . | . | X |
| | X | X | X | X | X | X | X | X | X |
| | X | X | X | X | X | X | X | X | X |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| x/b | . | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| x/c | . | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| y/a | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| y/b | . | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| y/c | . | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| z/a | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| z/b | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| z/c | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
| | X | . | . | . | . | . | . | . | . |
+--------+-----+-----+-----+-----+-----+-----+-----+-----+-----+
```
Both Cyclonus and the e2e tests use the same strategy to validate a Network Policy - probing pods over TCP or UDP, with
SCTP support available as well for CNIs that support it (such as Calico).
As examples of how we use Cyclonus to help make CNI implementations better from a Network Policy perspective, you can see the following issues:
- [Antrea: NetworkPolicy: unable to allow ingress by CIDR](https://github.com/vmware-tanzu/antrea/issues/1764)
- [Calico: default missing protocol to TCP; don't let single port overwrite all ports](https://github.com/projectcalico/libcalico-go/pull/1373)
- [Cilium: Egress Network Policy allows traffic that should be denied](https://github.com/cilium/cilium/issues/14678)
The good news is that Antrea and Calico have already merged fixes for all the issues found and other CNI providers are working on it,
with the support of SIG Network and the Network Policy subproject.
Are you interested in verifying NetworkPolicy functionality on your cluster?
(if you care about security or offer multi-tenant SaaS, you should be)
If so, you can run the upstream end to end tests, or Cyclonus, or both.
- If you're just getting started with NetworkPolicies and want to simply
verify the "common" NetworkPolicy cases that most CNIs should be
implementing correctly, in a way that is quick to diagnose, then you're
better off running the e2e tests only.
- If you are deeply curious about your CNI provider's NetworkPolicy
implementation, and want to verify it: use Cyclonus.
- If you want to test *hundreds* of policies, and evaluate your CNI plugin
for comprehensive functionality, for deep discovery of potential security
holes: use Cyclonus, and also consider running end-to-end cluster tests.
- If you're thinking of getting involved with the upstream NetworkPolicy efforts:
use Cyclonus, and read at least an outline of which e2e tests are relevant.
## Where to start with NetworkPolicy testing?
- Cyclonus is easy to run on your cluster, check out the [instructions on github](https://github.com/mattfenwick/cyclonus#run-as-a-kubernetes-job),
and determine whether *your* specific CNI configuration is fully conformant to the hundreds of different
Kubernetes Network Policy API constructs.
- Alternatively, you can use a tool like [sonobuoy](https://github.com/vmware-tanzu/sonobuoy)
to run the existing E2E tests in Kubernetes, with the `--ginkgo.focus=NetworkPolicy` flag.
Make sure that you use the K8s conformance image for K8s 1.21 or above (for example, by using the `--kube-conformance-image-version v1.21.0` flag),
as older images will not have the *new* Network Policy tests in them.
## Improvements to the NetworkPolicy API and user experience
In addition to cleaning up the validation story for CNI plugins that implement NetworkPolicies,
subproject contributors have also spent some time improving the Kubernetes NetworkPolicy API for a few commonly requested features.
After months of deliberation, we eventually settled on a few core areas for improvement:
- Port Range policies: We now allow you to specify a *range* of ports for a policy.
This allows users interested in scenarios like FTP or virtualization to enable advanced policies.
The port range option for network policies will be available to use in Kubernetes 1.21.
Read more in [targeting a range of ports](/docs/concepts/services-networking/network-policies/#targeting-a-range-of-ports); a sketch of such a policy appears after the namespace labelling example below.
- Namespace as name policies: Allowing users in Kubernetes >= 1.21 to target namespaces using names,
when building Network Policy objects. This was done in collaboration with Jordan Liggitt and Tim Hockin on the API Machinery side.
This change allowed us to improve the Network Policy user experience without actually
changing the API! For more details, you can read
[Automatic labelling](/docs/concepts/overview/working-with-objects/namespaces/#automatic-labelling) in the page about Namespaces.
The TL;DR is that for Kubernetes 1.21 and later, **all namespaces** have the following label added by default:
```
kubernetes.io/metadata.name: <name-of-namespace>
```
This means you can write a namespace policy against this namespace, even if you can't edit its labels.
For example, this policy will 'just work', without needing to run a command such as `kubectl edit namespace`.
In fact, it will even work if you can't edit or view this namespace's data at all, because of the magic of API server defaulting.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: default
spec:
podSelector:
matchLabels:
role: db
policyTypes:
- Ingress
# Allow inbound traffic to Pods labelled role=db, in the namespace 'default'
# provided that the source is a Pod in the namespace 'my-namespace'
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: my-namespace
```
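And here is the port-range sketch promised above; the selector and port numbers are placeholders, and in Kubernetes 1.21 the `endPort` field requires the `NetworkPolicyEndPort` feature gate to be enabled:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-passive-ftp-range        # placeholder name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: ftp-server                 # placeholder selector
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 30000
      endPort: 32767                   # allows the whole contiguous range 30000-32767
```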
## Results
In our tests, we found that:
- Antrea and Calico are at a point where they support all of cyclonus's scenarios, modulo a few very minor tweaks which we've made.
- Cilium also conformed to the majority of the policies, outside known features that aren't fully supported (for example, related to the way Cilium deals with pod CIDR policies).
If you are a CNI provider and interested in helping us to do a better job curating large tests of network policies, please reach out! We are continuing to curate the Network Policy conformance results from Cyclonus [here](https://raw.githubusercontent.com/K8sbykeshed/cyclonus-artifacts/), but
we are not capable of maintaining all of the subtleties in NetworkPolicy testing data on our own. For now, we use github actions and Kind to test in CI.
## The Future
We're also working on some improvements for the future of Network Policies, including:
- Fully qualified Domain policies: The Google Cloud team created a prototype (which
we are really excited about) of [FQDN policies](https://github.com/GoogleCloudPlatform/gke-fqdnnetworkpolicies-golang).
This tool uses the Network Policy API to enforce policies against L7 URLs, by finding
their IPs and blocking them proactively when requests are made.
- Cluster Administrative policies: We're working hard at enabling *administrative* or
*cluster scoped* Network Policies for the future. These are being presented iteratively to the NetworkPolicy subproject.
You can read about them here in [Cluster Scoped Network Policy](https://docs.google.com/presentation/d/1Jk86jtS3TcGAugVSM_I4Yds5ukXFJ4F1ZCvxN5v2BaY/).
The Network Policy subproject meets on Mondays at 4 PM EST. For details, check out the
[SIG Network community repo](https://github.com/kubernetes/community/tree/master/sig-network). We'd love
to hang out with you, hack on stuff, and help you adopt K8s Network Policies for your cluster wherever possible.
### A quick note on User Feedback
We've gotten a lot of ideas and feedback from users on Network Policies. A lot of people have interesting ideas about Network Policies,
but we've found that as a subproject, very few people were deeply interested in implementing these ideas to the full extent.
Almost every change to the NetworkPolicy API involves weeks or months of discussion to cover different cases and to ensure that no CVEs are introduced. Thus, the lack of long-term ownership
is the biggest impediment to improving the NetworkPolicy user experience over time.
- We've documented a lot of the history of the Network Policy dialogue [here](https://github.com/jayunit100/network-policy-subproject/blob/master/history.md).
- We've also taken a poll of users, for what they'd like to see in the Network Policy API [here](https://github.com/jayunit100/network-policy-subproject/blob/master/p0_user_stories.md).
We encourage anyone to provide us with feedback, but our most pressing issues right now
involve finding *long term owners to help us drive changes*.
This doesn't require a lot of technical knowledge, but rather a long-term commitment to helping us stay organized, do paperwork,
and iterate through the many stages of the K8s feature process. If you want to help us and get involved, please reach out on the SIG Network mailing list, or in the SIG Network room in the k8s.io Slack channel!
Anyone can put an oar in the water and help make NetworkPolicies better!

View File

@ -0,0 +1,100 @@
---
layout: blog
title: 'Annotating Kubernetes Services for Humans'
date: 2021-04-20
slug: annotating-k8s-for-humans
---
**Author:** Richard Li, Ambassador Labs
Have you ever been asked to troubleshoot a failing Kubernetes service and struggled to find basic information about the service such as the source repository and owner?
One of the problems as Kubernetes applications grow is the proliferation of services. As the number of services grows, developers start to specialize in working with specific services. When it comes to troubleshooting, however, developers need to be able to find the source, understand the service and dependencies, and chat with the owning team for any service.
## Human service discovery
Troubleshooting always begins with information gathering. While much attention has been paid to centralizing machine data (e.g., logs, metrics), much less attention has been given to the human aspect of service discovery. Who owns a particular service? What Slack channel does the team work on? Where is the source for the service? What issues are currently known and being tracked?
## Kubernetes annotations
Kubernetes annotations are designed to solve exactly this problem. Oft-overlooked, Kubernetes annotations are designed to add metadata to Kubernetes objects. The Kubernetes documentation says annotations can “attach arbitrary non-identifying metadata to objects.” This means that annotations should be used for attaching metadata that is external to Kubernetes (i.e., metadata that Kubernetes won't use to identify objects). As such, annotations can contain any type of data. This is in contrast to labels, which are designed for uses internal to Kubernetes. As such, label structure and values are [constrained](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) so they can be efficiently used by Kubernetes.
## Kubernetes annotations in action
Here is an example. Imagine you have a Kubernetes service for quoting, called the quote service. You can do the following:
```
kubectl annotate service quote a8r.io/owner="@sally"
```
In this example, we've just added an annotation called `a8r.io/owner` with the value of @sally. Now, we can use `kubectl describe` to get the information.
```
Name:              quote
Namespace:         default
Labels:            <none>
Annotations:       a8r.io/owner: @sally
Selector:          app=quote
Type:              ClusterIP
IP:                10.109.142.131
Port:              http  80/TCP
TargetPort:        8080/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
```
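If you want just the value of a single annotation, for example from a script, a JSONPath query is one way to get it. This is a sketch that assumes the `quote` Service and `a8r.io/owner` annotation shown above; note that the dots inside the annotation key have to be escaped:
```
kubectl get service quote -o jsonpath='{.metadata.annotations.a8r\.io/owner}'
```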
If you're practicing GitOps (and you should be!) you'll want to code these values directly into your Kubernetes manifest, e.g.,
```yaml
apiVersion: v1
kind: Service
metadata:
  name: quote
  annotations:
    a8r.io/owner: "@sally"
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: quote
```
## A Convention for Annotations
Adopting a common convention for annotations ensures consistency and understandability. Typically, you'll want to attach the annotation to the service object, as services are the high-level resource that maps most clearly to a team's responsibility. Namespacing your annotations is also very important. Here is one set of conventions, documented at [a8r.io](https://a8r.io), and reproduced below:
{{< table caption="Annotation convention for human-readable services">}}
| Annotation | Description |
| ------------------------------------------ | ------------------------------------------- |
| `a8r.io/description` | Unstructured text description of the service for humans. |
| `a8r.io/owner` | SSO username (GitHub), email address (linked to GitHub account), or unstructured owner description. |
| `a8r.io/chat` | Slack channel, or link to external chat system. |
| `a8r.io/bugs` | Link to external bug tracker. |
| `a8r.io/logs` | Link to external log viewer. |
| `a8r.io/documentation` | Link to external project documentation. |
| `a8r.io/repository` | Link to external VCS repository. |
| `a8r.io/support` | Link to external support center. |
| `a8r.io/runbook` | Link to external project runbook. |
| `a8r.io/incidents` | Link to external incident dashboard. |
| `a8r.io/uptime` | Link to external uptime dashboard. |
| `a8r.io/performance` | Link to external performance dashboard. |
| `a8r.io/dependencies`                      | Unstructured text describing the service dependencies for humans. |
{{< /table >}}
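Putting several of these conventions together, a more fully annotated Service manifest might look like the sketch below. The URLs, channel name, and descriptions are placeholders; substitute whatever your organization actually uses:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: quote
  annotations:
    a8r.io/description: "Returns a random quote for the landing page"
    a8r.io/owner: "@sally"
    a8r.io/chat: "#quote-service"
    a8r.io/repository: "https://github.com/example/quote-service"
    a8r.io/runbook: "https://wiki.example.com/runbooks/quote-service"
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: quote
```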
## Visualizing annotations: Service Catalogs
As the number of microservices and annotations proliferate, running `kubectl describe` can get tedious. Moreover, using `kubectl describe` requires every developer to have some direct access to the Kubernetes cluster. Over the past few years, service catalogs have gained greater visibility in the Kubernetes ecosystem. Popularized by tools such as [Shopify's ServicesDB](https://shopify.engineering/scaling-mobile-development-by-treating-apps-as-services) and [Spotify's System Z](https://dzone.com/articles/modeling-microservices-at-spotify-with-petter-mari), service catalogs are internally-facing developer portals that present critical information about microservices.
Note that these service catalogs should not be confused with the [Kubernetes Service Catalog project](https://svc-cat.io/). Built on the Open Service Broker API, the Kubernetes Service Catalog enables Kubernetes operators to plug in different services (e.g., databases) to their cluster.
## Annotate your services now and thank yourself later
Much like implementing observability within microservice systems, you often don't realize that you need human service discovery until it's too late. Don't wait until something is on fire in production to start wishing you had implemented better metrics and also documented how to get in touch with the part of your organization that looks after it.
There are enormous benefits to building an effective “version 0” service: a [_dancing skeleton_](https://containerjournal.com/topics/container-management/dancing-skeleton-apis-and-microservices/) application with a thin slice of complete functionality that can be deployed to production with a minimal yet effective continuous delivery pipeline.
Adding service annotations should be an essential part of your “version 0” for all of your services. Add them now, and you'll thank yourself later.

View File

@ -0,0 +1,80 @@
---
layout: blog
title: 'Graceful Node Shutdown Goes Beta'
date: 2021-04-21
slug: graceful-node-shutdown-beta
---
**Authors:** David Porter (Google), Mrunal Patel (Red Hat), and Tim Bannister (The Scale Factory)
Graceful node shutdown, beta in 1.21, enables kubelet to gracefully evict pods during a node shutdown.
Kubernetes is a distributed system and as such we need to be prepared for inevitable failures — nodes will fail, containers might crash or be restarted, and - ideally - your workloads will be able to withstand these catastrophic events.
One of the common classes of issues is workload failure on node shutdown or restart. The best practice prior to bringing your node down is to [safely drain and cordon your node](/docs/tasks/administer-cluster/safely-drain-node/). This will ensure that all pods running on this node can safely be evicted. An eviction will ensure your pods can follow the expected [pod termination lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination), meaning your containers receive a SIGTERM and/or your `preStopHooks` run.
Prior to Kubernetes 1.20 (when graceful node shutdown was introduced as an alpha feature), safe node draining was not easy: it required users to manually take action and drain the node beforehand. If someone or something shut down your node without draining it first, most likely your pods would not be safely evicted from your node and would shut down abruptly. Other services talking to those pods might see errors due to the pods exiting abruptly. Examples of this situation include a reboot triggered by security patches or preemption of short-lived cloud compute instances.
Kubernetes 1.21 brings graceful node shutdown to beta. Graceful node shutdown gives you more control over some of those unexpected shutdown situations. With graceful node shutdown, the kubelet is aware of underlying system shutdown events and can propagate these events to pods, ensuring containers can shut down as gracefully as possible. This gives the containers a chance to checkpoint their state or release back any resources they are holding.
Note that, for the best availability, even with graceful node shutdown, you should still design your deployments to be resilient to node failures.
## How does it work?
On Linux, your system can shut down in many different situations. For example:
* A user or script running `shutdown -h now` or `systemctl poweroff` or `systemctl reboot`.
* Physically pressing a power button on the machine.
* Stopping a VM instance on a cloud provider, e.g. `gcloud compute instances stop` on GCP.
* A Preemptible VM or Spot Instance that your cloud provider can terminate unexpectedly, but with a brief warning.
Many of these situations can be unexpected and there is no guarantee that a cluster administrator drained the node prior to these events. With the graceful node shutdown feature, kubelet uses a systemd mechanism called ["Inhibitor Locks"](https://www.freedesktop.org/wiki/Software/systemd/inhibit) to allow draining in most cases. Using Inhibitor Locks, kubelet instructs systemd to postpone system shutdown for a specified duration, giving a chance for the node to drain and evict pods on the system.
Kubelet makes use of this mechanism to ensure your pods will be terminated cleanly. When the kubelet starts, it acquires a systemd delay-type inhibitor lock. When the system is about to shut down, the kubelet can delay that shutdown for a configurable, short duration utilizing the delay-type inhibitor lock it acquired earlier. This gives your pods extra time to terminate. As a result, even during unexpected shutdowns, your application will receive a SIGTERM, [preStop hooks](/docs/concepts/containers/container-lifecycle-hooks/#container-hooks) will execute, and kubelet will properly update `Ready` node condition and respective pod statuses to the api-server.
For example, on a node with graceful node shutdown enabled, you can see that the inhibitor lock is taken by the kubelet:
```
kubelet-node ~ # systemd-inhibit --list
Who: kubelet (UID 0/root, PID 1515/kubelet)
What: shutdown
Why: Kubelet needs time to handle node shutdown
Mode: delay
1 inhibitors listed.
```
One important consideration we took when designing this feature is that not all pods are created equal. For example, some of the pods running on a node such as a logging related daemonset should stay running as long as possible to capture important logs during the shutdown itself. As a result, pods are split into two categories: "regular" and "critical". [Critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) are those that have `priorityClassName` set to `system-cluster-critical` or `system-node-critical`; all other pods are considered regular.
In our example, the logging DaemonSet would run as a critical pod. During the graceful node shutdown, regular pods are terminated first, followed by critical pods. As an example, this would allow a critical pod associated with a logging daemonset to continue functioning, and collecting logs during the termination of regular pods.
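As a sketch of what that looks like in practice, the pod template of such a logging DaemonSet simply sets `priorityClassName` to one of the critical classes. The names and image below are illustrative; only the `priorityClassName` line is the relevant part:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      # Marks these pods as critical, so they are terminated last during a graceful node shutdown
      priorityClassName: system-node-critical
      containers:
      - name: collector
        image: example.com/log-collector:1.0
```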
We will evaluate during the beta phase whether we need more flexibility for different pod priority classes and add support if needed. Please let us know if you have some scenarios in mind.
## How do I use it?
Graceful node shutdown is controlled with the `GracefulNodeShutdown` [feature gate](/docs/reference/command-line-tools-reference/feature-gates) and is enabled by default in Kubernetes 1.21.
You can configure the graceful node shutdown behavior using two kubelet configuration options: `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods`. To configure these options, you edit the kubelet configuration file that is passed to kubelet via the `--config` flag; for more details, refer to [Set kubelet parameters via a configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
During a shutdown, kubelet terminates pods in two phases. You can configure how long each of these phases lasts.
1. Terminate regular pods running on the node.
2. Terminate critical pods running on the node.
The settings that control the duration of shutdown are:
* `ShutdownGracePeriod`
* Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and critical pods.
* `ShutdownGracePeriodCriticalPods`
* Specifies the duration used to terminate critical pods during a node shutdown. This should be less than `ShutdownGracePeriod`.
For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods=10s`, kubelet will delay the node shutdown by 30 seconds. During this time, the first 20 seconds (30-10) would be reserved for gracefully terminating normal pods, and the last 10 seconds would be reserved for terminating critical pods.
Note that by default, both configuration options described above, `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods`, are set to zero, so you will need to configure them as appropriate for your environment to activate graceful node shutdown functionality.
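As a concrete sketch of the 30 second / 10 second example above, the relevant snippet of a kubelet configuration file (the file passed via `--config`) would look roughly like this:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays shutdown: 20s for regular pods, then 10s for critical pods
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```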
## How can I learn more?
* Read the [documentation](/docs/concepts/architecture/nodes/#graceful-node-shutdown)
* Read the enhancement proposal, [KEP 2000](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2000-graceful-node-shutdown)
* View the [code](https://github.com/kubernetes/kubernetes/tree/release-1.21/pkg/kubelet/nodeshutdown)
## How do I get involved?
Your feedback is always welcome! SIG Node meets regularly and can be reached via [Slack](https://slack.k8s.io) (channel `#sig-node`), or the SIG's [mailing list](https://github.com/kubernetes/community/tree/master/sig-node#contact)

Binary file not shown.

After

Width:  |  Height:  |  Size: 274 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 118 KiB

View File

@ -0,0 +1,197 @@
---
layout: blog
title: 'Evolving Kubernetes networking with the Gateway API'
date: 2021-04-22
slug: evolving-kubernetes-networking-with-the-gateway-api
---
**Authors:** Mark Church (Google), Harry Bagdi (Kong), Daneyon Hanson (Red Hat), Nick Young (VMware), Manuel Zapf (Traefik Labs)
The Ingress resource is one of the many Kubernetes success stories. It created a [diverse ecosystem of Ingress controllers](/docs/concepts/services-networking/ingress-controllers/) which were used across hundreds of thousands of clusters in a standardized and consistent way. This standardization helped users adopt Kubernetes. However, five years after the creation of Ingress, there are signs of fragmentation into different but [strikingly similar CRDs](https://dave.cheney.net/paste/ingress-is-dead-long-live-ingressroute.pdf) and [overloaded annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/). The same portability that made Ingress pervasive also limited its future.
It was at Kubecon 2019 San Diego when a passionate group of contributors gathered to discuss the [evolution of Ingress](https://static.sched.com/hosted_files/kccncna19/a5/Kubecon%20San%20Diego%202019%20-%20Evolving%20the%20Kubernetes%20Ingress%20APIs%20to%20GA%20and%20Beyond%20%5BPUBLIC%5D.pdf). The discussion overflowed to the hotel lobby across the street and what came out of it would later be known as the [Gateway API](https://gateway-api.sigs.k8s.io). This discussion was based on a few key assumptions:
1. The API standards underlying route matching, traffic management, and service exposure are commoditized and provide little value to their implementers and users as custom APIs
2. It's possible to represent L4/L7 routing and traffic management through common core API resources
3. It's possible to provide extensibility for more complex capabilities in a way that does not sacrifice the user experience of the core API
## Introducing the Gateway API
This led to design principles that allow the Gateway API to improve upon Ingress:
- **Expressiveness** - In addition to HTTP host/path matching and TLS, Gateway API can express capabilities like HTTP header manipulation, traffic weighting & mirroring, TCP/UDP routing, and other capabilities that were only possible in Ingress through custom annotations.
- **Role-oriented design** - The API resource model reflects the separation of responsibilities that is common in routing and Kubernetes service networking.
- **Extensibility** - The resources allow arbitrary configuration attachment at various layers within the API. This makes granular customization possible at the most appropriate places.
- **Flexible conformance** - The Gateway API defines varying conformance levels - core (mandatory support), extended (portable if supported), and custom (no portability guarantee), known together as [flexible conformance](https://gateway-api.sigs.k8s.io/concepts/guidelines/#conformance). This promotes a highly portable core API (like Ingress) that still gives flexibility for Gateway controller implementers.
### What does the Gateway API look like?
The Gateway API introduces a few new resource types:
- **[GatewayClasses](https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.GatewayClass)** are cluster-scoped resources that act as templates to explicitly define behavior for Gateways derived from them. This is similar in concept to StorageClasses, but for networking data-planes.
- **[Gateways](https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.Gateway)** are the deployed instances of GatewayClasses. They are the logical representation of the data-plane which performs routing, which may be in-cluster proxies, hardware LBs, or cloud LBs.
- **Routes** are not a single resource, but represent many different protocol-specific Route resources. The [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.HTTPRoute) has matching, filtering, and routing rules that get applied to Gateways that can process HTTP and HTTPS traffic. Similarly, there are [TCPRoutes](https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.TCPRoute), [UDPRoutes](https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.UDPRoute), and [TLSRoutes](https://gateway-api.sigs.k8s.io/references/spec/#networking.x-k8s.io/v1alpha1.TLSRoute) which also have protocol-specific semantics. This model also allows the Gateway API to incrementally expand its protocol support in the future.
![The resources of the Gateway API](gateway-api-resources.png)
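To make the GatewayClass idea a little more concrete, here is a rough sketch of what a v1alpha1 GatewayClass could look like. The `acme-lb` name matches the class referenced by the Gateway later in this post, and the controller string is illustrative; it would normally identify the controller provided by your infrastructure or load balancer vendor:
```yaml
kind: GatewayClass
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: acme-lb
spec:
  # Illustrative controller identifier; set by whoever provides the data plane
  controller: acme.io/gateway-controller
```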
### Gateway Controller Implementations
The good news is that although Gateway is in [Alpha](https://github.com/kubernetes-sigs/gateway-api/releases), there are already several [Gateway controller implementations](https://gateway-api.sigs.k8s.io/references/implementations/) that you can run. Since it's a standardized spec, the following example could be run on any of them and should function in exactly the same way. Check out [getting started](https://gateway-api.sigs.k8s.io/guides/getting-started/) to see how to install and use one of these Gateway controllers.
## Getting Hands-on with the Gateway API
In the following example, we'll demonstrate the relationships between the different API Resources and walk you through a common use case:
* Team foo has their app deployed in the foo Namespace. They need to control the routing logic for the different pages of their app.
* Team bar is running in the bar Namespace. They want to be able to do blue-green rollouts of their application to reduce risk.
* The platform team is responsible for managing the load balancer and network security of all the apps in the Kubernetes cluster.
The following foo-route does path matching to various Services in the foo Namespace and also has a default route to a 404 server. This exposes the foo-auth and foo-home Services via `foo.example.com/login` and `foo.example.com/home` respectively:
```yaml
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: foo-route
  namespace: foo
  labels:
    gateway: external-https-prod
spec:
  hostnames:
  - "foo.example.com"
  rules:
  - matches:
    - path:
        type: Prefix
        value: /login
    forwardTo:
    - serviceName: foo-auth
      port: 8080
  - matches:
    - path:
        type: Prefix
        value: /home
    forwardTo:
    - serviceName: foo-home
      port: 8080
  - matches:
    - path:
        type: Prefix
        value: /
    forwardTo:
    - serviceName: foo-404
      port: 8080
```
The bar team, operating in the bar Namespace of the same Kubernetes cluster, also wishes to expose their application to the internet, but they also want to control their own canary and blue-green rollouts. The following HTTPRoute is configured for the following behavior:
* For traffic to `bar.example.com`:
* Send 90% of the traffic to bar-v1
* Send 10% of the traffic to bar-v2
* For traffic to `bar.example.com` with the HTTP header `env: canary`:
* Send all the traffic to bar-v2
![The routing rules configured for the bar-v1 and bar-v2 Services](httproute.png)
```yaml
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: bar-route
  namespace: bar
  labels:
    gateway: external-https-prod
spec:
  hostnames:
  - "bar.example.com"
  rules:
  - forwardTo:
    - serviceName: bar-v1
      port: 8080
      weight: 90
    - serviceName: bar-v2
      port: 8080
      weight: 10
  - matches:
    - headers:
        values:
          env: canary
    forwardTo:
    - serviceName: bar-v2
      port: 8080
```
### Route and Gateway Binding
So we have two HTTPRoutes matching and routing traffic to different Services. You might be wondering, where are these Services accessible? Through which networks or IPs are they exposed?
How Routes are exposed to clients is governed by [Route binding](https://gateway-api.sigs.k8s.io/concepts/api-overview/#route-binding), which describes how Routes and Gateways create a bidirectional relationship between each other. When Routes are bound to a Gateway it means their collective routing rules are configured on the underlying load balancers or proxies and the Routes are accessible through the Gateway. Thus, a Gateway is a logical representation of a networking data plane that can be configured through Routes.
![How Routes bind with Gateways](route-binding.png )
### Administrative Delegation
The split between Gateway and Route resources allows the cluster administrator to delegate some of the routing configuration to individual teams while still retaining centralized control. The following Gateway resource exposes HTTPS on port 443 and terminates all traffic on the port with a certificate controlled by the cluster administrator.
```yaml
kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: prod-web
spec:
  gatewayClassName: acme-lb
  listeners:
  - protocol: HTTPS
    port: 443
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          gateway: external-https-prod
      namespaces:
        from: All
    tls:
      certificateRef:
        name: admin-controlled-cert
```
The following HTTPRoute shows how the Route can ensure it matches the Gateway's selector via its `kind` (HTTPRoute) and resource labels (`gateway=external-https-prod`).
```yaml
# Matches the required kind selector on the Gateway
kind: HTTPRoute
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: foo-route
  namespace: foo-ns
  labels:
    # Matches the required label selector on the Gateway
    gateway: external-https-prod
...
```
### Role Oriented Design
When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API is not only a more expressive API for advanced routing, but also a role-oriented API, designed for multi-tenant infrastructure. Its extensibility ensures that it will evolve for future use-cases while preserving portability. Ultimately these characteristics will allow the Gateway API to adapt to different organizational models and implementations well into the future.
### Try it out and get involved
There are many resources to check out to learn more.
* Check out the [user guides](https://gateway-api.sigs.k8s.io/guides/getting-started/) to see what use-cases can be addressed.
* Try out one of the [existing Gateway controllers](https://gateway-api.sigs.k8s.io/references/implementations/)
* Or [get involved](https://gateway-api.sigs.k8s.io/contributing/community/) and help design and influence the future of Kubernetes service networking!

Binary file not shown.

After

Width:  |  Height:  |  Size: 91 KiB

View File

@ -0,0 +1,80 @@
---
layout: blog
title: 'Kubernetes 1.21: Metrics Stability hits GA'
date: 2021-04-23
slug: kubernetes-release-1.21-metrics-stability-ga
---
**Authors**: Han Kang (Google), Elana Hashman (Red Hat)
Kubernetes 1.21 marks the graduation of the metrics stability framework and, along with it, the first officially supported stable metrics. Not only do stable metrics come with supportability guarantees, but the metrics stability framework also brings escape hatches that you can use if you encounter problematic metrics.
See the list of [stable Kubernetes metrics here](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml).
### What are stable metrics and why do we need them?
A stable metric is one which, from a consumption point of view, can be reliably consumed across a number of Kubernetes versions without risk of ingestion failure.
Metrics stability is an ongoing community concern. Cluster monitoring infrastructure often assumes the stability of some control plane metrics, so we have introduced a mechanism for versioning metrics as a proper API, with stability guarantees around a formal metrics deprecation process.
### What are the stability levels for metrics?
Metrics can currently have one of two stability levels: alpha or stable.
_Alpha metrics_ have no stability guarantees; as such they can be modified or deleted at any time. At this time, all Kubernetes metrics implicitly fall into this category.
_Stable metrics_ are guaranteed not to change, except that the metric may become marked deprecated for a future Kubernetes version. By not change, we mean three things:
1. the metric itself will not be deleted or renamed
2. the type of metric will not be modified
3. no labels can be added or removed from this metric
From an ingestion point of view, it is backwards-compatible to add or remove possible values for labels which already do exist, but not labels themselves. Therefore, adding or removing values from an existing label is permitted. Stable metrics can also be marked as deprecated for a future Kubernetes version, since this is tracked in a metadata field and does not actually change the metric itself.
Removing or adding labels from stable metrics is not permitted. In order to add or remove a label from an existing stable metric, one would have to introduce a new metric and deprecate the stable one; otherwise this would violate compatibility agreements.
#### How are metrics deprecated?
While deprecation policies only affect stability guarantees for stable metrics (and not alpha ones), deprecation information may be optionally provided on alpha metrics to help component owners inform users of future intent and assist with transition plans.
A stable metric undergoing the deprecation process signals that the metric will eventually be deleted. The metrics deprecation lifecycle looks roughly like this (with each stage representing a Kubernetes release):
![Stable metric → Deprecated metric → Hidden metric → Deletion](lifecycle-metric.png)
_Deprecated metrics_ have the same stability guarantees of their stable counterparts. If a stable metric is deprecated, then a deprecated stable metric is guaranteed to not change. When deprecating a stable metric, a future Kubernetes release is specified as the point from which the metric will be considered deprecated.
Deprecated metrics will have their description text prefixed with a deprecation notice string “(Deprecated from x.y)” and a warning log will be emitted during metric registration, in the spirit of the official Kubernetes deprecation policy.
Like their stable metric counterparts, deprecated metrics will be automatically registered to the metrics endpoint. On a subsequent release (when the metric's deprecatedVersion is equal to _current\_kubernetes\_version - 4_), a deprecated metric will become a _hidden_ metric. _Hidden metrics_ are not automatically registered, and hence are hidden by default from end users. These hidden metrics can be explicitly re-enabled for one release after they reach the hidden state, to provide a migration path for cluster operators.
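In practice, that re-enablement is done with the `--show-hidden-metrics-for-version` flag that metric-exposing components accept. As a sketch, a component running v1.21 could temporarily surface metrics hidden in this release by being started with the previous minor version as the flag value (other flags omitted; check the metrics documentation for the exact semantics):
```
kube-controller-manager --show-hidden-metrics-for-version=1.20
```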
#### As an owner of a Kubernetes component, how do I add stable metrics?
During metric instantiation, stability can be specified by setting the metadata field, StabilityLevel, to “Stable”. When a StabilityLevel is not explicitly set, metrics default to “Alpha” stability. Note that metrics which have fields determined at runtime cannot be marked as Stable. Stable metrics will be detected during static analysis during the pre-commit phase, and must be reviewed by sig-instrumentation.
```golang
var metricDefinition = kubemetrics.CounterOpts{
    Name: "some_metric",
    Help: "some description",
    // Marking the metric as STABLE opts it into the stability guarantees described above.
    StabilityLevel: kubemetrics.STABLE,
}
```
For more examples of setting metrics stability and deprecation, see the [Metrics Stability KEP](http://bit.ly/metrics-stability).
### How do I get involved?
This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together.
We offer a huge thank you to all the contributors in Kubernetes community who helped review the design and implementation of the project, including but not limited to the following:
- Han Kang (logicalhan)
- Frederic Branczyk (brancz)
- Marek Siarkowicz (serathius)
- Elana Hashman (ehashman)
- Solly Ross (DirectXMan12)
- Stefan Schimanski (sttts)
- David Ashpole (dashpole)
- Yuchen Zhou (yoyinzyc)
- Yu Yi (erain)
If you're interested in getting involved with the design and development of instrumentation or any part of the Kubernetes metrics system, join the [Kubernetes Instrumentation Special Interest Group (SIG)](https://github.com/kubernetes/community/tree/master/sig-instrumentation). We're rapidly growing and always welcome new contributors.

Binary file not shown.

After

Width:  |  Height:  |  Size: 39 KiB

View File

@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#64c3a5;}.cls-2,.cls-3,.cls-4,.cls-5{fill-rule:evenodd;}.cls-3{fill:#e9d661;}.cls-4{fill:#0582bd;}.cls-5{fill:#808285;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.21396" y="-3.67916" width="223.25536" height="134.51136"/><path class="cls-2" d="M109.66922,82.43741A19.57065,19.57065,0,0,1,90.35516,66.01248a19.57588,19.57588,0,0,1,17.35089-16.32394A19.571,19.571,0,0,0,96.842,72.59509a13.28048,13.28048,0,0,0,12.82726,9.84232"/><path class="cls-3" d="M107.70538,49.68854q.96877-.09621,1.96384-.09714a19.59034,19.59034,0,0,1,5.971.93243c-1.19879,3.42362-2.9112,5.60261-4.58257,5.4263a13.51316,13.51316,0,0,0-1.38838-.07249,13.2911,13.2911,0,0,0-12.94535,16.2528,19.572,19.572,0,0,1,10.98151-22.4419"/><path class="cls-4" d="M118.5549,59.2884c-1.24844-1.1244-.77543-3.85447.96614-7.03615a19.56137,19.56137,0,1,1-29.16566,13.762,19.57091,19.57091,0,0,0,19.31384,16.42316,13.27982,13.27982,0,0,0,8.88568-23.149"/><path class="cls-5" d="M148.72465,56.68991a6.07242,6.07242,0,0,0-9.10641,5.25828v24.793H133.331v-24.793a12.36017,12.36017,0,0,1,18.53664-10.70293l-3.143,5.44465Zm24.605-2.97568a12.35685,12.35685,0,0,1,21.57057,8.234v24.793h-6.28659v-24.793a6.07039,6.07039,0,1,0-12.14078,0v24.793h-6.28662v-24.793a6.07006,6.07006,0,1,0-12.14012,0v24.793h-6.28747v-24.793a12.35715,12.35715,0,0,1,21.571-8.234m-79.275-11.71068a6.07292,6.07292,0,0,0-9.10642,5.25869v3.15422h6.60732l-2.79511,6.27307H84.94817V86.74119H78.66091v-39.479A12.3589,12.3589,0,0,1,97.19756,36.55965L94.625,42.333l-.57036-.32949ZM20.49076,54.8028a13.15543,13.15543,0,0,1,10.7037-5.2114c7.03714,0,12.74106,5.1005,12.74106,11.39221l-.01237,16.34832c0,6.29168-5.704,11.39208-12.74112,11.39208s-12.741-5.1004-12.741-11.39208c0-5.26947,3.663-9.84144,9.772-11.47815l9.43621-2.52868.01242-2.34149c0-2.82005-2.895-5.106-6.46627-5.106a7.12669,7.12669,0,0,0-5.22007,2.0919l-.66586.73592L20.49076,54.8028ZM37.64921,69.84037l-.00073,7.49156c-.00018,2.81872-2.89507,5.10548-6.46645,5.10548s-6.46627-2.28565-6.46627-5.10548c0-2.41256,2.2001-4.61169,5.09086-5.38735l7.84259-2.10421Zm29.58343-32.952v14.2779a13.83819,13.83819,0,0,0-6.46716-1.57488c-7.03708,0-12.74094,5.1005-12.74094,11.39221V77.33193c0,6.29168,5.70386,11.39208,12.74094,11.39208s12.74117-5.1004,12.74117-11.39208V36.88838Zm0,24.09523V77.33193c0,2.81983-2.89487,5.10548-6.46628,5.10548s-6.46627-2.28565-6.46627-5.10548V60.98361c0-2.82005,2.89487-5.106,6.46627-5.106s6.46628,2.28592,6.46628,5.106"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#64c3a5;}.cls-2,.cls-3,.cls-4,.cls-5{fill-rule:evenodd;}.cls-3{fill:#e9d661;}.cls-4{fill:#0582bd;}.cls-5{fill:#808285;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.21396" y="-3.67916" width="223.25536" height="134.51136"/><path class="cls-2" d="M109.66922,82.43741A19.57065,19.57065,0,0,1,90.35516,66.01248a19.57588,19.57588,0,0,1,17.35089-16.32394A19.571,19.571,0,0,0,96.842,72.59509a13.28048,13.28048,0,0,0,12.82726,9.84232"/><path class="cls-3" d="M107.70538,49.68854q.96877-.09621,1.96384-.09714a19.59034,19.59034,0,0,1,5.971.93243c-1.19879,3.42362-2.9112,5.60261-4.58257,5.4263a13.51316,13.51316,0,0,0-1.38838-.07249,13.2911,13.2911,0,0,0-12.94535,16.2528,19.572,19.572,0,0,1,10.98151-22.4419"/><path class="cls-4" d="M118.5549,59.2884c-1.24844-1.1244-.77543-3.85447.96614-7.03615a19.56137,19.56137,0,1,1-29.16566,13.762,19.57091,19.57091,0,0,0,19.31384,16.42316,13.27982,13.27982,0,0,0,8.88568-23.149"/><path class="cls-5" d="M148.72465,56.68991a6.07242,6.07242,0,0,0-9.10641,5.25828v24.793H133.331v-24.793a12.36017,12.36017,0,0,1,18.53664-10.70293l-3.143,5.44465Zm24.605-2.97568a12.35685,12.35685,0,0,1,21.57057,8.234v24.793h-6.28659v-24.793a6.07039,6.07039,0,1,0-12.14078,0v24.793h-6.28662v-24.793a6.07006,6.07006,0,1,0-12.14012,0v24.793h-6.28747v-24.793a12.35715,12.35715,0,0,1,21.571-8.234m-79.275-11.71068a6.07292,6.07292,0,0,0-9.10642,5.25869v3.15422h6.60732l-2.79511,6.27307H84.94817V86.74119H78.66091v-39.479A12.3589,12.3589,0,0,1,97.19756,36.55965L94.625,42.333l-.57036-.32949ZM20.49076,54.8028a13.15543,13.15543,0,0,1,10.7037-5.2114c7.03714,0,12.74106,5.1005,12.74106,11.39221l-.01237,16.34832c0,6.29168-5.704,11.39208-12.74112,11.39208s-12.741-5.1004-12.741-11.39208c0-5.26947,3.663-9.84144,9.772-11.47815l9.43621-2.52868.01242-2.34149c0-2.82005-2.895-5.106-6.46627-5.106a7.12669,7.12669,0,0,0-5.22007,2.0919l-.66586.73592L20.49076,54.8028ZM37.64921,69.84037l-.00073,7.49156c-.00018,2.81872-2.89507,5.10548-6.46645,5.10548s-6.46627-2.28565-6.46627-5.10548c0-2.41256,2.2001-4.61169,5.09086-5.38735l7.84259-2.10421Zm29.58343-32.952v14.2779a13.83819,13.83819,0,0,0-6.46716-1.57488c-7.03708,0-12.74094,5.1005-12.74094,11.39221V77.33193c0,6.29168,5.70386,11.39208,12.74094,11.39208s12.74117-5.1004,12.74117-11.39208V36.88838Zm0,24.09523V77.33193c0,2.81983-2.89487,5.10548-6.46628,5.10548s-6.46627-2.28565-6.46627-5.10548V60.98361c0-2.82005,2.89487-5.106,6.46627-5.106s6.46628,2.28592,6.46628,5.106"/></svg>

Before

Width:  |  Height:  |  Size: 2.5 KiB

After

Width:  |  Height:  |  Size: 2.5 KiB

View File

@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#040606;}</style></defs><title>kubernetes.io-54664</title><rect class="cls-1" x="-4.55738" y="-4.48481" width="223.25536" height="134.51136"/><path class="cls-2" d="M169.05853,71.307c.23549,6.07483,5.42615,10.38072,14.09569,10.38072,7.0783,0,12.91766-3.06693,12.91766-9.85009,0-4.71846-2.65436-7.49024-8.78885-8.66954l-4.77685-.94382c-3.06756-.59028-5.19066-1.17992-5.19066-3.0079,0-2.00568,2.06471-2.89047,4.65943-2.89047,3.77525,0,5.3081,1.887,5.42678,4.1288H194.951c-.41258-5.89838-5.1304-9.8501-12.73994-9.8501-7.845,0-12.50382,4.30588-12.50382,9.90912,0,6.84157,5.54483,7.96247,10.3217,8.84662l3.95171.70834c2.83082.53062,4.06977,1.35638,4.06977,3.0079,0,1.47444-1.41541,2.94887-4.77748,2.94887-4.89553,0-6.488-2.53631-6.54705-4.71845Zm-27.4259-5.13164a8.58413,8.58413,0,0,1,8.557-8.61113q.02706-.00009.05409,0a8.58415,8.58415,0,0,1,8.61115,8.557q.00009.02706,0,.0541a8.58413,8.58413,0,0,1-8.55641,8.61177q-.02738.00009-.05473,0a8.58413,8.58413,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473M158.79589,81.098h7.49149V50.95811h-7.49149v2.41825a15.58033,15.58033,0,1,0-9.01133,28.37034q.05285.00018.10567,0a15.47693,15.47693,0,0,0,8.90566-2.77179ZM106.59844,66.17532a8.58413,8.58413,0,0,1,8.557-8.61113q.02706-.00009.05409,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.02706,0,.05409a8.58413,8.58413,0,0,1-8.55641,8.61177q-.02738.00009-.05473,0a8.58413,8.58413,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473m17.16387-25.47987V53.37636a15.58048,15.58048,0,1,0-9.01195,28.37034q.05251.00018.105,0a15.48233,15.48233,0,0,0,8.90691-2.77179V81.098h7.49023V40.69545ZM96.21772,50.95811H88.72811V81.098h7.49023Zm0-10.26266H88.72811v7.49023h7.49023ZM60.41805,66.17532a8.58414,8.58414,0,0,1,8.557-8.61113q.02673-.00009.05346,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.02706,0,.05409a8.58414,8.58414,0,0,1-8.55642,8.61177q-.02736.00009-.05472,0a8.58412,8.58412,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473M77.58067,40.69545V53.37636A15.58033,15.58033,0,1,0,68.56935,81.7467q.05283.00018.10567,0a15.4769,15.4769,0,0,0,8.90565-2.77179V81.098h7.4915V40.69545ZM25.38259,66.176a8.58414,8.58414,0,0,1,8.557-8.61114q.02736-.00009.05472,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.027,0,.05409a8.58414,8.58414,0,0,1-8.55642,8.61176q-.02736.00009-.05472,0a8.58413,8.58413,0,0,1-8.61177-8.55641q-.00009-.02768,0-.05535m17.16388,14.9227H50.038V50.95872H42.54647V53.377a15.58048,15.58048,0,1,0-9.01069,28.37035q.0522.00016.10441,0a15.48032,15.48032,0,0,0,8.90628-2.77179Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#040606;}</style></defs><title>kubernetes.io-54664</title><rect class="cls-1" x="-4.55738" y="-4.48481" width="223.25536" height="134.51136"/><path class="cls-2" d="M169.05853,71.307c.23549,6.07483,5.42615,10.38072,14.09569,10.38072,7.0783,0,12.91766-3.06693,12.91766-9.85009,0-4.71846-2.65436-7.49024-8.78885-8.66954l-4.77685-.94382c-3.06756-.59028-5.19066-1.17992-5.19066-3.0079,0-2.00568,2.06471-2.89047,4.65943-2.89047,3.77525,0,5.3081,1.887,5.42678,4.1288H194.951c-.41258-5.89838-5.1304-9.8501-12.73994-9.8501-7.845,0-12.50382,4.30588-12.50382,9.90912,0,6.84157,5.54483,7.96247,10.3217,8.84662l3.95171.70834c2.83082.53062,4.06977,1.35638,4.06977,3.0079,0,1.47444-1.41541,2.94887-4.77748,2.94887-4.89553,0-6.488-2.53631-6.54705-4.71845Zm-27.4259-5.13164a8.58413,8.58413,0,0,1,8.557-8.61113q.02706-.00009.05409,0a8.58415,8.58415,0,0,1,8.61115,8.557q.00009.02706,0,.0541a8.58413,8.58413,0,0,1-8.55641,8.61177q-.02738.00009-.05473,0a8.58413,8.58413,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473M158.79589,81.098h7.49149V50.95811h-7.49149v2.41825a15.58033,15.58033,0,1,0-9.01133,28.37034q.05285.00018.10567,0a15.47693,15.47693,0,0,0,8.90566-2.77179ZM106.59844,66.17532a8.58413,8.58413,0,0,1,8.557-8.61113q.02706-.00009.05409,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.02706,0,.05409a8.58413,8.58413,0,0,1-8.55641,8.61177q-.02738.00009-.05473,0a8.58413,8.58413,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473m17.16387-25.47987V53.37636a15.58048,15.58048,0,1,0-9.01195,28.37034q.05251.00018.105,0a15.48233,15.48233,0,0,0,8.90691-2.77179V81.098h7.49023V40.69545ZM96.21772,50.95811H88.72811V81.098h7.49023Zm0-10.26266H88.72811v7.49023h7.49023ZM60.41805,66.17532a8.58414,8.58414,0,0,1,8.557-8.61113q.02673-.00009.05346,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.02706,0,.05409a8.58414,8.58414,0,0,1-8.55642,8.61177q-.02736.00009-.05472,0a8.58412,8.58412,0,0,1-8.61113-8.557q-.00009-.02738,0-.05473M77.58067,40.69545V53.37636A15.58033,15.58033,0,1,0,68.56935,81.7467q.05283.00018.10567,0a15.4769,15.4769,0,0,0,8.90565-2.77179V81.098h7.4915V40.69545ZM25.38259,66.176a8.58414,8.58414,0,0,1,8.557-8.61114q.02736-.00009.05472,0a8.58414,8.58414,0,0,1,8.61114,8.557q.00009.027,0,.05409a8.58414,8.58414,0,0,1-8.55642,8.61176q-.02736.00009-.05472,0a8.58413,8.58413,0,0,1-8.61177-8.55641q-.00009-.02768,0-.05535m17.16388,14.9227H50.038V50.95872H42.54647V53.377a15.58048,15.58048,0,1,0-9.01069,28.37035q.0522.00016.10441,0a15.48032,15.48032,0,0,0,8.90628-2.77179Z"/></svg>

Before

Width:  |  Height:  |  Size: 2.5 KiB

After

Width:  |  Height:  |  Size: 2.5 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 7.1 KiB

After

Width:  |  Height:  |  Size: 7.1 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 11 KiB

After

Width:  |  Height:  |  Size: 11 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 30 KiB

After

Width:  |  Height:  |  Size: 30 KiB

View File

@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#57565a;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.37342" y="-3.34411" width="223.25536" height="134.51136"/><path class="cls-2" d="M81.14242,56.12567,77.99668,64.0731h6.29149l-3.14575-7.94743Zm7.0534,17.78238-2.58263-6.49093h-8.9411L74.089,73.90805h-3.775l9.17231-23.114h3.31256l9.17318,23.114Z"/><path class="cls-2" d="M102.9876,62.3189a2.97367,2.97367,0,0,0-2.74772-1.35853A2.91125,2.91125,0,0,0,97.4913,62.3189a3.7299,3.7299,0,0,0-.43087,1.48937v4.23808A3.73648,3.73648,0,0,0,97.4913,69.537a3.40988,3.40988,0,0,0,5.4963,0,7.0611,7.0611,0,0,0,.66165-3.60929,7.3468,7.3468,0,0,0-.66165-3.60884Zm2.94694,8.8744a6.31949,6.31949,0,0,1-5.69466,3.07919,6.89537,6.89537,0,0,1-3.17945-.72845v7.64965H93.68261v-23.379h3.37782v.49738a6.8926,6.8926,0,0,1,3.17945-.72842,6.31935,6.31935,0,0,1,5.69466,3.079,10.006,10.006,0,0,1,1.09252,5.29783,10.16391,10.16391,0,0,1-1.09252,5.23285Z"/><path class="cls-2" d="M118.79367,62.3189a2.97667,2.97667,0,0,0-2.74944-1.35853,2.9108,2.9108,0,0,0-2.74859,1.35853,3.75853,3.75853,0,0,0-.43,1.48937v4.23808a3.76517,3.76517,0,0,0,.43,1.49068,3.41159,3.41159,0,0,0,5.498,0,7.07027,7.07027,0,0,0,.66078-3.60929,7.35645,7.35645,0,0,0-.66078-3.60884Zm2.94608,8.8744a6.32,6.32,0,0,1-5.69552,3.07919,6.88941,6.88941,0,0,1-3.1786-.72845v7.64965h-3.3778v-23.379h3.3778v.49738a6.88664,6.88664,0,0,1,3.1786-.72842,6.31981,6.31981,0,0,1,5.69552,3.079,10.00267,10.00267,0,0,1,1.093,5.29783,10.16449,10.16449,0,0,1-1.093,5.23285Z"/><path class="cls-2" d="M132.96442,54.20565h-3.80825v16.259h3.80825c6.58968,0,6.35717-2.25263,6.35717-8.113,0-5.8944.23251-8.14595-6.35717-8.14595Zm9.60232,13.179c-.828,5.53-4.6687,6.52343-9.60232,6.52343h-7.31875v-23.114h7.31875c4.93362,0,8.7743.96057,9.60232,6.52326a36.3302,36.3302,0,0,1,.26577,5.03426,36.58042,36.58042,0,0,1-.26577,5.033Z"/><polygon class="cls-2" points="146.109 73.908 146.109 57.55 149.52 57.55 149.52 73.908 146.109 73.908 146.109 73.908"/><polygon class="cls-2" points="146.109 54.565 146.109 50.638 149.52 50.638 149.52 54.565 146.109 54.565 146.109 54.565"/><path class="cls-2" d="M161.67111,61.45749a3.97853,3.97853,0,0,0-2.185-.66277c-1.35871,0-3.08048.79467-3.08048,2.352V73.90805h-3.30952V57.71553h3.30952v.49609a7.36373,7.36373,0,0,1,3.08048-.662,8.37517,8.37517,0,0,1,3.676.86135l-1.491,3.04647Z"/><path class="cls-2" d="M173.35993,62.25243a3.00221,3.00221,0,0,0-2.71661-1.39145,3.30382,3.30382,0,0,0-3.609,2.98125h6.72194a4.32873,4.32873,0,0,0-.3963-1.5898ZM166.934,67.11957a5.0516,5.0516,0,0,0,.66295,2.352,3.97682,3.97682,0,0,0,3.345,1.48937,4.29333,4.29333,0,0,0,3.44307-1.55606l2.71705,2.01943a7.84563,7.84563,0,0,1-6.12728,2.78251,7.20688,7.20688,0,0,1-6.22535-2.94833,9.72791,9.72791,0,0,1-1.2589-5.397,9.71813,9.71813,0,0,1,1.2589-5.39724,6.77573,6.77573,0,0,1,5.92674-2.88061,6.353,6.353,0,0,1,5.5313,2.91333c1.19191,1.78819,1.02553,4.53689.99181,6.62261Z"/><path class="cls-2" d="M186.53109,74.17309c-4.70242,0-7.48386-3.54363-7.48386-8.411,0-4.90111,2.78144-8.44475,7.48386-8.44475,2.31728,0,3.84152.62923,6.2258,3.17962l-2.48411,2.18509c-1.78787-1.92-2.51607-2.18509-3.74169-2.18509a3.67433,3.67433,0,0,0-3.14532,1.49064,6.08445,6.08445,0,0,0-.86174,3.77449,6.04264,6.04264,0,0,0,.86174,3.74221,3.59542,3.59542,0,0,0,3.14532,1.48939c1.22562,0,1.95382-.26505,3.74169-2.15137l2.48411,2.18534c-2.38428,2.51638-3.90852,3.14543-6.2258,3.14543Z"/><path class="cls-2" 
d="M194.70206,57.54967V53.72892l3.24471-.55054v4.37129h2.88082v3.04669h-2.88082V69.6035a2.65978,2.65978,0,0,0,.03327.7282.3886.3886,0,0,0,.3963.23233h2.0878v3.344h-2.0878a3.50427,3.50427,0,0,1-3.07832-1.655,4.8599,4.8599,0,0,1-.596-2.64952V57.54967Z"/><polygon class="cls-2" points="16.87 77.995 30.665 71.993 37.34 52.213 30.585 41.486 16.87 77.995 16.87 77.995"/><polygon class="cls-2" points="30.277 38.06 47.2 64.828 57.151 60.51 41.284 38.06 30.277 38.06 30.277 38.06"/><polygon class="cls-2" points="64.356 59.132 14.131 80.907 24.836 88.213 58.498 73.908 64.356 59.132 64.356 59.132"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#57565a;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.37342" y="-3.34411" width="223.25536" height="134.51136"/><path class="cls-2" d="M81.14242,56.12567,77.99668,64.0731h6.29149l-3.14575-7.94743Zm7.0534,17.78238-2.58263-6.49093h-8.9411L74.089,73.90805h-3.775l9.17231-23.114h3.31256l9.17318,23.114Z"/><path class="cls-2" d="M102.9876,62.3189a2.97367,2.97367,0,0,0-2.74772-1.35853A2.91125,2.91125,0,0,0,97.4913,62.3189a3.7299,3.7299,0,0,0-.43087,1.48937v4.23808A3.73648,3.73648,0,0,0,97.4913,69.537a3.40988,3.40988,0,0,0,5.4963,0,7.0611,7.0611,0,0,0,.66165-3.60929,7.3468,7.3468,0,0,0-.66165-3.60884Zm2.94694,8.8744a6.31949,6.31949,0,0,1-5.69466,3.07919,6.89537,6.89537,0,0,1-3.17945-.72845v7.64965H93.68261v-23.379h3.37782v.49738a6.8926,6.8926,0,0,1,3.17945-.72842,6.31935,6.31935,0,0,1,5.69466,3.079,10.006,10.006,0,0,1,1.09252,5.29783,10.16391,10.16391,0,0,1-1.09252,5.23285Z"/><path class="cls-2" d="M118.79367,62.3189a2.97667,2.97667,0,0,0-2.74944-1.35853,2.9108,2.9108,0,0,0-2.74859,1.35853,3.75853,3.75853,0,0,0-.43,1.48937v4.23808a3.76517,3.76517,0,0,0,.43,1.49068,3.41159,3.41159,0,0,0,5.498,0,7.07027,7.07027,0,0,0,.66078-3.60929,7.35645,7.35645,0,0,0-.66078-3.60884Zm2.94608,8.8744a6.32,6.32,0,0,1-5.69552,3.07919,6.88941,6.88941,0,0,1-3.1786-.72845v7.64965h-3.3778v-23.379h3.3778v.49738a6.88664,6.88664,0,0,1,3.1786-.72842,6.31981,6.31981,0,0,1,5.69552,3.079,10.00267,10.00267,0,0,1,1.093,5.29783,10.16449,10.16449,0,0,1-1.093,5.23285Z"/><path class="cls-2" d="M132.96442,54.20565h-3.80825v16.259h3.80825c6.58968,0,6.35717-2.25263,6.35717-8.113,0-5.8944.23251-8.14595-6.35717-8.14595Zm9.60232,13.179c-.828,5.53-4.6687,6.52343-9.60232,6.52343h-7.31875v-23.114h7.31875c4.93362,0,8.7743.96057,9.60232,6.52326a36.3302,36.3302,0,0,1,.26577,5.03426,36.58042,36.58042,0,0,1-.26577,5.033Z"/><polygon class="cls-2" points="146.109 73.908 146.109 57.55 149.52 57.55 149.52 73.908 146.109 73.908 146.109 73.908"/><polygon class="cls-2" points="146.109 54.565 146.109 50.638 149.52 50.638 149.52 54.565 146.109 54.565 146.109 54.565"/><path class="cls-2" d="M161.67111,61.45749a3.97853,3.97853,0,0,0-2.185-.66277c-1.35871,0-3.08048.79467-3.08048,2.352V73.90805h-3.30952V57.71553h3.30952v.49609a7.36373,7.36373,0,0,1,3.08048-.662,8.37517,8.37517,0,0,1,3.676.86135l-1.491,3.04647Z"/><path class="cls-2" d="M173.35993,62.25243a3.00221,3.00221,0,0,0-2.71661-1.39145,3.30382,3.30382,0,0,0-3.609,2.98125h6.72194a4.32873,4.32873,0,0,0-.3963-1.5898ZM166.934,67.11957a5.0516,5.0516,0,0,0,.66295,2.352,3.97682,3.97682,0,0,0,3.345,1.48937,4.29333,4.29333,0,0,0,3.44307-1.55606l2.71705,2.01943a7.84563,7.84563,0,0,1-6.12728,2.78251,7.20688,7.20688,0,0,1-6.22535-2.94833,9.72791,9.72791,0,0,1-1.2589-5.397,9.71813,9.71813,0,0,1,1.2589-5.39724,6.77573,6.77573,0,0,1,5.92674-2.88061,6.353,6.353,0,0,1,5.5313,2.91333c1.19191,1.78819,1.02553,4.53689.99181,6.62261Z"/><path class="cls-2" d="M186.53109,74.17309c-4.70242,0-7.48386-3.54363-7.48386-8.411,0-4.90111,2.78144-8.44475,7.48386-8.44475,2.31728,0,3.84152.62923,6.2258,3.17962l-2.48411,2.18509c-1.78787-1.92-2.51607-2.18509-3.74169-2.18509a3.67433,3.67433,0,0,0-3.14532,1.49064,6.08445,6.08445,0,0,0-.86174,3.77449,6.04264,6.04264,0,0,0,.86174,3.74221,3.59542,3.59542,0,0,0,3.14532,1.48939c1.22562,0,1.95382-.26505,3.74169-2.15137l2.48411,2.18534c-2.38428,2.51638-3.90852,3.14543-6.2258,3.14543Z"/><path class="cls-2" 
d="M194.70206,57.54967V53.72892l3.24471-.55054v4.37129h2.88082v3.04669h-2.88082V69.6035a2.65978,2.65978,0,0,0,.03327.7282.3886.3886,0,0,0,.3963.23233h2.0878v3.344h-2.0878a3.50427,3.50427,0,0,1-3.07832-1.655,4.8599,4.8599,0,0,1-.596-2.64952V57.54967Z"/><polygon class="cls-2" points="16.87 77.995 30.665 71.993 37.34 52.213 30.585 41.486 16.87 77.995 16.87 77.995"/><polygon class="cls-2" points="30.277 38.06 47.2 64.828 57.151 60.51 41.284 38.06 30.277 38.06 30.277 38.06"/><polygon class="cls-2" points="64.356 59.132 14.131 80.907 24.836 88.213 58.498 73.908 64.356 59.132 64.356 59.132"/></svg>

Before

Width:  |  Height:  |  Size: 4.0 KiB

After

Width:  |  Height:  |  Size: 4.0 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 224 KiB

After

Width:  |  Height:  |  Size: 224 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 8.1 KiB

After

Width:  |  Height:  |  Size: 8.1 KiB

View File

@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.00001" y="-3.45836" width="223.25536" height="134.51136"/><g id="layer1"><path id="path14" d="M52.57518,56.39988v16.0054h8.94419l-1.2105,3.09349H48.6747V56.39991h3.90048"/><path id="path16" d="M127.89467,65.07506c2.48824-.94148,5.38-3.09349,5.38-6.65769,0-4.43848-3.83322-6.65769-9.28044-6.65769h-7.12845V75.49877h4.23673V65.74759h2.48823l6.59044,9.75118h4.77473l-7.0612-10.42368Zm-4.70747-2.35373h-2.08473V54.58412h2.62273c3.42972,0,5.31272,1.076,5.31272,4.035,0,3.16072-2.28649,4.10223-5.85071,4.10223"/><path id="path18" d="M68.98407,56.39988l.60525,1.47949L62.19187,75.49874h3.63148l1.614-3.90048h8.40619l1.68124,3.90048h4.304L73.3553,56.39988H68.98407Zm-.33625,12.30668,2.89174-7.06121,2.959,7.06121H68.64783"/><path id="path28" d="M41.27726,62.78859a5.40414,5.40414,0,0,0,4.10222-5.24547c0-5.78347-6.99395-5.78347-9.21318-5.78347H28.7016V75.49874h7.12845c4.16947,0,10.89442-1.00874,10.89442-6.85945C46.72447,65.41131,44.77423,63.32659,41.27726,62.78859Zm-5.649-8.13721c3.16072,0,5.31271.33625,5.31271,3.16073,0,3.83322-3.90047,3.63146-5.78345,3.63146h-2.152V54.58412l2.62273.06726Zm.06724,17.75387-2.69-.06726V64.537h2.48823c3.766,0,6.7922.538,6.7922,3.90048,0,3.29522-2.48823,3.96772-6.59046,3.96772"/><path id="path30" d="M98.84287,56.39988h3.96772V75.49874H98.84287Zm12.23941,0-8.06994,9.14595,8.13719,9.95294h4.50571L107.4508,65.34408l7.59921-8.9442h-3.96773"/><path id="path40" d="M80.95449,65.34405c0,6.25421,3.90048,10.49094,9.21318,10.49094a10.18329,10.18329,0,0,0,6.38871-1.95023l-.94149-1.883A9.90036,9.90036,0,0,1,91.17642,73.145c-3.42973,0-5.918-2.48823-5.918-7.66644,0-4.16946,1.54675-6.65769,4.50572-6.65769,1.614,0,2.959.538,3.497,2.152l3.42972-.87425c-.33625-1.345-2.152-3.96771-7.12845-3.96771-4.573,0-8.60794,3.09349-8.60794,9.21318"/><path id="path50" d="M171.67412,56.39988h3.90048V75.49874h-3.90048Zm12.23941,0-8.13721,9.14595,8.13721,9.95294h4.64021L180.282,65.34408l7.5992-8.9442h-3.96771"/><path id="path52" d="M153.71849,65.34405c0,6.25421,3.83322,10.49094,9.07869,10.49094a10.11878,10.11878,0,0,0,6.456-1.95023l-.94148-1.883A9.953,9.953,0,0,1,163.806,73.145c-3.42973,0-5.8507-2.48823-5.8507-7.66644,0-4.16946,1.54674-6.65769,4.573-6.65769a3.35862,3.35862,0,0,1,3.497,2.152l3.36248-.87425c-.33625-1.345-2.08475-3.96771-7.0612-3.96771-4.64021,0-8.60795,3.09349-8.60795,9.21318"/><path id="path54" d="M143.16032,56.13087c-5.17821,0-8.94419,3.09349-8.94419,9.81844s3.56423,9.88568,8.94419,9.88568,8.9442-3.16072,8.9442-9.88568S148.33853,56.13087,143.16032,56.13087Zm0,17.01414c-3.96771,0-4.90922-3.228-4.90922-7.1957,0-3.90048.94148-7.26295,4.90922-7.26295s4.90922,3.36247,4.90922,7.26295c0,3.96772-.94148,7.1957-4.90922,7.1957"/></g></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.00001" y="-3.45836" width="223.25536" height="134.51136"/><g id="layer1"><path id="path14" d="M52.57518,56.39988v16.0054h8.94419l-1.2105,3.09349H48.6747V56.39991h3.90048"/><path id="path16" d="M127.89467,65.07506c2.48824-.94148,5.38-3.09349,5.38-6.65769,0-4.43848-3.83322-6.65769-9.28044-6.65769h-7.12845V75.49877h4.23673V65.74759h2.48823l6.59044,9.75118h4.77473l-7.0612-10.42368Zm-4.70747-2.35373h-2.08473V54.58412h2.62273c3.42972,0,5.31272,1.076,5.31272,4.035,0,3.16072-2.28649,4.10223-5.85071,4.10223"/><path id="path18" d="M68.98407,56.39988l.60525,1.47949L62.19187,75.49874h3.63148l1.614-3.90048h8.40619l1.68124,3.90048h4.304L73.3553,56.39988H68.98407Zm-.33625,12.30668,2.89174-7.06121,2.959,7.06121H68.64783"/><path id="path28" d="M41.27726,62.78859a5.40414,5.40414,0,0,0,4.10222-5.24547c0-5.78347-6.99395-5.78347-9.21318-5.78347H28.7016V75.49874h7.12845c4.16947,0,10.89442-1.00874,10.89442-6.85945C46.72447,65.41131,44.77423,63.32659,41.27726,62.78859Zm-5.649-8.13721c3.16072,0,5.31271.33625,5.31271,3.16073,0,3.83322-3.90047,3.63146-5.78345,3.63146h-2.152V54.58412l2.62273.06726Zm.06724,17.75387-2.69-.06726V64.537h2.48823c3.766,0,6.7922.538,6.7922,3.90048,0,3.29522-2.48823,3.96772-6.59046,3.96772"/><path id="path30" d="M98.84287,56.39988h3.96772V75.49874H98.84287Zm12.23941,0-8.06994,9.14595,8.13719,9.95294h4.50571L107.4508,65.34408l7.59921-8.9442h-3.96773"/><path id="path40" d="M80.95449,65.34405c0,6.25421,3.90048,10.49094,9.21318,10.49094a10.18329,10.18329,0,0,0,6.38871-1.95023l-.94149-1.883A9.90036,9.90036,0,0,1,91.17642,73.145c-3.42973,0-5.918-2.48823-5.918-7.66644,0-4.16946,1.54675-6.65769,4.50572-6.65769,1.614,0,2.959.538,3.497,2.152l3.42972-.87425c-.33625-1.345-2.152-3.96771-7.12845-3.96771-4.573,0-8.60794,3.09349-8.60794,9.21318"/><path id="path50" d="M171.67412,56.39988h3.90048V75.49874h-3.90048Zm12.23941,0-8.13721,9.14595,8.13721,9.95294h4.64021L180.282,65.34408l7.5992-8.9442h-3.96771"/><path id="path52" d="M153.71849,65.34405c0,6.25421,3.83322,10.49094,9.07869,10.49094a10.11878,10.11878,0,0,0,6.456-1.95023l-.94148-1.883A9.953,9.953,0,0,1,163.806,73.145c-3.42973,0-5.8507-2.48823-5.8507-7.66644,0-4.16946,1.54674-6.65769,4.573-6.65769a3.35862,3.35862,0,0,1,3.497,2.152l3.36248-.87425c-.33625-1.345-2.08475-3.96771-7.0612-3.96771-4.64021,0-8.60795,3.09349-8.60795,9.21318"/><path id="path54" d="M143.16032,56.13087c-5.17821,0-8.94419,3.09349-8.94419,9.81844s3.56423,9.88568,8.94419,9.88568,8.9442-3.16072,8.9442-9.88568S148.33853,56.13087,143.16032,56.13087Zm0,17.01414c-3.96771,0-4.90922-3.228-4.90922-7.1957,0-3.90048.94148-7.26295,4.90922-7.26295s4.90922,3.36247,4.90922,7.26295c0,3.96772-.94148,7.1957-4.90922,7.1957"/></g></svg>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path id="path5" d="M141.73824,63.05113H127.82345l2.25647-3.85486c.93932-1.59742,1.78587-2.16178,3.19675-2.16178,1.40994,0,1.88056,1.12776,1.12871,2.5377h14.197l1.4109-2.5377c1.034-1.88055,1.034-3.94955-1.50463-3.94955h-25.0097c-2.25648,0-4.04329.28218-5.35949,2.44492l-4.41922,7.52127c-1.2215,2.16274-1.12776,3.94956,1.88056,3.94956h14.3854L127.636,71.13676a3.487,3.487,0,0,1-3.47894,1.88056c-1.69307,0-1.59837-1.12776-.84654-2.25552H109.20724l-1.69307,2.82084c-1.034,1.78681-.18748,3.38519,2.25648,3.38519h24.82127a6.07295,6.07295,0,0,0,5.35949-2.91457l4.23078-7.05162C145.12343,65.21387,144.65281,63.05113,141.73824,63.05113Z"/><path id="path7" d="M202.14279,57.7299a2.01608,2.01608,0,1,0,2.06708,2.00395A2.04547,2.04547,0,0,0,202.14279,57.7299Zm0,3.63676a1.62259,1.62259,0,1,1,1.673-1.63377A1.64448,1.64448,0,0,1,202.14279,61.36666Z"/><path id="path9" d="M109.67787,53.08494H88.42834c-2.91458,0-6.67569,0-8.93217,3.94955l-9.4028,15.98283c-1.31619,2.16274-.18748,3.94956,2.25648,3.94956H97.07833c2.53865,0,4.137-.7528,6.11133-3.94956l9.40279-15.98283C113.814,54.96645,111.9334,53.08494,109.67787,53.08494ZM97.73642,59.19627l-7.05065,11.9405c-.94028,1.69307-1.78682,1.88055-3.94955,1.88055s-2.7271-.75184-2.068-1.88055l7.14535-11.94049a3.9896,3.9896,0,0,1,4.0433-2.16179C97.64364,57.03449,98.6767,57.59885,97.73642,59.19627Z"/><path id="path11" d="M72.44455,53.08494H45.17837L33.42631,73.01732h-25.95l.01294,3.94956H59.846c3.47893,0,4.60669-1.69308,5.35949-3.00928l2.81988-4.88887c.7528-1.3162.7528-2.82084-.75184-3.66642a5.94452,5.94452,0,0,0,4.79513-3.19676l2.82085-4.79514C76.30036,55.06019,74.60728,53.08494,72.44455,53.08494ZM58.436,69.16342,57.02512,71.419a2.662,2.662,0,0,1-2.6324,1.59837H47.43485l3.57554-6.01663H57.777C58.99942,67.00069,58.99942,68.12845,58.436,69.16342Zm5.9229-9.96715L62.948,61.45275a2.662,2.662,0,0,1-2.6324,1.59838H53.35774l3.57268-6.01664h6.76943C64.92327,57.03449,64.92327,58.16225,64.35891,59.19627Z"/><path id="path13" d="M203.20455,59.20871a.55269.55269,0,0,0-.25635-.485,1.13731,1.13731,0,0,0-.56532-.10809h-1.01107v2.25457h.34244V59.83046h.40462l.66193,1.0388h.3941l-.7021-1.0388C202.88507,59.81994,203.20455,59.6535,203.20455,59.20871Zm-1.11341.34244h-.37687V58.872h.5988c.29173,0,.54809.03922.54809.331C202.86211,59.60854,202.41636,59.55115,202.09114,59.55115Z"/><path id="polygon1317" d="M168.49306,57.03164h35.25806l-.011-3.94955H156.741L142.73151,76.964h32.71941l2.35022-3.94955H159.09122l3.57267-6.01664h18.70993l2.25649-3.94955H165.01508Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path id="path5" d="M141.73824,63.05113H127.82345l2.25647-3.85486c.93932-1.59742,1.78587-2.16178,3.19675-2.16178,1.40994,0,1.88056,1.12776,1.12871,2.5377h14.197l1.4109-2.5377c1.034-1.88055,1.034-3.94955-1.50463-3.94955h-25.0097c-2.25648,0-4.04329.28218-5.35949,2.44492l-4.41922,7.52127c-1.2215,2.16274-1.12776,3.94956,1.88056,3.94956h14.3854L127.636,71.13676a3.487,3.487,0,0,1-3.47894,1.88056c-1.69307,0-1.59837-1.12776-.84654-2.25552H109.20724l-1.69307,2.82084c-1.034,1.78681-.18748,3.38519,2.25648,3.38519h24.82127a6.07295,6.07295,0,0,0,5.35949-2.91457l4.23078-7.05162C145.12343,65.21387,144.65281,63.05113,141.73824,63.05113Z"/><path id="path7" d="M202.14279,57.7299a2.01608,2.01608,0,1,0,2.06708,2.00395A2.04547,2.04547,0,0,0,202.14279,57.7299Zm0,3.63676a1.62259,1.62259,0,1,1,1.673-1.63377A1.64448,1.64448,0,0,1,202.14279,61.36666Z"/><path id="path9" d="M109.67787,53.08494H88.42834c-2.91458,0-6.67569,0-8.93217,3.94955l-9.4028,15.98283c-1.31619,2.16274-.18748,3.94956,2.25648,3.94956H97.07833c2.53865,0,4.137-.7528,6.11133-3.94956l9.40279-15.98283C113.814,54.96645,111.9334,53.08494,109.67787,53.08494ZM97.73642,59.19627l-7.05065,11.9405c-.94028,1.69307-1.78682,1.88055-3.94955,1.88055s-2.7271-.75184-2.068-1.88055l7.14535-11.94049a3.9896,3.9896,0,0,1,4.0433-2.16179C97.64364,57.03449,98.6767,57.59885,97.73642,59.19627Z"/><path id="path11" d="M72.44455,53.08494H45.17837L33.42631,73.01732h-25.95l.01294,3.94956H59.846c3.47893,0,4.60669-1.69308,5.35949-3.00928l2.81988-4.88887c.7528-1.3162.7528-2.82084-.75184-3.66642a5.94452,5.94452,0,0,0,4.79513-3.19676l2.82085-4.79514C76.30036,55.06019,74.60728,53.08494,72.44455,53.08494ZM58.436,69.16342,57.02512,71.419a2.662,2.662,0,0,1-2.6324,1.59837H47.43485l3.57554-6.01663H57.777C58.99942,67.00069,58.99942,68.12845,58.436,69.16342Zm5.9229-9.96715L62.948,61.45275a2.662,2.662,0,0,1-2.6324,1.59838H53.35774l3.57268-6.01664h6.76943C64.92327,57.03449,64.92327,58.16225,64.35891,59.19627Z"/><path id="path13" d="M203.20455,59.20871a.55269.55269,0,0,0-.25635-.485,1.13731,1.13731,0,0,0-.56532-.10809h-1.01107v2.25457h.34244V59.83046h.40462l.66193,1.0388h.3941l-.7021-1.0388C202.88507,59.81994,203.20455,59.6535,203.20455,59.20871Zm-1.11341.34244h-.37687V58.872h.5988c.29173,0,.54809.03922.54809.331C202.86211,59.60854,202.41636,59.55115,202.09114,59.55115Z"/><path id="polygon1317" d="M168.49306,57.03164h35.25806l-.011-3.94955H156.741L142.73151,76.964h32.71941l2.35022-3.94955H159.09122l3.57267-6.01664h18.70993l2.25649-3.94955H165.01508Z"/></svg>

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#0071f7;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path class="cls-2" d="M174.27862,87.85821a5.22494,5.22494,0,0,1-.67887,7.12815,5.66277,5.66277,0,0,1-7.46759-.67888l-11.88024-15.2746L142.71111,93.968c-1.69718,2.37606-5.09153,2.37606-7.46758.67888a4.96011,4.96011,0,0,1-1.01831-7.12815l13.57742-17.65065L134.22522,52.21747a5.32925,5.32925,0,0,1,8.48589-6.44927l11.54081,15.2746,11.88024-14.59574c1.69718-2.376,4.75211-2.71548,7.46759-1.0183,2.376,1.69717,2.376,5.09153.67887,7.46758l-13.238,17.31121Zm-61.77727-2.03662a15.64926,15.64926,0,0,1-15.95348-15.614,15.95716,15.95716,0,0,1,31.907,0A16.08546,16.08546,0,0,1,112.50135,85.82159Zm-46.84212,0a15.64926,15.64926,0,0,1-15.95347-15.614,15.95716,15.95716,0,0,1,31.907,0A15.64926,15.64926,0,0,1,65.65923,85.82159Zm46.84212-41.41114A26.28086,26.28086,0,0,0,89.41973,57.98787,26.42342,26.42342,0,0,0,65.99867,44.41045,27.90043,27.90043,0,0,0,50.0452,49.502V27.77811a5.22038,5.22038,0,0,0-5.09153-5.09154,5.297,5.297,0,0,0-5.431,5.09154V70.547A26.00829,26.00829,0,0,0,65.65923,96.00466,26.718,26.718,0,0,0,89.08029,82.0878a26.58818,26.58818,0,0,0,23.08162,13.91686,26.22192,26.22192,0,0,0,26.476-26.13654C138.97732,55.95126,127.09708,44.41045,112.50135,44.41045Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#0071f7;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path class="cls-2" d="M174.27862,87.85821a5.22494,5.22494,0,0,1-.67887,7.12815,5.66277,5.66277,0,0,1-7.46759-.67888l-11.88024-15.2746L142.71111,93.968c-1.69718,2.37606-5.09153,2.37606-7.46758.67888a4.96011,4.96011,0,0,1-1.01831-7.12815l13.57742-17.65065L134.22522,52.21747a5.32925,5.32925,0,0,1,8.48589-6.44927l11.54081,15.2746,11.88024-14.59574c1.69718-2.376,4.75211-2.71548,7.46759-1.0183,2.376,1.69717,2.376,5.09153.67887,7.46758l-13.238,17.31121Zm-61.77727-2.03662a15.64926,15.64926,0,0,1-15.95348-15.614,15.95716,15.95716,0,0,1,31.907,0A16.08546,16.08546,0,0,1,112.50135,85.82159Zm-46.84212,0a15.64926,15.64926,0,0,1-15.95347-15.614,15.95716,15.95716,0,0,1,31.907,0A15.64926,15.64926,0,0,1,65.65923,85.82159Zm46.84212-41.41114A26.28086,26.28086,0,0,0,89.41973,57.98787,26.42342,26.42342,0,0,0,65.99867,44.41045,27.90043,27.90043,0,0,0,50.0452,49.502V27.77811a5.22038,5.22038,0,0,0-5.09153-5.09154,5.297,5.297,0,0,0-5.431,5.09154V70.547A26.00829,26.00829,0,0,0,65.65923,96.00466,26.718,26.718,0,0,0,89.08029,82.0878a26.58818,26.58818,0,0,0,23.08162,13.91686,26.22192,26.22192,0,0,0,26.476-26.13654C138.97732,55.95126,127.09708,44.41045,112.50135,44.41045Z"/></svg>

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fff;}.cls-2{fill:#fbfbfb;}.cls-3{fill:#192534;}.cls-4{mask:url(#mask);}.cls-5{mask:url(#mask-2-2);}.cls-6{mask:url(#mask-3);}</style><mask id="mask" x="14.75095" y="70.72244" width="32.48851" height="10.3304" maskUnits="userSpaceOnUse"><g id="mask-2"><polygon id="path-1" class="cls-1" points="14.751 81.053 14.751 70.722 47.239 70.722 47.239 81.053 14.751 81.053"/></g></mask><mask id="mask-2-2" x="14.75095" y="60.57418" width="32.48851" height="10.3304" maskUnits="userSpaceOnUse"><g id="mask-4"><polygon id="path-3" class="cls-1" points="14.751 70.905 14.751 60.574 47.239 60.574 47.239 70.905 14.751 70.905"/></g></mask><mask id="mask-3" x="14.75079" y="45.87467" width="32.48853" height="14.92678" maskUnits="userSpaceOnUse"><g id="mask-6"><polygon id="path-5" class="cls-1" points="14.751 60.801 14.751 45.875 47.239 45.875 47.239 60.801 14.751 60.801"/></g></mask></defs><title>kubernetes.io-logos</title><rect class="cls-2" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><g id="Symbols"><g id="Mobile-Header"><g id="Group-4"><g id="Mobile-Logo"><g id="Group"><g id="Group-3"><path id="Fill-1" class="cls-3" d="M190.28379,79.32487h-6.37906V54.34018h6.37906v3.41436a10.33414,10.33414,0,0,1,7.73637-4.03515v6.41449a8.21414,8.21414,0,0,0-1.75906-.15516c-2.10911,0-4.92212,1.24163-5.97731,2.84489ZM173.10344,64.47949a5.69574,5.69574,0,0,0-5.97731-5.53592c-3.96771,0-5.67637,3.05266-5.97732,5.53592ZM167.5278,79.94566c-7.38379,0-12.9595-5.12161-12.9595-13.13943,0-7.24181,5.174-13.08684,12.55783-13.08684,7.234,0,12.15607,5.58723,12.15607,13.76022v1.4481H161.24952c.40168,3.15535,2.86336,5.79371,6.98219,5.79371a10.15786,10.15786,0,0,0,6.47972-2.48319l2.813,4.24168c-2.41134,2.27549-6.22917,3.46575-9.99665,3.46575Zm-19.24929-.62079h-6.43068V60.08257h-4.018V54.34018h4.018V52.99592c0-5.32809,3.31543-8.68991,8.18843-8.68991a8.489,8.489,0,0,1,6.32989,2.32674l-2.41135,3.88a3.50681,3.50681,0,0,0-2.66312-1.03509c-1.75782,0-3.01317,1.19026-3.01317,3.51828v1.34426h4.92207v5.74239h-4.92207Zm-16.426,0H125.4218V60.08257h-4.0194V54.34018h4.0194V52.99592c0-5.32809,3.31541-8.68991,8.18843-8.68991a8.48418,8.48418,0,0,1,6.3286,2.32674l-2.41012,3.88a3.50683,3.50683,0,0,0-2.66312-1.03509c-1.75782,0-3.01318,1.19026-3.01318,3.51828v1.34426h4.92212v5.74239h-4.92212v19.2423Zm-14.43053,0h-6.38022V76.16952a11.21263,11.21263,0,0,1-8.53978,3.77613c-5.32385,0-7.83586-3.00012-7.83586-7.86259V54.34018h6.379v15.157c0,3.46574,1.75911,4.60345,4.4714,4.60345a7.07632,7.07632,0,0,0,5.52531-2.8449V54.34017H117.422v24.9847ZM71.87264,71.307a7.0348,7.0348,0,0,0,5.4762,2.79359c3.7171,0,6.17877-2.8975,6.17877-7.24181,0-4.34567-2.46168-7.29441-6.17877-7.29441a6.9786,6.9786,0,0,0-5.47621,2.8975V71.307Zm0,8.01788h-6.379V43.734h6.379V57.54807a9.2539,9.2539,0,0,1,7.48587-3.82868c6.17748,0,10.74961,4.96639,10.74961,13.13944,0,8.32821-4.62118,13.08683-10.74961,13.08683a9.33043,9.33043,0,0,1-7.48587-3.82745v3.20666Z"/></g><g id="Group-6"><g class="cls-4"><path id="Fill-4" class="cls-3" d="M47.00136,72.68049l-3.69521-1.83918a1.4583,1.4583,0,0,0-1.15329,0L31.57117,76.108a1.455,1.455,0,0,1-1.15194,0L19.83748,70.84131a1.45828,1.45828,0,0,0-1.15328,0L14.989,72.68049c-.31738.15841-.31738.416,0,.57308L30.41923,80.934a1.45553,1.45553,0,0,0,1.15194,0l15.43019-7.68046c.31738-.15706.31738-.41467,0-.57308"/></g></g><g id="Group-9"><g class="cls-5"><path id="Fill-7" class="cls-3" 
d="M47.00136,62.53223l-3.69521-1.83917a1.45872,1.45872,0,0,0-1.15329,0L31.57117,65.95984a1.46694,1.46694,0,0,1-1.15194,0L19.83748,60.69306a1.4587,1.4587,0,0,0-1.15328,0L14.989,62.53223c-.31738.15842-.31738.416,0,.5745l15.43024,7.67905a1.45553,1.45553,0,0,0,1.15194,0l15.43019-7.67905c.31738-.15848.31738-.41608,0-.5745"/></g></g><g id="Group-12"><g class="cls-6"><path id="Fill-10" class="cls-3" d="M14.98887,53.6031l15.43018,7.08965a1.58033,1.58033,0,0,0,1.15194,0L47.00123,53.6031c.31744-.14628.31744-.38408,0-.52907L31.571,45.98438a1.56676,1.56676,0,0,0-1.15194,0L14.98887,53.074c-.31744.145-.31744.38279,0,.52907"/></g></g></g></g></g></g></g></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fff;}.cls-2{fill:transparent;}.cls-3{fill:#192534;}.cls-4{mask:url(#mask);}.cls-5{mask:url(#mask-2-2);}.cls-6{mask:url(#mask-3);}</style><mask id="mask" x="14.75095" y="70.72244" width="32.48851" height="10.3304" maskUnits="userSpaceOnUse"><g id="mask-2"><polygon id="path-1" class="cls-1" points="14.751 81.053 14.751 70.722 47.239 70.722 47.239 81.053 14.751 81.053"/></g></mask><mask id="mask-2-2" x="14.75095" y="60.57418" width="32.48851" height="10.3304" maskUnits="userSpaceOnUse"><g id="mask-4"><polygon id="path-3" class="cls-1" points="14.751 70.905 14.751 60.574 47.239 60.574 47.239 70.905 14.751 70.905"/></g></mask><mask id="mask-3" x="14.75079" y="45.87467" width="32.48853" height="14.92678" maskUnits="userSpaceOnUse"><g id="mask-6"><polygon id="path-5" class="cls-1" points="14.751 60.801 14.751 45.875 47.239 45.875 47.239 60.801 14.751 60.801"/></g></mask></defs><title>kubernetes.io-logos</title><rect class="cls-2" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><g id="Symbols"><g id="Mobile-Header"><g id="Group-4"><g id="Mobile-Logo"><g id="Group"><g id="Group-3"><path id="Fill-1" class="cls-3" d="M190.28379,79.32487h-6.37906V54.34018h6.37906v3.41436a10.33414,10.33414,0,0,1,7.73637-4.03515v6.41449a8.21414,8.21414,0,0,0-1.75906-.15516c-2.10911,0-4.92212,1.24163-5.97731,2.84489ZM173.10344,64.47949a5.69574,5.69574,0,0,0-5.97731-5.53592c-3.96771,0-5.67637,3.05266-5.97732,5.53592ZM167.5278,79.94566c-7.38379,0-12.9595-5.12161-12.9595-13.13943,0-7.24181,5.174-13.08684,12.55783-13.08684,7.234,0,12.15607,5.58723,12.15607,13.76022v1.4481H161.24952c.40168,3.15535,2.86336,5.79371,6.98219,5.79371a10.15786,10.15786,0,0,0,6.47972-2.48319l2.813,4.24168c-2.41134,2.27549-6.22917,3.46575-9.99665,3.46575Zm-19.24929-.62079h-6.43068V60.08257h-4.018V54.34018h4.018V52.99592c0-5.32809,3.31543-8.68991,8.18843-8.68991a8.489,8.489,0,0,1,6.32989,2.32674l-2.41135,3.88a3.50681,3.50681,0,0,0-2.66312-1.03509c-1.75782,0-3.01317,1.19026-3.01317,3.51828v1.34426h4.92207v5.74239h-4.92207Zm-16.426,0H125.4218V60.08257h-4.0194V54.34018h4.0194V52.99592c0-5.32809,3.31541-8.68991,8.18843-8.68991a8.48418,8.48418,0,0,1,6.3286,2.32674l-2.41012,3.88a3.50683,3.50683,0,0,0-2.66312-1.03509c-1.75782,0-3.01318,1.19026-3.01318,3.51828v1.34426h4.92212v5.74239h-4.92212v19.2423Zm-14.43053,0h-6.38022V76.16952a11.21263,11.21263,0,0,1-8.53978,3.77613c-5.32385,0-7.83586-3.00012-7.83586-7.86259V54.34018h6.379v15.157c0,3.46574,1.75911,4.60345,4.4714,4.60345a7.07632,7.07632,0,0,0,5.52531-2.8449V54.34017H117.422v24.9847ZM71.87264,71.307a7.0348,7.0348,0,0,0,5.4762,2.79359c3.7171,0,6.17877-2.8975,6.17877-7.24181,0-4.34567-2.46168-7.29441-6.17877-7.29441a6.9786,6.9786,0,0,0-5.47621,2.8975V71.307Zm0,8.01788h-6.379V43.734h6.379V57.54807a9.2539,9.2539,0,0,1,7.48587-3.82868c6.17748,0,10.74961,4.96639,10.74961,13.13944,0,8.32821-4.62118,13.08683-10.74961,13.08683a9.33043,9.33043,0,0,1-7.48587-3.82745v3.20666Z"/></g><g id="Group-6"><g class="cls-4"><path id="Fill-4" class="cls-3" d="M47.00136,72.68049l-3.69521-1.83918a1.4583,1.4583,0,0,0-1.15329,0L31.57117,76.108a1.455,1.455,0,0,1-1.15194,0L19.83748,70.84131a1.45828,1.45828,0,0,0-1.15328,0L14.989,72.68049c-.31738.15841-.31738.416,0,.57308L30.41923,80.934a1.45553,1.45553,0,0,0,1.15194,0l15.43019-7.68046c.31738-.15706.31738-.41467,0-.57308"/></g></g><g id="Group-9"><g class="cls-5"><path id="Fill-7" class="cls-3" 
d="M47.00136,62.53223l-3.69521-1.83917a1.45872,1.45872,0,0,0-1.15329,0L31.57117,65.95984a1.46694,1.46694,0,0,1-1.15194,0L19.83748,60.69306a1.4587,1.4587,0,0,0-1.15328,0L14.989,62.53223c-.31738.15842-.31738.416,0,.5745l15.43024,7.67905a1.45553,1.45553,0,0,0,1.15194,0l15.43019-7.67905c.31738-.15848.31738-.41608,0-.5745"/></g></g><g id="Group-12"><g class="cls-6"><path id="Fill-10" class="cls-3" d="M14.98887,53.6031l15.43018,7.08965a1.58033,1.58033,0,0,0,1.15194,0L47.00123,53.6031c.31744-.14628.31744-.38408,0-.52907L31.571,45.98438a1.56676,1.56676,0,0,0-1.15194,0L14.98887,53.074c-.31744.145-.31744.38279,0,.52907"/></g></g></g></g></g></g></g></svg>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2,.cls-3{fill-rule:evenodd;}.cls-2{fill:url(#linear-gradient);}.cls-3{fill:url(#linear-gradient-2);}</style><linearGradient id="linear-gradient" x1="-3137.65754" y1="1644.36414" x2="-3137.70622" y2="1644.36292" gradientTransform="matrix(2793, 0, 0, -441.00006, 8763675.71297, 725229.10048)" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#f5286e"/><stop offset="1" stop-color="#fa461e"/></linearGradient><linearGradient id="linear-gradient-2" x1="-3134.50784" y1="1645.52546" x2="-3134.55653" y2="1645.47965" gradientTransform="matrix(800, 0, 0, -776, 2507658.98376, 1276974.5)" xlink:href="#linear-gradient"/></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path class="cls-2" d="M73.12738,75.22779a11.43076,11.43076,0,0,0,7.99043-3.25709L77.915,69.25159a6.54848,6.54848,0,0,1-4.78767,2.18113,6.27512,6.27512,0,1,1,0-12.54865A6.54851,6.54851,0,0,1,77.915,61.0652l3.20276-2.71911A11.30858,11.30858,0,0,0,73.12738,55.089c-5.89573,0-10.92371,4.58875-10.92371,10.05486C62.20367,70.61115,67.23166,75.22779,73.12738,75.22779ZM87.2185,62.96274V60.24363H82.847V74.8885h4.37024V67.89326c0-2.74823,2.06429-4.504,5.29615-4.504V59.90555A5.81225,5.81225,0,0,0,87.21723,62.964ZM101.398,75.22779c4.57892,0,8.41035-3.51278,8.41035-7.67626s-3.83144-7.64715-8.41035-7.64715c-4.60807,0-8.46857,3.48368-8.46857,7.64715S96.79,75.22779,101.398,75.22779Zm0-3.48372a4.17862,4.17862,0,1,1,4.25-4.19254A4.2883,4.2883,0,0,1,101.398,71.74407Zm13.67233,3.14321h3.92121l2.90169-9.20544,2.87388,9.20544h3.91994l5.267-14.64366H129.675l-2.93333,9.12063-3.05221-9.12063H120.068l-3.05221,9.12063-2.93333-9.12063h-4.27915l5.267,14.64366Zm31.95657-19.7995v7.25216a6.48235,6.48235,0,0,0-5.20633-2.43556c-4.12993,0-7.12271,3.17228-7.12271,7.64715,0,4.50393,2.99273,7.67626,7.12144,7.67626a6.48139,6.48139,0,0,0,5.20758-2.43556V74.8885h4.37024V55.089h-4.37024v-.00121Zm-3.95028,16.65629a4.17829,4.17829,0,0,1,0-8.356,4.18457,4.18457,0,0,1,0,8.356ZM159.533,59.45116a1.52877,1.52877,0,0,1,1.466-1.6431,2.32868,2.32868,0,0,1,1.49515.48107l1.0486-2.52038a5.82958,5.82958,0,0,0-3.562-1.27474,4.36676,4.36676,0,0,0-4.63845,4.47492v1.27469h-2.125v3.34317h2.125V74.88728h4.18943V63.58679h3.352V60.24363h-3.352v-.79247h.00127Zm7.47558-1.21778a2.24007,2.24007,0,1,0,0-4.47609,2.24316,2.24316,0,1,0,0,4.47609Zm-2.21356,16.6539h4.369V60.24363h-4.37024V74.8885l.00126-.00122ZM176.46128,62.964V60.24362H172.091V74.88849h4.37024V67.89326c0-2.74823,2.06428-4.504,5.29615-4.504V59.90555a5.81228,5.81228,0,0,0-5.29615,3.05845Zm14.24029,8.61049a3.72844,3.72844,0,0,1-3.6809-2.66336h11.16279c0-5.523-2.8435-9.0067-7.69063-9.0067-4.52078,0-7.96131,3.20012-7.96131,7.6193,0,4.50393,3.59107,7.70411,8.1991,7.70411a9.76074,9.76074,0,0,0,6.31568-2.26594l-2.78279-2.69a5.79593,5.79593,0,0,1-3.562,1.30259Zm-.03038-8.01677a3.204,3.204,0,0,1,3.32165,2.37987h-6.91269a3.73543,3.73543,0,0,1,3.59228-2.37987h-.00124Z"/><path class="cls-3" d="M48.815,64.51747H41.02529v-7.5563H33.23554v-7.5563H48.815Zm3.89487,0H44.92016A11.51324,11.51324,0,0,1,33.23554,75.85073a11.33911,11.33911,0,1,1,0-22.66768v-7.5563c-10.75594,0-19.47437,8.45767-19.47437,18.89072S22.4796,83.407,33.23554,83.407,52.70991,74.94936,52.70991,64.5163Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2,.cls-3{fill-rule:evenodd;}.cls-2{fill:url(#linear-gradient);}.cls-3{fill:url(#linear-gradient-2);}</style><linearGradient id="linear-gradient" x1="-3137.65754" y1="1644.36414" x2="-3137.70622" y2="1644.36292" gradientTransform="matrix(2793, 0, 0, -441.00006, 8763675.71297, 725229.10048)" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#f5286e"/><stop offset="1" stop-color="#fa461e"/></linearGradient><linearGradient id="linear-gradient-2" x1="-3134.50784" y1="1645.52546" x2="-3134.55653" y2="1645.47965" gradientTransform="matrix(800, 0, 0, -776, 2507658.98376, 1276974.5)" xlink:href="#linear-gradient"/></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.55202" y="-3.16104" width="223.25536" height="134.51136"/><path class="cls-2" d="M73.12738,75.22779a11.43076,11.43076,0,0,0,7.99043-3.25709L77.915,69.25159a6.54848,6.54848,0,0,1-4.78767,2.18113,6.27512,6.27512,0,1,1,0-12.54865A6.54851,6.54851,0,0,1,77.915,61.0652l3.20276-2.71911A11.30858,11.30858,0,0,0,73.12738,55.089c-5.89573,0-10.92371,4.58875-10.92371,10.05486C62.20367,70.61115,67.23166,75.22779,73.12738,75.22779ZM87.2185,62.96274V60.24363H82.847V74.8885h4.37024V67.89326c0-2.74823,2.06429-4.504,5.29615-4.504V59.90555A5.81225,5.81225,0,0,0,87.21723,62.964ZM101.398,75.22779c4.57892,0,8.41035-3.51278,8.41035-7.67626s-3.83144-7.64715-8.41035-7.64715c-4.60807,0-8.46857,3.48368-8.46857,7.64715S96.79,75.22779,101.398,75.22779Zm0-3.48372a4.17862,4.17862,0,1,1,4.25-4.19254A4.2883,4.2883,0,0,1,101.398,71.74407Zm13.67233,3.14321h3.92121l2.90169-9.20544,2.87388,9.20544h3.91994l5.267-14.64366H129.675l-2.93333,9.12063-3.05221-9.12063H120.068l-3.05221,9.12063-2.93333-9.12063h-4.27915l5.267,14.64366Zm31.95657-19.7995v7.25216a6.48235,6.48235,0,0,0-5.20633-2.43556c-4.12993,0-7.12271,3.17228-7.12271,7.64715,0,4.50393,2.99273,7.67626,7.12144,7.67626a6.48139,6.48139,0,0,0,5.20758-2.43556V74.8885h4.37024V55.089h-4.37024v-.00121Zm-3.95028,16.65629a4.17829,4.17829,0,0,1,0-8.356,4.18457,4.18457,0,0,1,0,8.356ZM159.533,59.45116a1.52877,1.52877,0,0,1,1.466-1.6431,2.32868,2.32868,0,0,1,1.49515.48107l1.0486-2.52038a5.82958,5.82958,0,0,0-3.562-1.27474,4.36676,4.36676,0,0,0-4.63845,4.47492v1.27469h-2.125v3.34317h2.125V74.88728h4.18943V63.58679h3.352V60.24363h-3.352v-.79247h.00127Zm7.47558-1.21778a2.24007,2.24007,0,1,0,0-4.47609,2.24316,2.24316,0,1,0,0,4.47609Zm-2.21356,16.6539h4.369V60.24363h-4.37024V74.8885l.00126-.00122ZM176.46128,62.964V60.24362H172.091V74.88849h4.37024V67.89326c0-2.74823,2.06428-4.504,5.29615-4.504V59.90555a5.81228,5.81228,0,0,0-5.29615,3.05845Zm14.24029,8.61049a3.72844,3.72844,0,0,1-3.6809-2.66336h11.16279c0-5.523-2.8435-9.0067-7.69063-9.0067-4.52078,0-7.96131,3.20012-7.96131,7.6193,0,4.50393,3.59107,7.70411,8.1991,7.70411a9.76074,9.76074,0,0,0,6.31568-2.26594l-2.78279-2.69a5.79593,5.79593,0,0,1-3.562,1.30259Zm-.03038-8.01677a3.204,3.204,0,0,1,3.32165,2.37987h-6.91269a3.73543,3.73543,0,0,1,3.59228-2.37987h-.00124Z"/><path class="cls-3" d="M48.815,64.51747H41.02529v-7.5563H33.23554v-7.5563H48.815Zm3.89487,0H44.92016A11.51324,11.51324,0,0,1,33.23554,75.85073a11.33911,11.33911,0,1,1,0-22.66768v-7.5563c-10.75594,0-19.47437,8.45767-19.47437,18.89072S22.4796,83.407,33.23554,83.407,52.70991,74.94936,52.70991,64.5163Z"/></svg>

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:none;}.cls-2{fill:#fbfbfb;}.cls-3{clip-path:url(#clip-path);}.cls-4{fill:#ee3248;fill-rule:evenodd;}</style><clipPath id="clip-path"><rect class="cls-1" x="4.70501" y="-8.81268" width="206.90403" height="145.3255"/></clipPath></defs><rect class="cls-2" x="-3.37342" y="-3.34411" width="223.25536" height="134.51135"/><g class="cls-3"><g class="cls-3"><path class="cls-4" d="M28.02166,50.76936l7.69058-.0617c6.9525.12305,7.75228,4.1837,4.61435,13.41275-2.64567,7.87531-6.76768,12.67414-14.58165,12.61279h-6.091ZM169.22325,61.044c4.55274-11.01316,10.52065-15.19686,22.76431-15.50454,10.9518-.24609,15.81224,4.79908,11.813,17.41187C199.49376,76.67151,191.67987,82.209,179.86693,82.209c-12.98175.06134-16.24263-7.87514-10.64368-21.165m8.429.73843c2.83006-7.01411,6.95215-11.87463,13.04345-11.93633,5.47555-.06125,6.46007,3.938,3.99905,12.36706-3.13794,10.82842-7.69068,15.5659-13.90466,15.44286-6.89106-.06135-6.768-6.82946-3.13784-15.87359m-15.07382-8.67536,2.09172-6.02932a34.76316,34.76316,0,0,0-10.95146-1.66134c-7.62924-.0616-13.35114,2.33806-15.6892,7.69066-3.69162,8.183,1.4766,10.70564,7.69084,13.59749,9.10583,4.245,3.876,11.56684-4.86069,10.82842-3.50688-.3077-6.58311-1.90724-9.65961-3.38384l-1.96859,6.15264A33.79646,33.79646,0,0,0,142.64393,82.209c9.0444-.55369,14.64308-4.184,16.91972-9.90595,2.584-6.52159-1.41525-10.52064-7.38324-12.42789-12.42831-3.99939-4.98338-15.75088,10.398-6.76811M95.57659,46.15492a19.153,19.153,0,0,1,2.215,3.62993L87.76306,81.47059H93.854l8.36766-26.45618,9.59791,26.45618h8.61375l-.36939-.98451,10.89012-34.33116h-6.15273l-7.99827,25.34837-9.22879-25.34837c-3.999,0-7.99836.06135-11.99767,0m-34.39259,0a14.395,14.395,0,0,1,2.21468,3.62993L53.43173,81.47059H79.826L81.05656,77.533H63.21437L67.152,65.16635H78.96492l1.23051-3.87627H68.3825L71.95081,49.9695H89.17822L90.347,46.15492c-9.72121,0-19.44208.06135-29.163,0m-42.26817,0a16.4482,16.4482,0,0,1,2.21468,3.62993L11.102,81.47059h8.61366l5.04517-.06135c14.766,0,21.77988-7.32179,24.73333-16.85828,4.245-13.5973-.92316-18.33469-13.47418-18.33469Z"/></g></g></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 215 127"><defs><style>.cls-1{fill:none;}.cls-2{fill:transparent;}.cls-3{clip-path:url(#clip-path);}.cls-4{fill:#ee3248;fill-rule:evenodd;}</style><clipPath id="clip-path"><rect class="cls-1" x="4.70501" y="-8.81268" width="206.90403" height="145.3255"/></clipPath></defs><rect class="cls-2" x="-3.37342" y="-3.34411" width="223.25536" height="134.51135"/><g class="cls-3"><g class="cls-3"><path class="cls-4" d="M28.02166,50.76936l7.69058-.0617c6.9525.12305,7.75228,4.1837,4.61435,13.41275-2.64567,7.87531-6.76768,12.67414-14.58165,12.61279h-6.091ZM169.22325,61.044c4.55274-11.01316,10.52065-15.19686,22.76431-15.50454,10.9518-.24609,15.81224,4.79908,11.813,17.41187C199.49376,76.67151,191.67987,82.209,179.86693,82.209c-12.98175.06134-16.24263-7.87514-10.64368-21.165m8.429.73843c2.83006-7.01411,6.95215-11.87463,13.04345-11.93633,5.47555-.06125,6.46007,3.938,3.99905,12.36706-3.13794,10.82842-7.69068,15.5659-13.90466,15.44286-6.89106-.06135-6.768-6.82946-3.13784-15.87359m-15.07382-8.67536,2.09172-6.02932a34.76316,34.76316,0,0,0-10.95146-1.66134c-7.62924-.0616-13.35114,2.33806-15.6892,7.69066-3.69162,8.183,1.4766,10.70564,7.69084,13.59749,9.10583,4.245,3.876,11.56684-4.86069,10.82842-3.50688-.3077-6.58311-1.90724-9.65961-3.38384l-1.96859,6.15264A33.79646,33.79646,0,0,0,142.64393,82.209c9.0444-.55369,14.64308-4.184,16.91972-9.90595,2.584-6.52159-1.41525-10.52064-7.38324-12.42789-12.42831-3.99939-4.98338-15.75088,10.398-6.76811M95.57659,46.15492a19.153,19.153,0,0,1,2.215,3.62993L87.76306,81.47059H93.854l8.36766-26.45618,9.59791,26.45618h8.61375l-.36939-.98451,10.89012-34.33116h-6.15273l-7.99827,25.34837-9.22879-25.34837c-3.999,0-7.99836.06135-11.99767,0m-34.39259,0a14.395,14.395,0,0,1,2.21468,3.62993L53.43173,81.47059H79.826L81.05656,77.533H63.21437L67.152,65.16635H78.96492l1.23051-3.87627H68.3825L71.95081,49.9695H89.17822L90.347,46.15492c-9.72121,0-19.44208.06135-29.163,0m-42.26817,0a16.4482,16.4482,0,0,1,2.21468,3.62993L11.102,81.47059h8.61366l5.04517-.06135c14.766,0,21.77988-7.32179,24.73333-16.85828,4.245-13.5973-.92316-18.33469-13.47418-18.33469Z"/></g></g></svg>

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#005da5;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-4.55738" y="-3.79362" width="223.25536" height="134.51136"/><path d="M70.73011,68.57377c-7.72723,0-10.07255-2.04907-10.07255-4.666V54.5512h5.061v8.83817c0,2.14782,2.24658,2.74033,5.25847,2.74033a39.0795,39.0795,0,0,0,4.71533-.1975V54.57589h5.061V68.006A79.45679,79.45679,0,0,1,70.73011,68.57377Z"/><polygon points="25.502 68.228 25.502 62.575 15.529 62.575 15.529 68.228 10.468 68.228 10.468 54.527 15.529 54.527 15.529 60.131 25.502 60.131 25.502 54.527 30.563 54.527 30.563 68.228 25.502 68.228"/><path d="M92.40584,57.26684h13.677V54.52651H91.34428c-3.9994,0-5.3819.71594-5.3819,2.7897V68.25283h5.061V62.5253H106.0581V60.08123H91.02333V58.1309C91.048,57.61246,91.34428,57.26684,92.40584,57.26684Z"/><path d="M116.501,57.26684c-1.08626,0-1.35782.34562-1.35782.83938v1.99969h15.03476v2.34533H115.14315v2.1972c0,.49375.29625.83938,1.35782.83938h13.67694v2.74032H115.4394c-3.9994,0-5.3819-.71594-5.3819-2.7897V57.29153c0-2.07377,1.3825-2.78971,5.3819-2.78971h14.73852v2.74033H116.501Z"/><path class="cls-2" d="M135.28825,65.66063c0-.69125.395-.93813,1.58-.93813h2.02439c1.16032,0,1.58.24688,1.58.93813V67.3147c0,.69126-.395.93813-1.58.93813h-2.02439c-1.16032,0-1.58-.24687-1.58-.93813Z"/><path d="M45.72154,59.785a50.47135,50.47135,0,0,0-8.1716.64187c-1.45657.22219-2.09845.79-2.09845,2.02439v3.45627c0,1.23438.64188,1.77751,2.09845,2.02438a50.47023,50.47023,0,0,0,8.1716.64188,66.931,66.931,0,0,0,9.85037-.61719V59.069c0-4.29564-1.65407-4.54252-7.25816-4.54252H37.05619v2.37H48.1903c2.09845,0,2.29595.56782,2.29595,1.82689V59.785Zm0,6.51753a24.08331,24.08331,0,0,1-4.1722-.32094c-.71595-.12344-1.03688-.395-1.03688-.98751V63.38937c0-.61719.32094-.88876,1.03688-.98751a24.08232,24.08232,0,0,1,4.1722-.32093h4.74V66.1297C49.49875,66.22845,47.62249,66.30251,45.72154,66.30251Z"/><path d="M152.17459,68.42564c-3.9994,0-7.1841-2.32063-7.1841-6.98659,0-4.22159,2.81438-6.9866,7.38159-6.9866a9.10468,9.10468,0,0,1,3.82659.74063l-.64188,1.13563a7.75929,7.75929,0,0,0-3.06127-.59251c-3.30814,0-5.1844,2.22189-5.1844,5.77691,0,3.82658,2.24657,5.62878,4.78941,5.62878a7.32732,7.32732,0,0,0,2.1972-.27157V61.76h-3.11065V60.62435h5.28316V67.7097A12.10491,12.10491,0,0,1,152.17459,68.42564Z"/><path d="M165.259,58.7481a4.6438,4.6438,0,0,0-1.35782-.17282,4.308,4.308,0,0,0-1.65407.27157v9.38129h-2.17251V58.15559a12.37618,12.37618,0,0,1,4.51783-.74063c.49375,0,1.03688.04938,1.16032.04938Z"/><path d="M172.44312,68.42564c-3.5797,0-5.20908-2.32063-5.20908-5.53s1.62938-5.50534,5.20908-5.50534,5.20908,2.32063,5.20908,5.50534S176.02283,68.42564,172.44312,68.42564Zm0-9.92442c-2.29594,0-3.06127,1.9997-3.06127,4.36971s.76531,4.3944,3.06127,4.3944,3.06127-2.02439,3.06127-4.3944S174.76376,58.50122,172.44312,58.50122Z"/><path d="M185.57694,68.42564c-3.16,0-4.81409-1.08625-4.81409-3.481V57.58778h2.14783v7.431c0,1.48126.86407,2.22189,2.691,2.22189A7.23093,7.23093,0,0,0,187.947,66.895V57.58778h2.14783V67.63564A12.37028,12.37028,0,0,1,185.57694,68.42564Z"/><path 
d="M197.89607,68.401a9.11009,9.11009,0,0,1-2.04907-.24688v4.32034h-2.12313V58.1309a10.93672,10.93672,0,0,1,4.09814-.74062c3.77721,0,5.851,2.09844,5.851,5.33252C203.64829,66.27782,201.377,68.401,197.89607,68.401Zm-.17281-9.94912a5.46342,5.46342,0,0,0-1.87626.27157v8.34441a5.51753,5.51753,0,0,0,1.58.22219c2.691,0,4.07346-1.62938,4.07346-4.46846C201.50046,60.15529,200.48827,58.45184,197.72326,58.45184Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#005da5;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-4.55738" y="-3.79362" width="223.25536" height="134.51136"/><path d="M70.73011,68.57377c-7.72723,0-10.07255-2.04907-10.07255-4.666V54.5512h5.061v8.83817c0,2.14782,2.24658,2.74033,5.25847,2.74033a39.0795,39.0795,0,0,0,4.71533-.1975V54.57589h5.061V68.006A79.45679,79.45679,0,0,1,70.73011,68.57377Z"/><polygon points="25.502 68.228 25.502 62.575 15.529 62.575 15.529 68.228 10.468 68.228 10.468 54.527 15.529 54.527 15.529 60.131 25.502 60.131 25.502 54.527 30.563 54.527 30.563 68.228 25.502 68.228"/><path d="M92.40584,57.26684h13.677V54.52651H91.34428c-3.9994,0-5.3819.71594-5.3819,2.7897V68.25283h5.061V62.5253H106.0581V60.08123H91.02333V58.1309C91.048,57.61246,91.34428,57.26684,92.40584,57.26684Z"/><path d="M116.501,57.26684c-1.08626,0-1.35782.34562-1.35782.83938v1.99969h15.03476v2.34533H115.14315v2.1972c0,.49375.29625.83938,1.35782.83938h13.67694v2.74032H115.4394c-3.9994,0-5.3819-.71594-5.3819-2.7897V57.29153c0-2.07377,1.3825-2.78971,5.3819-2.78971h14.73852v2.74033H116.501Z"/><path class="cls-2" d="M135.28825,65.66063c0-.69125.395-.93813,1.58-.93813h2.02439c1.16032,0,1.58.24688,1.58.93813V67.3147c0,.69126-.395.93813-1.58.93813h-2.02439c-1.16032,0-1.58-.24687-1.58-.93813Z"/><path d="M45.72154,59.785a50.47135,50.47135,0,0,0-8.1716.64187c-1.45657.22219-2.09845.79-2.09845,2.02439v3.45627c0,1.23438.64188,1.77751,2.09845,2.02438a50.47023,50.47023,0,0,0,8.1716.64188,66.931,66.931,0,0,0,9.85037-.61719V59.069c0-4.29564-1.65407-4.54252-7.25816-4.54252H37.05619v2.37H48.1903c2.09845,0,2.29595.56782,2.29595,1.82689V59.785Zm0,6.51753a24.08331,24.08331,0,0,1-4.1722-.32094c-.71595-.12344-1.03688-.395-1.03688-.98751V63.38937c0-.61719.32094-.88876,1.03688-.98751a24.08232,24.08232,0,0,1,4.1722-.32093h4.74V66.1297C49.49875,66.22845,47.62249,66.30251,45.72154,66.30251Z"/><path d="M152.17459,68.42564c-3.9994,0-7.1841-2.32063-7.1841-6.98659,0-4.22159,2.81438-6.9866,7.38159-6.9866a9.10468,9.10468,0,0,1,3.82659.74063l-.64188,1.13563a7.75929,7.75929,0,0,0-3.06127-.59251c-3.30814,0-5.1844,2.22189-5.1844,5.77691,0,3.82658,2.24657,5.62878,4.78941,5.62878a7.32732,7.32732,0,0,0,2.1972-.27157V61.76h-3.11065V60.62435h5.28316V67.7097A12.10491,12.10491,0,0,1,152.17459,68.42564Z"/><path d="M165.259,58.7481a4.6438,4.6438,0,0,0-1.35782-.17282,4.308,4.308,0,0,0-1.65407.27157v9.38129h-2.17251V58.15559a12.37618,12.37618,0,0,1,4.51783-.74063c.49375,0,1.03688.04938,1.16032.04938Z"/><path d="M172.44312,68.42564c-3.5797,0-5.20908-2.32063-5.20908-5.53s1.62938-5.50534,5.20908-5.50534,5.20908,2.32063,5.20908,5.50534S176.02283,68.42564,172.44312,68.42564Zm0-9.92442c-2.29594,0-3.06127,1.9997-3.06127,4.36971s.76531,4.3944,3.06127,4.3944,3.06127-2.02439,3.06127-4.3944S174.76376,58.50122,172.44312,58.50122Z"/><path d="M185.57694,68.42564c-3.16,0-4.81409-1.08625-4.81409-3.481V57.58778h2.14783v7.431c0,1.48126.86407,2.22189,2.691,2.22189A7.23093,7.23093,0,0,0,187.947,66.895V57.58778h2.14783V67.63564A12.37028,12.37028,0,0,1,185.57694,68.42564Z"/><path 
d="M197.89607,68.401a9.11009,9.11009,0,0,1-2.04907-.24688v4.32034h-2.12313V58.1309a10.93672,10.93672,0,0,1,4.09814-.74062c3.77721,0,5.851,2.09844,5.851,5.33252C203.64829,66.27782,201.377,68.401,197.89607,68.401Zm-.17281-9.94912a5.46342,5.46342,0,0,0-1.87626.27157v8.34441a5.51753,5.51753,0,0,0,1.58.22219c2.691,0,4.07346-1.62938,4.07346-4.46846C201.50046,60.15529,200.48827,58.45184,197.72326,58.45184Z"/></svg>

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215.9892 128.40633"><defs><style>.cls-1{fill:#f9f9f9;}.cls-2{fill:#4c81c2;}</style></defs><title>ibm_featured_logo</title><rect class="cls-1" x="-5.9997" y="-8.99955" width="229.48853" height="143.9928"/><polygon class="cls-2" points="190.441 33.693 162.454 33.693 164.178 28.868 190.441 28.868 190.441 33.693"/><path class="cls-2" d="M115.83346,28.867l25.98433-.003,1.7014,4.83715c.01251-.00687-27.677.00593-27.677,0C115.84224,33.69422,115.82554,28.867,115.83346,28.867Z"/><path class="cls-2" d="M95.19668,28.86593A18.6894,18.6894,0,0,1,106.37358,33.7s-47.10052.00489-47.10052,0V28.86488Z"/><rect class="cls-2" x="22.31176" y="28.86593" width="32.72063" height="4.82558"/><path class="cls-2" d="M190.44115,42.74673h-31.194s1.70142-4.79994,1.691-4.80193h29.50305Z"/><polygon class="cls-2" points="146.734 42.753 115.832 42.753 115.832 37.944 145.041 37.944 146.734 42.753"/><path class="cls-2" d="M110.04127,37.94271a12.47,12.47,0,0,1,1.35553,4.80214H59.28193V37.94271Z"/><rect class="cls-2" x="22.31176" y="37.94271" width="32.72063" height="4.80214"/><polygon class="cls-2" points="156.056 51.823 157.768 46.998 181.191 47.005 181.191 51.812 156.056 51.823"/><polygon class="cls-2" points="148.237 46.997 149.944 51.823 125.046 51.823 125.046 46.997 148.237 46.997"/><path class="cls-2" d="M111.81,46.99627a15.748,15.748,0,0,1-.68923,4.82641H96.85137V46.99627Z"/><rect class="cls-2" x="31.43162" y="47.01973" width="14.06406" height="4.8019"/><rect class="cls-2" x="68.7486" y="46.99627" width="14.03976" height="4.82537"/><path class="cls-2" d="M138.87572,57.03292s.004,3.65225.001,3.65913H125.04558V55.89h26.35583l1.637,4.4773c.00773.00292,1.57841-4.48815,1.58153-4.47835h26.56223V60.692h-13.763c-.00124-.00687-.00771-3.65819-.00771-3.65819l-1.273,3.65819-25.99183-.00687Z"/><path class="cls-2" d="M68.7486,55.889h40.30365v-.00188a18.13723,18.13723,0,0,1-3.99812,4.80494s-36.30647.00668-36.30647,0Z"/><rect class="cls-2" x="31.43162" y="55.88794" width="14.06406" height="4.80316"/><rect class="cls-2" x="167.41912" y="64.94348" width="13.76302" height="4.80212"/><path class="cls-2" d="M138.87572,64.94348H125.04558V69.7456c-.00688-.0025,13.83411.00167,13.83411,0C138.87969,69.7431,138.89532,64.94348,138.87572,64.94348Z"/><path class="cls-2" d="M164.63927,64.94348c-.06255-.007-1.61218,4.79962-1.67723,4.80212l-19.60378.00835c-.01543-.00751-1.72371-4.81745-1.725-4.81047Z"/><path class="cls-2" d="M68.74672,64.94233H104.985a23.7047,23.7047,0,0,1,4.32076,4.80327c.06609-.0025-40.5581.00167-40.5581,0Z"/><path class="cls-2" d="M45.49359,69.74436v-4.802H31.45487V69.7431Z"/><rect class="cls-2" x="167.41912" y="73.99693" width="13.76198" height="4.80295"/><rect class="cls-2" x="125.04474" y="73.99693" width="13.83097" height="4.80212"/><path class="cls-2" d="M159.74351,78.8224c.00376-.02169,1.69745-4.82964,1.72373-4.82547H144.80219c-.029-.00209,1.70848,4.80378,1.70848,4.80378S159.7404,78.84241,159.74351,78.8224Z"/><path class="cls-2" d="M68.74766,78.79905c0,.01919-.00094-4.80212,0-4.803H82.9958s.01272,4.80462,0,4.80462C82.98224,78.80072,68.74766,78.79489,68.74766,78.79905Z"/><path class="cls-2" d="M111.30529,73.9961a13.94783,13.94783,0,0,1,.89542,4.825H97.10364v-4.825Z"/><rect class="cls-2" x="31.45487" y="73.9961" width="14.03872" height="4.80171"/><rect class="cls-2" x="167.41912" y="82.86525" width="23.0212" height="4.80421"/><rect class="cls-2" x="115.83139" y="82.86525" width="23.04432" height="4.80421"/><polygon class="cls-2" points="156.647 
87.669 149.618 87.669 147.931 82.865 158.272 82.865 156.647 87.669"/><path class="cls-2" d="M22.3099,82.86525v4.80212H55.008c.01366.00751-.01469-4.79919,0-4.79919Z"/><path class="cls-2" d="M111.60237,82.86525c-.3442,1.58445-.65962,3.5158-1.81732,4.80421l-.43175-.00209H59.28005V82.86525Z"/><polygon class="cls-2" points="153.461 96.733 152.814 96.733 151.171 91.92 155.147 91.92 153.461 96.733"/><rect class="cls-2" x="167.41788" y="91.91953" width="23.02244" height="4.82547"/><path class="cls-2" d="M59.27307,96.73333V91.92745s47.24073.00585,47.37623.00585A17.945,17.945,0,0,1,94.43864,96.745l-35.15859-.00959"/><rect class="cls-2" x="115.83139" y="91.91953" width="23.04432" height="4.82547"/><path class="cls-2" d="M55.008,91.94079s-.01469,4.79253,0,4.79253c.01366,0-32.6885.0196-32.69809.00961-.00888-.00961.00875-4.81548,0-4.81548S54.9933,91.95664,55.008,91.94079Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215.9892 128.40633"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#4c81c2;}</style></defs><title>ibm_featured_logo</title><rect class="cls-1" x="-5.9997" y="-8.99955" width="229.48853" height="143.9928"/><polygon class="cls-2" points="190.441 33.693 162.454 33.693 164.178 28.868 190.441 28.868 190.441 33.693"/><path class="cls-2" d="M115.83346,28.867l25.98433-.003,1.7014,4.83715c.01251-.00687-27.677.00593-27.677,0C115.84224,33.69422,115.82554,28.867,115.83346,28.867Z"/><path class="cls-2" d="M95.19668,28.86593A18.6894,18.6894,0,0,1,106.37358,33.7s-47.10052.00489-47.10052,0V28.86488Z"/><rect class="cls-2" x="22.31176" y="28.86593" width="32.72063" height="4.82558"/><path class="cls-2" d="M190.44115,42.74673h-31.194s1.70142-4.79994,1.691-4.80193h29.50305Z"/><polygon class="cls-2" points="146.734 42.753 115.832 42.753 115.832 37.944 145.041 37.944 146.734 42.753"/><path class="cls-2" d="M110.04127,37.94271a12.47,12.47,0,0,1,1.35553,4.80214H59.28193V37.94271Z"/><rect class="cls-2" x="22.31176" y="37.94271" width="32.72063" height="4.80214"/><polygon class="cls-2" points="156.056 51.823 157.768 46.998 181.191 47.005 181.191 51.812 156.056 51.823"/><polygon class="cls-2" points="148.237 46.997 149.944 51.823 125.046 51.823 125.046 46.997 148.237 46.997"/><path class="cls-2" d="M111.81,46.99627a15.748,15.748,0,0,1-.68923,4.82641H96.85137V46.99627Z"/><rect class="cls-2" x="31.43162" y="47.01973" width="14.06406" height="4.8019"/><rect class="cls-2" x="68.7486" y="46.99627" width="14.03976" height="4.82537"/><path class="cls-2" d="M138.87572,57.03292s.004,3.65225.001,3.65913H125.04558V55.89h26.35583l1.637,4.4773c.00773.00292,1.57841-4.48815,1.58153-4.47835h26.56223V60.692h-13.763c-.00124-.00687-.00771-3.65819-.00771-3.65819l-1.273,3.65819-25.99183-.00687Z"/><path class="cls-2" d="M68.7486,55.889h40.30365v-.00188a18.13723,18.13723,0,0,1-3.99812,4.80494s-36.30647.00668-36.30647,0Z"/><rect class="cls-2" x="31.43162" y="55.88794" width="14.06406" height="4.80316"/><rect class="cls-2" x="167.41912" y="64.94348" width="13.76302" height="4.80212"/><path class="cls-2" d="M138.87572,64.94348H125.04558V69.7456c-.00688-.0025,13.83411.00167,13.83411,0C138.87969,69.7431,138.89532,64.94348,138.87572,64.94348Z"/><path class="cls-2" d="M164.63927,64.94348c-.06255-.007-1.61218,4.79962-1.67723,4.80212l-19.60378.00835c-.01543-.00751-1.72371-4.81745-1.725-4.81047Z"/><path class="cls-2" d="M68.74672,64.94233H104.985a23.7047,23.7047,0,0,1,4.32076,4.80327c.06609-.0025-40.5581.00167-40.5581,0Z"/><path class="cls-2" d="M45.49359,69.74436v-4.802H31.45487V69.7431Z"/><rect class="cls-2" x="167.41912" y="73.99693" width="13.76198" height="4.80295"/><rect class="cls-2" x="125.04474" y="73.99693" width="13.83097" height="4.80212"/><path class="cls-2" d="M159.74351,78.8224c.00376-.02169,1.69745-4.82964,1.72373-4.82547H144.80219c-.029-.00209,1.70848,4.80378,1.70848,4.80378S159.7404,78.84241,159.74351,78.8224Z"/><path class="cls-2" d="M68.74766,78.79905c0,.01919-.00094-4.80212,0-4.803H82.9958s.01272,4.80462,0,4.80462C82.98224,78.80072,68.74766,78.79489,68.74766,78.79905Z"/><path class="cls-2" d="M111.30529,73.9961a13.94783,13.94783,0,0,1,.89542,4.825H97.10364v-4.825Z"/><rect class="cls-2" x="31.45487" y="73.9961" width="14.03872" height="4.80171"/><rect class="cls-2" x="167.41912" y="82.86525" width="23.0212" height="4.80421"/><rect class="cls-2" x="115.83139" y="82.86525" width="23.04432" height="4.80421"/><polygon class="cls-2" 
points="156.647 87.669 149.618 87.669 147.931 82.865 158.272 82.865 156.647 87.669"/><path class="cls-2" d="M22.3099,82.86525v4.80212H55.008c.01366.00751-.01469-4.79919,0-4.79919Z"/><path class="cls-2" d="M111.60237,82.86525c-.3442,1.58445-.65962,3.5158-1.81732,4.80421l-.43175-.00209H59.28005V82.86525Z"/><polygon class="cls-2" points="153.461 96.733 152.814 96.733 151.171 91.92 155.147 91.92 153.461 96.733"/><rect class="cls-2" x="167.41788" y="91.91953" width="23.02244" height="4.82547"/><path class="cls-2" d="M59.27307,96.73333V91.92745s47.24073.00585,47.37623.00585A17.945,17.945,0,0,1,94.43864,96.745l-35.15859-.00959"/><rect class="cls-2" x="115.83139" y="91.91953" width="23.04432" height="4.82547"/><path class="cls-2" d="M55.008,91.94079s-.01469,4.79253,0,4.79253c.01366,0-32.6885.0196-32.69809.00961-.00888-.00961.00875-4.81548,0-4.81548S54.9933,91.95664,55.008,91.94079Z"/></svg>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#0191e9;}.cls-3{fill:#0091e9;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-4.04384" y="-3.09231" width="223.25536" height="134.51136"/><path class="cls-2" d="M45.52289,58.40931v.67992q0,13.93346-.00007,27.86692c0,1.38065-.53827,1.91475-1.93728,1.91529q-5.28807.002-10.57613-.00008c-1.38822-.00063-1.92493-.54092-1.925-1.93233q-.00046-24.54695-.00005-49.09388c0-1.353.57919-1.92554,1.94686-1.92565,3.80461-.00032,7.60928.01,11.41379-.00895a2.04535,2.04535,0,0,1,1.71751.81129Q54.03893,46.45059,61.924,56.17238q4.04849,4.99347,8.103,9.98208c.07192.08853.12491.2127.32189.21055V65.8111q0-13.98573.00015-27.97148c0-1.36549.55959-1.92,1.93681-1.92011q5.28806-.00056,10.57612,0c1.34222.00018,1.90831.58125,1.90846,1.96856q.00156,14.66543.00056,29.33085,0,9.85536-.00062,19.71075A2.263,2.263,0,0,1,84.52869,88.14a1.37319,1.37319,0,0,1-1.04378.69444,5.03028,5.03028,0,0,1-.6784.03606c-3.76971.002-7.53949-.01164-11.30906.01172a2.24873,2.24873,0,0,1-1.91838-.92957Q62.60826,79.30668,55.596,70.69411q-4.824-5.946-9.65083-11.88973A1.04432,1.04432,0,0,0,45.52289,58.40931Z"/><path class="cls-3" d="M123.01876,56.06019c-.2594-1.23634-.48657-2.48055-.78675-3.70693a1.60083,1.60083,0,0,1,1.699-2.14585c3.24478.05866,6.49131.01913,9.73714.02113A1.51454,1.51454,0,0,1,135.39865,51.95q.00269,17.59165,0,35.18331a1.50965,1.50965,0,0,1-1.72062,1.73444q-4.89491.01271-9.78984.00037a1.51127,1.51127,0,0,1-1.62212-2.05353c.25809-1.184.504-2.37063.75558-3.55756-.358.33974-.71292.66321-1.053,1.00145a19.61428,19.61428,0,0,1-5.08847,4.04243,16.514,16.514,0,0,1-8.88021,1.51593,18.6292,18.6292,0,0,1-7.80715-2.40237A19.19129,19.19129,0,0,1,91.61794,77.127a22.109,22.109,0,0,1-1.24153-9.90284A20.27607,20.27607,0,0,1,95.94859,54.708a18.01769,18.01769,0,0,1,10.63178-5.53131,17.6862,17.6862,0,0,1,7.30216.3216,15.14214,15.14214,0,0,1,7.36191,4.72185c.56309.63191,1.13169,1.25893,1.69777,1.8882Zm-9.77385,22.141a8.79066,8.79066,0,1,0,.14059-17.58031,8.79085,8.79085,0,1,0-.14059,17.58031Z"/><path class="cls-3" d="M162.37991,77.63555c.49774-1.75171.99389-3.48281,1.48143-5.21633q2.78715-9.91015,5.57042-19.82139a3.04284,3.04284,0,0,1,3.14933-2.36811q5.00009-.00879,10.00019-.001c1.19946.00084,1.72.6945,1.36948,1.8371q-3.41624,11.13649-6.839,22.271-1.962,6.38839-3.92252,12.77725a2.069,2.069,0,0,1-2.39972,1.75589q-8.42949.00264-16.859.0004a2.08773,2.08773,0,0,1-2.45568-1.82338q-5.3462-17.42612-10.69272-34.85212c-.04576-.14911-.10449-.29524-.13769-.447a1.19831,1.19831,0,0,1,1.16194-1.50906c3.56019-.01552,7.12059-.02529,10.68066-.00068a3.01084,3.01084,0,0,1,2.7849,2.42663q1.102,3.90333,2.20008,7.80776,2.35458,8.35854,4.70637,16.71786A.84887.84887,0,0,0,162.37991,77.63555Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#0191e9;}.cls-3{fill:#0091e9;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-4.04384" y="-3.09231" width="223.25536" height="134.51136"/><path class="cls-2" d="M45.52289,58.40931v.67992q0,13.93346-.00007,27.86692c0,1.38065-.53827,1.91475-1.93728,1.91529q-5.28807.002-10.57613-.00008c-1.38822-.00063-1.92493-.54092-1.925-1.93233q-.00046-24.54695-.00005-49.09388c0-1.353.57919-1.92554,1.94686-1.92565,3.80461-.00032,7.60928.01,11.41379-.00895a2.04535,2.04535,0,0,1,1.71751.81129Q54.03893,46.45059,61.924,56.17238q4.04849,4.99347,8.103,9.98208c.07192.08853.12491.2127.32189.21055V65.8111q0-13.98573.00015-27.97148c0-1.36549.55959-1.92,1.93681-1.92011q5.28806-.00056,10.57612,0c1.34222.00018,1.90831.58125,1.90846,1.96856q.00156,14.66543.00056,29.33085,0,9.85536-.00062,19.71075A2.263,2.263,0,0,1,84.52869,88.14a1.37319,1.37319,0,0,1-1.04378.69444,5.03028,5.03028,0,0,1-.6784.03606c-3.76971.002-7.53949-.01164-11.30906.01172a2.24873,2.24873,0,0,1-1.91838-.92957Q62.60826,79.30668,55.596,70.69411q-4.824-5.946-9.65083-11.88973A1.04432,1.04432,0,0,0,45.52289,58.40931Z"/><path class="cls-3" d="M123.01876,56.06019c-.2594-1.23634-.48657-2.48055-.78675-3.70693a1.60083,1.60083,0,0,1,1.699-2.14585c3.24478.05866,6.49131.01913,9.73714.02113A1.51454,1.51454,0,0,1,135.39865,51.95q.00269,17.59165,0,35.18331a1.50965,1.50965,0,0,1-1.72062,1.73444q-4.89491.01271-9.78984.00037a1.51127,1.51127,0,0,1-1.62212-2.05353c.25809-1.184.504-2.37063.75558-3.55756-.358.33974-.71292.66321-1.053,1.00145a19.61428,19.61428,0,0,1-5.08847,4.04243,16.514,16.514,0,0,1-8.88021,1.51593,18.6292,18.6292,0,0,1-7.80715-2.40237A19.19129,19.19129,0,0,1,91.61794,77.127a22.109,22.109,0,0,1-1.24153-9.90284A20.27607,20.27607,0,0,1,95.94859,54.708a18.01769,18.01769,0,0,1,10.63178-5.53131,17.6862,17.6862,0,0,1,7.30216.3216,15.14214,15.14214,0,0,1,7.36191,4.72185c.56309.63191,1.13169,1.25893,1.69777,1.8882Zm-9.77385,22.141a8.79066,8.79066,0,1,0,.14059-17.58031,8.79085,8.79085,0,1,0-.14059,17.58031Z"/><path class="cls-3" d="M162.37991,77.63555c.49774-1.75171.99389-3.48281,1.48143-5.21633q2.78715-9.91015,5.57042-19.82139a3.04284,3.04284,0,0,1,3.14933-2.36811q5.00009-.00879,10.00019-.001c1.19946.00084,1.72.6945,1.36948,1.8371q-3.41624,11.13649-6.839,22.271-1.962,6.38839-3.92252,12.77725a2.069,2.069,0,0,1-2.39972,1.75589q-8.42949.00264-16.859.0004a2.08773,2.08773,0,0,1-2.45568-1.82338q-5.3462-17.42612-10.69272-34.85212c-.04576-.14911-.10449-.29524-.13769-.447a1.19831,1.19831,0,0,1,1.16194-1.50906c3.56019-.01552,7.12059-.02529,10.68066-.00068a3.01084,3.01084,0,0,1,2.7849,2.42663q1.102,3.90333,2.20008,7.80776,2.35458,8.35854,4.70637,16.71786A.84887.84887,0,0,0,162.37991,77.63555Z"/></svg>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#006cb7;}</style></defs><title>nokia</title><rect class="cls-1" x="-3.35417" y="-4.36563" width="223.25536" height="134.51136"/><polygon class="cls-2" points="23.535 77.585 14.624 77.585 14.624 48.305 30.04 48.305 47.717 70.171 47.717 48.305 56.626 48.305 56.626 77.585 41.529 77.585 23.535 55.356 23.535 77.585"/><path class="cls-2" d="M86.96953,70.53817c2.03268,0,2.74956-.11608,3.23784-.54371.45706-.39991.65794-.91707.65794-2.63643V58.4191c0-1.72-.20088-2.23418-.65794-2.63711-.48828-.42873-1.20515-.53939-3.23783-.53939H72.49669c-2.03268,0-2.74592.11066-3.233.53939-.45946.40293-.65671.91714-.65671,2.63711V67.358c0,1.71936.19725,2.23652.65671,2.63643.48711.42763,1.20035.54371,3.233.54371Zm12.2923-2.0014c0,4.06413-.74575,5.47257-1.97375,6.81726-1.89554,2.00436-4.6451,2.60576-9.9469,2.60576H72.12862c-5.30181,0-8.05253-.6014-9.94327-2.60576-1.2328-1.34469-1.97732-2.75313-1.97732-6.81726V57.2488c0-4.07017.74452-5.478,1.97732-6.82508,1.89074-2.00442,4.64146-2.60343,9.94327-2.60343H87.34118c5.3018,0,8.05135.599,9.9469,2.60343,1.228,1.34709,1.97375,2.75491,1.97375,6.82508Z"/><path class="cls-2" d="M127.93816,48.305h12.28866L123.64069,62.0021l18.7126,15.58666H129.19743L112.21318,62.31364Zm-15.725,29.28376h-9.30942V48.305h9.30942Z"/><rect class="cls-2" x="144.07442" y="48.305" width="9.31421" height="29.28376"/><path class="cls-2" d="M171.34711,66.124h11.57544L177.1324,55.52643Zm17.53874,11.46113L186.1315,72.486H168.11293l-2.72431,5.09914H155.10136L171.60454,48.305h11.57784l16.50553,29.28013Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#006cb7;}</style></defs><title>nokia</title><rect class="cls-1" x="-3.35417" y="-4.36563" width="223.25536" height="134.51136"/><polygon class="cls-2" points="23.535 77.585 14.624 77.585 14.624 48.305 30.04 48.305 47.717 70.171 47.717 48.305 56.626 48.305 56.626 77.585 41.529 77.585 23.535 55.356 23.535 77.585"/><path class="cls-2" d="M86.96953,70.53817c2.03268,0,2.74956-.11608,3.23784-.54371.45706-.39991.65794-.91707.65794-2.63643V58.4191c0-1.72-.20088-2.23418-.65794-2.63711-.48828-.42873-1.20515-.53939-3.23783-.53939H72.49669c-2.03268,0-2.74592.11066-3.233.53939-.45946.40293-.65671.91714-.65671,2.63711V67.358c0,1.71936.19725,2.23652.65671,2.63643.48711.42763,1.20035.54371,3.233.54371Zm12.2923-2.0014c0,4.06413-.74575,5.47257-1.97375,6.81726-1.89554,2.00436-4.6451,2.60576-9.9469,2.60576H72.12862c-5.30181,0-8.05253-.6014-9.94327-2.60576-1.2328-1.34469-1.97732-2.75313-1.97732-6.81726V57.2488c0-4.07017.74452-5.478,1.97732-6.82508,1.89074-2.00442,4.64146-2.60343,9.94327-2.60343H87.34118c5.3018,0,8.05135.599,9.9469,2.60343,1.228,1.34709,1.97375,2.75491,1.97375,6.82508Z"/><path class="cls-2" d="M127.93816,48.305h12.28866L123.64069,62.0021l18.7126,15.58666H129.19743L112.21318,62.31364Zm-15.725,29.28376h-9.30942V48.305h9.30942Z"/><rect class="cls-2" x="144.07442" y="48.305" width="9.31421" height="29.28376"/><path class="cls-2" d="M171.34711,66.124h11.57544L177.1324,55.52643Zm17.53874,11.46113L186.1315,72.486H168.11293l-2.72431,5.09914H155.10136L171.60454,48.305h11.57784l16.50553,29.28013Z"/></svg>

View File

@@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:none;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.85896" y="-3.29932" width="223.25536" height="134.51136"/><rect class="cls-2" x="12.98311" y="52.37137" width="189.57122" height="23.16998"/><path d="M44.97419,73.85506c4.48217,0,6.95205-3.55169,6.95205-9.965,0-6.3344-2.47024-9.965-6.95206-9.965-4.47852,0-6.952,3.63063-6.952,9.8119,0,6.56647,2.39459,10.11816,6.95206,10.11816m0-21.3978c6.25765,0,10.3495,4.63641,10.3495,11.43277,0,6.79929-4.17189,11.51427-10.3495,11.51427-6.10306,0-10.3517-4.87068-10.3517-11.51427,0-6.64286,4.24864-11.43277,10.3517-11.43277"/><path d="M95.44036,63.5834c0-6.02777-2.78053-9.42851-8.187-9.42851a23.51638,23.51638,0,0,0-3.30645.15569,4.15054,4.15054,0,0,1,.19736,1.40305c-.02084,1.926-.05921,5.45654-.05921,8.179,0,2.97313.06652,6.15057.11183,7.99036a4.5992,4.5992,0,0,1-.20247,1.58653,18.99839,18.99839,0,0,0,2.87117.15569c5.71529,0,8.57476-3.55606,8.57476-10.04177m3.3229-.07858c0,7.10519-4.0173,11.50952-10.89443,11.50952H80.9186s.1546-7.02881.1546-11.12175c0-4.02095-.1546-11.04976-.1546-11.04976h7.95674c6.48937,0,9.88792,3.63282,9.88792,10.662"/><path d="M166.24593,73.85506c4.48072,0,6.95205-3.55169,6.95205-9.965,0-6.3344-2.4717-9.965-6.95205-9.965-4.48328,0-6.95426,3.63063-6.95426,9.8119,0,6.56647,2.39239,10.11816,6.95426,10.11816m0-21.3978c6.2529,0,10.348,4.63641,10.348,11.43277,0,6.79929-4.1719,11.51427-10.348,11.51427-6.10671,0-10.35535-4.87068-10.35535-11.51427,0-6.64286,4.24863-11.43277,10.35535-11.43277"/><path d="M15.06887,67.21183c0,5.33556.23135,7.8047.23135,7.8047H12.98311s.30773-8.964.30773-17.53729c0-1.46847-.07894-4.63677-.07894-4.63677H14.839s6.17652,8.76188,10.28664,14.46182a23.721,23.721,0,0,1,1.47287,2.49948s-.17324-1.51233-.17324-2.88286V60.64754a63.2181,63.2181,0,0,0-.31029-7.80471h2.31455s-.30663,8.11134-.30663,17.5373c0,.929.07565,4.63677.07565,4.63677H26.57949S20.5181,66.37818,16.36375,60.61538a15.50923,15.50923,0,0,1-1.44143-2.34379s.14619,1.88621.14619,2.8803v6.05994Z"/><path d="M109.1135,73.85506a4.10637,4.10637,0,0,0,4.06041-4.62216c-.29311-1.89754-1.73052-3.036-4.13461-4.41566-3.07912-1.76049-4.90722-3.04989-5.21934-5.99744-.39-3.67265,2.683-6.28543,6.37607-6.43893a14.00586,14.00586,0,0,1,4.71316.73424V55.468a7.03561,7.03561,0,0,0-4.55747-1.69581c-1.941,0-4.06006,1.23348-4.06006,3.42158,0,1.944,1.52951,2.87811,4.21575,4.45513,3.58421,2.11135,5.38674,3.80715,5.38674,6.98458a6.8388,6.8388,0,0,1-7.08728,6.76895,14.27941,14.27941,0,0,1-4.71316-.73862v-2.585a7.15262,7.15262,0,0,0,5.01979,1.77621"/><path d="M133.52646,52.84283V54.9359a14.69308,14.69308,0,0,0-4.24609-.5409h-1.45422s.04642.58549.03727,1.41439c0,0-.05335,3.96284-.05335,5.0684,0,6.1809.30882,14.13692.30882,14.13692H124.644s.31138-7.956.31138-14.13692c0-1.10556-.05555-5.0684-.05555-5.0684-.01389-.83365.03727-1.41439.03727-1.41439h-1.44947a14.69393,14.69393,0,0,0-4.25084.54091V52.84283Z"/><path d="M202.48016,75.0074h-3.27537s-.18347-8.09088-.497-13.78168c-.03252-.678-.03-1.54011-.03-2.47134a24.35386,24.35386,0,0,1-.85959,2.395l-6.07894,14.392s-3.39855-7.77437-6.08368-13.61173c-.40421-.87787-.9316-2.41542-1.29159-3.17524a31.92871,31.92871,0,0,1-.102,3.49357c-.409,5.71419-.52044,12.75945-.52044,12.75945h-2.13254l1.37053-22.16932h1.90485s4.09039,9.20776,6.713,14.84046a11.52393,11.52393,0,0,1,.90126,2.267,23.44415,23.44415,0,0,1,.81319-2.24145c2.42272-5.70469,6.12279-14.8664,6.12279-14.8664h1.90484Z"/><path 
d="M151.38463,58.018c0,2.93622-1.8493,4.94524-5.10093,5.79533,2.09052,3.86306,6.3366,11.2007,6.3366,11.2007h-3.47493s-1.85113-3.78413-6.64359-12.20429h.38557c3.55424,0,5.40792-1.46811,5.40792-4.40433,0-2.55065-1.547-4.17153-4.48107-4.17153-.84754,0-2.1786.03-2.1786.03a6.50192,6.50192,0,0,1,.18348,1.92349V67.597c0,4.94523.23243,7.41694.23243,7.41694h-3.32179s.23463-8.72936.23463-13.28427c0-3.16866-.23463-8.88724-.23463-8.88724h6.5504c3.9442.00036,6.10451,1.85624,6.10451,5.17548"/><path d="M74.17121,58.018c0,2.93622-1.85479,4.94524-5.09983,5.79533,2.08613,3.86306,6.3344,11.2007,6.3344,11.2007H71.93085s-1.85478-3.78413-6.64323-12.20429h.38777c3.55059,0,5.40427-1.46811,5.40427-4.40433,0-2.55065-1.54376-4.17153-4.47962-4.17153-.849,0-2.179.03-2.179.03a6.41786,6.41786,0,0,1,.186,1.92349V67.597c0,4.94523.231,7.41694.231,7.41694H61.5152s.23244-8.72936.23244-13.28427c0-3.16866-.23244-8.88724-.23244-8.88724h6.553c3.94165.00036,6.10306,1.85624,6.10306,5.17548"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:none;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.85896" y="-3.29932" width="223.25536" height="134.51136"/><rect class="cls-2" x="12.98311" y="52.37137" width="189.57122" height="23.16998"/><path d="M44.97419,73.85506c4.48217,0,6.95205-3.55169,6.95205-9.965,0-6.3344-2.47024-9.965-6.95206-9.965-4.47852,0-6.952,3.63063-6.952,9.8119,0,6.56647,2.39459,10.11816,6.95206,10.11816m0-21.3978c6.25765,0,10.3495,4.63641,10.3495,11.43277,0,6.79929-4.17189,11.51427-10.3495,11.51427-6.10306,0-10.3517-4.87068-10.3517-11.51427,0-6.64286,4.24864-11.43277,10.3517-11.43277"/><path d="M95.44036,63.5834c0-6.02777-2.78053-9.42851-8.187-9.42851a23.51638,23.51638,0,0,0-3.30645.15569,4.15054,4.15054,0,0,1,.19736,1.40305c-.02084,1.926-.05921,5.45654-.05921,8.179,0,2.97313.06652,6.15057.11183,7.99036a4.5992,4.5992,0,0,1-.20247,1.58653,18.99839,18.99839,0,0,0,2.87117.15569c5.71529,0,8.57476-3.55606,8.57476-10.04177m3.3229-.07858c0,7.10519-4.0173,11.50952-10.89443,11.50952H80.9186s.1546-7.02881.1546-11.12175c0-4.02095-.1546-11.04976-.1546-11.04976h7.95674c6.48937,0,9.88792,3.63282,9.88792,10.662"/><path d="M166.24593,73.85506c4.48072,0,6.95205-3.55169,6.95205-9.965,0-6.3344-2.4717-9.965-6.95205-9.965-4.48328,0-6.95426,3.63063-6.95426,9.8119,0,6.56647,2.39239,10.11816,6.95426,10.11816m0-21.3978c6.2529,0,10.348,4.63641,10.348,11.43277,0,6.79929-4.1719,11.51427-10.348,11.51427-6.10671,0-10.35535-4.87068-10.35535-11.51427,0-6.64286,4.24863-11.43277,10.35535-11.43277"/><path d="M15.06887,67.21183c0,5.33556.23135,7.8047.23135,7.8047H12.98311s.30773-8.964.30773-17.53729c0-1.46847-.07894-4.63677-.07894-4.63677H14.839s6.17652,8.76188,10.28664,14.46182a23.721,23.721,0,0,1,1.47287,2.49948s-.17324-1.51233-.17324-2.88286V60.64754a63.2181,63.2181,0,0,0-.31029-7.80471h2.31455s-.30663,8.11134-.30663,17.5373c0,.929.07565,4.63677.07565,4.63677H26.57949S20.5181,66.37818,16.36375,60.61538a15.50923,15.50923,0,0,1-1.44143-2.34379s.14619,1.88621.14619,2.8803v6.05994Z"/><path d="M109.1135,73.85506a4.10637,4.10637,0,0,0,4.06041-4.62216c-.29311-1.89754-1.73052-3.036-4.13461-4.41566-3.07912-1.76049-4.90722-3.04989-5.21934-5.99744-.39-3.67265,2.683-6.28543,6.37607-6.43893a14.00586,14.00586,0,0,1,4.71316.73424V55.468a7.03561,7.03561,0,0,0-4.55747-1.69581c-1.941,0-4.06006,1.23348-4.06006,3.42158,0,1.944,1.52951,2.87811,4.21575,4.45513,3.58421,2.11135,5.38674,3.80715,5.38674,6.98458a6.8388,6.8388,0,0,1-7.08728,6.76895,14.27941,14.27941,0,0,1-4.71316-.73862v-2.585a7.15262,7.15262,0,0,0,5.01979,1.77621"/><path d="M133.52646,52.84283V54.9359a14.69308,14.69308,0,0,0-4.24609-.5409h-1.45422s.04642.58549.03727,1.41439c0,0-.05335,3.96284-.05335,5.0684,0,6.1809.30882,14.13692.30882,14.13692H124.644s.31138-7.956.31138-14.13692c0-1.10556-.05555-5.0684-.05555-5.0684-.01389-.83365.03727-1.41439.03727-1.41439h-1.44947a14.69393,14.69393,0,0,0-4.25084.54091V52.84283Z"/><path d="M202.48016,75.0074h-3.27537s-.18347-8.09088-.497-13.78168c-.03252-.678-.03-1.54011-.03-2.47134a24.35386,24.35386,0,0,1-.85959,2.395l-6.07894,14.392s-3.39855-7.77437-6.08368-13.61173c-.40421-.87787-.9316-2.41542-1.29159-3.17524a31.92871,31.92871,0,0,1-.102,3.49357c-.409,5.71419-.52044,12.75945-.52044,12.75945h-2.13254l1.37053-22.16932h1.90485s4.09039,9.20776,6.713,14.84046a11.52393,11.52393,0,0,1,.90126,2.267,23.44415,23.44415,0,0,1,.81319-2.24145c2.42272-5.70469,6.12279-14.8664,6.12279-14.8664h1.90484Z"/><path 
d="M151.38463,58.018c0,2.93622-1.8493,4.94524-5.10093,5.79533,2.09052,3.86306,6.3366,11.2007,6.3366,11.2007h-3.47493s-1.85113-3.78413-6.64359-12.20429h.38557c3.55424,0,5.40792-1.46811,5.40792-4.40433,0-2.55065-1.547-4.17153-4.48107-4.17153-.84754,0-2.1786.03-2.1786.03a6.50192,6.50192,0,0,1,.18348,1.92349V67.597c0,4.94523.23243,7.41694.23243,7.41694h-3.32179s.23463-8.72936.23463-13.28427c0-3.16866-.23463-8.88724-.23463-8.88724h6.5504c3.9442.00036,6.10451,1.85624,6.10451,5.17548"/><path d="M74.17121,58.018c0,2.93622-1.85479,4.94524-5.09983,5.79533,2.08613,3.86306,6.3344,11.2007,6.3344,11.2007H71.93085s-1.85478-3.78413-6.64323-12.20429h.38777c3.55059,0,5.40427-1.46811,5.40427-4.40433,0-2.55065-1.54376-4.17153-4.47962-4.17153-.849,0-2.179.03-2.179.03a6.41786,6.41786,0,0,1,.186,1.92349V67.597c0,4.94523.231,7.41694.231,7.41694H61.5152s.23244-8.72936.23244-13.28427c0-3.16866-.23244-8.88724-.23244-8.88724h6.553c3.94165.00036,6.10306,1.85624,6.10306,5.17548"/></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#b9c200;}.cls-3{fill:#fff;}.cls-4{fill:#4d5152;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.86923" y="-3.17679" width="223.25536" height="134.51136"/><path class="cls-2" d="M132.01169,16.02155H83.40541a6.79387,6.79387,0,0,0-6.78691,6.78691V71.41474a6.79387,6.79387,0,0,0,6.78691,6.78691H138.6988V22.80846a6.64741,6.64741,0,0,0-6.68711-6.78691"/><path class="cls-3" d="M107.95807,68.2209A21.51088,21.51088,0,0,0,129.5165,46.76227c0-2.79461-.89826-6.48749-.89826-5.58922a13.67512,13.67512,0,0,1-15.27056,13.37421c-5.98845-.79846-12.67556-6.28787-11.47787-14.97113a12.75333,12.75333,0,0,1,14.07287-11.27825c8.68325,1.19769,10.47979,7.785,7.8848,10.879a4.41757,4.41757,0,0,1-7.98461-2.595,4.295,4.295,0,0,1,1.69673-3.49326c-4.39154-1.2975-8.98268,2.19576-9.58153,6.78691a8.369,8.369,0,0,0,7.18615,9.48172c7.68518,1.09788,12.07672-5.09019,12.07672-10.5796,0-6.48749-6.88674-13.77344-19.16306-13.77344a21.60862,21.60862,0,0,0-.09981,43.21667"/><path class="cls-4" d="M152.37243,91.77548a10.18085,10.18085,0,1,0,10.28018,10.18037,10.18026,10.18026,0,0,0-10.28018-10.18037m0,3.49327a6.68786,6.68786,0,1,1-6.5873,6.6871,6.603,6.603,0,0,1,6.5873-6.6871M104.864,108.643a6.68786,6.68786,0,1,1,6.5873-6.6871,6.66635,6.66635,0,0,1-6.5873,6.6871m6.5873-16.36844v2.19577h-.09981a9.43621,9.43621,0,0,0-6.58729-2.69481,10.18085,10.18085,0,1,0,0,20.36074,8.87657,8.87657,0,0,0,6.48749-2.79461h.0998v2.096h3.69289v-19.163ZM128.61824,108.643a6.68786,6.68786,0,1,1,6.58729-6.6871,6.66636,6.66636,0,0,1-6.58729,6.6871m6.48749-25.55073V94.47029h-.09981a8.71081,8.71081,0,0,0-6.48749-2.79461,10.18085,10.18085,0,1,0,0,20.36073,8.87657,8.87657,0,0,0,6.48749-2.79461h.09981v2.096h3.69287V83.09222h-3.69287ZM91.1904,106.547a6.47955,6.47955,0,0,1-4.79076,2.096,6.68786,6.68786,0,0,1,0-13.37421,6.47955,6.47955,0,0,1,4.79076,2.096l1.79654-3.39346a10.53878,10.53878,0,0,0-6.5873-2.29558,10.18085,10.18085,0,1,0,0,20.36074,10.755,10.755,0,0,0,6.5873-2.29558L91.1904,106.547ZM63.14448,91.77548a10.18085,10.18085,0,1,0,10.28018,10.18037A10.18026,10.18026,0,0,0,63.14448,91.77548m0,3.49327a6.68786,6.68786,0,1,1-6.5873,6.6871,6.603,6.603,0,0,1,6.5873-6.6871"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#b9c200;}.cls-3{fill:#fff;}.cls-4{fill:#4d5152;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.86923" y="-3.17679" width="223.25536" height="134.51136"/><path class="cls-2" d="M132.01169,16.02155H83.40541a6.79387,6.79387,0,0,0-6.78691,6.78691V71.41474a6.79387,6.79387,0,0,0,6.78691,6.78691H138.6988V22.80846a6.64741,6.64741,0,0,0-6.68711-6.78691"/><path class="cls-3" d="M107.95807,68.2209A21.51088,21.51088,0,0,0,129.5165,46.76227c0-2.79461-.89826-6.48749-.89826-5.58922a13.67512,13.67512,0,0,1-15.27056,13.37421c-5.98845-.79846-12.67556-6.28787-11.47787-14.97113a12.75333,12.75333,0,0,1,14.07287-11.27825c8.68325,1.19769,10.47979,7.785,7.8848,10.879a4.41757,4.41757,0,0,1-7.98461-2.595,4.295,4.295,0,0,1,1.69673-3.49326c-4.39154-1.2975-8.98268,2.19576-9.58153,6.78691a8.369,8.369,0,0,0,7.18615,9.48172c7.68518,1.09788,12.07672-5.09019,12.07672-10.5796,0-6.48749-6.88674-13.77344-19.16306-13.77344a21.60862,21.60862,0,0,0-.09981,43.21667"/><path class="cls-4" d="M152.37243,91.77548a10.18085,10.18085,0,1,0,10.28018,10.18037,10.18026,10.18026,0,0,0-10.28018-10.18037m0,3.49327a6.68786,6.68786,0,1,1-6.5873,6.6871,6.603,6.603,0,0,1,6.5873-6.6871M104.864,108.643a6.68786,6.68786,0,1,1,6.5873-6.6871,6.66635,6.66635,0,0,1-6.5873,6.6871m6.5873-16.36844v2.19577h-.09981a9.43621,9.43621,0,0,0-6.58729-2.69481,10.18085,10.18085,0,1,0,0,20.36074,8.87657,8.87657,0,0,0,6.48749-2.79461h.0998v2.096h3.69289v-19.163ZM128.61824,108.643a6.68786,6.68786,0,1,1,6.58729-6.6871,6.66636,6.66636,0,0,1-6.58729,6.6871m6.48749-25.55073V94.47029h-.09981a8.71081,8.71081,0,0,0-6.48749-2.79461,10.18085,10.18085,0,1,0,0,20.36073,8.87657,8.87657,0,0,0,6.48749-2.79461h.09981v2.096h3.69287V83.09222h-3.69287ZM91.1904,106.547a6.47955,6.47955,0,0,1-4.79076,2.096,6.68786,6.68786,0,0,1,0-13.37421,6.47955,6.47955,0,0,1,4.79076,2.096l1.79654-3.39346a10.53878,10.53878,0,0,0-6.5873-2.29558,10.18085,10.18085,0,1,0,0,20.36074,10.755,10.755,0,0,0,6.5873-2.29558L91.1904,106.547ZM63.14448,91.77548a10.18085,10.18085,0,1,0,10.28018,10.18037A10.18026,10.18026,0,0,0,63.14448,91.77548m0,3.49327a6.68786,6.68786,0,1,1-6.5873,6.6871,6.603,6.603,0,0,1,6.5873-6.6871"/></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#1a237e;}.cls-3{fill:#172d72;}.cls-4{fill:#10a6fa;}.cls-5{fill:#3351ff;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-4.74781" y="-4.12296" width="223.25536" height="134.51136"/><path class="cls-2" d="M37.10629,49.8563l-21.556-6.14854a2.32671,2.32671,0,0,0-2.96549,2.23761V85.67778a2.32882,2.32882,0,0,0,2.32242,2.328,2.34709,2.34709,0,0,0,.54415-.06435l21.55614-5.13762h.00012a2.31961,2.31961,0,0,0,1.78747-2.26369V52.09439A2.33746,2.33746,0,0,0,37.10629,49.8563Z"/><path class="cls-3" d="M153.704,69.18841h-.008l-.05617.08906a9.8204,9.8204,0,0,1-3.36007,3.20787,8.99162,8.99162,0,0,1-4.59992,1.33028,9.18343,9.18343,0,0,1-6.96149-3.17641A10.73277,10.73277,0,0,1,135.93129,63.05a10.56554,10.56554,0,0,1,2.87657-7.19837,9.31614,9.31614,0,0,1,14.802,1.42571l.05637.08757h.0541l2.2873-1.41418a12.68144,12.68144,0,0,0-3.77833-4.01627,11.79162,11.79162,0,0,0-6.54776-1.98039,11.48418,11.48418,0,0,0-8.74292,3.725A13.14057,13.14057,0,0,0,133.27,63.17767a13.73041,13.73041,0,0,0,3.63437,9.52778,11.41872,11.41872,0,0,0,8.6544,3.92759c.04148,0,.08315-.00021.12475-.00063a11.973,11.973,0,0,0,10.3151-6.02531Z"/><polygon class="cls-3" points="170.788 50.285 167.681 50.285 156.131 76.137 159.817 76.137 163.386 67.544 163.392 67.544 164.653 64.859 164.648 64.859 169.297 53.948 173.854 64.859 167.313 64.859 166.121 67.544 174.915 67.544 178.517 76.137 181.766 76.137 170.788 50.285"/><path class="cls-3" d="M198.66156,52.65016a9.26362,9.26362,0,0,0-6.41458-2.33247h-8.3653V76.13651H186.93V66.02369h.00075V63.30585H186.93V53.10174h5.44916a6.05209,6.05209,0,0,1,3.97382,1.37851,4.43489,4.43489,0,0,1,1.806,3.4215,4.53924,4.53924,0,0,1-1.61977,3.93662,5.73177,5.73177,0,0,1-3.99487,1.46748h-3.15645v2.71784H193.04a8.16929,8.16929,0,0,0,5.6887-2.16806,7.1,7.1,0,0,0,2.44523-5.368A7.30676,7.30676,0,0,0,198.66156,52.65016Z"/><path class="cls-4" d="M51.79014,48.44325a2.33792,2.33792,0,0,0-1.68623-2.23235L22.66667,38.35125a2.32707,2.32707,0,0,0-2.968,2.2369v30.149a2.32848,2.32848,0,0,0,2.3223,2.3278,2.3421,2.3421,0,0,0,.54942-.06555l27.46821-6.6119a2.31961,2.31961,0,0,0,1.78239-2.267Z"/><path class="cls-5" d="M37.10629,49.8563,19.69867,44.891v25.8461a2.32848,2.32848,0,0,0,2.3223,2.3278,2.3421,2.3421,0,0,0,.54942-.06555L38.79508,69.0939V52.09439A2.33746,2.33746,0,0,0,37.10629,49.8563Z"/><rect class="cls-2" x="85.15301" y="50.79427" width="3.04719" height="3.04719"/><rect class="cls-2" x="85.15301" y="57.62199" width="3.04719" height="18.55402"/><path class="cls-2" 
d="M131.05244,74.31986v-7.967a10.13143,10.13143,0,0,0-10.19062-10.08615,9.59233,9.59233,0,0,0-6.81493,3.07035,9.83188,9.83188,0,0,0-2.76544,6.98019,10.02667,10.02667,0,0,0,3.00052,7.24833,9.309,9.309,0,0,0,7.15662,2.86854,9.49975,9.49975,0,0,0,6.73066-3.01977V74.6786m0,0a5.80025,5.80025,0,0,1-1.986,4.38139,6.64244,6.64244,0,0,1-4.65272,1.89407,6.80987,6.80987,0,0,1-4.14535-1.3833,6.35366,6.35366,0,0,1-2.19729-2.58174l-2.33331,1.34714a8.95056,8.95056,0,0,0,2.436,2.945,10.01087,10.01087,0,0,0,5.93453,2.093q.21807.00859.43392.00849a10.01393,10.01393,0,0,0,6.376-2.26668,7.48089,7.48089,0,0,0,3.01743-6.074v-.7222m-2.94919-7.77467a7.15858,7.15858,0,0,1-2.05924,4.95338A6.29606,6.29606,0,0,1,121.2662,73.551a6.68629,6.68629,0,0,1-4.94657-2.05594,6.90735,6.90735,0,0,1-2.12205-4.98282,7.01857,7.01857,0,0,1,1.93033-5.04913,6.6733,6.6733,0,0,1,4.78429-2.31366c.124-.006.24635-.00893.36783-.00893a6.25462,6.25462,0,0,1,4.7642,2.12788A6.97973,6.97973,0,0,1,128.10325,66.54519Z"/><path class="cls-2" d="M79.72761,52.65016A9.26359,9.26359,0,0,0,73.313,50.31769h-8.3653V76.13651h3.04835V66.02369h.00075V63.30585h-.00075V53.10174h5.44917a6.05211,6.05211,0,0,1,3.97382,1.37851,4.43492,4.43492,0,0,1,1.806,3.4215,4.53927,4.53927,0,0,1-1.61977,3.93662,5.73182,5.73182,0,0,1-3.99488,1.46748H70.45394v2.71784h3.65214a8.16926,8.16926,0,0,0,5.68869-2.16806,7.1,7.1,0,0,0,2.44523-5.368A7.3067,7.3067,0,0,0,79.72761,52.65016Z"/><path class="cls-2" d="M100.16589,56.48686a8.23659,8.23659,0,0,0-6.10754,2.49853,8.5458,8.5458,0,0,0-2.42971,6.20353V76.09816h2.94019V64.92532a5.33734,5.33734,0,0,1,1.6541-3.943,5.40682,5.40682,0,0,1,3.943-1.65422,5.26391,5.26391,0,0,1,3.87691,1.65422,5.35592,5.35592,0,0,1,1.68721,3.943V76.09816h2.94019V65.18892a8.54585,8.54585,0,0,0-2.42988-6.20353A8.19692,8.19692,0,0,0,100.16589,56.48686Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#1a237e;}.cls-3{fill:#172d72;}.cls-4{fill:#10a6fa;}.cls-5{fill:#3351ff;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-4.74781" y="-4.12296" width="223.25536" height="134.51136"/><path class="cls-2" d="M37.10629,49.8563l-21.556-6.14854a2.32671,2.32671,0,0,0-2.96549,2.23761V85.67778a2.32882,2.32882,0,0,0,2.32242,2.328,2.34709,2.34709,0,0,0,.54415-.06435l21.55614-5.13762h.00012a2.31961,2.31961,0,0,0,1.78747-2.26369V52.09439A2.33746,2.33746,0,0,0,37.10629,49.8563Z"/><path class="cls-3" d="M153.704,69.18841h-.008l-.05617.08906a9.8204,9.8204,0,0,1-3.36007,3.20787,8.99162,8.99162,0,0,1-4.59992,1.33028,9.18343,9.18343,0,0,1-6.96149-3.17641A10.73277,10.73277,0,0,1,135.93129,63.05a10.56554,10.56554,0,0,1,2.87657-7.19837,9.31614,9.31614,0,0,1,14.802,1.42571l.05637.08757h.0541l2.2873-1.41418a12.68144,12.68144,0,0,0-3.77833-4.01627,11.79162,11.79162,0,0,0-6.54776-1.98039,11.48418,11.48418,0,0,0-8.74292,3.725A13.14057,13.14057,0,0,0,133.27,63.17767a13.73041,13.73041,0,0,0,3.63437,9.52778,11.41872,11.41872,0,0,0,8.6544,3.92759c.04148,0,.08315-.00021.12475-.00063a11.973,11.973,0,0,0,10.3151-6.02531Z"/><polygon class="cls-3" points="170.788 50.285 167.681 50.285 156.131 76.137 159.817 76.137 163.386 67.544 163.392 67.544 164.653 64.859 164.648 64.859 169.297 53.948 173.854 64.859 167.313 64.859 166.121 67.544 174.915 67.544 178.517 76.137 181.766 76.137 170.788 50.285"/><path class="cls-3" d="M198.66156,52.65016a9.26362,9.26362,0,0,0-6.41458-2.33247h-8.3653V76.13651H186.93V66.02369h.00075V63.30585H186.93V53.10174h5.44916a6.05209,6.05209,0,0,1,3.97382,1.37851,4.43489,4.43489,0,0,1,1.806,3.4215,4.53924,4.53924,0,0,1-1.61977,3.93662,5.73177,5.73177,0,0,1-3.99487,1.46748h-3.15645v2.71784H193.04a8.16929,8.16929,0,0,0,5.6887-2.16806,7.1,7.1,0,0,0,2.44523-5.368A7.30676,7.30676,0,0,0,198.66156,52.65016Z"/><path class="cls-4" d="M51.79014,48.44325a2.33792,2.33792,0,0,0-1.68623-2.23235L22.66667,38.35125a2.32707,2.32707,0,0,0-2.968,2.2369v30.149a2.32848,2.32848,0,0,0,2.3223,2.3278,2.3421,2.3421,0,0,0,.54942-.06555l27.46821-6.6119a2.31961,2.31961,0,0,0,1.78239-2.267Z"/><path class="cls-5" d="M37.10629,49.8563,19.69867,44.891v25.8461a2.32848,2.32848,0,0,0,2.3223,2.3278,2.3421,2.3421,0,0,0,.54942-.06555L38.79508,69.0939V52.09439A2.33746,2.33746,0,0,0,37.10629,49.8563Z"/><rect class="cls-2" x="85.15301" y="50.79427" width="3.04719" height="3.04719"/><rect class="cls-2" x="85.15301" y="57.62199" width="3.04719" height="18.55402"/><path class="cls-2" 
d="M131.05244,74.31986v-7.967a10.13143,10.13143,0,0,0-10.19062-10.08615,9.59233,9.59233,0,0,0-6.81493,3.07035,9.83188,9.83188,0,0,0-2.76544,6.98019,10.02667,10.02667,0,0,0,3.00052,7.24833,9.309,9.309,0,0,0,7.15662,2.86854,9.49975,9.49975,0,0,0,6.73066-3.01977V74.6786m0,0a5.80025,5.80025,0,0,1-1.986,4.38139,6.64244,6.64244,0,0,1-4.65272,1.89407,6.80987,6.80987,0,0,1-4.14535-1.3833,6.35366,6.35366,0,0,1-2.19729-2.58174l-2.33331,1.34714a8.95056,8.95056,0,0,0,2.436,2.945,10.01087,10.01087,0,0,0,5.93453,2.093q.21807.00859.43392.00849a10.01393,10.01393,0,0,0,6.376-2.26668,7.48089,7.48089,0,0,0,3.01743-6.074v-.7222m-2.94919-7.77467a7.15858,7.15858,0,0,1-2.05924,4.95338A6.29606,6.29606,0,0,1,121.2662,73.551a6.68629,6.68629,0,0,1-4.94657-2.05594,6.90735,6.90735,0,0,1-2.12205-4.98282,7.01857,7.01857,0,0,1,1.93033-5.04913,6.6733,6.6733,0,0,1,4.78429-2.31366c.124-.006.24635-.00893.36783-.00893a6.25462,6.25462,0,0,1,4.7642,2.12788A6.97973,6.97973,0,0,1,128.10325,66.54519Z"/><path class="cls-2" d="M79.72761,52.65016A9.26359,9.26359,0,0,0,73.313,50.31769h-8.3653V76.13651h3.04835V66.02369h.00075V63.30585h-.00075V53.10174h5.44917a6.05211,6.05211,0,0,1,3.97382,1.37851,4.43492,4.43492,0,0,1,1.806,3.4215,4.53927,4.53927,0,0,1-1.61977,3.93662,5.73182,5.73182,0,0,1-3.99488,1.46748H70.45394v2.71784h3.65214a8.16926,8.16926,0,0,0,5.68869-2.16806,7.1,7.1,0,0,0,2.44523-5.368A7.3067,7.3067,0,0,0,79.72761,52.65016Z"/><path class="cls-2" d="M100.16589,56.48686a8.23659,8.23659,0,0,0-6.10754,2.49853,8.5458,8.5458,0,0,0-2.42971,6.20353V76.09816h2.94019V64.92532a5.33734,5.33734,0,0,1,1.6541-3.943,5.40682,5.40682,0,0,1,3.943-1.65422,5.26391,5.26391,0,0,1,3.87691,1.65422,5.35592,5.35592,0,0,1,1.68721,3.943V76.09816h2.94019V65.18892a8.54585,8.54585,0,0,0-2.42988-6.20353A8.19692,8.19692,0,0,0,100.16589,56.48686Z"/></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#fff;}.cls-3{fill:#e60019;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.35417" y="-4.36563" width="223.25536" height="134.51136"/><path class="cls-2" d="M161.62125,64.02487A54.12125,54.12125,0,1,1,107.5,9.90362,54.12125,54.12125,0,0,1,161.62125,64.02487Z"/><path class="cls-3" d="M107.5,9.90362A54.13832,54.13832,0,0,0,87.76673,114.44909c-.49-4.27669-.89069-10.86887.17849-15.54628.97993-4.23162,6.32534-26.90466,6.32534-26.90466a19.71223,19.71223,0,0,1-1.60376-8.0179c0-7.52794,4.36505-13.14064,9.7997-13.14064,4.63278,0,6.85993,3.4748,6.85993,7.61717,0,4.63278-2.94022,11.58151-4.49891,18.04027-1.29184,5.39,2.7171,9.80015,8.01789,9.80015,9.62166,0,17.01573-10.15625,17.01573-24.76677,0-12.96259-9.30975-22.005-22.62843-22.005-15.4124,0-24.45486,11.53689-24.45486,23.47493a21.12584,21.12584,0,0,0,4.009,12.33876,1.61123,1.61123,0,0,1,.35654,1.55913c-.40117,1.69256-1.33646,5.39-1.51451,6.14685-.22267.98036-.80188,1.203-1.82643.71307-6.77068-3.16289-11.0023-13.00677-11.0023-20.98049,0-17.06034,12.38338-32.74,35.76862-32.74,18.75334,0,33.36387,13.36331,33.36387,31.27015,0,18.6641-11.75955,33.67535-28.06264,33.67535-5.47928,0-10.6462-2.851-12.38382-6.2361L98.101,101.62035c-1.2026,4.72158-4.49892,10.60158-6.72606,14.20937A54.16963,54.16963,0,1,0,107.5,9.90362Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#fff;}.cls-3{fill:#e60019;}</style></defs><title>kubernetes.io-logos</title><rect class="cls-1" x="-3.35417" y="-4.36563" width="223.25536" height="134.51136"/><path class="cls-2" d="M161.62125,64.02487A54.12125,54.12125,0,1,1,107.5,9.90362,54.12125,54.12125,0,0,1,161.62125,64.02487Z"/><path class="cls-3" d="M107.5,9.90362A54.13832,54.13832,0,0,0,87.76673,114.44909c-.49-4.27669-.89069-10.86887.17849-15.54628.97993-4.23162,6.32534-26.90466,6.32534-26.90466a19.71223,19.71223,0,0,1-1.60376-8.0179c0-7.52794,4.36505-13.14064,9.7997-13.14064,4.63278,0,6.85993,3.4748,6.85993,7.61717,0,4.63278-2.94022,11.58151-4.49891,18.04027-1.29184,5.39,2.7171,9.80015,8.01789,9.80015,9.62166,0,17.01573-10.15625,17.01573-24.76677,0-12.96259-9.30975-22.005-22.62843-22.005-15.4124,0-24.45486,11.53689-24.45486,23.47493a21.12584,21.12584,0,0,0,4.009,12.33876,1.61123,1.61123,0,0,1,.35654,1.55913c-.40117,1.69256-1.33646,5.39-1.51451,6.14685-.22267.98036-.80188,1.203-1.82643.71307-6.77068-3.16289-11.0023-13.00677-11.0023-20.98049,0-17.06034,12.38338-32.74,35.76862-32.74,18.75334,0,33.36387,13.36331,33.36387,31.27015,0,18.6641-11.75955,33.67535-28.06264,33.67535-5.47928,0-10.6462-2.851-12.38382-6.2361L98.101,101.62035c-1.2026,4.72158-4.49892,10.60158-6.72606,14.20937A54.16963,54.16963,0,1,0,107.5,9.90362Z"/></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-2.92293" y="-4.57688" width="223.25536" height="134.51136"/><path d="M173.01724,36.02417H168.599a.72106.72106,0,0,0-.71471.71471V57.46553c0,2.20911-1.81927,3.70351-4.48319,3.70351s-4.41822-1.55938-4.41822-3.70351V36.73888a.72106.72106,0,0,0-.71472-.71471H153.785a.72105.72105,0,0,0-.71471.71471V57.46553c0,5.32785,4.48318,9.55115,10.20088,9.55115s10.33084-4.2233,10.33084-9.55115V36.73888a.58881.58881,0,0,0-.38985-.71471Z"/><path d="M192.76928,36.02417h-9.68109a.72106.72106,0,0,0-.71472.71471V65.84715a.72106.72106,0,0,0,.71472.71471h4.41821a.72105.72105,0,0,0,.71471-.71471V54.8016a.59027.59027,0,0,1,.51979-.64974h4.09336a9.09459,9.09459,0,0,0,1.49441-18.12769,6.26236,6.26236,0,0,0-1.55939,0Zm0,12.345h-3.9634a.59027.59027,0,0,1-.64974-.51979V42.45658a.59027.59027,0,0,1,.51978-.64974h4.09336a3.393,3.393,0,0,1,3.24869,3.24869,3.31415,3.31415,0,0,1-3.24869,3.31366Z"/><path d="M33.58342,48.36919H14.676a.72105.72105,0,0,0-.71471.71472v4.41822a.61385.61385,0,0,0,.64973.64973h6.04257a.59028.59028,0,0,1,.64974.51979V78.19218a.72106.72106,0,0,0,.71471.71471H26.4363a.72106.72106,0,0,0,.71471-.71471V54.8016a.69093.69093,0,0,1,.58477-.64974h6.04256a.72105.72105,0,0,0,.71471-.71471V49.01893a.97721.97721,0,0,0-.90963-.64974Z"/><path d="M61.71708,48.36919H57.29886a.72106.72106,0,0,0-.71471.71472V60.12946a.69091.69091,0,0,1-.58477.64973H49.24211a.59027.59027,0,0,1-.64974-.51979V49.08391a.72106.72106,0,0,0-.71471-.71472H43.52441a.72106.72106,0,0,0-.71471.71472V78.19218a.72106.72106,0,0,0,.71471.71471h4.2233c.38984,0,.90964-.32487.90964-.71471V67.14663a.59027.59027,0,0,1,.51979-.64974h6.82224a.69092.69092,0,0,1,.64974.58477V78.19218a.72106.72106,0,0,0,.71471.71471h4.41823a.72106.72106,0,0,0,.71471-.71471V49.08391A.83056.83056,0,0,0,61.71708,48.36919Z"/><path d="M91.08525,57.46553a9.12308,9.12308,0,0,0-9.09634-9.09634H72.37279a.72106.72106,0,0,0-.71471.71472V78.19218a.72106.72106,0,0,0,.71471.71471H76.791a.72106.72106,0,0,0,.71471-.71471V67.14663a.59028.59028,0,0,1,.51979-.64974H79.325a.976.976,0,0,1,.97461.71471l4.48319,10.9156a1.13272,1.13272,0,0,0,1.10456.77968h4.54816a.56948.56948,0,0,0,.64974-.32487.82839.82839,0,0,0,0-.77968L86.34216,66.75678c-.32487-.71471-.38984-1.16953.32487-1.68932C89.78577,62.9883,91.08525,60.64925,91.08525,57.46553Zm-9.09634,3.24869h-3.9634a.59027.59027,0,0,1-.64974-.51979V54.8016a.69093.69093,0,0,1,.58477-.64974h4.02837a3.3012,3.3012,0,0,1,3.24869,3.2487,3.19187,3.19187,0,0,1-3.05376,3.31366Z"/><path d="M115.97022,73.05924H105.57441a.59027.59027,0,0,1-.64974-.51979V67.14663a.59028.59028,0,0,1,.51979-.64974h7.73189a.72106.72106,0,0,0,.71472-.71471V61.364a.72106.72106,0,0,0-.71472-.71471h-7.60194a.59027.59027,0,0,1-.64974-.51979V54.73663a.69092.69092,0,0,1,.58476-.64974h10.46079a.72106.72106,0,0,0,.71472-.71471V48.954a.72106.72106,0,0,0-.71472-.71471H99.79174a.72106.72106,0,0,0-.71471.71471V78.06223a.72106.72106,0,0,0,.71471.71471h16.17848a.72106.72106,0,0,0,.71472-.71471V73.644A.7613.7613,0,0,0,115.97022,73.05924Z"/><path 
d="M135.26744,48.36919h-9.29126a.72106.72106,0,0,0-.71471.71472V78.19218a.72106.72106,0,0,0,.71471.71471h9.29126c5.003,0,8.96639-3.9634,9.61613-9.55115a47.00813,47.00813,0,0,0,0-11.30545C144.23383,52.3326,140.27042,48.36919,135.26744,48.36919Zm3.76849,20.98655a3.95536,3.95536,0,0,1-3.83346,3.76848h-3.50858a.59027.59027,0,0,1-.64974-.51979V54.8016a.69093.69093,0,0,1,.58476-.64974h3.57356a3.95536,3.95536,0,0,1,3.83346,3.76849A48.09012,48.09012,0,0,1,139.03593,69.35574Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-2.92293" y="-4.57688" width="223.25536" height="134.51136"/><path d="M173.01724,36.02417H168.599a.72106.72106,0,0,0-.71471.71471V57.46553c0,2.20911-1.81927,3.70351-4.48319,3.70351s-4.41822-1.55938-4.41822-3.70351V36.73888a.72106.72106,0,0,0-.71472-.71471H153.785a.72105.72105,0,0,0-.71471.71471V57.46553c0,5.32785,4.48318,9.55115,10.20088,9.55115s10.33084-4.2233,10.33084-9.55115V36.73888a.58881.58881,0,0,0-.38985-.71471Z"/><path d="M192.76928,36.02417h-9.68109a.72106.72106,0,0,0-.71472.71471V65.84715a.72106.72106,0,0,0,.71472.71471h4.41821a.72105.72105,0,0,0,.71471-.71471V54.8016a.59027.59027,0,0,1,.51979-.64974h4.09336a9.09459,9.09459,0,0,0,1.49441-18.12769,6.26236,6.26236,0,0,0-1.55939,0Zm0,12.345h-3.9634a.59027.59027,0,0,1-.64974-.51979V42.45658a.59027.59027,0,0,1,.51978-.64974h4.09336a3.393,3.393,0,0,1,3.24869,3.24869,3.31415,3.31415,0,0,1-3.24869,3.31366Z"/><path d="M33.58342,48.36919H14.676a.72105.72105,0,0,0-.71471.71472v4.41822a.61385.61385,0,0,0,.64973.64973h6.04257a.59028.59028,0,0,1,.64974.51979V78.19218a.72106.72106,0,0,0,.71471.71471H26.4363a.72106.72106,0,0,0,.71471-.71471V54.8016a.69093.69093,0,0,1,.58477-.64974h6.04256a.72105.72105,0,0,0,.71471-.71471V49.01893a.97721.97721,0,0,0-.90963-.64974Z"/><path d="M61.71708,48.36919H57.29886a.72106.72106,0,0,0-.71471.71472V60.12946a.69091.69091,0,0,1-.58477.64973H49.24211a.59027.59027,0,0,1-.64974-.51979V49.08391a.72106.72106,0,0,0-.71471-.71472H43.52441a.72106.72106,0,0,0-.71471.71472V78.19218a.72106.72106,0,0,0,.71471.71471h4.2233c.38984,0,.90964-.32487.90964-.71471V67.14663a.59027.59027,0,0,1,.51979-.64974h6.82224a.69092.69092,0,0,1,.64974.58477V78.19218a.72106.72106,0,0,0,.71471.71471h4.41823a.72106.72106,0,0,0,.71471-.71471V49.08391A.83056.83056,0,0,0,61.71708,48.36919Z"/><path d="M91.08525,57.46553a9.12308,9.12308,0,0,0-9.09634-9.09634H72.37279a.72106.72106,0,0,0-.71471.71472V78.19218a.72106.72106,0,0,0,.71471.71471H76.791a.72106.72106,0,0,0,.71471-.71471V67.14663a.59028.59028,0,0,1,.51979-.64974H79.325a.976.976,0,0,1,.97461.71471l4.48319,10.9156a1.13272,1.13272,0,0,0,1.10456.77968h4.54816a.56948.56948,0,0,0,.64974-.32487.82839.82839,0,0,0,0-.77968L86.34216,66.75678c-.32487-.71471-.38984-1.16953.32487-1.68932C89.78577,62.9883,91.08525,60.64925,91.08525,57.46553Zm-9.09634,3.24869h-3.9634a.59027.59027,0,0,1-.64974-.51979V54.8016a.69093.69093,0,0,1,.58477-.64974h4.02837a3.3012,3.3012,0,0,1,3.24869,3.2487,3.19187,3.19187,0,0,1-3.05376,3.31366Z"/><path d="M115.97022,73.05924H105.57441a.59027.59027,0,0,1-.64974-.51979V67.14663a.59028.59028,0,0,1,.51979-.64974h7.73189a.72106.72106,0,0,0,.71472-.71471V61.364a.72106.72106,0,0,0-.71472-.71471h-7.60194a.59027.59027,0,0,1-.64974-.51979V54.73663a.69092.69092,0,0,1,.58476-.64974h10.46079a.72106.72106,0,0,0,.71472-.71471V48.954a.72106.72106,0,0,0-.71472-.71471H99.79174a.72106.72106,0,0,0-.71471.71471V78.06223a.72106.72106,0,0,0,.71471.71471h16.17848a.72106.72106,0,0,0,.71472-.71471V73.644A.7613.7613,0,0,0,115.97022,73.05924Z"/><path 
d="M135.26744,48.36919h-9.29126a.72106.72106,0,0,0-.71471.71472V78.19218a.72106.72106,0,0,0,.71471.71471h9.29126c5.003,0,8.96639-3.9634,9.61613-9.55115a47.00813,47.00813,0,0,0,0-11.30545C144.23383,52.3326,140.27042,48.36919,135.26744,48.36919Zm3.76849,20.98655a3.95536,3.95536,0,0,1-3.83346,3.76848h-3.50858a.59027.59027,0,0,1-.64974-.51979V54.8016a.69093.69093,0,0,1,.58476-.64974h3.57356a3.95536,3.95536,0,0,1,3.83346,3.76849A48.09012,48.09012,0,0,1,139.03593,69.35574Z"/></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#900;}.cls-3{fill:#069;}.cls-4{fill:#396;}.cls-5{fill:#484848;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.75123" y="-4.36563" width="223.25536" height="134.51136"/><circle id="circle3" class="cls-2" cx="107.84735" cy="21.48597" r="15.77727"/><path id="path5" class="cls-3" d="M139.7013,18.0353l-8.93976,8.9394a34.71,34.71,0,1,1-45.8277,0l-8.93975-8.93975a47.33269,47.33269,0,1,0,63.70721.00035Z"/><path id="path7" class="cls-4" d="M104.69189,46.73345,89.41109,31.45229a28.38989,28.38989,0,0,0,15.2808,49.812Zm21.59242-15.28116L111.00315,46.7331V81.26427a28.39011,28.39011,0,0,0,15.28116-49.812Z"/><path id="path9" class="cls-5" d="M170.3688,119.11393h-4.10315l-1.08337-2.69545H159.411l-1.103,2.69545h-4.10314l5.87686-13.29007h4.42955Zm-6.10686-5.00735-1.94656-4.85378-1.9648,4.85378Zm-11.848,5.00735h-3.99937V105.82386h3.999v13.29007Zm-6.49357-6.40066a6.31385,6.31385,0,0,1-.74329,3.27677,7.01233,7.01233,0,0,1-1.74987,2.003,6.32532,6.32532,0,0,1-3.99445,1.12088h-7.16393V105.82387h5.51678a14.9088,14.9088,0,0,1,2.13134.13008,9.38954,9.38954,0,0,1,1.59455.365,6.28678,6.28678,0,0,1,1.20678.52942,6.20368,6.20368,0,0,1,.89089.63,6.10431,6.10431,0,0,1,1.19346,1.31337,6.40745,6.40745,0,0,1,.81516,1.75478A7.42171,7.42171,0,0,1,145.92035,112.71327Zm-4.0758-.1725a4.43822,4.43822,0,0,0-.61812-2.5247,3.02773,3.02773,0,0,0-1.48516-1.21205,5.25783,5.25783,0,0,0-1.76775-.30678h-1.7057v7.86689h1.7057a4.88661,4.88661,0,0,0,2.7214-.74714Q141.84454,114.86915,141.84455,112.54077Zm-12.2684,6.57316H119.566V105.82386h9.84712v2.67337H123.565v2.4325h5.57919v2.67337H123.565v2.83815h6.01113Zm-13.38193,0h-3.99936v-7.70246l-3.61614,4.45935h-.3166l-3.62632-4.45935v7.70246h-3.88962V105.82386h3.63052l4.08632,4.98562,4.10524-4.98562h3.62632l-.00036,13.29007Zm-18.80965,0H93.38591V105.82386h3.99866Zm-5.76291,0h-4.8892l-5.081-6.22326v6.22326H77.65282V105.82386h3.99866v5.94978l4.81312-5.94978H90.817l-4.9467,6.22325Zm-17.34132,0h-3.999V105.82386h3.999v13.29007Zm-5.859-13.29041-5.752,13.46537h-2.32L56.898,111.455l-3.53761,7.83394H51.02114l-5.637-13.46537h4.15118l2.761,7.36272,3.16351-7.36272h2.86656l3.10567,7.36272,2.88548-7.36272Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#900;}.cls-3{fill:#069;}.cls-4{fill:#396;}.cls-5{fill:#484848;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.75123" y="-4.36563" width="223.25536" height="134.51136"/><circle id="circle3" class="cls-2" cx="107.84735" cy="21.48597" r="15.77727"/><path id="path5" class="cls-3" d="M139.7013,18.0353l-8.93976,8.9394a34.71,34.71,0,1,1-45.8277,0l-8.93975-8.93975a47.33269,47.33269,0,1,0,63.70721.00035Z"/><path id="path7" class="cls-4" d="M104.69189,46.73345,89.41109,31.45229a28.38989,28.38989,0,0,0,15.2808,49.812Zm21.59242-15.28116L111.00315,46.7331V81.26427a28.39011,28.39011,0,0,0,15.28116-49.812Z"/><path id="path9" class="cls-5" d="M170.3688,119.11393h-4.10315l-1.08337-2.69545H159.411l-1.103,2.69545h-4.10314l5.87686-13.29007h4.42955Zm-6.10686-5.00735-1.94656-4.85378-1.9648,4.85378Zm-11.848,5.00735h-3.99937V105.82386h3.999v13.29007Zm-6.49357-6.40066a6.31385,6.31385,0,0,1-.74329,3.27677,7.01233,7.01233,0,0,1-1.74987,2.003,6.32532,6.32532,0,0,1-3.99445,1.12088h-7.16393V105.82387h5.51678a14.9088,14.9088,0,0,1,2.13134.13008,9.38954,9.38954,0,0,1,1.59455.365,6.28678,6.28678,0,0,1,1.20678.52942,6.20368,6.20368,0,0,1,.89089.63,6.10431,6.10431,0,0,1,1.19346,1.31337,6.40745,6.40745,0,0,1,.81516,1.75478A7.42171,7.42171,0,0,1,145.92035,112.71327Zm-4.0758-.1725a4.43822,4.43822,0,0,0-.61812-2.5247,3.02773,3.02773,0,0,0-1.48516-1.21205,5.25783,5.25783,0,0,0-1.76775-.30678h-1.7057v7.86689h1.7057a4.88661,4.88661,0,0,0,2.7214-.74714Q141.84454,114.86915,141.84455,112.54077Zm-12.2684,6.57316H119.566V105.82386h9.84712v2.67337H123.565v2.4325h5.57919v2.67337H123.565v2.83815h6.01113Zm-13.38193,0h-3.99936v-7.70246l-3.61614,4.45935h-.3166l-3.62632-4.45935v7.70246h-3.88962V105.82386h3.63052l4.08632,4.98562,4.10524-4.98562h3.62632l-.00036,13.29007Zm-18.80965,0H93.38591V105.82386h3.99866Zm-5.76291,0h-4.8892l-5.081-6.22326v6.22326H77.65282V105.82386h3.99866v5.94978l4.81312-5.94978H90.817l-4.9467,6.22325Zm-17.34132,0h-3.999V105.82386h3.999v13.29007Zm-5.859-13.29041-5.752,13.46537h-2.32L56.898,111.455l-3.53761,7.83394H51.02114l-5.637-13.46537h4.15118l2.761,7.36272,3.16351-7.36272h2.86656l3.10567,7.36272,2.88548-7.36272Z"/></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#22b9ec;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.35417" y="-3.899" width="223.25536" height="134.51136"/><g id="Wink"><path id="W" class="cls-2" d="M12.91,48.96543H27.30572l7.69292,19.15615c.89117-3.32547,6.79187-13.48167,10.18742-20.2225h1.02826c3.54789,6.66467,9.92312,17.30071,10.64443,19.994L64.7421,48.96543H77.80484l-19.23232,42.273H57.37288c-3.97976-7.59162-11.02069-19.34657-11.93927-22.77411-.95819,3.57683-8.26419,15.18249-12.39628,22.77411H31.91386Z"/><path id="i" class="cls-2" d="M81.87981,31.82771,89.72507,26.953l7.84527,4.87472V41.958H81.87981Zm1.33293,17.13772H96.27549V90.1721H83.21274Z"/><path id="n" class="cls-2" d="M106.71045,48.96543h13.06274l-.2285,6.66466c5.0621-11.18293,28.06624-11.64069,28.52477,10.09221V90.1721H135.00672l-.03809-23.15495c.2445-10.39307-15.93045-8.63817-15.15735,1.06635l-.03809,22.0886H106.71045Z"/><path id="k" class="cls-2" d="M156.14323,49.00351H169.206l-.03808,15.005,12.87233-15.0431h14.50993l-14.548,16.833L198.13062,90.1721H182.91614L172.6716,74.63391,169.206,78.29V90.1721H156.14323Z"/></g></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#22b9ec;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.35417" y="-3.899" width="223.25536" height="134.51136"/><g id="Wink"><path id="W" class="cls-2" d="M12.91,48.96543H27.30572l7.69292,19.15615c.89117-3.32547,6.79187-13.48167,10.18742-20.2225h1.02826c3.54789,6.66467,9.92312,17.30071,10.64443,19.994L64.7421,48.96543H77.80484l-19.23232,42.273H57.37288c-3.97976-7.59162-11.02069-19.34657-11.93927-22.77411-.95819,3.57683-8.26419,15.18249-12.39628,22.77411H31.91386Z"/><path id="i" class="cls-2" d="M81.87981,31.82771,89.72507,26.953l7.84527,4.87472V41.958H81.87981Zm1.33293,17.13772H96.27549V90.1721H83.21274Z"/><path id="n" class="cls-2" d="M106.71045,48.96543h13.06274l-.2285,6.66466c5.0621-11.18293,28.06624-11.64069,28.52477,10.09221V90.1721H135.00672l-.03809-23.15495c.2445-10.39307-15.93045-8.63817-15.15735,1.06635l-.03809,22.0886H106.71045Z"/><path id="k" class="cls-2" d="M156.14323,49.00351H169.206l-.03808,15.005,12.87233-15.0431h14.50993l-14.548,16.833L198.13062,90.1721H182.91614L172.6716,74.63391,169.206,78.29V90.1721H156.14323Z"/></g></svg>


@ -1 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:#fbfbfb;}.cls-2{fill:#6c0;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.35417" y="-2.26418" width="223.25536" height="134.51136"/><path class="cls-2" d="M55.43477,39.86952,43.85324,75.22367l-4.90852-16.3938h-7.86l9.015,25.9221H48.473L64.22519,39.86952ZM25.53453,75.22368,19.98437,58.82987H12.02809l9.015,26.01835h9.1754L33.84371,73.78,29.86557,62.23055Zm95.44338-4.74812,12.7044-11.70985h-10.587l-8.18086,7.66755V47.12h-7.47506V84.752h7.5713V74.38955L125.14854,84.752H135.2864Zm-16.13716-3.914a6.62431,6.62431,0,0,0-3.59316-.67372,5.4975,5.4975,0,0,0-4.17064,1.957,9.22462,9.22462,0,0,0-1.31536,5.42183V84.752h-7.86V58.82987H95.7616v3.68941h.09625a6.86333,6.86333,0,0,1,6.44844-4.17063,5.54685,5.54685,0,0,1,2.59863.51331ZM71.02653,85.48986c-8.245,0-14.4689-5.19726-14.4689-13.5706s6.25595-13.5706,14.4689-13.5706,14.4689,5.16517,14.4689,13.63476S79.30364,85.48986,71.02653,85.48986Zm0-20.4682c-3.68941,0-5.93514,2.91944-5.93514,6.76926s2.18157,6.76926,5.93514,6.76926,5.93513-2.91944,5.93513-6.76926-2.18156-6.80135-5.93513-6.80135ZM201.15034,84.752h-8.02045l-.16041-2.66279a9.90263,9.90263,0,0,1-7.41089,3.33651c-4.58769,0-9.46413-2.4703-9.46413-8.30919s5.32557-7.7638,10.13784-7.98837l6.5447-.25665V68.294c0-2.759-1.957-4.20272-5.35767-4.20272a16.74679,16.74679,0,0,0-7.98837,2.342l-2.21366-5.51808a26.05948,26.05948,0,0,1,11.06823-2.59863c4.87644,0,7.66756,1.12286,9.72078,3.04777,2.05324,1.957,3.04777,4.45937,3.04777,8.85458Zm-8.40543-11.357-4.17064.25665c-2.59862.09624-4.13856,1.25119-4.13856,3.1761s1.63616,3.20818,3.97815,3.20818a5.198,5.198,0,0,0,4.33105-2.374ZM175.902,58.79779,165.18663,84.7199h-7.73171L146.80375,58.7978h8.75833l5.903,16.3938,5.8389-16.42589ZM144.558,84.752h-8.43752V58.82987H144.558ZM136.1526,56.68039V47.08793h10.33034Zm65.79979,2.14948a1.81539,1.81539,0,1,0-.06417-.06416h-.06417Zm0-4.7481a2.19766,2.19766,0,1,0,2.18156,2.21364,2.14955,2.14955,0,0,0-2.11739-2.18156h0Zm1.2191,3.68941h-.7058l-.64164-1.21911h-.32081v1.21911h-.60957V54.72341h1.15494c.67371,0,1.0587.3529,1.0587.89828a.83236.83236,0,0,1-.60957.86621Zm-1.34744-2.47031h-.35289V56.135h.35289c.28875,0,.5454,0,.5454-.385s-.19248-.48123-.5133-.48123Z"/></svg>
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 215 127"><defs><style>.cls-1{fill:transparent;}.cls-2{fill:#6c0;}</style></defs><title>kubernetes.io-logos2</title><rect class="cls-1" x="-3.35417" y="-2.26418" width="223.25536" height="134.51136"/><path class="cls-2" d="M55.43477,39.86952,43.85324,75.22367l-4.90852-16.3938h-7.86l9.015,25.9221H48.473L64.22519,39.86952ZM25.53453,75.22368,19.98437,58.82987H12.02809l9.015,26.01835h9.1754L33.84371,73.78,29.86557,62.23055Zm95.44338-4.74812,12.7044-11.70985h-10.587l-8.18086,7.66755V47.12h-7.47506V84.752h7.5713V74.38955L125.14854,84.752H135.2864Zm-16.13716-3.914a6.62431,6.62431,0,0,0-3.59316-.67372,5.4975,5.4975,0,0,0-4.17064,1.957,9.22462,9.22462,0,0,0-1.31536,5.42183V84.752h-7.86V58.82987H95.7616v3.68941h.09625a6.86333,6.86333,0,0,1,6.44844-4.17063,5.54685,5.54685,0,0,1,2.59863.51331ZM71.02653,85.48986c-8.245,0-14.4689-5.19726-14.4689-13.5706s6.25595-13.5706,14.4689-13.5706,14.4689,5.16517,14.4689,13.63476S79.30364,85.48986,71.02653,85.48986Zm0-20.4682c-3.68941,0-5.93514,2.91944-5.93514,6.76926s2.18157,6.76926,5.93514,6.76926,5.93513-2.91944,5.93513-6.76926-2.18156-6.80135-5.93513-6.80135ZM201.15034,84.752h-8.02045l-.16041-2.66279a9.90263,9.90263,0,0,1-7.41089,3.33651c-4.58769,0-9.46413-2.4703-9.46413-8.30919s5.32557-7.7638,10.13784-7.98837l6.5447-.25665V68.294c0-2.759-1.957-4.20272-5.35767-4.20272a16.74679,16.74679,0,0,0-7.98837,2.342l-2.21366-5.51808a26.05948,26.05948,0,0,1,11.06823-2.59863c4.87644,0,7.66756,1.12286,9.72078,3.04777,2.05324,1.957,3.04777,4.45937,3.04777,8.85458Zm-8.40543-11.357-4.17064.25665c-2.59862.09624-4.13856,1.25119-4.13856,3.1761s1.63616,3.20818,3.97815,3.20818a5.198,5.198,0,0,0,4.33105-2.374ZM175.902,58.79779,165.18663,84.7199h-7.73171L146.80375,58.7978h8.75833l5.903,16.3938,5.8389-16.42589ZM144.558,84.752h-8.43752V58.82987H144.558ZM136.1526,56.68039V47.08793h10.33034Zm65.79979,2.14948a1.81539,1.81539,0,1,0-.06417-.06416h-.06417Zm0-4.7481a2.19766,2.19766,0,1,0,2.18156,2.21364,2.14955,2.14955,0,0,0-2.11739-2.18156h0Zm1.2191,3.68941h-.7058l-.64164-1.21911h-.32081v1.21911h-.60957V54.72341h1.15494c.67371,0,1.0587.3529,1.0587.89828a.83236.83236,0,0,1-.60957.86621Zm-1.34744-2.47031h-.35289V56.135h.35289c.28875,0,.5454,0,.5454-.385s-.19248-.48123-.5133-.48123Z"/></svg>


@ -309,13 +309,6 @@ The node controller also adds {{< glossary_tooltip text="taints" term_id="taint"
corresponding to node problems like node unreachable or not ready. This means
that the scheduler won't place Pods onto unhealthy nodes.
{{< caution >}}
`kubectl cordon` marks a node as 'unschedulable', which has the side effect of the service
controller removing the node from any LoadBalancer node target lists it was previously
eligible for, effectively removing incoming load balancer traffic from the cordoned node(s).
{{< /caution >}}
### Node capacity
Node objects track information about the Node's resource capacity: for example, the amount


@ -174,4 +174,5 @@ Here is an example:
## {{% heading "whatsnext" %}}
* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)


@ -224,7 +224,7 @@ When a ConfigMap currently consumed in a volume is updated, projected keys are e
The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync.
However, the kubelet uses its local cache for getting the current value of the ConfigMap.
The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in
the [KubeletConfiguration struct](/docs/reference/config-api/kubelet-config.v1beta1/)).
the [KubeletConfiguration struct](/docs/reference/config-api/kubelet-config.v1beta1/).
A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting
all requests directly to the API server.
As a result, the total delay from the moment when the ConfigMap is updated to the moment

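As an aside to the field named in this hunk, a minimal kubelet configuration sketch selecting the watch-based strategy might look like the following; the file path and the choice of `Watch` are illustrative assumptions, not part of this change:

```yaml
# /var/lib/kubelet/config.yaml (path is an assumption; passed to the kubelet via --config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# How the kubelet detects changes to mounted ConfigMaps/Secrets:
# "Watch" (propagate by watch, the default), "Cache" (ttl-based),
# or "Get" (redirect all requests directly to the API server).
configMapAndSecretChangeDetectionStrategy: Watch
```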

@ -353,13 +353,15 @@ the removal of the lowest priority Pods is not sufficient to allow the scheduler
to schedule the preemptor Pod, or if the lowest priority Pods are protected by
`PodDisruptionBudget`.
The only component that considers both QoS and Pod priority is
[kubelet out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
The kubelet ranks Pods for eviction first by whether or not their usage of the
starved resource exceeds requests, then by Priority, and then by the consumption
of the starved compute resource relative to the Pods' scheduling requests.
See
[evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
The kubelet uses Priority to determine pod order for [out-of-resource eviction](/docs/tasks/administer-cluster/out-of-resource/).
You can use the QoS class to estimate the order in which pods are most likely
to get evicted. The kubelet ranks pods for eviction based on the following factors:
1. Whether the starved resource usage exceeds requests
1. Pod Priority
1. Amount of resource usage relative to requests
See [evicting end-user pods](/docs/tasks/administer-cluster/out-of-resource/#evicting-end-user-pods)
for more details.
kubelet out-of-resource eviction does not evict Pods when their
@ -367,7 +369,6 @@ usage does not exceed their requests. If a Pod with lower priority is not
exceeding its requests, it won't be evicted. Another Pod with higher priority
that exceeds its requests may be evicted.
## {{% heading "whatsnext" %}}
* Read about using ResourceQuotas in connection with PriorityClasses: [limit Priority Class consumption by default](/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default)
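As a hedged illustration of how priority and requests feed the eviction ranking described above, the sketch below defines a PriorityClass and a Pod that references it while setting explicit requests; the names, priority value, image, and resource figures are invented for the example:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # illustrative name
value: 1000000                   # higher value = higher scheduling/eviction priority
globalDefault: false
description: "Workloads that should be evicted last."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app            # illustrative name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx                 # placeholder image
    resources:
      requests:                  # staying at or below requests lowers eviction risk
        memory: "128Mi"
        cpu: "250m"
```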


@ -50,11 +50,10 @@ A more detailed description of the termination behavior can be found in
### Hook handler implementations
Containers can access a hook by implementing and registering a handler for that hook.
There are three types of hook handlers that can be implemented for Containers:
There are two types of hook handlers that can be implemented for Containers:
* Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container.
Resources consumed by the command are counted against the Container.
* TCP - Opens a TCP connection against a specific port on the Container.
* HTTP - Executes an HTTP request against a specific endpoint on the Container.
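A minimal sketch showing both remaining handler types on one container follows; the image, port, path, and shutdown command are illustrative assumptions rather than values taken from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                 # illustrative name
spec:
  containers:
  - name: web
    image: nginx                       # placeholder image
    lifecycle:
      postStart:
        httpGet:                       # HTTP handler: request against an endpoint on the Container
          path: /healthz
          port: 80
      preStop:
        exec:                          # Exec handler: runs inside the Container's namespaces and cgroups
          command: ["/bin/sh", "-c", "nginx -s quit && sleep 5"]
```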
### Hook handler execution


@ -109,7 +109,8 @@ For more details on setting up CRI runtimes, see [CRI installation](/docs/setup/
#### dockershim
Kubernetes built-in dockershim CRI does not support runtime handlers.
RuntimeClasses with dockershim must set the runtime handler to `docker`. Dockershim does not support
custom configurable runtime handlers.
#### {{< glossary_tooltip term_id="containerd" >}}
@ -163,7 +164,7 @@ Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
You can specify _overhead_ resources that are associated with running a Pod. Declaring overhead allows
the cluster (including the scheduler) to account for it when making decisions about Pods and resources.
To use Pod overhead, you must have the PodOverhead [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled (it is on by default).
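Tying the two hunks together, a RuntimeClass sketch with a handler name (here `docker`, as required when dockershim is the CRI) and an optional overhead block might look like this; the object name and overhead figures are assumptions made for illustration:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: dockershim-default       # illustrative name
handler: docker                  # dockershim only accepts this handler value
overhead:
  podFixed:                      # per-Pod overhead the scheduler accounts for (PodOverhead feature gate)
    memory: "120Mi"              # assumed figure
    cpu: "250m"                  # assumed figure
```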


@ -7,6 +7,10 @@ reviewers:
- lavalamp
- cheftako
- chenopis
feature:
title: Designed for extensibility
description: >
Add features to your Kubernetes cluster without changing upstream source code.
content_type: concept
no_list: true
---
@ -80,18 +84,15 @@ and by kubectl.
Below is a diagram showing how the extension points interact with the
Kubernetes control plane.
<img src="https://docs.google.com/drawings/d/e/2PACX-1vQBRWyXLVUlQPlp7BvxvV9S1mxyXSM6rAc_cbLANvKlu6kCCf-kGTporTMIeG5GZtUdxXz1xowN7RmL/pub?w=960&h=720">
<!-- image source drawing https://docs.google.com/drawings/d/1muJ7Oxuj_7Gtv7HV9-2zJbOnkQJnjxq-v1ym_kZfB-4/edit?ts=5a01e054 -->
![Extension Points and the Control Plane](/docs/concepts/extend-kubernetes/control-plane.png)
## Extension Points
This diagram shows the extension points in a Kubernetes system.
<img src="https://docs.google.com/drawings/d/e/2PACX-1vSH5ZWUO2jH9f34YHenhnCd14baEb4vT-pzfxeFC7NzdNqRDgdz4DDAVqArtH4onOGqh0bhwMX0zGBb/pub?w=425&h=809">
<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
![Extension Points](/docs/concepts/extend-kubernetes/extension-points.png)
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
@ -103,12 +104,11 @@ This diagram shows the extension points in a Kubernetes system.
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
<img src="https://docs.google.com/drawings/d/e/2PACX-1vRWXNNIVWFDqzDY0CsKZJY3AR8sDeFDXItdc5awYxVH8s0OLherMlEPVUpxPIB1CSUu7GPk7B2fEnzM/pub?w=1440&h=1080">
<!-- image source drawing: https://docs.google.com/drawings/d/1sdviU6lDz4BpnzJNHfNpQrqI9F19QZ07KnhnxVrp2yg/edit -->
![Flowchart for Extension](/docs/concepts/extend-kubernetes/flowchart.png)
## API Extensions
### User-Defined Types
Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as `kubectl`.
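For orientation, a bare-bones CustomResourceDefinition of the kind this paragraph describes could look like the sketch below; the group, kind, and schema fields are invented examples rather than anything defined by this page:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: CronTab
    singular: crontab
    plural: crontabs
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
```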
@ -157,7 +157,6 @@ After a request is authorized, if it is a write operation, it also goes through
## Infrastructure Extensions
### Storage Plugins
[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md


@ -1,207 +0,0 @@
---
title: Extending your Kubernetes Cluster
reviewers:
- erictune
- lavalamp
- cheftako
- chenopis
content_type: concept
weight: 10
---
<!-- overview -->
Kubernetes is highly configurable and extensible. As a result,
there is rarely a need to fork or submit patches to the Kubernetes
project code.
This guide describes the options for customizing a Kubernetes cluster. It is
aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}}
who want to understand how to adapt their
Kubernetes cluster to the needs of their work environment. Developers who are prospective
{{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}}
or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}}
will also find it useful as an introduction to what extension points and
patterns exist, and their trade-offs and limitations.
<!-- body -->
## Overview
Customization approaches can be broadly divided into *configuration*, which only involves changing flags, local configuration files, or API resources; and *extensions*, which involve running additional programs or services. This document is primarily about extensions.
## Configuration
*Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary:
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).
Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
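As one concrete instance of the built-in policy APIs named above, a ResourceQuota sketch is shown below; the namespace and limits are assumptions chosen only to illustrate the declarative form these APIs share with other Kubernetes resources:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota            # illustrative name
  namespace: team-a              # assumed namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```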
## Extensions
Extensions are software components that extend and deeply integrate with Kubernetes.
They adapt it to support new types and new kinds of hardware.
Most cluster administrators will use a hosted or distribution
instance of Kubernetes. As a result, most Kubernetes users will not need to
install extensions and fewer will need to author new ones.
## Extension Patterns
Kubernetes is designed to be automated by writing client programs. Any
program that reads and/or writes to the Kubernetes API can provide useful
automation. *Automation* can run on the cluster or off it. By following
the guidance in this doc you can write highly available and robust automation.
Automation generally works with any Kubernetes cluster, including hosted
clusters and managed installations.
There is a specific pattern for writing client programs that work well with
Kubernetes called the *Controller* pattern. Controllers typically read an
object's `.spec`, possibly do things, and then update the object's `.status`.
A controller is a client of Kubernetes. When Kubernetes is the client and
calls out to a remote service, it is called a *Webhook*. The remote service
is called a *Webhook Backend*. Like Controllers, Webhooks do add a point of
failure.
In the webhook model, Kubernetes makes a network request to a remote service.
In the *Binary Plugin* model, Kubernetes executes a binary (program).
Binary plugins are used by the kubelet (e.g.
[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexvolume)
and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))
and by kubectl.
Below is a diagram showing how the extension points interact with the
Kubernetes control plane.
<img src="https://docs.google.com/drawings/d/e/2PACX-1vQBRWyXLVUlQPlp7BvxvV9S1mxyXSM6rAc_cbLANvKlu6kCCf-kGTporTMIeG5GZtUdxXz1xowN7RmL/pub?w=960&h=720">
<!-- image source drawing https://docs.google.com/drawings/d/1muJ7Oxuj_7Gtv7HV9-2zJbOnkQJnjxq-v1ym_kZfB-4/edit?ts=5a01e054 -->
## Extension Points
This diagram shows the extension points in a Kubernetes system.
<img src="https://docs.google.com/drawings/d/e/2PACX-1vSH5ZWUO2jH9f34YHenhnCd14baEb4vT-pzfxeFC7NzdNqRDgdz4DDAVqArtH4onOGqh0bhwMX0zGBb/pub?w=425&h=809">
<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/extend-kubernetes/#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers, which are clients of the API server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/extend-kubernetes/#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/extend-kubernetes/#storage-plugins).
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
<img src="https://docs.google.com/drawings/d/e/2PACX-1vRWXNNIVWFDqzDY0CsKZJY3AR8sDeFDXItdc5awYxVH8s0OLherMlEPVUpxPIB1CSUu7GPk7B2fEnzM/pub?w=1440&h=1080">
<!-- image source drawing: https://docs.google.com/drawings/d/1sdviU6lDz4BpnzJNHfNpQrqI9F19QZ07KnhnxVrp2yg/edit -->
## API Extensions
### User-Defined Types
Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as `kubectl`.
Do not use a Custom Resource as data storage for application, user, or monitoring data.
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
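As a minimal sketch (the group, kind, and schema fields below are made up for illustration), a CustomResourceDefinition that adds a new `CronTab` resource type could look like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be <plural>.<group>
spec:
  group: example.com           # illustrative API group
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string   # illustrative field
              replicas:
                type: integer  # illustrative field
```

Once a definition like this is applied, the API server serves the new resource type alongside the built-in ones, and tools such as `kubectl get crontabs` work against it.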
### Combining New APIs with Automation
The combination of a custom resource API and a control loop is called the [Operator pattern](/docs/concepts/extend-kubernetes/operator/). The Operator pattern is used to manage specific, usually stateful, applications. These custom APIs and control loops can also be used to control other resources, such as storage or policies.
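For instance, with a CronTab-style custom resource like the one sketched above, an operator's control loop would watch objects such as the following (all values are illustrative), create or repair whatever `spec` asks for, and report progress through `status`:

```yaml
apiVersion: example.com/v1    # illustrative group/version from the CRD sketch above
kind: CronTab
metadata:
  name: my-crontab
spec:
  cronSpec: "*/5 * * * *"     # desired schedule, read by the operator
  replicas: 2                 # desired count, reconciled by the operator
```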
### Changing Built-in Resources
When you extend the Kubernetes API by adding custom resources, the added resources always fall into new API Groups. You cannot replace or change existing API groups.
Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API Access Extensions do.
### API Access Extensions
When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, then subject to various types of Admission Control. See [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/) for more on this flow.
Each of these steps offers extension points.
Kubernetes supports several built-in authentication methods. It can also sit behind an authenticating proxy, and it can send a token from an Authorization header to a remote service for verification (a webhook). All of these methods are covered in the [Authentication documentation](/docs/reference/access-authn-authz/authentication/).
### Authentication
[Authentication](/docs/reference/access-authn-authz/authentication/) maps headers or certificates in all requests to a username for the client making the request.
Kubernetes provides several built-in authentication methods, and an [Authentication webhook](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) method if those don't meet your needs.
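As a sketch of what the webhook option involves (the server URL and file paths below are illustrative), the API server is pointed at the remote authenticator with a kubeconfig-format file supplied through its `--authentication-token-webhook-config-file` flag:

```yaml
# Illustrative webhook configuration file for kube-apiserver.
apiVersion: v1
kind: Config
clusters:
- name: token-reviewer
  cluster:
    certificate-authority: /etc/kubernetes/pki/authn-webhook-ca.crt  # illustrative path
    server: https://authn.example.com/authenticate                   # illustrative URL
users:
- name: kube-apiserver
  user:
    client-certificate: /etc/kubernetes/pki/authn-webhook-client.crt # illustrative path
    client-key: /etc/kubernetes/pki/authn-webhook-client.key         # illustrative path
contexts:
- name: webhook
  context:
    cluster: token-reviewer
    user: kube-apiserver
current-context: webhook
```

For each bearer token it cannot verify itself, the API server then sends a `TokenReview` to that service, which replies with whether the token is valid and which user it maps to.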
### Authorization
[Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
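When such a webhook is configured, the API server sends the remote service a `SubjectAccessReview` describing the request (serialized as JSON on the wire; shown here as YAML with illustrative values), and the service answers by filling in `status`:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com        # illustrative user
  groups:
  - developers                  # illustrative group
  resourceAttributes:
    namespace: team-a           # illustrative namespace
    verb: create
    group: apps
    resource: deployments
status:
  allowed: true                 # filled in by the webhook backend in its response
```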
### Dynamic Admission Control
After a request is authorized, if it is a write operation, it also goes through [Admission Control](/docs/reference/access-authn-authz/admission-controllers/) steps. In addition to the built-in steps, there are several extensions:
* The [Image Policy webhook](/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) restricts what images can be run in containers.
* To make arbitrary admission control decisions, a general [Admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) can be used. Admission Webhooks can reject creations or updates.
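As a rough sketch (the webhook name, Service reference, and rules are illustrative), a validating admission webhook is registered with the API server through a `ValidatingWebhookConfiguration` object that states which operations are sent to which backend:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com       # illustrative name
webhooks:
- name: pod-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                # reject requests if the backend is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-system      # illustrative namespace running the backend
      name: pod-policy-webhook       # illustrative Service name
      path: /validate
    caBundle: "<base64-encoded CA bundle>"  # placeholder
```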
## Infrastructure Extensions
### Storage Plugins
[Flex Volumes](/docs/concepts/storage/volumes/#flexvolume)
allow users to mount volume types without built-in support by having the
kubelet call a binary plugin to mount the volume.
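As an illustration (the driver name and options are made up), a Pod can reference such a plugin directly in a volume definition; the kubelet then invokes the matching driver binary installed on the node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flex-example            # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.34         # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: "example.com/lvm" # illustrative vendor/driver pair
      fsType: ext4
      options:
        volumeID: vol-0001      # illustrative driver-specific option
```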
### Device Plugins
[Device plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
allow a node to discover new Node resources in addition to the
built-in ones like CPU and memory.
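Once a device plugin advertises a resource on a node (the `example.com/gpu` name below is hypothetical), Pods consume it through the normal resource request mechanism:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-consumer            # illustrative name
spec:
  containers:
  - name: worker
    image: busybox:1.34         # illustrative image
    command: ["sleep", "3600"]
    resources:
      limits:
        example.com/gpu: 1      # hypothetical resource advertised by a device plugin
```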
### Network Plugins
Different networking fabrics can be supported via node-level
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
### Scheduler Extensions
The scheduler is a special type of controller that watches pods, and assigns
pods to nodes. The default scheduler can be replaced entirely, while
continuing to use other Kubernetes components, or
[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
can run at the same time.
This is a significant undertaking, and almost all Kubernetes users find they
do not need to modify the scheduler.
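When additional schedulers are running, a Pod opts into one of them by setting `spec.schedulerName` (the scheduler name below is illustrative); Pods that omit the field keep using the default scheduler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom     # illustrative name
spec:
  schedulerName: my-scheduler   # illustrative name of the additional scheduler
  containers:
  - name: app
    image: nginx:1.21           # illustrative image
```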
The scheduler also supports a
[webhook](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md)
that permits a webhook backend (scheduler extension) to filter and prioritize
the nodes chosen for a pod.
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
* Learn more about Infrastructure extensions
* [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
* [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)

View File

@ -53,8 +53,8 @@ If the prefix is omitted, the label Key is presumed to be private to the user. A
The `kubernetes.io/` and `k8s.io/` prefixes are reserved for Kubernetes core components.
Valid label value:
* must be 63 characters or less (cannot be empty),
* must begin and end with an alphanumeric character (`[a-z0-9A-Z]`),
* must be 63 characters or less (can be empty),
* unless empty, must begin and end with an alphanumeric character (`[a-z0-9A-Z]`),
* could contain dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
For example, here's the configuration file for a Pod that has two labels `environment: production` and `app: nginx` :
@ -237,4 +237,3 @@ selector:
One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.

View File

@ -19,4 +19,4 @@ The configuration of individual managers is elaborated in dedicated documents:
- [CPU Manager Policies](/docs/tasks/administer-cluster/cpu-management-policies/)
- [Device Manager](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager)
- [Memory Manger Policies](/docs/tasks/administer-cluster/memory-manager/)
- [Memory Manager Policies](/docs/tasks/administer-cluster/memory-manager/)

View File

@ -72,7 +72,7 @@ verify that it worked by running `kubectl get pods -o wide` and looking at the
## Interlude: built-in node labels {#built-in-node-labels}
In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/kubernetes-api/labels-annotations-taints/) for a list of these.
with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) for a list of these.
{{< note >}}
The value of these labels is cloud provider specific and is not guaranteed to be reliable.

View File

@ -210,9 +210,9 @@ are true. The following taints are built in:
the NodeCondition `Ready` being "`False`".
* `node.kubernetes.io/unreachable`: Node is unreachable from the node
controller. This corresponds to the NodeCondition `Ready` being "`Unknown`".
* `node.kubernetes.io/out-of-disk`: Node becomes out of disk.
* `node.kubernetes.io/memory-pressure`: Node has memory pressure.
* `node.kubernetes.io/disk-pressure`: Node has disk pressure.
* `node.kubernetes.io/pid-pressure`: Node has PID pressure.
* `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
* `node.kubernetes.io/unschedulable`: Node is unschedulable.
* `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
@ -275,7 +275,7 @@ tolerations to all daemons, to prevent DaemonSets from breaking.
* `node.kubernetes.io/memory-pressure`
* `node.kubernetes.io/disk-pressure`
* `node.kubernetes.io/out-of-disk` (*only for critical pods*)
* `node.kubernetes.io/pid-pressure` (1.14 or later)
* `node.kubernetes.io/unschedulable` (1.10 or later)
* `node.kubernetes.io/network-unavailable` (*host network only*)

Some files were not shown because too many files have changed in this diff.