Merge pull request #19823 from VineethReddy02/vineeth-merged-master-into-dev-1.18-for-syncup

Merged master into dev-1.18 for syncup
pull/19116/head
Kubernetes Prow Robot 2020-03-24 10:06:45 -07:00 committed by GitHub
commit f115a2bf52
64 changed files with 904 additions and 921 deletions

View File

@ -578,6 +578,9 @@ section
li
display: inline-block
height: 100%
margin-right: 10px
&:last-child
margin-right: 0
a
display: block
@ -598,11 +601,11 @@ section
#vendorStrip
line-height: 44px
max-width: 100%
overflow-x: auto
-webkit-overflow-scrolling: touch
ul
float: none
overflow-x: auto
#searchBox
float: none
@ -1052,6 +1055,9 @@ dd
a.issue
margin-left: 0px
.gridPageHome .flyout-button
display: none
.feedback--no
margin-left: 1em

View File

@ -3,7 +3,7 @@ title: Kubernetes Dokumentation
noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
class: gridPage gridPageHome
linkTitle: "Home"
main_menu: true
weight: 10

View File

@ -68,13 +68,7 @@ resource requests/limits of that type for each Container in the Pod.
## Meaning of CPU
Limits and requests for CPU resources are measured in *cpu* units.
One cpu, in Kubernetes, is equivalent to:
- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 IBM vCPU
- 1 *Hyperthread* on a bare-metal Intel processor with Hyperthreading
One cpu, in Kubernetes, is equivalent to **1 vCPU/Core** for cloud providers and **1 hyperthread** on bare-metal Intel processors.
Fractional requests are allowed. A Container with
`spec.containers[].resources.requests.cpu` of `0.5` is guaranteed half as much
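A minimal sketch (the Pod name and image are placeholders, not taken from this page) of how a fractional CPU request and limit can be expressed in a Pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo               # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.14.2        # any image; nginx is used elsewhere in this PR
    resources:
      requests:
        cpu: "0.5"             # half a vCPU/Core, i.e. 500m
      limits:
        cpu: "1"               # one vCPU/Core (one hyperthread on bare-metal Intel)
```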

View File

@ -312,6 +312,10 @@ spec:
server: 172.17.0.2
```
{{< note >}}
Helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
{{< /note >}}
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](https://git.k8s.io/community/contributors/design-proposals/scheduling/resources.md) to understand the units expected by `capacity`.
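A hedged sketch of an NFS-backed PersistentVolume that mirrors the example above (the name, size, export path, and server address are placeholders); the node mounting it needs the `/sbin/mount.nfs` helper installed:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003                 # placeholder name
spec:
  capacity:
    storage: 5Gi               # uses the units defined by the Kubernetes Resource Model
  accessModes:
    - ReadWriteOnce
  nfs:                         # consuming this PV requires the NFS helper program on the node
    path: /tmp                 # placeholder export path
    server: 172.17.0.2         # placeholder NFS server address
```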

View File

@ -14,10 +14,10 @@ This page gives writing style guidelines for the Kubernetes documentation.
These are guidelines, not rules. Use your best judgment, and feel free to
propose changes to this document in a pull request.
For additional information on creating new content for the Kubernetes
documentation, read the [Documentation Content
Guide](/docs/contribute/style/content-guide/) and follow the instructions on
[using page templates](/docs/contribute/style/page-templates/) and [creating a
For additional information on creating new content for the Kubernetes
documentation, read the [Documentation Content
Guide](/docs/contribute/style/content-guide/) and follow the instructions on
[using page templates](/docs/contribute/style/page-templates/) and [creating a
documentation pull request](/docs/contribute/start/#improve-existing-content).
{{% /capture %}}
@ -58,11 +58,11 @@ leads to an awkward construction.
{{< table caption = "Do and Don't - API objects" >}}
Do | Don't
:--| :-----
The Pod has two containers. | The pod has two containers.
The Pod has two containers. | The pod has two containers.
The Deployment is responsible for ... | The Deployment object is responsible for ...
A PodList is a list of Pods. | A Pod List is a list of pods.
The two ContainerPorts ... | The two ContainerPort objects ...
The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
The two ContainerPorts ... | The two ContainerPort objects ...
The two ContainerStateTerminated objects ... | The two ContainerStateTerminateds ...
{{< /table >}}
@ -83,11 +83,11 @@ represents.
Do | Don't
:--| :-----
Click **Fork**. | Click "Fork".
Select **Other**. | Select "Other".
Select **Other**. | Select "Other".
{{< /table >}}
### Use italics to define or introduce new terms
{{< table caption = "Do and Don't - Use italics for new terms" >}}
Do | Don't
:--| :-----
@ -102,7 +102,7 @@ Do | Don't
:--| :-----
Open the `envars.yaml` file. | Open the envars.yaml file.
Go to the `/docs/tutorials` directory. | Go to the /docs/tutorials directory.
Open the `/_data/concepts.yaml` file. | Open the /_data/concepts.yaml file.
Open the `/_data/concepts.yaml` file. | Open the /\_data/concepts.yaml file.
{{< /table >}}
### Use the international standard for punctuation inside quotes
@ -119,18 +119,18 @@ The copy is called a "fork". | The copy is called a "fork."
### Use code style for inline code and commands
For inline code in an HTML document, use the `<code>` tag. In a Markdown
document, use the backtick (`).
document, use the backtick (`` ` ``).
{{< table caption = "Do and Don't - Use code style for inline code and commands" >}}
Do | Don't
:--| :-----
The `kubectl run` command creates a Deployment. | The "kubectl run" command creates a Deployment.
For declarative management, use `kubectl apply`. | For declarative management, use "kubectl apply".
Enclose code samples with triple backticks. `(```)`| Enclose code samples with any other syntax.
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (**) or an underscore (_) to enclose inline code. For example, **var example = true**.
Enclose code samples with triple backticks. (\`\`\`)| Enclose code samples with any other syntax.
Use single backticks to enclose inline code. For example, `var example = true`. | Use two asterisks (`**`) or an underscore (`_`) to enclose inline code. For example, **var example = true**.
Use triple backticks before and after a multi-line block of code for fenced code blocks. | Use multi-line blocks of code to create diagrams, flowcharts, or other illustrations.
Use meaningful variable names that have a context. | Use variable names such as 'foo', 'bar', and 'baz' that are not meaningful and lack context.
Remove trailing spaces in the code. | Add trailing spaces in the code, where these are important, because the screen reader will read out the spaces as well.
Remove trailing spaces in the code. | Add trailing spaces in the code, where these are important, because the screen reader will read out the spaces as well.
{{< /table >}}
{{< note >}}
@ -185,7 +185,7 @@ Do | Don't
Set the value of `imagePullPolicy` to Always. | Set the value of `imagePullPolicy` to "Always".
Set the value of `image` to nginx:1.16. | Set the value of `image` to `nginx:1.16`.
Set the value of the `replicas` field to 2. | Set the value of the `replicas` field to `2`.
{{< /table >}}
{{< /table >}}
## Code snippet formatting
@ -196,7 +196,7 @@ Set the value of the `replicas` field to 2. | Set the value of the `replicas` fi
Do | Don't
:--| :-----
kubectl get pods | $ kubectl get pods
{{< /table >}}
{{< /table >}}
### Separate commands from output
@ -214,7 +214,7 @@ The output is similar to this:
Code examples and configuration examples that include version information should be consistent with the accompanying text.
If the information is version specific, the Kubernetes version needs to be defined in the `prerequisites` section of the [Task template](/docs/contribute/style/page-templates/#task-template) or the [Tutorial template] (/docs/contribute/style/page-templates/#tutorial-template). Once the page is saved, the `prerequisites` section is shown as **Before you begin**.
If the information is version specific, the Kubernetes version needs to be defined in the `prerequisites` section of the [Task template](/docs/contribute/style/page-templates/#task-template) or the [Tutorial template](/docs/contribute/style/page-templates/#tutorial-template). Once the page is saved, the `prerequisites` section is shown as **Before you begin**.
To specify the Kubernetes version for a task or tutorial page, include `min-kubernetes-server-version` in the front matter of the page.
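As a sketch, assuming the Hugo front matter layout used by these docs (the title and template values below are illustrative, not from this page), the field is added like this:

```yaml
---
title: "Configure a hypothetical feature"   # illustrative page title
content_template: templates/task            # assumed Task template
min-kubernetes-server-version: v1.18        # minimum Kubernetes server version the page assumes
---
```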
@ -251,11 +251,11 @@ Kubernetes | Kubernetes should always be capitalized.
Docker | Docker should always be capitalized.
SIG Docs | SIG Docs rather than SIG-DOCS or other variations.
On-premises | On-premises or On-prem rather than On-premise or other variations.
{{< /table >}}
{{< /table >}}
## Shortcodes
Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create different rhetorical appeal levels. Our documentation supports three different shortcodes in this category: **Note** {{</* note */>}}, **Caution** {{</* caution */>}}, and **Warning** {{</* warning */>}}.
Hugo [Shortcodes](https://gohugo.io/content-management/shortcodes) help create different rhetorical appeal levels. Our documentation supports three different shortcodes in this category: **Note** `{{</* note */>}}`, **Caution** `{{</* caution */>}}`, and **Warning** `{{</* warning */>}}`.
1. Surround the text with an opening and closing shortcode.
@ -275,7 +275,7 @@ The prefix you choose is the same text for the tag.
### Note
Use {{</* note */>}} to highlight a tip or a piece of information that may be helpful to know.
Use `{{</* note */>}}` to highlight a tip or a piece of information that may be helpful to know.
For example:
@ -291,7 +291,7 @@ The output is:
You can _still_ use Markdown inside these callouts.
{{< /note >}}
You can use a {{</* note */>}} in a list:
You can use a `{{</* note */>}}` in a list:
```
1. Use the note shortcode in a list
@ -323,7 +323,7 @@ The output is:
### Caution
Use {{</* caution */>}} to call attention to an important piece of information to avoid pitfalls.
Use `{{</* caution */>}}` to call attention to an important piece of information to avoid pitfalls.
For example:
@ -341,7 +341,7 @@ The callout style only applies to the line directly above the tag.
### Warning
Use {{</* warning */>}} to indicate danger or a piece of information that is crucial to follow.
Use `{{</* warning */>}}` to indicate danger or a piece of information that is crucial to follow.
For example:
@ -359,11 +359,11 @@ Beware.
### Katacoda Embedded Live Environment
This button lets users run Minikube in their browser using the [Katacoda Terminal](https://www.katacoda.com/embed/panel).
It lowers the barrier of entry by allowing users to use Minikube with one click instead of going through the complete
This button lets users run Minikube in their browser using the [Katacoda Terminal](https://www.katacoda.com/embed/panel).
It lowers the barrier of entry by allowing users to use Minikube with one click instead of going through the complete
Minikube and Kubectl installation process locally.
The Embedded Live Environment is configured to run `minikube start` and lets users complete tutorials in the same window
The Embedded Live Environment is configured to run `minikube start` and lets users complete tutorials in the same window
as the documentation.
{{< caution >}}
@ -376,7 +376,7 @@ For example:
{{</* kat-button */>}}
```
The output is:
The output is:
{{< kat-button >}}
@ -391,7 +391,7 @@ For example:
1. Preheat oven to 350˚F
1. Prepare the batter, and pour into springform pan.
{{</* note */>}}Grease the pan for best results.{{</* /note */>}}
`{{</* note */>}}Grease the pan for best results.{{</* /note */>}}`
1. Bake for 20-25 minutes or until set.
@ -429,9 +429,9 @@ Do | Don't
:--| :-----
Update the title in the front matter of the page or blog post. | Use first level heading, as Hugo automatically converts the title in the front matter of the page into a first-level heading.
Use ordered headings to provide a meaningful high-level outline of your content. | Use headings level 4 through 6, unless it is absolutely necessary. If your content is that detailed, it may need to be broken into separate articles.
Use pound or hash signs (#) for non-blog post content. | Use underlines (--- or ===) to designate first-level headings.
Use pound or hash signs (`#`) for non-blog post content. | Use underlines (`---` or `===`) to designate first-level headings.
Use sentence case for headings. For example, **Extend kubectl with plugins** | Use title case for headings. For example, **Extend Kubectl With Plugins**
{{< /table >}}
{{< /table >}}
### Paragraphs
@ -439,8 +439,8 @@ Use sentence case for headings. For example, **Extend kubectl with plugins** | U
Do | Don't
:--| :-----
Try to keep paragraphs under 6 sentences. | Indent the first paragraph with space characters. For example, ⋅⋅⋅Three spaces before a paragraph will indent it.
Use three hyphens (---) to create a horizontal rule. Use horizontal rules for breaks in paragraph content. For example, a change of scene in a story, or a shift of topic within a section. | Use horizontal rules for decoration.
{{< /table >}}
Use three hyphens (`---`) to create a horizontal rule. Use horizontal rules for breaks in paragraph content. For example, a change of scene in a story, or a shift of topic within a section. | Use horizontal rules for decoration.
{{< /table >}}
### Links
@ -449,7 +449,7 @@ Do | Don't
:--| :-----
Write hyperlinks that give you context for the content they link to. For example: Certain ports are open on your machines. See <a href="#check-required-ports">Check required ports</a> for more details. | Use ambiguous terms such as “click here”. For example: Certain ports are open on your machines. See <a href="#check-required-ports">here</a> for more details.
Write Markdown-style links: `[link text](URL)`. For example: `[Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions)` and the output is [Hugo shortcodes](/docs/contribute/style/hugo-shortcodes/#table-captions). | Write HTML-style links: `<a href="/media/examples/link-element-example.css" target="_blank">Visit our tutorial!</a>`, or create links that open in new tabs or windows. For example: `[example website](https://example.com){target="_blank"}`
{{< /table >}}
{{< /table >}}
### Lists
@ -457,17 +457,17 @@ Group items in a list that are related to each other and need to appear in a spe
Website navigation links can also be marked up as list items; after all they are nothing but a group of related links.
- End each item in a list with a period if one or more items in the list are complete sentences. For the sake of consistency, normally either all items or none should be complete sentences.
{{< note >}} Ordered lists that are part of an incomplete introductory sentence can be in lowercase and punctuated as if each item was a part of the introductory sentence.{{< /note >}}
- Use the number one (1.) for ordered lists.
- Use (+), (* ), or (-) for unordered lists.
- Leave a blank line after each list.
- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).
- Use the number one (`1.`) for ordered lists.
- Use (`+`), (`*`), or (`-`) for unordered lists.
- Leave a blank line after each list.
- Indent nested lists with four spaces (for example, ⋅⋅⋅⋅).
- List items may consist of multiple paragraphs. Each subsequent paragraph in a list item must be indented by either four spaces or one tab.
### Tables
@ -486,7 +486,7 @@ This section contains suggested best practices for clear, concise, and consisten
Do | Don't
:--| :-----
This command starts a proxy. | This command will start a proxy.
{{< /table >}}
{{< /table >}}
Exception: Use future or past tense if it is required to convey the correct
@ -512,7 +512,7 @@ Use simple and direct language. Avoid using unnecessary phrases, such as saying
Do | Don't
:--| :-----
To create a ReplicaSet, ... | In order to create a ReplicaSet, ...
See the configuration file. | Please see the configuration file.
See the configuration file. | Please see the configuration file.
View the Pods. | With this next command, we'll view the Pods.
{{< /table >}}
@ -522,7 +522,7 @@ View the Pods. | With this next command, we'll view the Pods.
Do | Don't
:--| :-----
You can create a Deployment by ... | We'll create a Deployment by ...
In the preceding output, you can see... | In the preceding output, we can see ...
In the preceding output, you can see... | In the preceding output, we can see ...
{{< /table >}}
@ -583,7 +583,7 @@ considered new in a few months.
Do | Don't
:--| :-----
In version 1.4, ... | In the current version, ...
The Federation feature provides ... | The new Federation feature provides ...
The Federation feature provides ... | The new Federation feature provides ...
{{< /table >}}

View File

@ -5,7 +5,7 @@ title: Kubernetes Documentation
noedit: true
cid: docsHome
layout: docsportal_home
class: gridPage
class: gridPage gridPageHome
linkTitle: "Home"
main_menu: true
weight: 10

View File

@ -184,27 +184,48 @@ sysctl --system
```
{{< tabs name="tab-cri-cri-o-installation" >}}
{{< tab name="Ubuntu 16.04" codelang="bash" >}}
{{< tab name="Debian" codelang="bash" >}}
# Debian Unstable/Sid
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add -
# Install prerequisites
apt-get update
apt-get install -y software-properties-common
# Debian Testing
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add -
add-apt-repository ppa:projectatomic/ppa
apt-get update
# Debian 10
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add -
# Raspbian 10
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add -
# Install CRI-O
apt-get install -y cri-o-1.15
sudo apt-get install cri-o-1.17
{{< /tab >}}
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
{{< tab name="Ubuntu 18.04, 19.04 and 19.10" codelang="bash" >}}
# Setup repository
. /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt-get update
# Install CRI-O
sudo apt-get install cri-o-1.17
{{< /tab >}}
{{< tab name="CentOS/RHEL 7.4+" codelang="bash" >}}
# Install prerequisites
yum-config-manager --add-repo=https://cbs.centos.org/repos/paas7-crio-115-release/x86_64/os/
# Install CRI-O
yum install --nogpgcheck -y cri-o
{{< /tab >}}
{{< tab name="openSUSE Tumbleweed" codelang="bash" >}}
sudo zypper install cri-o
{{< /tab >}}
{{< /tabs >}}

View File

@ -48,7 +48,7 @@ W wersjach wcześniejszych niż 1.14, punkty końcowe określone przez ich forma
**Przykłady pobierania specyfikacji OpenAPI**:
Przed 1.10 | Począwszy od Kubernetes 1.10
Przed 1.10 | Kubernetes 1.10 i nowszy
----------- | -----------------------------
GET /swagger.json | GET /openapi/v2 **Accept**: application/json
GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
@ -108,20 +108,21 @@ API może być rozbudowane na dwa sposoby przy użyciu [custom resources](/docs/
i użyć [agregatora](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/),
aby zintegrować je w sposób niezauważalny dla klientów.
## Włączanie grup API
## Włączanie i wyłączanie grup API
Określone zasoby i grupy API są włączone domyślnie. Włączanie i wyłączanie odbywa się poprzez ustawienie `--runtime-config`
w apiserwerze. `--runtime-config` przyjmuje wartości oddzielane przecinkami. Przykładowo, aby wyłączyć batch/v1, należy ustawić
`--runtime-config=batch/v1=false`, aby włączyć batch/v2alpha1, należy ustawić `--runtime-config=batch/v2alpha1`.
Ta opcja przyjmuje rozdzielony przecinkami zbiór par klucz=wartość, który opisuje konfigurację wykonawczą apiserwera.
WAŻNE: Włączenie lub wyłączenie grup lub zasobów wymaga restartu apiserver i controller-manager, aby zmiany w `--runtime-config` zostały wprowadzone.
{{< note >}}Włączenie lub wyłączenie grup lub zasobów wymaga restartu apiserver i controller-manager, aby zmiany w `--runtime-config` zostały wprowadzone.{{< /note >}}
## Jak włączać dostęp do grup zasobów
## Jak włączać dostęp do grup zasobów extensions/v1beta1
DaemonSets, Deployments, HorizontalPodAutoscalers, Ingresses, Jobs and ReplicaSets są domyślnie włączone.
Pozostałe rozszerzenia mogą być włączane poprzez ustawienie `--runtime-config` w
apiserver. `--runtime-config` przyjmuje wartości rozdzielane przecinkami. Na przykład, aby zablokować deployments oraz ingress, ustaw
`--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/ingresses=false`
DaemonSets, Deployments, HorizontalPodAutoscalers, Ingresses, Jobs i ReplicaSets znajdują się w grupie API `extensions/v1beta1` i są domyślnie włączone.
Przykładowo: aby włączyć deployments i daemonsets, ustaw
`--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.
{{< note >}}Włączanie i wyłączanie pojedynczych zasobów możliwe jest jedynie w ramach grupy API `extensions/v1beta1` z przyczyn historycznych{{< /note >}}
{{% /capture %}}

View File

@ -1,5 +1,7 @@
---
title: Kubernetes — co to jest?
description: >
Kubernetes to przenośna, rozszerzalna platforma oprogramowania *open-source* służąca do zarządzania zadaniami i serwisami uruchamianymi w kontenerach. Umożliwia ich deklaratywną konfigurację i automatyzację. Kubernetes posiada duży i dynamicznie rozwijający się ekosystem. Szeroko dostępne są serwisy, wsparcie i dodatkowe narzędzia.
content_template: templates/concept
weight: 10
card:
@ -14,7 +16,7 @@ Na tej stronie znajdziesz ogólne informacje o Kubernetesie.
{{% capture body %}}
Kubernetes to przenośna, rozszerzalna platforma oprogramowania *open-source* służąca do zarządzania zadaniami i serwisami uruchamianymi w kontenerach, która umożliwia deklaratywną konfigurację i automatyzację. Ekosystem Kubernetesa jest duży i dynamicznie się rozwija. Serwisy Kubernetesa, wsparcie i narzędzia są szeroko dostępne.
Nazwa Kubernetes pochodzi z greki i oznacza sternika albo pilota. Google otworzyło projekt Kubernetes publicznie w 2014. Kubernetes korzysta z [piętnastoletniego doświadczenia Google w uruchamianiu wielkoskalowych serwisów](https://ai.google/research/pubs/pub43438) i łączy je z najlepszymi pomysłami i praktykami wypracowanymi przez społeczność.
Nazwa Kubernetes pochodzi z greki i oznacza sternika albo pilota. Google otworzyło projekt Kubernetes publicznie w 2014. Kubernetes korzysta z [piętnastoletniego doświadczenia Google w uruchamianiu wielkoskalowych serwisów](/blog/2015/04/borg-predecessor-to-kubernetes/) i łączy je z najlepszymi pomysłami i praktykami wypracowanymi przez społeczność.
## Trochę historii
@ -42,7 +44,7 @@ Kontenery zyskały popularność ze względu na swoje zalety, takie jak:
* Rozdzielenie zadań *Dev* i *Ops*: obrazy kontenerów powstają w fazie *build/release*, oddzielając w ten sposób aplikacje od infrastruktury.
* Obserwowalność obejmuje nie tylko informacje i metryki z poziomu systemu operacyjnego, ale także poprawność działania samej aplikacji i inne sygnały.
* Spójność środowiska na etapach rozwoju oprogramowania, testowania i działania w trybie produkcyjnym: działa w ten sam sposób na laptopie i w chmurze.
* Możliwość przenoszenia pomiędzy systemami operacyjnymi i platformami chmurowymi: Ubuntu, RHEL, CoreOS, prywatnymi centrami danych, Google Kubernetes Engine czy gdziekolwiek indziej.
* Możliwość przenoszenia pomiędzy systemami operacyjnymi i platformami chmurowymi: Ubuntu, RHEL, CoreOS, prywatnymi centrami danych, największymi dostawcami usług chmurowych czy gdziekolwiek indziej.
* Zarządzanie, które w centrum uwagi ma aplikacje: Poziom abstrakcji przeniesiony jest z warstwy systemu operacyjnego działającego na maszynie wirtualnej na poziom działania aplikacji, która działa na systemie operacyjnym używając zasobów logicznych.
* Luźno powiązane, rozproszone i elastyczne "swobodne" mikro serwisy: Aplikacje podzielone są na mniejsze, niezależne komponenty, które mogą być dynamicznie uruchamiane i zarządzane - nie jest to monolityczny system działający na jednej, dużej maszynie dedykowanej na wyłączność.
* Izolacja zasobów: wydajność aplikacji możliwa do przewidzenia

View File

@ -14,6 +14,8 @@ menu:
weight: 20
post: >
<p>Naucz się, jak korzystać z Kubernetesa z pomocą dokumentacji, która opisuje pojęcia, zawiera samouczki i informacje źródłowe. Możesz także <a href="/editdocs/" data-auto-burger-exclude>pomóc w jej tworzeniu</a>!</p>
description: >
Kubernetes to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach. Gospodarzem tego projektu o otwartym kodzie źródłowym jest Cloud Native Computing Foundation.
overview: >
Kubernetes to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach. Gospodarzem tego projektu o otwartym kodzie źródłowym jest Cloud Native Computing Foundation (<a href="https://www.cncf.io/about">CNCF</a>).
cards:
@ -37,6 +39,11 @@ cards:
description: "Wyszukaj popularne zadania i dowiedz się, jak sobie z nimi efektywnie poradzić."
button: "Przegląd zadań"
button_path: "/docs/tasks"
- name: training
title: "Szkolenia"
description: "Uzyskaj certyfikat Kubernetes i spraw, aby Twoje projekty cloud native zakończyły się sukcesem!"
button: "Oferta szkoleń"
button_path: "/training"
- name: reference
title: Dokumentacja źródłowa
description: Zapoznaj się z terminologią, składnią poleceń, typami zasobów API i dokumentacją narzędzi instalacyjnych.

View File

@ -4,7 +4,8 @@ id: cluster
date: 2019-06-15
full_link:
short_description: >
Zestaw maszyn roboczych, nazywanych węzłami, na których uruchamiane są aplikacje w kontenerach. Każdy klaster musi posiadać przynajmniej jeden węzeł.
Zestaw maszyn roboczych, nazywanych {{< glossary_tooltip text="węzłami" term_id="node" >}}, na których uruchamiane są aplikacje w kontenerach.
Każdy klaster musi posiadać przynajmniej jeden węzeł.
aka:
tags:
@ -14,4 +15,9 @@ tags:
Zestaw maszyn roboczych, nazywanych węzłami, na których uruchamiane są aplikacje w kontenerach. Każdy klaster musi posiadać przynajmniej jeden węzeł.
<!--more-->
Na węźle (lub węzłach) roboczych rozmieszczane są pody, które są częściami składowymi aplikacji. Warstwa sterowania zarządza węzłami roboczymi i podami należącymi do klastra. W środowisku produkcyjnym warstwa sterowania rozłożona jest zazwyczaj na kilka maszyn, a klaster uruchomiony jest na wielu węzłach zapewniając większą niezawodność i odporność na awarie.
Na węźle (lub węzłach) roboczych rozmieszczane są {{< glossary_tooltip text="pody" term_id="pod" >}},
które są częściami składowymi aplikacji.
{{< glossary_tooltip text="Warstwa sterowania" term_id="control-plane" >}} zarządza
węzłami roboczymi i podami należącymi do klastra. W środowisku produkcyjnym warstwa sterowania
rozłożona jest zazwyczaj na kilka maszyn, a klaster uruchomiony jest na wielu węzłach zapewniając
większą niezawodność i odporność na awarie.

View File

@ -11,13 +11,17 @@ tags:
- fundamental
- networking
---
[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) to *proxy* sieciowe, które uruchomione jest na każdym węźle klastra
i uczestniczy w tworzeniu {{< glossary_tooltip term_id="service">}}.
kube-proxy to *proxy* sieciowe, które uruchomione jest na każdym
{{< glossary_tooltip text="węźle" term_id="node" >}} klastra
i uczestniczy w tworzeniu
{{< glossary_tooltip text="serwisu" term_id="service">}}.
<!--more-->
kube-proxy utrzymuje reguły sieciowe na węźle. Dzięki tym regułom
sieci na zewnątrz i wewnątrz klastra mogą komunikować się z Podami.
[kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/)
utrzymuje reguły sieciowe na węźle. Dzięki tym regułom
sieci na zewnątrz i wewnątrz klastra mogą komunikować się
z podami.
kube-proxy używa warstwy filtrowania pakietów dostarczanych przez system operacyjny, o ile taka jest dostępna.
W przeciwnym przypadku, kube-proxy samo zajmuje sie przekazywaniem ruchu sieciowego.

View File

@ -10,8 +10,13 @@ aka:
tags:
- architecture
---
Składnik warstwy sterowania, który śledzi tworzenie nowych podów i przypisuje im węzły, na których powinny zostać uruchomione.
Składnik warstwy sterowania, który śledzi tworzenie nowych
{{< glossary_tooltip term_id="pod" text="podów" >}} i przypisuje im {{< glossary_tooltip term_id="node" text="węzły">}},
na których powinny zostać uruchomione.
<!--more-->
Przy podejmowaniu decyzji o wyborze węzła brane pod uwagę są wymagania indywidualne i zbiorcze odnośnie zasobów, ograniczenia wynikające z polityk sprzętu i oprogramowania, wymagania *affinity* i *anty-affinity*, lokalizacja danych, zależności między zadaniami i wymagania czasowe.
Przy podejmowaniu decyzji o wyborze węzła brane pod uwagę są wymagania
indywidualne i zbiorcze odnośnie zasobów, ograniczenia wynikające z polityk
sprzętu i oprogramowania, wymagania *affinity* i *anty-affinity*, lokalizacja danych,
zależności między zadaniami i wymagania czasowe.

View File

@ -11,8 +11,8 @@ tags:
- fundamental
- core-object
---
Agent, który działa na każdym węźle klastra. Odpowiada za uruchamianie kontenerów w ramach poda.
Agent, który działa na każdym {{< glossary_tooltip text="węźle" term_id="node" >}} klastra. Odpowiada za uruchamianie {{< glossary_tooltip text="kontenerów" term_id="container" >}} w ramach {{< glossary_tooltip text="poda" term_id="pod" >}}.
<!--more-->
<!--more-->
Kubelet korzysta z dostarczanych na różne sposoby PodSpecs i gwarantuje, że kontenery opisane przez te PodSpecs są uruchomione i działają poprawnie. Kubelet nie zarządza kontenerami, które nie zostały utworzone przez Kubernetes.

View File

@ -48,62 +48,6 @@ Aby uruchomić klaster Kubernetes do nauki na lokalnym komputerze, skorzystaj z
Wybierając rozwiązanie dla środowiska produkcyjnego musisz zdecydować, którymi poziomami zarządzania klastrem (_abstrakcjami_) chcesz zajmować się sam, a które będą realizowane po stronie zewnętrznego operatora.
Przykładowe poziomy abstrakcji klastra Kubernetesa to: {{< glossary_tooltip text="aplikacje" term_id="applications" >}}, {{< glossary_tooltip text="warstwa danych" term_id="data-plane" >}}, {{< glossary_tooltip text="warstwa sterowania" term_id="control-plane" >}}, {{< glossary_tooltip text="infrastruktura klastra" term_id="cluster-infrastructure" >}} i {{< glossary_tooltip text="operacje na klastrze" term_id="cluster-operations" >}}.
Poniższy schemat pokazuje poszczególne poziomy abstrakcji klastra Kubernetes oraz informacje, kto jest za nie odpowiedzialny (sam użytkownik czy zewnętrzny operator).
![Rozwiązania dla środowisk produkcyjnych](/images/docs/KubernetesSolutions.svg)
{{< table caption="Tabela z dostawcami i rozwiązaniami dla środowisk produkcyjnych." >}}
Poniższa tabela zawiera przegląd dostawców środowisk produkcyjnych i rozwiązań, które oferują.
|Dostawca | Zarządzana | Chmura "pod klucz" | Prywatne centrum danych | Własne (w chmurze) | Własne (VM lokalne)| Własne (Bare Metal) |
| --------- | ------ | ------ | ------ | ------ | ------ | ----- |
| [Agile Stacks](https://www.agilestacks.com/products/kubernetes)| | &#x2714; | &#x2714; | | |
| [Alibaba Cloud](https://www.alibabacloud.com/product/kubernetes)| | &#x2714; | | | |
| [Amazon](https://aws.amazon.com) | [Amazon EKS](https://aws.amazon.com/eks/) |[Amazon EC2](https://aws.amazon.com/ec2/) | | | |
| [AppsCode](https://appscode.com/products/pharmer/) | &#x2714; | | | | |
| [APPUiO](https://appuio.ch/)  | &#x2714; | &#x2714; | &#x2714; | | | |
| [Banzai Cloud Pipeline Kubernetes Engine (PKE)](https://banzaicloud.com/products/pke/) | | &#x2714; | | &#x2714; | &#x2714; | &#x2714; |
| [CenturyLink Cloud](https://www.ctl.io/) | | &#x2714; | | | |
| [Cisco Container Platform](https://cisco.com/go/containers) | | | &#x2714; | | |
| [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/) | | | | &#x2714; |&#x2714; |
| [CloudStack](https://cloudstack.apache.org/) | | | | | &#x2714;|
| [Canonical](https://ubuntu.com/kubernetes) | &#x2714; | &#x2714; | &#x2714; | &#x2714; |&#x2714; | &#x2714;
| [Containership](https://containership.io) | &#x2714; |&#x2714; | | | |
| [D2iQ](https://d2iq.com/) | | [Kommander](https://d2iq.com/solutions/ksphere) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) | [Konvoy](https://d2iq.com/solutions/ksphere/konvoy) |
| [Digital Rebar](https://provision.readthedocs.io/en/tip/README.html) | | | | | | &#x2714;
| [DigitalOcean](https://www.digitalocean.com/products/kubernetes/) | &#x2714; | | | | |
| [Docker Enterprise](https://www.docker.com/products/docker-enterprise) | |&#x2714; | &#x2714; | | | &#x2714;
| [Gardener](https://gardener.cloud/) | &#x2714; | &#x2714; | &#x2714; | &#x2714; | &#x2714; | [Custom Extensions](https://github.com/gardener/gardener/blob/master/docs/extensions/overview.md) |
| [Giant Swarm](https://www.giantswarm.io/) | &#x2714; | &#x2714; | &#x2714; | |
| [Google](https://cloud.google.com/) | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) | [Google Compute Engine (GCE)](https://cloud.google.com/compute/)|[GKE On-Prem](https://cloud.google.com/gke-on-prem/) | | | | | | | |
| [IBM](https://www.ibm.com/in-en/cloud) | [IBM Cloud Kubernetes Service](https://cloud.ibm.com/kubernetes/catalog/cluster)| |[IBM Cloud Private](https://www.ibm.com/in-en/cloud/private) | |
| [Ionos](https://www.ionos.com/enterprise-cloud) | [Ionos Managed Kubernetes](https://www.ionos.com/enterprise-cloud/managed-kubernetes) | [Ionos Enterprise Cloud](https://www.ionos.com/enterprise-cloud) | |
| [Kontena Pharos](https://www.kontena.io/pharos/) | |&#x2714;| &#x2714; | | |
| [KubeOne](https://kubeone.io/) | | &#x2714; | &#x2714; | &#x2714; | &#x2714; | &#x2714; |
| [Kubermatic](https://kubermatic.io/) | &#x2714; | &#x2714; | &#x2714; | &#x2714; | &#x2714; | &#x2714; |
| [KubeSail](https://kubesail.com/) | &#x2714; | | | | |
| [Kubespray](https://kubespray.io/#/) | | | |&#x2714; | &#x2714; | &#x2714; |
| [Kublr](https://kublr.com/) |&#x2714; | &#x2714; |&#x2714; |&#x2714; |&#x2714; |&#x2714; |
| [Microsoft Azure](https://azure.microsoft.com) | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) | | | | |
| [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/) | | | &#x2714; | | |
| [NetApp Kubernetes Service (NKS)](https://cloud.netapp.com/kubernetes-service) | &#x2714; | &#x2714; | &#x2714; | | |
| [Nirmata](https://www.nirmata.com/) | | &#x2714; | &#x2714; | | |
| [Nutanix](https://www.nutanix.com/en) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | [Nutanix Karbon](https://www.nutanix.com/products/karbon) | | | [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization) |
| [OpenNebula](https://www.opennebula.org) |[OpenNebula Kubernetes](https://marketplace.opennebula.systems/docs/service/kubernetes.html) | | | | |
| [OpenShift](https://www.openshift.com) |[OpenShift Dedicated](https://www.openshift.com/products/dedicated/) i [OpenShift Online](https://www.openshift.com/products/online/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) | | [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) |[OpenShift Container Platform](https://www.openshift.com/products/container-platform/)
| [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm) | &#x2714; | &#x2714; | | | |
| [oVirt](https://www.ovirt.org/) | | | | | &#x2714; |
| [Pivotal](https://pivotal.io/) | | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | [Enterprise Pivotal Container Service (PKS)](https://pivotal.io/platform/pivotal-container-service) | | |
| [Platform9](https://platform9.com/) | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | | [Platform9 Managed Kubernetes](https://platform9.com/managed-kubernetes/) | &#x2714; | &#x2714; | &#x2714;
| [Rancher](https://rancher.com/) | | [Rancher 2.x](https://rancher.com/docs/rancher/v2.x/en/) | | [Rancher Kubernetes Engine (RKE)](https://rancher.com/docs/rke/latest/en/) | | [k3s](https://k3s.io/)
| [Supergiant](https://supergiant.io/) | |&#x2714; | | | |
| [SUSE](https://www.suse.com/) | | &#x2714; | | | |
| [SysEleven](https://www.syseleven.io/) | &#x2714; | | | | |
| [Tencent Cloud](https://intl.cloud.tencent.com/) | [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/product/tke) | &#x2714; | &#x2714; | | | &#x2714; |
| [VEXXHOST](https://vexxhost.com/) | &#x2714; | &#x2714; | | | |
| [VMware](https://cloud.vmware.com/) | [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) |[VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Enterprise PKS](https://cloud.vmware.com/vmware-enterprise-pks) | [VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks) | |[VMware Essential PKS](https://cloud.vmware.com/vmware-essential-pks)
| [Z.A.R.V.I.S.](https://zarvis.ai/) | &#x2714; | | | | | |
Aby zapoznać się z listą dostawców posiadających [certyfikację Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes), odwiedź stronę "[Partnerzy](https://kubernetes.io/partners/#conformance)".
{{% /capture %}}

View File

@ -77,7 +77,7 @@ weight: 10
</div>
<div class="col-md-4">
<div class="content__box content__box_fill">
<p><i>Węzły typu master zarządzają klastrem, pozostałe węzły są wykorzystywane do uruchamiania na nich aplikacji. </i></p>
<p><i>Węzły typu master zarządzają klastrem i węzłami wykorzystywanymi do uruchamiania aplikacji. </i></p>
</div>
</div>
</div>

View File

@ -17,7 +17,7 @@ weight: 40
Чтобы работать с Kubernetes, вы используете *объекты API Kubernetes* для описания *желаемого состояния вашего кластера*: какие приложения или другие рабочие нагрузки вы хотите запустить, какие образы контейнеров они используют, количество реплик, какие сетевые и дисковые ресурсы вы хотите использовать и сделать доступными и многое другое. Вы устанавливаете желаемое состояние, создавая объекты с помощью API Kubernetes, обычно через интерфейс командной строки `kubectl`. Вы также можете напрямую использовать API Kubernetes для взаимодействия с кластером и установки или изменения желаемого состояния.
После того, как вы установили желаемое состояние, *панель управления Kubernetes* заставляет текущее состояние кластера соответствовать желаемому состоянию с помощью генератора событий жизненного цикла подов ([Pod Lifecycle Event Generator, PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для этого Kubernetes автоматически выполняет множество задач, таких как запуск или перезапуск контейнеров, масштабирование количества реплик данного приложения и многое другое. Плоскость управления Kubernetes состоит из набора процессов, запущенных в вашем кластере:
После того, как вы установили желаемое состояние, *плоскость управления Kubernetes* заставляет текущее состояние кластера соответствовать желаемому состоянию с помощью генератора событий жизненного цикла подов ([Pod Lifecycle Event Generator, PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). Для этого Kubernetes автоматически выполняет множество задач, таких как запуск или перезапуск контейнеров, масштабирование количества реплик данного приложения и многое другое. Плоскость управления Kubernetes состоит из набора процессов, запущенных в вашем кластере:
* **Мастер Kubernetes** — это коллекция из трех процессов, которые выполняются на одном узле в вашем кластере, который обозначен как главный узел. Это процессы: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) и [kube-scheduler](/docs/admin/kube-scheduler/).
* Каждый отдельный неосновной узел в вашем кластере выполняет два процесса:
@ -43,11 +43,11 @@ Kubernetes также содержит абстракции более высо
* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
## Панель управления Kubernetes
## Плоскость управления Kubernetes
Различные части панели управления Kubernetes, такие как мастер Kubernetes и процессы kubelet, определяют, как Kubernetes взаимодействует с кластером. Панель управления поддерживает запись всех объектов Kubernetes в системе и запускает непрерывные циклы управления для обработки состояния этих объектов. В любое время циклы управления панели управления будут реагировать на изменения в кластере и работать, чтобы фактическое состояние всех объектов в системе соответствовало желаемому состоянию, которое вы указали.
Различные части панели управления Kubernetes, такие как мастер Kubernetes и процессы kubelet, определяют, как Kubernetes взаимодействует с кластером. Плоскость управления поддерживает запись всех объектов Kubernetes в системе и запускает непрерывные циклы управления для обработки состояния этих объектов. В любое время циклы управления панели управления будут реагировать на изменения в кластере и работать, чтобы фактическое состояние всех объектов в системе соответствовало желаемому состоянию, которое вы указали.
Например, когда вы используете API Kubernetes для создания развертывания, вы предоставляете новое желаемое состояние для системы. Панель управления Kubernetes записывает создание этого объекта и выполняет ваши инструкции, запуская необходимые приложения и планируя их на узлы кластера, чтобы фактическое состояние кластера соответствовало желаемому состоянию.
Например, когда вы используете API Kubernetes для создания развертывания, вы предоставляете новое желаемое состояние для системы. Плоскость управления Kubernetes записывает создание этого объекта и выполняет ваши инструкции, запуская необходимые приложения и планируя их на узлы кластера, чтобы фактическое состояние кластера соответствовало желаемому состоянию.
### Мастер Kubernetes

View File

@ -23,7 +23,7 @@ card:
{{% capture body %}}
## Панель управления компонентами
## Плоскость управления компонентами
Компоненты панели управления отвечают за основные операции кластера (например, планирование), а также обрабатывают события кластера (например, запускают новый {{< glossary_tooltip text="под" term_id="pod">}}, когда поле `replicas` развертывания не соответствует требуемому количеству реплик).

View File

@ -74,7 +74,7 @@ PodList — это список Pod. | Pod List — это список подо
Можно | Нельзя
:--| :-----
_Кластер_ — это набор узлов ... | "Кластер" — это набор узлов ...
Эти компоненты формируют _панель управления_. | Эти компоненты формируют **панель управления**.
Эти компоненты формируют _плоскость управления_. | Эти компоненты формируют **плоскость управления**.
{{< /table >}}
### Оформляйте как код имена файлов, директории и пути

View File

@ -14,4 +14,4 @@ tags:
Набор машин, так называемые узлы, которые запускают контейнеризированные приложения. Кластер имеет как минимум один рабочий узел.
<!--more-->
В рабочих узлах размещены поды, являющиеся компонентами приложения. Панель управления управляет рабочими узлами и подами в кластере. В промышленных средах панель управления обычно запускается на нескольких компьютерах, а кластер, как правило, развёртывается на нескольких узлах, гарантируя отказоустойчивость и высокую надёжность.
В рабочих узлах размещены поды, являющиеся компонентами приложения. Плоскость управления управляет рабочими узлами и подами в кластере. В промышленных средах плоскость управления обычно запускается на нескольких компьютерах, а кластер, как правило, развёртывается на нескольких узлах, гарантируя отказоустойчивость и высокую надёжность.

View File

@ -14,4 +14,4 @@ tags:
<!--more-->
Рабочий узел может быть как виртуальной, так и физической машиной, в зависимости от кластера. У него есть локальные демоны или сервисы, необходимые для запуска {{< glossary_tooltip text="подов" term_id="pod" >}}, а сам он управляется панелью управления. Демоны на узле включают в себя {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} и среду выполнения контейнера, основанную на {{< glossary_tooltip text="CRI" term_id="cri" >}}, например {{< glossary_tooltip term_id="docker" >}}.
Рабочий узел может быть как виртуальной, так и физической машиной, в зависимости от кластера. У него есть локальные демоны или сервисы, необходимые для запуска {{< glossary_tooltip text="подов" term_id="pod" >}}, а сам он управляется плоскостью управления. Демоны на узле включают в себя {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}, {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} и среду выполнения контейнера, основанную на {{< glossary_tooltip text="CRI" term_id="cri" >}}, например {{< glossary_tooltip term_id="docker" >}}.

View File

@ -386,7 +386,7 @@ kubectl config use-context minikube
### Панель управления
Чтобы получить доступ к [панели управления Kubernetes](/docs/tasks/access-application-cluster/web-ui-dashboard/), запустите эту команду в командной оболочке после запуска Minikube, чтобы получить адрес:
Чтобы получить доступ к [веб-панели управления Kubernetes](/docs/tasks/access-application-cluster/web-ui-dashboard/), запустите эту команду в командной оболочке после запуска Minikube, чтобы получить адрес:
```shell
minikube dashboard

View File

@ -8,7 +8,7 @@ menu:
weight: 10
post: >
<p>Готовы испачкать руки? Создайте простой кластер Kubernetes с запуском "Hello World" на Node.js</p>
card:
card:
name: tutorials
weight: 10
---
@ -17,7 +17,7 @@ card:
Это руководство покажет вам, как запустить простое Hello World Node.js приложение
на Kubernetes используя [Minikube](/docs/getting-started-guides/minikube) и Katacoda.
Katacoda предоставляет бесплатную, встроенную в браузер Kubernetes среду.
Katacoda предоставляет бесплатную, встроенную в браузер Kubernetes среду.
{{< note >}}
Вы также можете следовать этому руководству, если вы установили [Minikube locally](/docs/tasks/tools/install-minikube/).
@ -49,13 +49,13 @@ Katacoda предоставляет бесплатную, встроенную
## Создание кластера Minikube
1. Нажмите **Запуск Терминала**
1. Нажмите **Запуск Терминала**
{{< kat-button >}}
{{< note >}}Если у вас локально установлен Minikube, выполните `minikube start`.{{< /note >}}
2. Откройте панель Kubernetes в браузере:
2. Откройте веб-панель Kubernetes в браузере:
```shell
minikube dashboard
@ -111,7 +111,7 @@ Katacoda предоставляет бесплатную, встроенную
```shell
kubectl config view
```
{{< note >}}Больше информации о командах `kubectl` можно найти по ссылке [обзор kubectl](/docs/user-guide/kubectl-overview/).{{< /note >}}
## Создание сервиса
@ -123,7 +123,7 @@ Katacoda предоставляет бесплатную, встроенную
```shell
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
```
Флаг `--type=LoadBalancer` показывает, что сервис должен быть виден вне кластера.
2. Посмотреть только что созданный сервис:
@ -150,7 +150,7 @@ Katacoda предоставляет бесплатную, встроенную
4. Только для окружения Katacoda: Нажмите на знак "Плюс", затем нажмите **Select port to view on Host 1**.
5. Только для окружения Katacoda: Введите `30369` (порт указан рядом с `8080` в выводе сервиса), затем нажмите ???.
5. Только для окружения Katacoda: Введите `30369` (порт указан рядом с `8080` в выводе сервиса), затем нажмите ???.
Откроется окно браузера, в котором запущено ваше приложение и будет отображено сообщение "Hello World".
@ -186,13 +186,13 @@ Katacoda предоставляет бесплатную, встроенную
storage-provisioner: enabled
storage-provisioner-gluster: disabled
```
2. Включить дополнение, например, `metrics-server`:
```shell
minikube addons enable metrics-server
```
Вывод:
```shell
@ -233,7 +233,7 @@ Katacoda предоставляет бесплатную, встроенную
```shell
minikube addons disable metrics-server
```
Вывод:
```shell

View File

@ -14,6 +14,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.8
image: nginx:1.14.2
ports:
- containerPort: 80

View File

@ -14,6 +14,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.8 # Update the version of nginx from 1.7.9 to 1.8
image: nginx:1.16.1 # Update the version of nginx from 1.14.2 to 1.16.1
ports:
- containerPort: 80

View File

@ -14,6 +14,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.7.9
image: nginx:1.14.2
ports:
- containerPort: 80

View File

@ -20,7 +20,7 @@ spec:
spec:
containers:
- name: slave
image: gcr.io/google_samples/gb-redisslave:v1
image: gcr.io/google_samples/gb-redisslave:v3
resources:
requests:
cpu: 100m

View File

@ -1,5 +1,5 @@
kind: PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:

View File

@ -106,16 +106,16 @@ spec:
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info ]]; then
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave.
mv xtrabackup_slave_info change_master_to.sql.in
# because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_binlog_info
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm xtrabackup_binlog_info
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
@ -126,16 +126,15 @@ spec:
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
mysql -h 127.0.0.1 <<EOF
$(<change_master_to.sql.orig),
MASTER_HOST='mysql-0.mysql',
MASTER_USER='root',
MASTER_PASSWORD='',
MASTER_CONNECT_RETRY=10;
START SLAVE;
EOF
fi
# Start a server to send backups when requested by peers.

View File

@ -29,6 +29,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.7.9
image: nginx:1.14.2
ports:
- containerPort: 80

View File

@ -14,6 +14,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.7.9
image: nginx:1.14.2
ports:
- containerPort: 80

View File

@ -12,3 +12,5 @@ spec:
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
hostNetwork: true
dnsPolicy: Default

View File

@ -14,6 +14,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.7.9
image: nginx:1.14.2
ports:
- containerPort: 80

View File

@ -13,6 +13,6 @@ spec:
spec:
containers:
- name: nginx
image: nginx:1.11.9 # update the image
image: nginx:1.16.1 # update the image
ports:
- containerPort: 80

View File

@ -49,7 +49,7 @@ spec:
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: Parallel
podManagementPolicy: OrderedReady
template:
metadata:
labels:

View File

@ -1,4 +1,4 @@
apiVersion: audit.k8s.io/v1beta1 # This is required.
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
@ -65,4 +65,4 @@ rules:
# Long-running requests like watches that fall under this rule will not
# generate an audit event in RequestReceived.
omitStages:
- "RequestReceived"
- "RequestReceived"

View File

@ -1,42 +1,44 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd-elasticsearch
image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers

View File

@ -1,38 +1,21 @@
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
matchExpressions:
- {key: tier, operator: In, values: [frontend]}
template:
metadata:
labels:
app: guestbook
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# If your cluster config does not include a dns service, then to
# instead access environment variables to find service host
# info, comment out the 'value: dns' line above, and uncomment the
# line below.
# value: env
ports:
- containerPort: 80
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3

View File

@ -1,21 +1,21 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.15.4
ports:
- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80

View File

@ -1,379 +1,379 @@
kind: ConfigMap
apiVersion: v1
data:
containers.input.conf: |-
# This configuration file for Fluentd is used
# to watch changes to Docker log files that live in the
# directory /var/lib/docker/containers/ and are symbolically
# linked to from the /var/log/containers directory using names that capture the
# pod name and container name. These logs are then submitted to
# Google Cloud Logging which assumes the installation of the cloud-logging plug-in.
#
# Example
# =======
# A line in the Docker log file might look like this JSON:
#
# {"log":"2014/09/25 21:15:03 Got request with path wombat\\n",
# "stream":"stderr",
# "time":"2014-09-25T21:15:03.499185026Z"}
#
# The record reformer is used to write the tag to focus on the pod name
# and the Kubernetes container name. For example a Docker container's logs
# might be in the directory:
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
# and in the file:
# 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
# where 997599971ee6... is the Docker ID of the running container.
# The Kubernetes kubelet makes a symbolic link to this file on the host machine
# in the /var/log/containers directory which includes the pod name and the Kubernetes
# container name:
# synthetic-logger-0.25lps-pod_default-synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
# ->
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
# The /var/log directory on the host is mapped to the /var/log directory in the container
# running this instance of Fluentd and we end up collecting the file:
# /var/log/containers/synthetic-logger-0.25lps-pod_default-synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
# This results in the tag:
# var.log.containers.synthetic-logger-0.25lps-pod_default-synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    # The record reformer is used to discard the var.log.containers prefix and
    # the Docker container ID suffix, and "kubernetes." is prepended, giving the tag:
    # kubernetes.synthetic-logger-0.25lps-pod_default-synth-lgr
    # The tag is then parsed by the google_cloud plugin and translated into the metadata
    # visible in the log viewer.
# Example:
# {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
<source>
type tail
format json
time_key time
path /var/log/containers/*.log
pos_file /var/log/gcp-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S.%N%Z
tag reform.*
read_from_head true
</source>
<filter reform.**>
type parser
format /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<log>.*)/
reserve_data true
suppress_parse_error_log true
key_name log
</filter>
<match reform.**>
type record_reformer
enable_ruby true
tag raw.kubernetes.${tag_suffix[4].split('-')[0..-2].join('-')}
</match>
# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
@type copy
<store>
@type prometheus
<metric>
type counter
name logging_line_count
desc Total number of lines generated by application containers
<labels>
tag ${tag}
</labels>
</metric>
</store>
<store>
@type detect_exceptions
remove_tag_prefix raw
message log
stream stream
multiline_flush_interval 5
max_bytes 500000
max_lines 1000
</store>
</match>
system.input.conf: |-
# Example:
# Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
<source>
type tail
format syslog
path /var/log/startupscript.log
pos_file /var/log/gcp-startupscript.log.pos
tag startupscript
</source>
# Examples:
# time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
# time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
<source>
type tail
format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
path /var/log/docker.log
pos_file /var/log/gcp-docker.log.pos
tag docker
</source>
# Example:
# 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
<source>
type tail
# Not parsing this, because it doesn't have anything particularly useful to
# parse out of it (like severities).
format none
path /var/log/etcd.log
pos_file /var/log/gcp-etcd.log.pos
tag etcd
</source>
# Multi-line parsing is required for all the kube logs because very large log
# statements, such as those that include entire object bodies, get split into
# multiple lines by glog.
# Example:
# I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kubelet.log
pos_file /var/log/gcp-kubelet.log.pos
tag kubelet
</source>
# Example:
# I1118 21:26:53.975789 6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-proxy.log
pos_file /var/log/gcp-kube-proxy.log.pos
tag kube-proxy
</source>
# Example:
# I0204 07:00:19.604280 5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-apiserver.log
pos_file /var/log/gcp-kube-apiserver.log.pos
tag kube-apiserver
</source>
# Example:
# 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
# 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\S+\s+AUDIT:/
# Fields must be explicitly captured by name to be parsed into the record.
# Fields may not always be present, and order may change, so this just looks
# for a list of key="\"quoted\" value" pairs separated by spaces.
# Unknown fields are ignored.
# Note: We can't separate query/response lines as format1/format2 because
# they don't always come one after the other for a given query.
# TODO: Maybe add a JSON output mode to audit log so we can get rid of this?
format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
time_format %FT%T.%L%Z
path /var/log/kube-apiserver-audit.log
pos_file /var/log/gcp-kube-apiserver-audit.log.pos
tag kube-apiserver-audit
</source>
# Example:
# I0204 06:55:31.872680 5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kubernetes-dashboard
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-controller-manager.log
pos_file /var/log/gcp-kube-controller-manager.log.pos
tag kube-controller-manager
</source>
# Example:
# W0204 06:49:18.239674 7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-scheduler.log
pos_file /var/log/gcp-kube-scheduler.log.pos
tag kube-scheduler
</source>
# Example:
# I1104 10:36:20.242766 5 rescheduler.go:73] Running Rescheduler
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/rescheduler.log
pos_file /var/log/gcp-rescheduler.log.pos
tag rescheduler
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/glbc.log
pos_file /var/log/gcp-glbc.log.pos
tag glbc
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/cluster-autoscaler.log
pos_file /var/log/gcp-cluster-autoscaler.log.pos
tag cluster-autoscaler
</source>
# Logs from systemd-journal for interesting services.
<source>
type systemd
filters [{ "_SYSTEMD_UNIT": "docker.service" }]
pos_file /var/log/gcp-journald-docker.pos
read_from_head true
tag docker
</source>
<source>
type systemd
filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
pos_file /var/log/gcp-journald-kubelet.pos
read_from_head true
tag kubelet
</source>
monitoring.conf: |-
# Prometheus monitoring
<source>
@type prometheus
port 80
</source>
<source>
@type prometheus_monitor
</source>
output.conf: |-
# We use 2 output stanzas - one to handle the container logs and one to handle
# the node daemon logs, the latter of which explicitly sends its logs to the
# compute.googleapis.com service rather than container.googleapis.com to keep
# them separate since most users don't care about the node logs.
<match kubernetes.**>
@type copy
<store>
@type google_cloud
# Set the buffer type to file to improve the reliability and reduce the memory consumption
buffer_type file
buffer_path /var/log/fluentd-buffers/kubernetes.containers.buffer
        # Set queue_full action to block because we want to pause gracefully
        # in case of load beyond the limits instead of throwing an exception
buffer_queue_full_action block
# Set the chunk limit conservatively to avoid exceeding the GCL limit
# of 10MiB per write request.
buffer_chunk_limit 2M
# Cap the combined memory usage of this buffer and the one below to
# 2MiB/chunk * (6 + 2) chunks = 16 MiB
buffer_queue_limit 6
# Never wait more than 5 seconds before flushing logs in the non-error case.
flush_interval 5s
# Never wait longer than 30 seconds between retries.
max_retry_wait 30
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
# Use multiple threads for processing.
num_threads 2
</store>
<store>
@type prometheus
<metric>
type counter
name logging_entry_count
desc Total number of log entries generated by either an application container or a system component
<labels>
tag ${tag}
component container
</labels>
</metric>
</store>
</match>
# Keep a smaller buffer here since these logs are less important than the user's
# container logs.
<match **>
@type copy
<store>
@type google_cloud
detect_subservice false
buffer_type file
buffer_path /var/log/fluentd-buffers/kubernetes.system.buffer
buffer_queue_full_action block
buffer_chunk_limit 2M
buffer_queue_limit 2
flush_interval 5s
max_retry_wait 30
disable_retry_limit
num_threads 2
</store>
<store>
@type prometheus
<metric>
type counter
name logging_entry_count
desc Total number of log entries generated by either an application container or a system component
<labels>
tag ${tag}
component system
</labels>
</metric>
</store>
</match>
metadata:
name: fluentd-gcp-config
labels:
addonmanager.kubernetes.io/mode: Reconcile
apiVersion: v1
kind: ConfigMap
data:
containers.input.conf: |-
# This configuration file for Fluentd is used
# to watch changes to Docker log files that live in the
# directory /var/lib/docker/containers/ and are symbolically
# linked to from the /var/log/containers directory using names that capture the
# pod name and container name. These logs are then submitted to
# Google Cloud Logging which assumes the installation of the cloud-logging plug-in.
#
# Example
# =======
# A line in the Docker log file might look like this JSON:
#
# {"log":"2014/09/25 21:15:03 Got request with path wombat\\n",
# "stream":"stderr",
# "time":"2014-09-25T21:15:03.499185026Z"}
#
# The record reformer is used to write the tag to focus on the pod name
# and the Kubernetes container name. For example a Docker container's logs
# might be in the directory:
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
# and in the file:
# 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
# where 997599971ee6... is the Docker ID of the running container.
# The Kubernetes kubelet makes a symbolic link to this file on the host machine
# in the /var/log/containers directory which includes the pod name and the Kubernetes
# container name:
# synthetic-logger-0.25lps-pod_default-synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
# ->
# /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
# The /var/log directory on the host is mapped to the /var/log directory in the container
# running this instance of Fluentd and we end up collecting the file:
# /var/log/containers/synthetic-logger-0.25lps-pod_default-synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
# This results in the tag:
# var.log.containers.synthetic-logger-0.25lps-pod_default-synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    # The record reformer is used to discard the var.log.containers prefix and
    # the Docker container ID suffix, and "kubernetes." is prepended, giving the tag:
    # kubernetes.synthetic-logger-0.25lps-pod_default-synth-lgr
    # The tag is then parsed by the google_cloud plugin and translated into the metadata
    # visible in the log viewer.
# Example:
# {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
<source>
type tail
format json
time_key time
path /var/log/containers/*.log
pos_file /var/log/gcp-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S.%N%Z
tag reform.*
read_from_head true
</source>
<filter reform.**>
type parser
format /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<log>.*)/
reserve_data true
suppress_parse_error_log true
key_name log
</filter>
<match reform.**>
type record_reformer
enable_ruby true
tag raw.kubernetes.${tag_suffix[4].split('-')[0..-2].join('-')}
</match>
# Detect exceptions in the log output and forward them as one log entry.
<match raw.kubernetes.**>
@type copy
<store>
@type prometheus
<metric>
type counter
name logging_line_count
desc Total number of lines generated by application containers
<labels>
tag ${tag}
</labels>
</metric>
</store>
<store>
@type detect_exceptions
remove_tag_prefix raw
message log
stream stream
multiline_flush_interval 5
max_bytes 500000
max_lines 1000
</store>
</match>
system.input.conf: |-
# Example:
# Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
<source>
type tail
format syslog
path /var/log/startupscript.log
pos_file /var/log/gcp-startupscript.log.pos
tag startupscript
</source>
# Examples:
# time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
# time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
<source>
type tail
format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
path /var/log/docker.log
pos_file /var/log/gcp-docker.log.pos
tag docker
</source>
# Example:
# 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
<source>
type tail
# Not parsing this, because it doesn't have anything particularly useful to
# parse out of it (like severities).
format none
path /var/log/etcd.log
pos_file /var/log/gcp-etcd.log.pos
tag etcd
</source>
# Multi-line parsing is required for all the kube logs because very large log
# statements, such as those that include entire object bodies, get split into
# multiple lines by glog.
# Example:
# I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kubelet.log
pos_file /var/log/gcp-kubelet.log.pos
tag kubelet
</source>
# Example:
# I1118 21:26:53.975789 6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-proxy.log
pos_file /var/log/gcp-kube-proxy.log.pos
tag kube-proxy
</source>
# Example:
# I0204 07:00:19.604280 5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-apiserver.log
pos_file /var/log/gcp-kube-apiserver.log.pos
tag kube-apiserver
</source>
# Example:
# 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
# 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\S+\s+AUDIT:/
# Fields must be explicitly captured by name to be parsed into the record.
# Fields may not always be present, and order may change, so this just looks
# for a list of key="\"quoted\" value" pairs separated by spaces.
# Unknown fields are ignored.
# Note: We can't separate query/response lines as format1/format2 because
# they don't always come one after the other for a given query.
# TODO: Maybe add a JSON output mode to audit log so we can get rid of this?
format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
time_format %FT%T.%L%Z
path /var/log/kube-apiserver-audit.log
pos_file /var/log/gcp-kube-apiserver-audit.log.pos
tag kube-apiserver-audit
</source>
# Example:
# I0204 06:55:31.872680 5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kubernetes-dashboard
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-controller-manager.log
pos_file /var/log/gcp-kube-controller-manager.log.pos
tag kube-controller-manager
</source>
# Example:
# W0204 06:49:18.239674 7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/kube-scheduler.log
pos_file /var/log/gcp-kube-scheduler.log.pos
tag kube-scheduler
</source>
# Example:
# I1104 10:36:20.242766 5 rescheduler.go:73] Running Rescheduler
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/rescheduler.log
pos_file /var/log/gcp-rescheduler.log.pos
tag rescheduler
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/glbc.log
pos_file /var/log/gcp-glbc.log.pos
tag glbc
</source>
# Example:
# I0603 15:31:05.793605 6 cluster_manager.go:230] Reading config from path /etc/gce.conf
<source>
type tail
format multiline
multiline_flush_interval 5s
format_firstline /^\w\d{4}/
format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
time_format %m%d %H:%M:%S.%N
path /var/log/cluster-autoscaler.log
pos_file /var/log/gcp-cluster-autoscaler.log.pos
tag cluster-autoscaler
</source>
# Logs from systemd-journal for interesting services.
<source>
type systemd
filters [{ "_SYSTEMD_UNIT": "docker.service" }]
pos_file /var/log/gcp-journald-docker.pos
read_from_head true
tag docker
</source>
<source>
type systemd
filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
pos_file /var/log/gcp-journald-kubelet.pos
read_from_head true
tag kubelet
</source>
monitoring.conf: |-
# Prometheus monitoring
<source>
@type prometheus
port 80
</source>
<source>
@type prometheus_monitor
</source>
output.conf: |-
# We use 2 output stanzas - one to handle the container logs and one to handle
# the node daemon logs, the latter of which explicitly sends its logs to the
# compute.googleapis.com service rather than container.googleapis.com to keep
# them separate since most users don't care about the node logs.
<match kubernetes.**>
@type copy
<store>
@type google_cloud
# Set the buffer type to file to improve the reliability and reduce the memory consumption
buffer_type file
buffer_path /var/log/fluentd-buffers/kubernetes.containers.buffer
        # Set queue_full action to block because we want to pause gracefully
        # in case of load beyond the limits instead of throwing an exception
buffer_queue_full_action block
# Set the chunk limit conservatively to avoid exceeding the GCL limit
# of 10MiB per write request.
buffer_chunk_limit 2M
# Cap the combined memory usage of this buffer and the one below to
# 2MiB/chunk * (6 + 2) chunks = 16 MiB
buffer_queue_limit 6
# Never wait more than 5 seconds before flushing logs in the non-error case.
flush_interval 5s
# Never wait longer than 30 seconds between retries.
max_retry_wait 30
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
# Use multiple threads for processing.
num_threads 2
</store>
<store>
@type prometheus
<metric>
type counter
name logging_entry_count
desc Total number of log entries generated by either an application container or a system component
<labels>
tag ${tag}
component container
</labels>
</metric>
</store>
</match>
# Keep a smaller buffer here since these logs are less important than the user's
# container logs.
<match **>
@type copy
<store>
@type google_cloud
detect_subservice false
buffer_type file
buffer_path /var/log/fluentd-buffers/kubernetes.system.buffer
buffer_queue_full_action block
buffer_chunk_limit 2M
buffer_queue_limit 2
flush_interval 5s
max_retry_wait 30
disable_retry_limit
num_threads 2
</store>
<store>
@type prometheus
<metric>
type counter
name logging_entry_count
desc Total number of log entries generated by either an application container or a system component
<labels>
tag ${tag}
component system
</labels>
</metric>
</store>
</match>
metadata:
name: fluentd-gcp-config
labels:
addonmanager.kubernetes.io/mode: Reconcile
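
A ConfigMap like this is normally consumed by mounting it into the Fluentd DaemonSet's Pods. A minimal, hypothetical sketch of the relevant pod-spec fragment (the container name and mount path are assumptions, not taken from this change):

spec:
  containers:
  - name: fluentd-gcp                      # assumed container name
    volumeMounts:
    - name: config-volume
      mountPath: /etc/fluent/config.d      # assumed path the Fluentd image reads from
  volumes:
  - name: config-volume
    configMap:
      name: fluentd-gcp-config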


@ -1,37 +1,31 @@
apiVersion: v1
kind: Pod
metadata:
name: website
labels:
app: website
role: frontend
annotations:
podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
spec:
containers:
- name: website
image: nginx
volumeMounts:
- mountPath: /cache
name: cache-volume
- mountPath: /etc/app/config.json
readOnly: true
name: secret-volume
ports:
- containerPort: 80
env:
- name: DB_PORT
value: "6379"
- name: duplicate_key
value: FROM_ENV
- name: expansion
value: $(REPLACE_ME)
envFrom:
- configMapRef:
name: etcd-env-config
volumes:
- name: cache-volume
emptyDir: {}
- name: secret-volume
secret:
secretName: config-details
apiVersion: v1
kind: Pod
metadata:
name: website
labels:
app: website
role: frontend
annotations:
podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
spec:
containers:
- name: website
image: nginx
volumeMounts:
- mountPath: /cache
name: cache-volume
ports:
- containerPort: 80
env:
- name: DB_PORT
value: "6379"
- name: duplicate_key
value: FROM_ENV
- name: expansion
value: $(REPLACE_ME)
envFrom:
- configMapRef:
name: etcd-env-config
volumes:
- name: cache-volume
emptyDir: {}


@ -1,30 +1,24 @@
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
name: allow-database
spec:
selector:
matchLabels:
role: frontend
env:
- name: DB_PORT
value: "6379"
- name: duplicate_key
value: FROM_ENV
- name: expansion
value: $(REPLACE_ME)
envFrom:
- configMapRef:
name: etcd-env-config
volumeMounts:
- mountPath: /cache
name: cache-volume
- mountPath: /etc/app/config.json
readOnly: true
name: secret-volume
volumes:
- name: cache-volume
emptyDir: {}
- name: secret-volume
secret:
secretName: config-details
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
name: allow-database
spec:
selector:
matchLabels:
role: frontend
env:
- name: DB_PORT
value: "6379"
- name: duplicate_key
value: FROM_ENV
- name: expansion
value: $(REPLACE_ME)
envFrom:
- configMapRef:
name: etcd-env-config
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
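
The PodPreset above applies to Pods whose labels match role: frontend, which is why the website Pod in the previous file picks up the injected env, envFrom, and volume. A minimal sketch of another Pod that this preset would also modify (the Pod and container names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: frontend-sample          # hypothetical name
  labels:
    role: frontend               # matches the preset's selector
spec:
  containers:
  - name: app                    # hypothetical container name
    image: nginx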


@ -5,7 +5,10 @@ metadata:
spec:
containers:
- name: redis
image: kubernetes/redis:v1
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"


@ -30,7 +30,6 @@ spec:
volumeMounts:
- name: podinfo
mountPath: /etc/podinfo
readOnly: false
volumes:
- name: podinfo
downwardAPI:


@ -25,7 +25,6 @@ spec:
volumeMounts:
- name: podinfo
mountPath: /etc/podinfo
readOnly: false
volumes:
- name: podinfo
downwardAPI:


@ -7,9 +7,9 @@ spec:
- name: test-container
image: nginx
volumeMounts:
# name must match the volume name below
- name: secret-volume
mountPath: /etc/secret-volume
# name must match the volume name below
- name: secret-volume
mountPath: /etc/secret-volume
# The secret data is exposed to Containers in the Pod through a Volume.
volumes:
- name: secret-volume


@ -3,5 +3,5 @@ kind: Secret
metadata:
name: test-secret
data:
username: bXktYXBwCg==
password: Mzk1MjgkdmRnN0piCg==
username: bXktYXBw
password: Mzk1MjgkdmRnN0pi
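
The data values above are base64-encoded (now without a trailing newline). As an alternative sketch, the same kind of Secret can be written with the stringData field, which accepts plain text and lets the API server handle the encoding (the password shown is a placeholder, not the decoded value from above):

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
stringData:
  username: my-app
  password: "<plain-text-password>"   # placeholder value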


@ -1,5 +1,5 @@
kind: Pod
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
@ -9,7 +9,7 @@ spec:
volumeMounts:
- mountPath: /var/run/secrets/tokens
name: vault-token
serviceAccountName: acct
serviceAccountName: build-robot
volumes:
- name: vault-token
projected:


@ -15,7 +15,7 @@ spec:
path: /healthz
port: 8080
httpHeaders:
- name: X-Custom-Header
- name: Custom-Header
value: Awesome
initialDelaySeconds: 3
periodSeconds: 3


@ -5,13 +5,15 @@ metadata:
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
image: busybox
command: [ "sh", "-c", "sleep 1h" ]
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
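
In examples like the one above, a container-level securityContext overrides the corresponding pod-level fields for that container only. A minimal illustrative sketch (the container-level value of 2000 is an assumption, not part of this change):

spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsUser: 2000    # this container runs as 2000, overriding the pod-level 1000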


@ -5,6 +5,6 @@ metadata:
spec:
containers:
- name: nginx
image: nginx:1.7.9
image: nginx:1.14.2
ports:
- containerPort: 80


@ -1,5 +1,5 @@
kind: PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:


@ -1,12 +1,12 @@
kind: Pod
apiVersion: v1
kind: Pod
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx


@ -1,5 +1,5 @@
kind: PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:


@ -1,48 +1,48 @@
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'MustRunAsNonRoot'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'MustRunAsNonRoot'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
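
A PodSecurityPolicy only has an effect once the requesting user or the Pod's service account is authorized to use it. A minimal RBAC sketch granting use of the restricted policy above (the ClusterRole name is an assumption):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted      # assumed name
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['restricted']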


@ -1,12 +1,12 @@
kind: Service
apiVersion: v1
metadata:
name: hello
spec:
selector:
app: hello
tier: backend
ports:
- protocol: TCP
port: 80
targetPort: http
apiVersion: v1
kind: Service
metadata:
name: hello
spec:
selector:
app: hello
tier: backend
ports:
- protocol: TCP
port: 80
targetPort: http


@ -1,9 +1,9 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80


@ -1,46 +1,51 @@
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
type: NodePort
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
- port: 443
protocol: TCP
name: https
selector:
run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 1
template:
metadata:
labels:
run: my-nginx
spec:
volumes:
- name: secret-volume
secret:
secretName: nginxsecret
containers:
- name: nginxhttps
image: bprashanth/nginxhttps:1.0
ports:
- containerPort: 443
- containerPort: 80
volumeMounts:
- mountPath: /etc/nginx/ssl
name: secret-volume
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
type: NodePort
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
- port: 443
protocol: TCP
name: https
selector:
run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 1
template:
metadata:
labels:
run: my-nginx
spec:
volumes:
- name: secret-volume
secret:
secretName: nginxsecret
- name: configmap-volume
configMap:
name: nginxconfigmap
containers:
- name: nginxhttps
image: bprashanth/nginxhttps:1.0
ports:
- containerPort: 443
- containerPort: 80
volumeMounts:
- mountPath: /etc/nginx/ssl
name: secret-volume
- mountPath: /etc/nginx/conf.d
name: configmap-volume
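
The updated Deployment above expects a ConfigMap named nginxconfigmap to exist and mounts it at /etc/nginx/conf.d. A minimal sketch of what such a ConfigMap could look like (the server block contents are illustrative, not taken from this change):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxconfigmap
data:
  default.conf: |
    server {
      listen 80 default_server;
      location / {
        root /usr/share/nginx/html;
      }
    }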


@ -1,20 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: my-empty-dir-pod
spec:
containers:
- image: microsoft/windowsservercore:1709
name: my-empty-dir-pod
volumeMounts:
- mountPath: /cache
name: cache-volume
- mountPath: C:/scratch
name: scratch-volume
volumes:
- name: cache-volume
emptyDir: {}
- name: scratch-volume
emptyDir: {}
nodeSelector:
beta.kubernetes.io/os: windows
apiVersion: v1
kind: Pod
metadata:
name: my-empty-dir-pod
spec:
containers:
- image: microsoft/windowsservercore:1709
name: my-empty-dir-pod
volumeMounts:
- mountPath: /cache
name: cache-volume
- mountPath: C:/scratch
name: scratch-volume
volumes:
- name: cache-volume
emptyDir: {}
- name: scratch-volume
emptyDir: {}
nodeSelector:
beta.kubernetes.io/os: windows


@ -1,17 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
name: run-as-username-container-demo
spec:
securityContext:
windowsOptions:
runAsUserName: "ContainerUser"
containers:
- name: run-as-username-demo
image: mcr.microsoft.com/windows/servercore:ltsc2019
command: ["ping", "-t", "localhost"]
securityContext:
windowsOptions:
runAsUserName: "ContainerAdministrator"
nodeSelector:
beta.kubernetes.io/os: windows
apiVersion: v1
kind: Pod
metadata:
name: run-as-username-container-demo
spec:
securityContext:
windowsOptions:
runAsUserName: "ContainerUser"
containers:
- name: run-as-username-demo
image: mcr.microsoft.com/windows/servercore:ltsc2019
command: ["ping", "-t", "localhost"]
securityContext:
windowsOptions:
runAsUserName: "ContainerAdministrator"
nodeSelector:
kubernetes.io/os: windows


@ -1,14 +1,14 @@
apiVersion: v1
kind: Pod
metadata:
name: run-as-username-pod-demo
spec:
securityContext:
windowsOptions:
runAsUserName: "ContainerUser"
containers:
- name: run-as-username-demo
image: mcr.microsoft.com/windows/servercore:ltsc2019
command: ["ping", "-t", "localhost"]
nodeSelector:
beta.kubernetes.io/os: windows
apiVersion: v1
kind: Pod
metadata:
name: run-as-username-pod-demo
spec:
securityContext:
windowsOptions:
runAsUserName: "ContainerUser"
containers:
- name: run-as-username-demo
image: mcr.microsoft.com/windows/servercore:ltsc2019
command: ["ping", "-t", "localhost"]
nodeSelector:
kubernetes.io/os: windows


@ -186,3 +186,6 @@ other = "次の項目"
[warning]
other = "警告:"
[input_placeholder_email_address]
other = "メールアドレス"


@ -22,7 +22,7 @@
{{- if .Params.deprecated }}
<link rel="stylesheet" href="{{ "css/deprecation-warning.css" | relURL }}">
{{- end }}
{{- if eq .Params.class "gridPage" }}
{{- if or (eq .Params.class "gridPage") (eq .Params.class "gridPage gridPageHome") }}
<link rel="stylesheet" href="{{ "css/gridpage.css" | relURL }}">
{{- end }}
{{- if eq .Params.class "training" }}


@ -33,12 +33,14 @@ import shutil
import subprocess
import sys
import tempfile
import platform
error_msgs = []
# pip should be installed when Python is installed, but just in case...
if not (shutil.which('pip') or shutil.which('pip3')):
error_msgs.append("Install pip so you can install PyYAML. https://pip.pypa.io/en/stable/installing")
error_msgs.append(
"Install pip so you can install PyYAML. https://pip.pypa.io/en/stable/installing")
reqs = subprocess.check_output([sys.executable, '-m', 'pip', 'freeze'])
installed_packages = [r.decode().split('==')[0] for r in reqs.split()]
@ -203,7 +205,9 @@ def main():
# create the temp work_dir
try:
print("Making temp work_dir")
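        # Use /tmp as the parent for the temp work_dir on macOS; elsewhere use the platform default.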
work_dir = tempfile.mkdtemp()
work_dir = tempfile.mkdtemp(
dir='/tmp' if platform.system() == 'Darwin' else tempfile.gettempdir()
)
except OSError as ose:
print("[Error] Unable to create temp work_dir {}; error: {}"
.format(work_dir, ose))