resolving conflicts
commit 0ebc16c10c
@@ -40,6 +40,7 @@ aliases:
- rlenferink
sig-docs-en-owners: # Admins for English content
- bradtopol
- celestehorgan
- daminisatya
- jimangel
- kbarnard10

@@ -57,6 +58,7 @@ aliases:
- zparnold
sig-docs-en-reviews: # PR reviews for English content
- bradtopol
- celestehorgan
- daminisatya
- jimangel
- kbarnard10

@@ -233,4 +235,4 @@ aliases:
- butuzov
- idvoretskyi
- MaxymVlasov
- Potapy4
@@ -55,7 +55,7 @@ hugo server --buildFuture
For more information about contributing to the Kubernetes documentation, see the following sources:

* [Contributing: getting started](https://kubernetes.io/docs/contribute/)
* [Using page templates](https://kubernetes.io/docs/contribute/style/page-content-types/)
* [Documentation style guide](http://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes documentation](https://kubernetes.io/docs/contribute/localization/)
README.md
@@ -8,12 +8,27 @@ This repository contains the assets required to build the [Kubernetes website an

See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.

Before building the site, clone the Kubernetes website repository:

```bash
git clone https://github.com/kubernetes/website.git
cd website
git submodule update --init --recursive --depth 1
```

**Note:** The Kubernetes website deploys the [Docsy Hugo theme](https://github.com/google/docsy#readme).
If you have not updated your website repository, the `website/themes/docsy` directory is empty. The site cannot build
without a local copy of the theme.

Update the website theme:

```bash
git submodule update --init --recursive --depth 1
```

To build and test the site locally, run:

```bash
hugo server --buildFuture
```
@@ -0,0 +1,12 @@
---
title: "{{ replace .Name "-" " " | title }}"
content_type: concept
---

<!-- overview -->

<!-- body -->

<!-- Optional section; add links to information related to this topic. -->

## {{% heading "whatsnext" %}}
@@ -0,0 +1,21 @@
---
title: "{{ replace .Name "-" " " | title }}"
content_type: task
---

<!-- overview -->

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- If you set the min-kubernetes-server-version parameter in the page's front matter,
add the version check shortcode {{< version-check >}}.
-->

<!-- steps -->

<!-- discussion -->

<!-- Optional section; add links to information related to this topic. -->
## {{% heading "whatsnext" %}}
@@ -0,0 +1,17 @@
---
title: "{{ replace .Name "-" " " | title }}"
content_type: tutorial
---

<!-- overview -->

## {{% heading "prerequisites" %}}

## {{% heading "objectives" %}}

<!-- lessoncontent -->

## {{% heading "cleanup" %}}

<!-- Optional section; add links to information related to this topic. -->
## {{% heading "whatsnext" %}}
@@ -223,7 +223,7 @@ no = 'Sorry to hear that. Please <a href="https://github.com/USERNAME/REPOSITORY
url = "https://discuss.kubernetes.io"
icon = "fa fa-envelope"
desc = "Discussion and help from your fellow users"

[[params.links.user]]
name = "Twitter"
url = "https://twitter.com/kubernetesio"

@@ -260,7 +260,7 @@ no = 'Sorry to hear that. Please <a href="https://github.com/USERNAME/REPOSITORY
url = "https://git.k8s.io/community/contributors/guide"
icon = "fas fa-edit"
desc = "Contribute to the Kubernetes website"

[[params.links.developer]]
name = "Stack Overflow"
url = "https://stackoverflow.com/questions/tagged/kubernetes"

@@ -301,7 +301,7 @@ language_alternatives = ["en"]

[languages.ja]
title = "Kubernetes"
description = "プロダクショングレードのコンテナ管理基盤"
languageName = "日本語 Japanese"
weight = 4
contentDir = "content/ja"
@@ -229,10 +229,25 @@ other = "ICH BIN..."
```
Localizing site strings lets you customize site-wide text and features: for example, the legal copyright text in the footer on each page.

## Language-specific style guide

Some language teams have their own language-specific style guide and glossary. For example, see the [Korean localization guide](/ko/docs/contribute/localization_ko/).

### Informal style

For the German translation we use an informal style and address the reader as `Du`. However, we do not use jargon, slang, puns, idioms, or culture-specific references.

### Dates and units of measurement

Where necessary, dates should be converted into the dd.mm.yyyy format customary in Germany. Alternatively, they can be worked into the flow of the text: "... on April 24 ...".

### Abbreviations

Abbreviations should be avoided where possible; write the term out in full or rephrase to work around it.

### Compound words

Translation often strings nouns together; such word chains must be joined with hyphens. This also applies when one part is translated into German while another remains in English. The [Duden](https://www.duden.de/sprachwissen/rechtschreibregeln/bindestrich) serves as the guideline here.

### Anglicisms

Using an anglicism is preferable when a German word, especially for a technical term, would be ambiguous or cause confusion.

## Branching strategy

Since localization projects are highly collaborative efforts, we encourage teams to work in shared development branches.
@@ -0,0 +1,59 @@
---
layout: blog
title: "Working with Terraform and Kubernetes"
date: 2020-06-29
slug: working-with-terraform-and-kubernetes
url: /blog/2020/06/working-with-terraform-and-kubernetes
---

**Author:** [Philipp Strube](https://twitter.com/pst418), Kubestack

Maintaining Kubestack, an open-source [Terraform GitOps Framework](https://www.kubestack.com/lp/terraform-gitops-framework) for Kubernetes, I unsurprisingly spend a lot of time working with Terraform and Kubernetes. Kubestack provisions managed Kubernetes services like AKS, EKS and GKE using Terraform, but also integrates cluster services from Kustomize bases into the GitOps workflow. Think of cluster services as everything that's required on your Kubernetes cluster before you can deploy application workloads.

Hashicorp recently announced [better integration between Terraform and Kubernetes](https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-kubernetes-provider-for-hashicorp-terraform/). I took this as an opportunity to give an overview of how Terraform can be used with Kubernetes today and what to be aware of.

In this post, however, I will focus only on using Terraform to provision Kubernetes API resources, not Kubernetes clusters.

[Terraform](https://www.terraform.io/intro/index.html) is a popular infrastructure-as-code solution, so I will only introduce it very briefly here. In a nutshell, Terraform allows declaring a desired state for resources as code, and will determine and execute a plan to take the infrastructure from its current state to the desired state.

To be able to support different resources, Terraform requires providers that integrate the respective API. So, to create Kubernetes resources we need a Kubernetes provider. Here are our options:

## Terraform `kubernetes` provider (official)

First, the [official Kubernetes provider](https://github.com/hashicorp/terraform-provider-kubernetes). This provider is undoubtedly the most mature of the three. However, it comes with a big caveat that's probably the main reason why using Terraform to maintain Kubernetes resources is not a popular choice.

Terraform requires a schema for each resource, which means the maintainers have to translate the schema of each Kubernetes resource into a Terraform schema. This is a lot of effort and was the reason why, for a long time, the supported resources were pretty limited. While this has improved over time, still not everything is supported. And [custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) in particular are not possible to support this way.

This schema translation also results in some edge cases to be aware of. For example, `metadata` in the Terraform schema is a list of maps, which means you have to refer to the `metadata.name` of a Kubernetes resource like this in Terraform: `kubernetes_secret.example.metadata.0.name`.

On the plus side, however, having a Terraform schema means full integration between Kubernetes and other Terraform resources. Like, for [example](https://github.com/kbst/terraform-kubestack/blob/e5caa6d20926d546a045144ebe79c7cc8c0b4c8a/aws/_modules/eks/ingress.tf#L37), using Terraform to create a Kubernetes service of type `LoadBalancer` and then using the returned ELB hostname in a Route53 record to configure DNS.
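This kind of cross-provider wiring looks roughly like the following sketch. The resource names are illustrative and the load balancer status attribute path has varied across provider versions, so treat this as a sketch rather than copy-paste configuration:

```hcl
# Sketch only: names, the zone_id variable, and attribute paths are
# illustrative, not taken from the linked Kubestack example.
resource "kubernetes_service" "example" {
  metadata {
    name = "example"
  }

  spec {
    selector = {
      app = "example"
    }

    port {
      port        = 80
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}

resource "aws_route53_record" "example" {
  zone_id = var.zone_id
  name    = "example.example.com"
  type    = "CNAME"
  ttl     = 300

  # The ELB hostname returned by the cloud provider, read back
  # through the Kubernetes provider's schema.
  records = [kubernetes_service.example.load_balancer_ingress.0.hostname]
}
```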
The biggest benefit of using Terraform to maintain Kubernetes resources is integration into the Terraform plan/apply life-cycle, so you can review planned changes before applying them. Also, with `kubectl`, purging resources from the cluster is not trivial without manual intervention; Terraform does this reliably.

## Terraform `kubernetes-alpha` provider

Second, the new [alpha Kubernetes provider](https://github.com/hashicorp/terraform-provider-kubernetes-alpha). In response to the limitations of the current Kubernetes provider, the Hashicorp team recently released an alpha version of a new provider.

This provider uses dynamic resource types and server-side apply to support all Kubernetes resources. I personally think this provider has the potential to be a game changer - even if [managing Kubernetes resources in HCL](https://github.com/hashicorp/terraform-provider-kubernetes-alpha#moving-from-yaml-to-hcl) may still not be for everyone. Maybe the Kustomize provider below will help with that.
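Usage looks roughly like the sketch below, based on the alpha provider's README at the time of writing: a single `kubernetes_manifest` resource takes any Kubernetes object, including custom resources, as a plain HCL object. The resource contents here are a hypothetical example:

```hcl
provider "kubernetes-alpha" {
  config_path = "~/.kube/config"
}

resource "kubernetes_manifest" "example" {
  provider = kubernetes-alpha

  # Any Kubernetes resource, expressed as an HCL object rather
  # than a per-resource Terraform schema.
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      foo = "bar"
    }
  }
}
```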
The only downside, really, is that it's explicitly discouraged to use it for anything but testing. But the more people test it, the sooner it should be ready for prime time. So I encourage everyone to give it a try.

## Terraform `kustomize` provider

Last, we have the [`kustomize` provider](https://github.com/kbst/terraform-provider-kustomize). Kustomize provides a way to do customizations of Kubernetes resources using inheritance instead of templating. It is designed to output the result to `stdout`, from where you can apply the changes using `kubectl`. This approach means that `kubectl` edge cases like no purging or changes to immutable attributes still make full automation difficult.

Kustomize is a popular way to handle customizations, but I was looking for a more reliable way to automate applying changes. Since this is exactly what Terraform is great at, the Kustomize provider was born.

Not going into too much detail here, but from Terraform's perspective, this provider treats every Kubernetes resource as a JSON string. This way it can handle any Kubernetes resource resulting from the Kustomize build. But it has the big disadvantage that Kubernetes resources cannot easily be integrated with other Terraform resources. Remember the load balancer example from above.
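The provider's README documents a data source that runs the Kustomize build and one resource per resulting manifest; roughly like this (a sketch, with an illustrative path):

```hcl
data "kustomization" "example" {
  # Path to a directory containing a kustomization.yaml (illustrative)
  path = "kustomize/example"
}

resource "kustomization_resource" "example" {
  for_each = data.kustomization.example.ids

  # Each Kubernetes resource from the Kustomize build,
  # handled by Terraform as a JSON string.
  manifest = data.kustomization.example.manifests[each.value]
}
```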
Under the hood, similarly to the new Kubernetes alpha provider, the Kustomize provider also uses the dynamic Kubernetes client and server-side apply. Going forward, I plan to deprecate the part of the Kustomize provider that overlaps with the new Kubernetes provider and only keep the Kustomize integration.

## Conclusion

For teams that are already invested in Terraform, or teams that are looking for ways to replace `kubectl` in automation, Terraform's plan/apply life-cycle has always been a promising option to automate changes to Kubernetes resources. However, the limitations of the official Kubernetes provider meant this did not see significant adoption.

The new alpha provider removes those limitations and has the potential to make Terraform a prime option for automating changes to Kubernetes resources.

Teams that have already adopted Kustomize may find integrating Kustomize and Terraform using the Kustomize provider beneficial over `kubectl`, because it avoids common edge cases - even if, in this setup, Terraform can only easily be used to plan and apply the changes, not to adapt the Kubernetes resources. In the future, this issue may be resolved by combining the Kustomize provider with the new Kubernetes provider.

If you have any questions regarding these three options, feel free to reach out to me on the Kubernetes Slack in either the [#kubestack](https://app.slack.com/client/T09NY5SBT/CMBCT7XRQ) or the [#kustomize](https://app.slack.com/client/T09NY5SBT/C9A5ALABG) channel. If you happen to give any of the providers a try and encounter a problem, please file a GitHub issue to help the maintainers fix it.
(Three binary image files added: 219 KiB, 144 KiB, 1.0 MiB.)
@@ -0,0 +1,104 @@
---
layout: blog
title: "SIG-Windows Spotlight"
date: 2020-06-30
slug: sig-windows-spotlight-2020
---

# SIG-Windows Spotlight
_This post tells the story of how Kubernetes contributors work together to provide a container orchestrator that works for both Linux and Windows._

<img alt="Image of a computer with Kubernetes logo" width="30%" src="KubernetesComputer_transparent.png">

Most people who are familiar with Kubernetes are probably used to associating it with Linux. The connection makes sense, since Kubernetes ran on Linux from its very beginning. However, many teams and organizations working on adopting Kubernetes need the ability to orchestrate containers on Windows. Since the release of Docker and the rise to popularity of containers, there have been efforts both from the community and from Microsoft itself to make container technology as accessible on Windows systems as it is on Linux systems.

Within the Kubernetes community, those who are passionate about making Kubernetes accessible to the Windows community can find a home in the Windows Special Interest Group. To learn more about SIG-Windows and the future of Kubernetes on Windows, I spoke to co-chairs [Mark Rossetti](https://github.com/marosset) and [Michael Michael](https://github.com/michmike) about the SIG's goals and how others can contribute.

## Intro to Windows Containers & Kubernetes

Kubernetes is the most popular tool for orchestrating container workloads, so to understand the Windows Special Interest Group (SIG) within the Kubernetes project, it's important to first understand what we mean when we talk about running containers on Windows.

***
_"When looking at Windows support in Kubernetes," say SIG co-chairs Mark Rossetti and Michael Michael, "many start drawing comparisons to Linux containers. Although some of the comparisons that highlight limitations are fair, it is important to distinguish between operational limitations and differences between the Windows and Linux operating systems. Windows containers run the Windows operating system and Linux containers run Linux."_
***

In essence, any "container" is simply a process being run on its host operating system, with some key tooling in place to isolate that process and its dependencies from the rest of the environment. The goal is to make that running process safely isolated, while taking up minimal resources from the system to perform that isolation. On Linux, the tooling used to isolate processes to create "containers" commonly boils down to cgroups and namespaces (among a few others), which are themselves tools built into the Linux kernel.

<img alt="A visual analogy using dogs to explain Linux cgroups and namespaces." width="40%" src="cgroupsNamespacesComboPic.png">

#### _If dogs were processes: containerization would be like giving each dog their own resources like toys and food using cgroups, and isolating troublesome dogs using namespaces._

Native Windows processes are processes that are, or must be, run on a Windows operating system. This makes them fundamentally different from a process running on a Linux operating system. Since Linux containers are Linux processes isolated by the Linux kernel tools known as cgroups and namespaces, containerizing native Windows processes meant implementing similar isolation tools within the Windows kernel itself. Thus, "Windows containers" and "Linux containers" are fundamentally different technologies, even though they have the same goals (isolating processes) and in some ways work similarly (using kernel-level containerization).

So when it comes to running containers on Windows, there are actually two very important concepts to consider:

* Native Windows processes running as native Windows Server style containers,
* and traditional Linux containers running on a Linux kernel, generally hosted on a lightweight Hyper-V virtual machine.

You can learn more about Linux and Windows containers in this [tutorial](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/linux-containers) from Microsoft.

### Kubernetes on Windows

Kubernetes was initially designed with Linux containers in mind and was itself designed to run on Linux systems. Because of that, much of the functionality of Kubernetes involves unique Linux functionality. The Linux-specific work is intentional--we all want Kubernetes to run optimally on Linux--but there is a growing demand for similar optimization for Windows servers. For cases where users need container orchestration on Windows, the Kubernetes contributor community of SIG-Windows has incorporated functionality for Windows-specific use cases.

***
_"A common question we get is, will I be able to have a Windows-only cluster? The answer is NO. Kubernetes control plane components will continue to be based on Linux, while SIG-Windows is concentrating on the experience of having Windows worker nodes in a Kubernetes cluster."_
***

Rather than separating out the concepts of "Windows Kubernetes" and "Linux Kubernetes," the community of SIG-Windows works toward adding functionality to the main Kubernetes project which allows it to handle use cases for Windows. These Windows capabilities mirror, and in some cases add unique functionality to, the Linux use cases Kubernetes has served since its release in 2014. (Want to learn more history? Scroll through this [original design document](https://github.com/kubernetes/kubernetes/blob/e2b948dbfbba62b8cb681189377157deee93bb43/DESIGN.md).)

## What Does SIG-Windows Do?

***
_"SIG-Windows is really the center for all things Windows in Kubernetes,"_ SIG co-chairs Mark and Michael said. _"We mainly focus on the compute side of things, but really anything related to running Kubernetes on Windows is in scope for SIG-Windows."_
***

In order to best serve users, SIG-Windows works to make the Kubernetes user experience as consistent as possible for users of Windows and Linux. However, some use cases apply to only one operating system, and as such, the SIG-Windows group also works to create functionality that is unique to Windows-only workloads.

Many SIGs, or "Special Interest Groups," within Kubernetes have a narrow focus, allowing members to dive deep on a certain facet of the technology. While specific expertise is welcome, those interested in SIG-Windows will find it to be a great community to build broad understanding across many focus areas of Kubernetes. "Members from our SIG interface with storage, network, testing, cluster-lifecycle and other groups in Kubernetes."

### Who are SIG-Windows' Users?
The best way to understand the technology a group makes is often to understand who their customers or users are.

#### "A majority of the users we've interacted with have business-critical infrastructure running on Windows developed over many years and can't move those workloads to Linux for various reasons (cost, time, compliance, etc)," the SIG chairs shared. "By transporting those workloads into Windows containers and running them in Kubernetes they are able to quickly modernize their infrastructure and help migrate it to the cloud."

As anyone in the Kubernetes space can attest, companies around the world, in many different industries, see Kubernetes as their path to modernizing their infrastructure. Often this involves re-architecting or even totally re-inventing many of the ways they've been doing business, with the goal of making their systems more scalable, more robust, and more ready for anything the future may bring. But not every application or workload can or should change the core operating system it runs on, so many teams need the ability to run containers at scale on Windows, or Linux, or both.

"Sometimes the driver to Windows containers is a modernization effort and sometimes it’s because of expiring hardware warranties or end-of-support cycles for the current operating system. Our efforts in SIG-Windows enable Windows developers to take advantage of cloud native tools and Kubernetes to build and deploy distributed applications faster. That’s exciting! In essence, users can retain the benefits of application availability while decreasing costs."

## Who are SIG-Windows?

Who are these contributors working on enabling Windows workloads for Kubernetes? It could be you!

Like with other Kubernetes SIGs, contributors to SIG-Windows can be anyone from independent hobbyists to professionals who work at many different companies. They come from many different parts of the world and bring to the table many different skill sets.

<img alt="Image of several people chatting pleasantly" width="30%" src="PeopleDoodle_transparent.png">

_"Like most other Kubernetes SIGs, we are a very welcoming and open community," explained co-chairs Michael Michael and Mark Rossetti._

### Becoming a contributor

For anyone interested in getting started, the co-chairs added, "New contributors can view old community meetings on GitHub (we record every single meeting going back three years), read our documentation, attend new community meetings, ask questions in person or on Slack, and file some issues on GitHub. We also attend all KubeCon conferences and host 1-2 sessions, a contributor session, and meet-the-maintainer office hours."

The co-chairs also shared a glimpse into what the path looks like to becoming a member of the SIG-Windows community:

"We encourage new contributors to initially just join our community and listen, then start asking some questions and get educated on Windows in Kubernetes. As they feel comfortable, they could graduate to improving our documentation, file some bugs/issues, and eventually they can be a code contributor by fixing some bugs. If they have long-term and sustained substantial contributions to Windows, they could become a technical lead or a chair of SIG-Windows. You won't know if you love this area unless you get started :) To get started, [visit this getting-started page](https://github.com/kubernetes/community/tree/master/sig-windows). It's a one-stop shop with links to everything related to SIG-Windows in Kubernetes."

When asked if there were any useful skills for new contributors, the co-chairs said,

"We are always looking for expertise in Go and networking and storage, along with a passion for Windows. Those are huge skills to have. However, we don’t require such skills, and we welcome any and all contributors, with varying skill sets. If you don’t know something, we will help you acquire it."

You can get in touch with the folks at SIG-Windows in their [Slack channel](https://kubernetes.slack.com/archives/C0SJ4AFB7) or attend one of their regular meetings - currently 30min long on Tuesdays at 12:30PM EST! You can find links to their regular meetings as well as past meeting notes and recordings from the [SIG-Windows README](https://github.com/kubernetes/community/tree/master/sig-windows#readme) on GitHub.

As a closing message from SIG-Windows:

***
#### _"We welcome you to get involved and join our community to share feedback and deployment stories, and contribute to code, docs, and improvements of any kind."_
***
@@ -7,6 +7,7 @@ weight: 100
content_type: concept
description: >
  Lower-level detail relevant to creating or administering a Kubernetes cluster.
no_list: true
---

<!-- overview -->
@@ -40,14 +40,14 @@ Note that {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} also exposes
If your cluster uses {{< glossary_tooltip term_id="rbac" text="RBAC" >}}, reading metrics requires authorization via a user, group or ServiceAccount with a ClusterRole that allows accessing `/metrics`.
For example:
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
```
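To actually grant this access, the ClusterRole also needs to be bound to a subject. As a sketch, assuming a hypothetical `prometheus` ServiceAccount in a `monitoring` namespace (names not taken from the document):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus        # hypothetical ServiceAccount name
  namespace: monitoring   # hypothetical namespace
```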
@@ -130,5 +130,4 @@ cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}

* Read about the [Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) for metrics
* See the list of [stable Kubernetes metrics](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)
* Read about the [Kubernetes deprecation policy](/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior)
@@ -126,25 +126,32 @@ spec:
    configMap:
      # Provide the name of the ConfigMap you want to mount.
      name: game-demo
      # An array of keys from the ConfigMap to create as files
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"
```

A ConfigMap doesn't differentiate between single line property values and
multi-line file-like values.
What matters is how Pods and other objects consume those values.

For this example, defining a volume and mounting it inside the `demo`
container as `/config` creates two files,
`/config/game.properties` and `/config/user-interface.properties`,
even though there are four keys in the ConfigMap. This is because the Pod
definition specifies an `items` array in the `volumes` section.
If you omit the `items` array entirely, every key in the ConfigMap becomes
a file with the same name as the key, and you get 4 files:

- `/config/player_initial_lives`
- `/config/ui_properties_file_name`
- `/config/game.properties`
- `/config/user-interface.properties`

## Using ConfigMaps

ConfigMaps can be mounted as data volumes. ConfigMaps can also be used by other
parts of the system, without being directly exposed to the Pod. For example,
ConfigMaps can hold data that other parts of the system should use for configuration.

{{< note >}}
The most common way to use ConfigMaps is to configure settings for
@@ -157,12 +164,6 @@ or {{< glossary_tooltip text="operators" term_id="operator-pattern" >}} that
adjust their behavior based on a ConfigMap.
{{< /note >}}

### Using ConfigMaps as files from a Pod

To consume a ConfigMap in a volume in a Pod:
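The steps themselves are truncated in this diff, but they boil down to a Pod spec along these lines - a sketch assuming the `game-demo` ConfigMap from the example above (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod   # illustrative name
spec:
  containers:
  - name: demo
    image: alpine            # illustrative image
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: game-demo
```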
@@ -227,7 +227,7 @@ locally-attached writeable devices or, sometimes, by RAM.

Pods use ephemeral local storage for scratch space, caching, and for logs.
The kubelet can provide scratch space to Pods using local ephemeral storage to
mount [`emptyDir`](/docs/concepts/storage/volumes/#emptydir)
{{< glossary_tooltip term_id="volume" text="volumes" >}} into containers.

The kubelet also uses this kind of storage to hold
|
|||
(Total limits may be over 100 percent, i.e., overcommitted.)
|
||||
CPU Requests CPU Limits Memory Requests Memory Limits
|
||||
------------ ---------- --------------- -------------
|
||||
680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%)
|
||||
680m (34%) 400m (20%) 920Mi (11%) 1070Mi (13%)
|
||||
```
|
||||
|
||||
In the preceding output, you can see that if a Pod requests more than 1120m
|
||||
|
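The percentage correction in this hunk can be sanity-checked with plain shell arithmetic. The node's allocatable memory is not shown in the diff, so 8000Mi is an assumed value under which 920Mi of requests truncates to 11 percent, matching the corrected line:

```shell
# Sanity check for the corrected memory-request percentage above.
# allocatable_mi is an assumption (8000Mi), not a value from the diff;
# kubectl truncates the ratio to a whole percent, so 920/8000 -> 11%.
requests_mi=920
allocatable_mi=8000
echo $(( requests_mi * 100 / allocatable_mi ))   # prints 11
```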
@@ -758,5 +758,3 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
 * Read the [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) API reference
 
 * Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
-
-

@@ -87,7 +87,7 @@ spec:
     memory: 100Mi
 ```
 
-At admission time the RuntimeClass [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)
+At admission time the RuntimeClass [admission controller](/docs/reference/access-authn-authz/admission-controllers/)
 updates the workload's PodSpec to include the `overhead` as described in the RuntimeClass. If the PodSpec already has this field defined,
 the Pod will be rejected. In the given example, since only the RuntimeClass name is specified, the admission controller mutates the Pod
 to include an `overhead`.

@@ -195,5 +195,3 @@ from source in the meantime.
 
 * [RuntimeClass](/docs/concepts/containers/runtime-class/)
 * [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
-
-

@@ -6,6 +6,7 @@ reviewers:
 - erictune
 - thockin
 content_type: concept
+no_list: true
 ---
 
 <!-- overview -->

@@ -65,7 +65,7 @@ When `imagePullPolicy` is defined without a specific value, it is also set to `A
 
 ## Multi-architecture Images with Manifests
 
-As well as providing binary images, a container registry can also server a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of an container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
+As well as providing binary images, a container registry can also serve a [container image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md). A manifest can reference image manifests for architecture-specific versions of an container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
 
 Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes.

@@ -1,41 +0,0 @@
----
-title: Example Concept Template
-reviewers:
-- chenopis
-content_type: concept
-toc_hide: true
----
-
-<!-- overview -->
-
-{{< note >}}
-Be sure to also [create an entry in the table of contents](/docs/home/contribute/write-new-topic/#creating-an-entry-in-the-table-of-contents) for your new document.
-{{< /note >}}
-
-This page explains ...
-
-
-
-<!-- body -->
-
-## Understanding ...
-
-Kubernetes provides ...
-
-## Using ...
-
-To use ...
-
-
-
-## {{% heading "whatsnext" %}}
-
-
-**[Optional Section]**
-
-* Learn more about [Writing a New Topic](/docs/home/contribute/style/write-new-topic/).
-* See [Page Content Types - Concept](/docs/home/contribute/style/page-concept-types/#concept).
-
-
-
-

@@ -8,6 +8,7 @@ reviewers:
 - cheftako
 - chenopis
 content_type: concept
+no_list: true
 ---
 
 <!-- overview -->

@@ -232,6 +232,6 @@ Here are some examples of device plugin implementations:
 * Learn about [scheduling GPU resources](/docs/tasks/manage-gpus/scheduling-gpus/) using device plugins
 * Learn about [advertising extended resources](/docs/tasks/administer-cluster/extended-resource-node/) on a node
 * Read about using [hardware acceleration for TLS ingress](https://kubernetes.io/blog/2019/04/24/hardware-accelerated-ssl/tls-termination-in-ingress-controllers-using-kubernetes-device-plugins-and-runtimeclass/) with Kubernetes
-* Learn about the [Topology Manager] (/docs/tasks/adminster-cluster/topology-manager/)
+* Learn about the [Topology Manager](/docs/tasks/administer-cluster/topology-manager/)
 
 

@@ -28,7 +28,7 @@ page will help you learn about scheduling.
 
 ## kube-scheduler
 
-[kube-scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)
+[kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/)
 is the default scheduler for Kubernetes and runs as part of the
 {{< glossary_tooltip text="control plane" term_id="control-plane" >}}.
 kube-scheduler is designed so that, if you want and need to, you can

@@ -95,4 +95,3 @@ of the scheduler:
 * Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
 * Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
 * Learn about [Pod Overhead](/docs/concepts/configuration/pod-overhead/)
-

@@ -32,7 +32,7 @@ Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.i
   provided and supported by VMware.
 * Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments.
 * F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508)
-  for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest).
+  for the [F5 BIG-IP Container Ingress Services for Kubernetes](https://clouddocs.f5.com/containers/latest/userguide/kubernetes/).
 * [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io).
 * [HAProxy Ingress](https://haproxy-ingress.github.io) is a highly customizable community-driven ingress controller for HAProxy.
 * [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for the [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress). See the [official documentation](https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/).

@@ -91,7 +91,7 @@ Different [Ingress controller](/docs/concepts/services-networking/ingress-contro
 The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
 has all the information needed to configure a load balancer or proxy server. Most importantly, it
 contains a list of rules matched against all incoming requests. Ingress resource only supports rules
-for directing HTTP traffic.
+for directing HTTP(S) traffic.
 
 ### Ingress rules
 

@@ -30,7 +30,7 @@ Managing storage is a distinct problem from managing compute instances. The Pers
 
 A _PersistentVolume_ (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using [Storage Classes](/docs/concepts/storage/storage-classes/). It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
 
-A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).
+A _PersistentVolumeClaim_ (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see [AccessModes](#access-modes)).
 
 While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the _StorageClass_ resource.
 

@@ -3,6 +3,7 @@ content_type: concept
 title: Contribute to Kubernetes docs
 linktitle: Contribute
 main_menu: true
+no_list: true
 weight: 80
 card:
   name: contribute

@@ -23,8 +24,6 @@ Kubernetes documentation contributors:
 
 Kubernetes documentation welcomes improvements from all contributors, new and experienced!
 
-
-
 <!-- body -->
 
 ## Getting started

@@ -74,5 +73,3 @@ SIG Docs communicates with different methods:
 - Visit the [Kubernetes community site](/community/). Participate on Twitter or Stack Overflow, learn about local Kubernetes meetups and events, and more.
 - Read the [contributor cheatsheet](https://github.com/kubernetes/community/tree/master/contributors/guide/contributor-cheatsheet) to get involved with Kubernetes feature development.
 - Submit a [blog post or case study](/docs/contribute/new-content/blogs-case-studies/).
-
-

@@ -97,10 +97,12 @@ Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installi
 
 ### Create a local clone and set the upstream
 
-3. In a terminal window, clone your fork:
+3. In a terminal window, clone your fork and update the [Docsy Hugo theme](https://github.com/google/docsy#readme):
 
    ```bash
    git clone git@github.com/<github_username>/website
+   cd website
+   git submodule update --init --recursive --depth 1
   ```
 
 4. Navigate to the new `website` directory. Set the `kubernetes/website` repository as the `upstream` remote:

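The `git submodule update --init --recursive` step added in this hunk relies on a `.gitmodules` file committed in the website repository, which maps a working-tree path to a theme repository URL. A minimal local sketch of that mechanism, using throwaway repositories (the paths are illustrative stand-ins, not the real Docsy URL):

```shell
# Sketch of the mechanism behind `git submodule update --init --recursive`:
# a .gitmodules entry maps a path (themes/docsy) to a repository URL.
# Both repositories below are local throwaway stand-ins, not the real theme.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/theme"
git -C "$tmp/theme" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "theme: initial commit"
git init -q "$tmp/website"
cd "$tmp/website"
# protocol.file.allow=always is needed for local-path submodules in newer Git.
git -c protocol.file.allow=always submodule add -q "$tmp/theme" themes/docsy
cat .gitmodules   # the mapping that `submodule update` later consumes
```

Running `git submodule update --init` in a fresh clone of such a repository repopulates `themes/docsy` from the recorded URL, which is why the build fails when the directory is left empty.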
@@ -261,18 +263,26 @@ The commands below use Docker as default container engine. Set the `CONTAINER_EN
 
 Alternately, install and use the `hugo` command on your computer:
 
-5. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml).
+1. Install the [Hugo](https://gohugo.io/getting-started/installing/) version specified in [`website/netlify.toml`](https://raw.githubusercontent.com/kubernetes/website/master/netlify.toml).
 
-6. In a terminal, go to your Kubernetes website repository and start the Hugo server:
+2. If you have not updated your website repository, the `website/themes/docsy` directory is empty.
+   The site cannot build without a local copy of the theme. To update the website theme, run:
+
+   ```bash
+   git submodule update --init --recursive --depth 1
+   ```
+
+3. In a terminal, go to your Kubernetes website repository and start the Hugo server:
 
    ```bash
    cd <path_to_your_repo>/website
-   hugo server
+   hugo server --buildFuture
   ```
 
-7. In your browser’s address bar, enter `https://localhost:1313`.
+4. In a web browser, navigate to `https://localhost:1313`. Hugo watches the
+   changes and rebuilds the site as needed.
 
-8. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`,
+5. To stop the local Hugo instance, go back to the terminal and type `Ctrl+C`,
    or close the terminal window.
 
 {{% /tab %}}

@@ -496,6 +506,6 @@ the templates with as much detail as possible when you file issues or PRs.
 ## {{% heading "whatsnext" %}}
 
 
-- Read [Reviewing](/docs/contribute/reviewing/revewing-prs) to learn more about the review process.
+- Read [Reviewing](/docs/contribute/review/reviewing-prs) to learn more about the review process.
 
 

@@ -9,10 +9,10 @@ weight: 10
 
 This page contains guidelines for Kubernetes documentation.
 
-If you have questions about what's allowed, join the #sig-docs channel in 
-[Kubernetes Slack](http://slack.k8s.io/) and ask! 
+If you have questions about what's allowed, join the #sig-docs channel in
+[Kubernetes Slack](http://slack.k8s.io/) and ask!
 
-You can register for Kubernetes Slack at http://slack.k8s.io/. 
+You can register for Kubernetes Slack at http://slack.k8s.io/.
 
 For information on creating new content for the Kubernetes
 docs, follow the [style guide](/docs/contribute/style/style-guide).

@@ -28,7 +28,7 @@ Source for the Kubernetes website, including the docs, resides in the
 
 Located in the `kubernetes/website/content/<language_code>/docs` folder, the
 majority of Kubernetes documentation is specific to the [Kubernetes
-project](https://github.com/kubernetes/kubernetes). 
+project](https://github.com/kubernetes/kubernetes).
 
 ## What's allowed
 

@@ -41,12 +41,12 @@ Kubernetes docs allow content for third-party projects only when:
 ### Third party content
 
 Kubernetes documentation includes applied examples of projects in the Kubernetes project—projects that live in the [kubernetes](https://github.com/kubernetes) and
-[kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations. 
+[kubernetes-sigs](https://github.com/kubernetes-sigs) GitHub organizations.
 
-Links to active content in the Kubernetes project are always allowed. 
+Links to active content in the Kubernetes project are always allowed.
 
-Kubernetes requires some third party content to function. Examples include container runtimes (containerd, CRI-O, Docker), 
-[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins), [Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/), and [logging](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
+Kubernetes requires some third party content to function. Examples include container runtimes (containerd, CRI-O, Docker),
+[networking policy](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) (CNI plugins), [Ingress controllers](/docs/concepts/services-networking/ingress-controllers/), and [logging](/docs/concepts/cluster-administration/logging/).
 
 Docs can link to third-party open source software (OSS) outside the Kubernetes project only if it's necessary for Kubernetes to function.
 

@@ -60,7 +60,7 @@ and grows stale more quickly.
 
 {{< note >}}
 
-If you're a maintainer for a Kubernetes project and need help hosting your own docs, 
+If you're a maintainer for a Kubernetes project and need help hosting your own docs,
 ask for help in [#sig-docs on Kubernetes Slack](https://kubernetes.slack.com/messages/C1J0BPD2M/).
 
 {{< /note >}}

@@ -75,5 +75,3 @@ If you have questions about allowed content, join the [Kubernetes Slack](http://
 
 
 * Read the [Style guide](/docs/contribute/style/style-guide).
-
-

@@ -28,9 +28,17 @@ Task | A task page shows how to do a single thing. The idea is to give readers a
 Tutorial | A tutorial page shows how to accomplish a goal that ties together several Kubernetes features. A tutorial might provide several sequences of steps that readers can actually do as they read the page. Or it might provide explanations of related pieces of code. For example, a tutorial could provide a walkthrough of a code sample. A tutorial can include brief explanations of the Kubernetes features that are being tied together, but should link to related concept topics for deep explanations of individual features.
 {{< /table >}}
 
+### Creating a new page
+
 Use a [content type](/docs/contribute/style/page-content-types/) for each new page
-that you write. Using page type helps ensure
-consistency among topics of a given type.
+that you write. The docs site provides templates or
+[Hugo archetypes](https://gohugo.io/content-management/archetypes/) to create
+new content pages. To create a new type of page, run `hugo new` with the path to the file
+you want to create. For example:
+
+```
+hugo new docs/concepts/my-first-concept.md
+```
 
 ## Choosing a title and filename
 

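A rough shell illustration of what the `hugo new` command in this hunk produces: Hugo copies the matching archetype into a new page with its front matter pre-filled. The field values below are illustrative assumptions, not the exact output of the website's archetype:

```shell
# Approximation of `hugo new docs/concepts/my-first-concept.md`: Hugo creates
# a Markdown file whose front matter comes from the matching archetype.
# The front-matter values below are illustrative, not real archetype output.
title="My First Concept"
out=/tmp/my-first-concept.md
cat > "$out" <<EOF
---
title: ${title}
content_type: concept
---
EOF
cat "$out"
```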
@@ -416,10 +416,8 @@ signed certificate.
 ## {{% heading "whatsnext" %}}
 
 
-* Read [Manage TLS Certificates in a Cluster](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/)
+* Read [Manage TLS Certificates in a Cluster](/docs/tasks/tls/managing-tls-in-a-cluster/)
 * View the source code for the kube-controller-manager built in [signer](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/signer/cfssl_signer.go)
 * View the source code for the kube-controller-manager built in [approver](https://github.com/kubernetes/kubernetes/blob/32ec6c212ec9415f604ffc1f4c1f29b782968ff1/pkg/controller/certificates/approver/sarapprove.go)
 * For details of X.509 itself, refer to [RFC 5280](https://tools.ietf.org/html/rfc5280#section-3.1) section 3.1
 * For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https://tools.ietf.org/html/rfc2986)
-
-

@@ -134,6 +134,8 @@ different Kubernetes components.
 | `ServiceAppProtocol` | `true` | Beta | 1.19 | |
+| `ServerSideApply` | `false` | Alpha | 1.14 | 1.15 |
+| `ServerSideApply` | `true` | Beta | 1.16 | |
 | `ServiceAccountIssuerDiscovery` | `false` | Alpha | 1.18 | |
 | `ServiceAppProtocol` | `false` | Alpha | 1.18 | |
 | `ServiceNodeExclusion` | `false` | Alpha | 1.8 | |
 | `ServiceTopology` | `false` | Alpha | 1.17 | |
 | `SetHostnameAsFQDN` | `false` | Alpha | 1.19 | |

@@ -441,7 +443,7 @@ Each feature gate is designed for enabling/disabling a specific feature:
 - `KubeletPluginsWatcher`: Enable probe-based plugin watcher utility to enable kubelet
   to discover plugins such as [CSI volume drivers](/docs/concepts/storage/volumes/#csi).
 - `KubeletPodResources`: Enable the kubelet's pod resources grpc endpoint.
-  See [Support Device Monitoring](https://git.k8s.io/community/keps/sig-node/compute-device-assignment.md) for more details.
+  See [Support Device Monitoring](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/compute-device-assignment.md) for more details.
 - `LegacyNodeRoleBehavior`: When disabled, legacy behavior in service load balancers and node disruption will ignore the `node-role.kubernetes.io/master` label in favor of the feature-specific labels.
 - `LocalStorageCapacityIsolation`: Enable the consumption of [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and also the `sizeLimit` property of an [emptyDir volume](/docs/concepts/storage/volumes/#emptydir).
 - `LocalStorageCapacityIsolationFSQuotaMonitoring`: When `LocalStorageCapacityIsolation` is enabled for [local ephemeral storage](/docs/concepts/configuration/manage-compute-resources-container/) and the backing filesystem for [emptyDir volumes](/docs/concepts/storage/volumes/#emptydir) supports project quotas and they are enabled, use project quotas to monitor [emptyDir volume](/docs/concepts/storage/volumes/#emptydir) storage consumption rather than filesystem walk for better performance and accuracy.

@@ -481,11 +483,12 @@ Each feature gate is designed for enabling/disabling a specific feature:
 - `ScheduleDaemonSetPods`: Enable DaemonSet Pods to be scheduled by the default scheduler instead of the DaemonSet controller.
 - `SCTPSupport`: Enables the _SCTP_ `protocol` value in Pod, Service, Endpoints, EndpointSlice, and NetworkPolicy definitions.
+- `ServerSideApply`: Enables the [Server Side Apply (SSA)](/docs/reference/using-api/api-concepts/#server-side-apply) path at the API Server.
 - `ServiceAccountIssuerDiscovery`: Enable OIDC discovery endpoints (issuer and JWKS URLs) for the service account issuer in the API server. See [Configure Service Accounts for Pods](/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery) for more details.
 - `ServiceAppProtocol`: Enables the `AppProtocol` field on Services and Endpoints.
 - `ServiceLoadBalancerFinalizer`: Enable finalizer protection for Service load balancers.
 - `ServiceNodeExclusion`: Enable the exclusion of nodes from load balancers created by a cloud provider.
   A node is eligible for exclusion if labelled with "`alpha.service-controller.kubernetes.io/exclude-balancer`" key or `node.kubernetes.io/exclude-from-external-load-balancers`.
-- `ServiceTopology`: Enable service to route traffic based upon the Node topology of the cluster. See [ServiceTopology](https://kubernetes.io/docs/concepts/services-networking/service-topology/) for more details.
+- `ServiceTopology`: Enable service to route traffic based upon the Node topology of the cluster. See [ServiceTopology](/docs/concepts/services-networking/service-topology/) for more details.
 - `SetHostnameAsFQDN`: Enable the ability of setting Fully Qualified Domain Name(FQDN) as hostname of pod. See [Pod's `setHostnameAsFQDN` field](/docs/concepts/services-networking/dns-pod-service/#pod-sethostnameasfqdn-field).
 - `StartupProbe`: Enable the [startup](/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe) probe in the kubelet.
 - `StorageObjectInUseProtection`: Postpone the deletion of PersistentVolume or

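Feature gates such as the ones listed in this hunk are switched on and off through a comma-separated `key=bool` string passed to each component's `--feature-gates` flag. A plain-shell sketch of how such a flag value decomposes (the parsing below is an illustration, not the components' actual Go implementation):

```shell
# How a --feature-gates flag value is structured: comma-separated key=bool
# pairs, e.g. --feature-gates=ServerSideApply=true,ServiceTopology=false.
# Gate names are taken from the list above; the parsing is illustrative only.
gates="ServerSideApply=true,ServiceTopology=false"
echo "$gates" | tr ',' '\n' | while IFS='=' read -r name value; do
    echo "$name -> $value"
done
```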
@@ -526,4 +529,3 @@ Each feature gate is designed for enabling/disabling a specific feature:
 
 * The [deprecation policy](/docs/reference/using-api/deprecation-policy/) for Kubernetes explains
   the project's approach to removing features and components.
-

@@ -14,7 +14,7 @@ and capacity. The scheduler needs to take into account individual and collective
 resource requirements, quality of service requirements, hardware/software/policy
 constraints, affinity and anti-affinity specifications, data locality, inter-workload
 interference, deadlines, and so on. Workload-specific requirements will be exposed
-through the API as necessary. See [scheduling](https://kubernetes.io/docs/concepts/scheduling-eviction/)
+through the API as necessary. See [scheduling](/docs/concepts/scheduling-eviction/)
 for more information about scheduling and the kube-scheduler component.
 
 ```

@@ -511,8 +511,3 @@ kube-scheduler [flags]
 
 </tbody>
 </table>
-
-
-
-
-

@@ -0,0 +1,17 @@
+---
+title: Endpoints
+id: endpoints
+date: 2020-04-23
+full_link:
+short_description: >
+  Endpoints track the IP addresses of Pods with matching Service selectors.
+
+aka:
+tags:
+- networking
+---
+Endpoints track the IP addresses of Pods with matching {{< glossary_tooltip text="selectors" term_id="selector" >}}.
+
+<!--more-->
+Endpoints can be configured manually for {{< glossary_tooltip text="Services" term_id="service" >}} without selectors specified.
+The {{< glossary_tooltip text="EndpointSlice" term_id="endpoint-slice" >}} resource provides a scalable and extensible alternative to Endpoints.

@@ -6,15 +6,15 @@ full_link: /docs/concepts/storage/volumes/
 short_description: >
   A directory containing data, accessible to the containers in a pod.
 
-aka: 
+aka:
 tags:
 - core-object
 - fundamental
 ---
 A directory containing data, accessible to the {{< glossary_tooltip text="containers" term_id="container" >}} in a {{< glossary_tooltip term_id="pod" >}}.
 
-<!--more--> 
+<!--more-->
 
 A Kubernetes volume lives as long as the Pod that encloses it. Consequently, a volume outlives any containers that run within the Pod, and data in the volume is preserved across container restarts.
 
-See [storage](https://kubernetes.io/docs/concepts/storage/) for more information.
+See [storage](/docs/concepts/storage/) for more information.

@@ -706,9 +706,9 @@ Resource versions are strings that identify the server's internal version of an
 
 Clients find resource versions in resources, including the resources in watch events, and list responses returned from the server:
 
-[v1.meta/ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectmeta-v1-meta) - The `metadata.resourceVersion` of a resource instance identifies the resource version the instance was last modified at.
+[v1.meta/ObjectMeta](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#objectmeta-v1-meta) - The `metadata.resourceVersion` of a resource instance identifies the resource version the instance was last modified at.
 
-[v1.meta/ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#listmeta-v1-meta) - The `metadata.resourceVersion` of a resource collection (i.e. a list response) identifies the resource version at which the list response was constructed.
+[v1.meta/ListMeta](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#listmeta-v1-meta) - The `metadata.resourceVersion` of a resource collection (i.e. a list response) identifies the resource version at which the list response was constructed.
 
 ### The ResourceVersion Parameter
 

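An illustration of where clients read `resourceVersion` from, per the passage above: it sits in the object's metadata in every API response. The JSON below is a hand-written stand-in, not real API server output:

```shell
# Where resourceVersion lives in an API object: under .metadata.
# The JSON is a minimal hand-written stand-in for a real API response.
cat > /tmp/obj.json <<'EOF'
{"metadata": {"name": "example", "resourceVersion": "248609"}}
EOF
grep -o '"resourceVersion": "[0-9]*"' /tmp/obj.json
```

Clients compare or replay these opaque strings (for example as the `resourceVersion` parameter on list and watch calls) rather than interpreting them numerically.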
@@ -20,35 +20,20 @@ card:
 
 <!-- overview -->
 
-This section covers different options to set up and run Kubernetes.
-
-Different Kubernetes solutions meet different requirements: ease of maintenance, security, control, available resources, and expertise required to operate and manage a cluster.
-
-You can deploy a Kubernetes cluster on a local machine, cloud, on-prem datacenter, or choose a managed Kubernetes cluster. You can also create custom solutions across a wide range of cloud providers, or bare metal environments.
-
-More simply, you can create a Kubernetes cluster in learning and production environments.
-
+This section lists the different ways to set up and run Kubernetes.
+When you install Kubernetes, choose an installation type based on: ease of maintenance, security,
+control, available resources, and expertise required to operate and manage a cluster.
+
+You can deploy a Kubernetes cluster on a local machine, cloud, on-prem datacenter, or choose a managed Kubernetes cluster. There are also custom solutions across a wide range of cloud providers, or bare metal environments.
 
 <!-- body -->
 
 ## Learning environment
 
-If you're learning Kubernetes, use the Docker-based solutions: tools supported by the Kubernetes community, or tools in the ecosystem to set up a Kubernetes cluster on a local machine.
-
-{{< table caption="Local machine solutions table that lists the tools supported by the community and the ecosystem to deploy Kubernetes." >}}
-
-|Community |Ecosystem |
-| ------------ | -------- |
-| [Minikube](/docs/setup/learning-environment/minikube/) | [Docker Desktop](https://www.docker.com/products/docker-desktop)|
-| [kind (Kubernetes IN Docker)](/docs/setup/learning-environment/kind/) | [Minishift](https://docs.okd.io/latest/minishift/)|
-| | [MicroK8s](https://microk8s.io/)|
-
+If you're learning Kubernetes, use the tools supported by the Kubernetes community, or tools in the ecosystem to set up a Kubernetes cluster on a local machine.
 
 ## Production environment
 
 When evaluating a solution for a production environment, consider which aspects of operating a Kubernetes cluster (or _abstractions_) you want to manage yourself or offload to a provider.
 
 [Kubernetes Partners](https://kubernetes.io/partners/#conformance) includes a list of [Certified Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes) providers.
 

@@ -27,7 +27,7 @@ kops is an automated provisioning system:
 
 * You must [install](https://github.com/kubernetes/kops#installing) `kops` on a 64-bit (AMD64 and Intel 64) device architecture.
 
-* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them.
+* You must have an [AWS account](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html), generate [IAM keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) and [configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration) them. The IAM user will need [adequate permissions](https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#setup-iam-user).
 
 

@@ -42,7 +42,7 @@ To follow this guide, you need:
 You also need to use a version of `kubeadm` that can deploy the version
 of Kubernetes that you want to use in your new cluster.
 
-[Kubernetes' version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
+[Kubernetes' version and version skew support policy](/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
 Check that policy to learn about what versions of Kubernetes and `kubeadm`
 are supported. This page is written for Kubernetes {{< param "version" >}}.
 

@ -287,7 +287,7 @@ Several external projects provide Kubernetes Pod networks using CNI, some of whi
support [Network Policy](/docs/concepts/services-networking/networkpolicies/).

See the list of available
[networking and network policy add-ons](https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy).
[networking and network policy add-ons](/docs/concepts/cluster-administration/addons/#networking-and-network-policy).

You can install a Pod network add-on with the following command on the
control-plane node or a node that has the kubeconfig credentials:
@ -644,5 +644,3 @@ supports your chosen platform.
## Troubleshooting {#troubleshooting}

If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
@ -66,7 +66,7 @@ sudo sysctl --system
Make sure that the `br_netfilter` module is loaded before this step. This can be done by running `lsmod | grep br_netfilter`. To load it explicitly call `sudo modprobe br_netfilter`.
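A persistent way to satisfy this requirement is a modules-load configuration file; the path below is a conventional location and an assumption, not something specified on this page:

```
# /etc/modules-load.d/k8s.conf (assumed path)
# Loads br_netfilter at boot so the sysctl settings above can take effect.
br_netfilter
```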
For more details please see the [Network Plugin Requirements](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements) page.
For more details please see the [Network Plugin Requirements](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements) page.

## Check required ports
@ -218,7 +218,7 @@ sudo systemctl enable --now kubelet
You have to do this until SELinux support is improved in the kubelet.

- You can leave SELinux enabled if you know how to configure it, but it may require settings that are not supported by kubeadm.

{{% /tab %}}
{{% tab name="Fedora CoreOS" %}}
Install CNI plugins (required for most pod networks):
@ -310,4 +310,3 @@ If you are running into difficulties with kubeadm, please consult our [troublesh
* [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)
@ -80,13 +80,13 @@ Server-side Apply was promoted to Beta in 1.16, but is now introducing a second
### Extending Ingress with and replacing a deprecated annotation with IngressClass

In Kubernetes 1.18, there are two significant additions to Ingress: A new `pathType` field and a new `IngressClass` resource. The `pathType` field allows specifying how paths should be matched. In addition to the default `ImplementationSpecific` type, there are new `Exact` and `Prefix` path types.

The `IngressClass` resource is used to describe a type of Ingress within a Kubernetes cluster. Ingresses can specify the class they are associated with by using a new `ingressClassName` field on Ingresses. This new resource and field replace the deprecated `kubernetes.io/ingress.class` annotation.
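As a sketch, the two additions can be combined in one manifest; all names, the controller value, and the backend Service here are hypothetical, not taken from this post:

```yaml
# Hypothetical example combining IngressClass, ingressClassName, and pathType.
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb                            # hypothetical class name
spec:
  controller: example.com/ingress-controller   # hypothetical controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: external-lb     # replaces the kubernetes.io/ingress.class annotation
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix            # matches /app and any subpath
        backend:
          serviceName: app-service  # hypothetical Service
          servicePort: 80
```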
### SIG CLI introduces kubectl debug

SIG CLI was debating the need for a debug utility for quite some time already. With the development of [ephemeral containers](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/), it became more obvious how we can support developers with tooling built on top of `kubectl exec`. The addition of the `kubectl debug` [command](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/20190805-kubectl-debug.md) (it is alpha but your feedback is more than welcome), allows developers to easily debug their Pods inside the cluster. We think this addition is invaluable. This command allows one to create a temporary container which runs next to the Pod one is trying to examine, but also attaches to the console for interactive troubleshooting.
SIG CLI was debating the need for a debug utility for quite some time already. With the development of [ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/), it became more obvious how we can support developers with tooling built on top of `kubectl exec`. The addition of the `kubectl debug` [command](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/20190805-kubectl-debug.md) (it is alpha but your feedback is more than welcome), allows developers to easily debug their Pods inside the cluster. We think this addition is invaluable. This command allows one to create a temporary container which runs next to the Pod one is trying to examine, but also attaches to the console for interactive troubleshooting.

### Introducing Windows CSI support alpha for Kubernetes
@ -126,7 +126,7 @@ No Known Issues Reported
#### kubectl:
- `kubectl` and k8s.io/client-go no longer default to a server address of `http://localhost:8080`. If you own one of these legacy clusters, you are *strongly* encouraged to secure your server. If you cannot secure your server, you can set the `$KUBERNETES_MASTER` environment variable to `http://localhost:8080` to continue defaulting the server address. `kubectl` users can also set the server address using the `--server` flag, or in a kubeconfig file specified via `--kubeconfig` or `$KUBECONFIG`. ([#86173](https://github.com/kubernetes/kubernetes/pull/86173), [@soltysh](https://github.com/soltysh)) [SIG API Machinery, CLI and Testing]
- `kubectl run` has removed the previously deprecated generators, along with flags unrelated to creating pods. `kubectl run` now only creates pods. See specific `kubectl create` subcommands to create objects other than pods.
  ([#87077](https://github.com/kubernetes/kubernetes/pull/87077), [@soltysh](https://github.com/soltysh)) [SIG Architecture, CLI and Testing]
- The deprecated command `kubectl rolling-update` has been removed ([#88057](https://github.com/kubernetes/kubernetes/pull/88057), [@julianvmodesto](https://github.com/julianvmodesto)) [SIG Architecture, CLI and Testing]
@ -193,13 +193,13 @@ No Known Issues Reported
- node_memory_working_set_bytes --> node_memory_working_set_bytes
- container_cpu_usage_seconds_total --> container_cpu_usage_seconds
- container_memory_working_set_bytes --> container_memory_working_set_bytes
- scrape_error --> scrape_error
  ([#86282](https://github.com/kubernetes/kubernetes/pull/86282), [@RainbowMango](https://github.com/RainbowMango)) [SIG Node]
- In a future release, kubelet will no longer create the CSI NodePublishVolume target directory, in accordance with the CSI specification. CSI drivers may need to be updated accordingly to properly create and process the target path. ([#75535](https://github.com/kubernetes/kubernetes/issues/75535)) [SIG Storage]

#### kube-proxy:
- `--healthz-port` and `--metrics-port` flags are deprecated, please use `--healthz-bind-address` and `--metrics-bind-address` instead ([#88512](https://github.com/kubernetes/kubernetes/pull/88512), [@SataQiu](https://github.com/SataQiu)) [SIG Network]
- a new `EndpointSliceProxying` feature gate has been added to control the use of EndpointSlices in kube-proxy. The EndpointSlice feature gate that used to control this behavior no longer affects kube-proxy. This feature has been disabled by default. ([#86137](https://github.com/kubernetes/kubernetes/pull/86137), [@robscott](https://github.com/robscott))

#### kubeadm:
- command line option "kubelet-version" for `kubeadm upgrade node` has been deprecated and will be removed in a future release. ([#87942](https://github.com/kubernetes/kubernetes/pull/87942), [@SataQiu](https://github.com/SataQiu)) [SIG Cluster Lifecycle]
@ -245,7 +245,7 @@ No Known Issues Reported
- The alpha feature `ServiceAccountIssuerDiscovery` enables publishing OIDC discovery information and service account token verification keys at `/.well-known/openid-configuration` and `/openid/v1/jwks` endpoints by API servers configured to issue service account tokens. ([#80724](https://github.com/kubernetes/kubernetes/pull/80724), [@cceckman](https://github.com/cceckman)) [SIG API Machinery, Auth, Cluster Lifecycle and Testing]
- CustomResourceDefinition schemas that use `x-kubernetes-list-map-keys` to specify properties that uniquely identify list items must make those properties required or have a default value, to ensure those properties are present for all list items. See https://kubernetes.io/docs/reference/using-api/api-concepts/#merge-strategy for details. ([#88076](https://github.com/kubernetes/kubernetes/pull/88076), [@eloyekunle](https://github.com/eloyekunle)) [SIG API Machinery and Testing]
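  As an illustration of that requirement, a schema fragment of this shape (field names are hypothetical) keys list merging on `port` and marks it required so every list item carries the key:

  ```yaml
  # Hypothetical CRD openAPIV3Schema fragment.
  ports:
    type: array
    x-kubernetes-list-type: map
    x-kubernetes-list-map-keys:
      - port              # items are uniquely identified by this property
    items:
      type: object
      required:
        - port            # must be required (or defaulted) per the change above
      properties:
        port:
          type: integer
        protocol:
          type: string
  ```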
- CustomResourceDefinition schemas that use `x-kubernetes-list-type: map` or `x-kubernetes-list-type: set` now enable validation that the list items in the corresponding custom resources are unique. ([#84920](https://github.com/kubernetes/kubernetes/pull/84920), [@sttts](https://github.com/sttts)) [SIG API Machinery]

#### Configuration file changes:

#### kube-apiserver:
@ -257,7 +257,7 @@ No Known Issues Reported
- Kube-scheduler can run more than one scheduling profile. Given a pod, the profile is selected by using its `.spec.schedulerName`. ([#88285](https://github.com/kubernetes/kubernetes/pull/88285), [@alculquicondor](https://github.com/alculquicondor)) [SIG Apps, Scheduling and Testing]
- Scheduler Extenders can now be configured in the v1alpha2 component config ([#88768](https://github.com/kubernetes/kubernetes/pull/88768), [@damemi](https://github.com/damemi)) [SIG Release, Scheduling and Testing]
- The PostFilter of scheduler framework is renamed to PreScore in kubescheduler.config.k8s.io/v1alpha2. ([#87751](https://github.com/kubernetes/kubernetes/pull/87751), [@skilxn-go](https://github.com/skilxn-go)) [SIG Scheduling and Testing]

#### kube-proxy:
- Added kube-proxy flags `--ipvs-tcp-timeout`, `--ipvs-tcpfin-timeout`, `--ipvs-udp-timeout` to configure IPVS connection timeouts. ([#85517](https://github.com/kubernetes/kubernetes/pull/85517), [@andrewsykim](https://github.com/andrewsykim)) [SIG Cluster Lifecycle and Network]
- Added optional `--detect-local-mode` flag to kube-proxy. Valid values are "ClusterCIDR" (default matching previous behavior) and "NodeCIDR" ([#87748](https://github.com/kubernetes/kubernetes/pull/87748), [@satyasm](https://github.com/satyasm)) [SIG Cluster Lifecycle, Network and Scheduling]
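  As a sketch, these flags could be passed to kube-proxy like any other command-line flags, for example in a DaemonSet container spec; the manifest shape and the timeout values here are illustrative assumptions:

  ```yaml
  # Illustrative kube-proxy container fragment; values are examples only.
  containers:
  - name: kube-proxy
    image: k8s.gcr.io/kube-proxy:v1.18.0
    command:
    - /usr/local/bin/kube-proxy
    - --detect-local-mode=NodeCIDR   # default is ClusterCIDR
    - --ipvs-tcp-timeout=900s        # example IPVS timeouts
    - --ipvs-tcpfin-timeout=120s
    - --ipvs-udp-timeout=300s
  ```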
@ -689,8 +689,8 @@ filename | sha512 hash
- Add `rest_client_rate_limiter_duration_seconds` metric to component-base to track client side rate limiter latency in seconds. Broken down by verb and URL. ([#88134](https://github.com/kubernetes/kubernetes/pull/88134), [@jennybuckley](https://github.com/jennybuckley)) [SIG API Machinery, Cluster Lifecycle and Instrumentation]
- Allow user to specify resource using --filename flag when invoking kubectl exec ([#88460](https://github.com/kubernetes/kubernetes/pull/88460), [@soltysh](https://github.com/soltysh)) [SIG CLI and Testing]
- Apiserver add a new flag --goaway-chance which is the fraction of requests that will be closed gracefully(GOAWAY) to prevent HTTP/2 clients from getting stuck on a single apiserver.
  After the connection closed(received GOAWAY), the client's other in-flight requests won't be affected, and the client will reconnect.
  The flag min value is 0 (off), max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.
  Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. ([#88567](https://github.com/kubernetes/kubernetes/pull/88567), [@answer1991](https://github.com/answer1991)) [SIG API Machinery]
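  For a multi-apiserver cluster behind a load balancer, the flag could be set in the kube-apiserver invocation; this fragment is an illustrative sketch using the recommended starting value:

  ```yaml
  # Illustrative kube-apiserver static Pod fragment.
  spec:
    containers:
    - name: kube-apiserver
      command:
      - kube-apiserver
      - --goaway-chance=0.001   # ~1 in 1000 requests gets GOAWAY; 0 disables
  ```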
- Azure: add support for single stack IPv6 ([#88448](https://github.com/kubernetes/kubernetes/pull/88448), [@aramase](https://github.com/aramase)) [SIG Cloud Provider]
@ -739,7 +739,7 @@ filename | sha512 hash
- Kubelets perform fewer unnecessary pod status update operations on the API server. ([#88591](https://github.com/kubernetes/kubernetes/pull/88591), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Scalability]
- Plugin/PluginConfig and Policy APIs are mutually exclusive when running the scheduler ([#88864](https://github.com/kubernetes/kubernetes/pull/88864), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
- Specifying PluginConfig for the same plugin more than once fails scheduler startup.

  Specifying extenders and configuring .ignoredResources for the NodeResourcesFit plugin fails ([#88870](https://github.com/kubernetes/kubernetes/pull/88870), [@alculquicondor](https://github.com/alculquicondor)) [SIG Scheduling]
- Support TLS Server Name overrides in kubeconfig file and via --tls-server-name in kubectl ([#88769](https://github.com/kubernetes/kubernetes/pull/88769), [@deads2k](https://github.com/deads2k)) [SIG API Machinery, Auth and CLI]
- Terminating a restartPolicy=Never pod no longer has a chance to report the pod succeeded when it actually failed. ([#88440](https://github.com/kubernetes/kubernetes/pull/88440), [@smarterclayton](https://github.com/smarterclayton)) [SIG Node and Testing]
@ -806,18 +806,18 @@ filename | sha512 hash
If you are setting `--redirect-container-streaming=true`, then you must migrate off this configuration. The flag will no longer be able to be enabled starting in v1.20. If you are not setting the flag, no action is necessary. ([#88290](https://github.com/kubernetes/kubernetes/pull/88290), [@tallclair](https://github.com/tallclair)) [SIG API Machinery and Node]

- Yes.

  Feature Name: Support using network resources (VNet, LB, IP, etc.) in different AAD Tenant and Subscription than those for the cluster.

  Changes in Pull Request:

  1. Add properties `networkResourceTenantID` and `networkResourceSubscriptionID` in cloud provider auth config section, which indicates the location of network resources.
  2. Add function `GetMultiTenantServicePrincipalToken` to fetch multi-tenant service principal token, which will be used by Azure VM/VMSS Clients in this feature.
  3. Add function `GetNetworkResourceServicePrincipalToken` to fetch network resource service principal token, which will be used by Azure Network Resource (Load Balancer, Public IP, Route Table, Network Security Group and their sub level resources) Clients in this feature.
  4. Related unit tests.

  None.

  User Documentation: In PR https://github.com/kubernetes-sigs/cloud-provider-azure/pull/301 ([#88384](https://github.com/kubernetes/kubernetes/pull/88384), [@bowen5](https://github.com/bowen5)) [SIG Cloud Provider]

## Changes by Kind
@ -833,8 +833,8 @@ filename | sha512 hash
- Added support for multiple sizes huge pages on a container level ([#84051](https://github.com/kubernetes/kubernetes/pull/84051), [@bart0sh](https://github.com/bart0sh)) [SIG Apps, Node and Storage]
- AppProtocol is a new field on Service and Endpoints resources, enabled with the ServiceAppProtocol feature gate. ([#88503](https://github.com/kubernetes/kubernetes/pull/88503), [@robscott](https://github.com/robscott)) [SIG Apps and Network]
- Fixed missing validation of uniqueness of list items in lists with `x-kubernetes-list-type: map` or `x-kubernetes-list-type: set` in CustomResources. ([#84920](https://github.com/kubernetes/kubernetes/pull/84920), [@sttts](https://github.com/sttts)) [SIG API Machinery]
- Introduces optional --detect-local flag to kube-proxy.
  Currently the only supported value is "cluster-cidr",
  which is the default if not specified. ([#87748](https://github.com/kubernetes/kubernetes/pull/87748), [@satyasm](https://github.com/satyasm)) [SIG Cluster Lifecycle, Network and Scheduling]
- Kube-scheduler can run more than one scheduling profile. Given a pod, the profile is selected by using its `.spec.schedulerName`. ([#88285](https://github.com/kubernetes/kubernetes/pull/88285), [@alculquicondor](https://github.com/alculquicondor)) [SIG Apps, Scheduling and Testing]
- Moving Windows RunAsUserName feature to GA ([#87790](https://github.com/kubernetes/kubernetes/pull/87790), [@marosset](https://github.com/marosset)) [SIG Apps and Windows]
@ -1048,9 +1048,9 @@ filename | sha512 hash
- aggregation api will have alpha support for network proxy ([#87515](https://github.com/kubernetes/kubernetes/pull/87515), [@Sh4d1](https://github.com/Sh4d1)) [SIG API Machinery]
- API request throttling (due to a high rate of requests) is now reported in client-go logs at log level 2. The messages are of the form

  Throttling request took 1.50705208s, request: GET:<URL>

  The presence of these messages may indicate to the administrator the need to tune the cluster accordingly. ([#87740](https://github.com/kubernetes/kubernetes/pull/87740), [@jennybuckley](https://github.com/jennybuckley)) [SIG API Machinery]
- kubeadm: reject a node joining the cluster if a node with the same name already exists ([#81056](https://github.com/kubernetes/kubernetes/pull/81056), [@neolit123](https://github.com/neolit123)) [SIG Cluster Lifecycle]
- disableAvailabilitySetNodes is added to avoid VM list for VMSS clusters. It should only be used when vmType is "vmss" and all the nodes (including masters) are VMSS virtual machines. ([#87685](https://github.com/kubernetes/kubernetes/pull/87685), [@feiskyer](https://github.com/feiskyer)) [SIG Cloud Provider]
@ -146,3 +146,16 @@ Running a cluster with `kubelet` instances that are persistently two minor versi
* they must be upgraded within one minor version of `kube-apiserver` before the control plane can be upgraded
* it increases the likelihood of running `kubelet` versions older than the three maintained minor releases
{{</ warning >}}

### kube-proxy

* `kube-proxy` must be the same minor version as `kubelet` on the node.
* `kube-proxy` must not be newer than `kube-apiserver`.
* `kube-proxy` must be at most two minor versions older than `kube-apiserver`.

Example:

If `kube-proxy` version is **{{< skew latestVersion >}}**:

* `kubelet` version must be at the same minor version as **{{< skew latestVersion >}}**.
* `kube-apiserver` version must be between **{{< skew oldestMinorVersion >}}** and **{{< skew latestVersion >}}**, inclusive.
@ -11,9 +11,5 @@ This section of the Kubernetes documentation contains pages that
show how to do individual tasks. A task page shows how to do a
single thing, typically by giving a short sequence of steps.

## {{% heading "whatsnext" %}}

If you would like to write a task page, see
[Creating a Documentation Pull Request](/docs/home/contribute/create-pull-request/).
@ -17,7 +17,7 @@ DNS resolution process in your cluster.
{{< include "task-tutorial-prereqs.md" >}}

Your cluster must be running the CoreDNS add-on.
[Migrating to CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/coredns/#migrating-to-coredns)
[Migrating to CoreDNS](/docs/tasks/administer-cluster/coredns/#migrating-to-coredns)
explains how to use `kubeadm` to migrate from `kube-dns`.

{{% version-check %}}
@ -117,7 +117,7 @@ You can modify the default CoreDNS behavior by modifying the ConfigMap.
### Configuration of Stub-domain and upstream nameserver using CoreDNS

CoreDNS has the ability to configure stubdomains and upstream nameservers using the [forward plugin](https://coredns.io/plugins/forward/).

#### Example
If a cluster operator has a [Consul](https://www.consul.io/) domain server located at 10.150.0.1, and all Consul names have the suffix .consul.local, then to configure it in CoreDNS, the cluster administrator creates the following stanza in the CoreDNS ConfigMap.
@ -7,10 +7,8 @@ content_type: task
<!-- overview -->
This page shows how to configure a Key Management Service (KMS) provider and plugin to enable secret data encryption.

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

* Kubernetes version 1.10.0 or later is required
@ -19,8 +17,6 @@ This page shows how to configure a Key Management Service (KMS) provider and plu
{{< feature-state for_k8s_version="v1.12" state="beta" >}}

<!-- steps -->

The KMS encryption provider uses an envelope encryption scheme to encrypt data in etcd. The data is encrypted using a data encryption key (DEK); a new DEK is generated for each encryption. The DEKs are encrypted with a key encryption key (KEK) that is stored and managed in a remote KMS. The KMS provider uses gRPC to communicate with a specific KMS
@ -30,10 +26,12 @@ plugin. The KMS plugin, which is implemented as a gRPC server and deployed on th
To configure a KMS provider on the API server, include a provider of type `kms` in the providers array in the encryption configuration file and set the following properties:

* `name`: Display name of the KMS plugin.
* `endpoint`: Listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket.
* `cachesize`: Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap.
* `timeout`: How long should kube-apiserver wait for kms-plugin to respond before returning an error (default is 3 seconds).
* `name`: Display name of the KMS plugin.
* `endpoint`: Listen address of the gRPC server (KMS plugin). The endpoint is a UNIX domain socket.
* `cachesize`: Number of data encryption keys (DEKs) to be cached in the clear.
  When cached, DEKs can be used without another call to the KMS;
  whereas DEKs that are not cached require a call to the KMS to unwrap.
* `timeout`: How long should kube-apiserver wait for kms-plugin to respond before returning an error (default is 3 seconds).

See [Understanding the encryption at rest configuration.](/docs/tasks/administer-cluster/encrypt-data)
@ -57,17 +55,18 @@ Then use the functions and data structures in the stub file to develop the serve
* kms plugin version: `v1beta1`

  In response to procedure call Version, a compatible KMS plugin should return v1beta1 as VersionResponse.version
  In response to procedure call Version, a compatible KMS plugin should return v1beta1 as VersionResponse.version.

* message version: `v1beta1`

  All messages from KMS provider have the version field set to current version v1beta1
  All messages from KMS provider have the version field set to current version v1beta1.

* protocol: UNIX domain socket (`unix`)

  The gRPC server should listen at UNIX domain socket
  The gRPC server should listen at UNIX domain socket.

### Integrating a KMS plugin with the remote KMS

The KMS plugin can communicate with the remote KMS using any protocol supported by the KMS.
All configuration data, including authentication credentials the KMS plugin uses to communicate with the remote KMS,
are stored and managed by the KMS plugin independently. The KMS plugin can encode the ciphertext with additional metadata that may be required before sending it to the KMS for decryption.
@ -80,108 +79,113 @@ To encrypt the data:
1. Create a new encryption configuration file using the appropriate properties for the `kms` provider:

   ```yaml
   apiVersion: apiserver.config.k8s.io/v1
   kind: EncryptionConfiguration
   resources:
     - resources:
         - secrets
       providers:
         - kms:
             name: myKmsPlugin
             endpoint: unix:///tmp/socketfile.sock
             cachesize: 100
             timeout: 3s
         - identity: {}
   ```
2. Set the `--encryption-provider-config` flag on the kube-apiserver to point to the location of the configuration file.
3. Restart your API server.

Note:
The alpha version of the encryption feature prior to 1.13 required a config file with
`kind: EncryptionConfig` and `apiVersion: v1`, and used the `--experimental-encryption-provider-config` flag.

1. Set the `--encryption-provider-config` flag on the kube-apiserver to point to the location of the configuration file.
1. Restart your API server.
## Verifying that the data is encrypted

Data is encrypted when written to etcd. After restarting your kube-apiserver, any newly created or updated secret should be encrypted when stored. To verify, you can use the etcdctl command line program to retrieve the contents of your secret.
Data is encrypted when written to etcd. After restarting your `kube-apiserver`,
any newly created or updated secret should be encrypted when stored. To verify,
you can use the `etcdctl` command line program to retrieve the contents of your secret.

1. Create a new secret called secret1 in the default namespace:

   ```
   kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
   ```

2. Using the etcdctl command line, read that secret out of etcd:
1. Using the etcdctl command line, read that secret out of etcd:

   ```
   ETCDCTL_API=3 etcdctl get /kubernetes.io/secrets/default/secret1 [...] | hexdump -C
   ```

   where `[...]` must be the additional arguments for connecting to the etcd server.

3. Verify the stored secret is prefixed with `k8s:enc:kms:v1:`, which indicates that the `kms` provider has encrypted the resulting data.
1. Verify the stored secret is prefixed with `k8s:enc:kms:v1:`, which indicates that the `kms` provider has encrypted the resulting data.

4. Verify that the secret is correctly decrypted when retrieved via the API:
1. Verify that the secret is correctly decrypted when retrieved via the API:

   ```
   kubectl describe secret secret1 -n default
   ```

   should match `mykey: mydata`
## Ensuring all secrets are encrypted

Because secrets are encrypted on write, performing an update on a secret encrypts that content.

The following command reads all secrets and then updates them to apply server side encryption. If an error occurs due to a conflicting write, retry the command. For larger clusters, you may wish to subdivide the secrets by namespace or script an update.
The following command reads all secrets and then updates them to apply server side encryption.
If an error occurs due to a conflicting write, retry the command.
For larger clusters, you may wish to subdivide the secrets by namespace or script an update.

```
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```

## Switching from a local encryption provider to the KMS provider

To switch from a local encryption provider to the `kms` provider and re-encrypt all of the secrets:
1. Add the `kms` provider as the first entry in the configuration file as shown in the following example.

   ```yaml
   apiVersion: apiserver.config.k8s.io/v1
   kind: EncryptionConfiguration
   resources:
     - resources:
         - secrets
       providers:
         - kms:
             name: myKmsPlugin
             endpoint: unix:///tmp/socketfile.sock
             cachesize: 100
         - aescbc:
             keys:
               - name: key1
                 secret: <BASE 64 ENCODED SECRET>
   ```

2. Restart all kube-apiserver processes.
1. Restart all kube-apiserver processes.

3. Run the following command to force all secrets to be re-encrypted using the `kms` provider.
1. Run the following command to force all secrets to be re-encrypted using the `kms` provider.

   ```
   kubectl get secrets --all-namespaces -o json | kubectl replace -f -
   ```
## Disabling encryption at rest

To disable encryption at rest:

1. Place the `identity` provider as the first entry in the configuration file:

   ```yaml
   apiVersion: apiserver.config.k8s.io/v1
   kind: EncryptionConfiguration
   resources:
     - resources:
         - secrets
       providers:
         - identity: {}
         - kms:
             name: myKmsPlugin
             endpoint: unix:///tmp/socketfile.sock
             cachesize: 100
   ```

1. Restart all kube-apiserver processes.

1. Run the following command to force all secrets to be decrypted.

   ```
   kubectl get secrets --all-namespaces -o json | kubectl replace -f -
   ```

@ -38,7 +38,7 @@ if your cluster is running v1.16 then you can use kubectl v1.15, v1.16
or v1.17; other combinations
[aren't supported](/docs/setup/release/version-skew-policy/#kubectl).

Some of the examples use the command line tool
[jq](https://stedolan.github.io/jq/). You do not need `jq` to complete the task,
because there are manual alternatives.

@ -380,4 +380,4 @@ internal failure, see Kubelet log for details | The kubelet encountered some int

- For more information on configuring the kubelet via a configuration file, see
  [Set kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file).
- See the reference documentation for [`NodeConfigSource`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#nodeconfigsource-v1-core)

@ -30,8 +30,8 @@ a Pod or Container. Security context settings include, but are not limited to:

* readOnlyRootFilesystem: Mounts the container's root filesystem as read-only.

The above bullets are not a complete set of security context settings -- please see
[SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)
for a comprehensive list.

For more information about security mechanisms in Linux, see

@ -59,11 +59,11 @@ Here is a configuration file for a Pod that has a `securityContext` and an `empt
{{< codenew file="pods/security/security-context.yaml" >}}

In the configuration file, the `runAsUser` field specifies that for any Containers in
the Pod, all processes run with user ID 1000. The `runAsGroup` field specifies the primary group ID of 3000 for
all processes within any containers of the Pod. If this field is omitted, the primary group ID of the containers
will be root(0). Any files created will also be owned by user 1000 and group 3000 when `runAsGroup` is specified.
Since `fsGroup` field is specified, all processes of the container are also part of the supplementary group ID 2000.
The owner for volume `/data/demo` and any files created in that volume will be Group ID 2000.

Create the Pod:

@ -138,7 +138,7 @@ $ id
uid=1000 gid=3000 groups=2000
```
You will see that gid is 3000, which is the same as the `runAsGroup` field. If `runAsGroup` was omitted, the gid would
remain as 0(root) and the process will be able to interact with files that are owned by root(0) group and that have
the required group permissions for root(0) group.

Exit your shell:

@ -180,9 +180,9 @@ This is an alpha feature. To use it, enable the [feature gate](/docs/reference/c

{{< note >}}
This field has no effect on ephemeral volume types such as
[`secret`](/docs/concepts/storage/volumes/#secret),
[`configMap`](/docs/concepts/storage/volumes/#configmap),
and [`emptydir`](/docs/concepts/storage/volumes/#emptydir).
{{< /note >}}

@ -423,6 +423,3 @@ kubectl delete pod security-context-demo-4
* [Pod Security Policies](/docs/concepts/policy/pod-security-policy/)
* [AllowPrivilegeEscalation design document](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md)

@ -78,7 +78,7 @@ only the termination message:
## Customizing the termination message

Kubernetes retrieves termination messages from the termination message file
specified in the `terminationMessagePath` field of a Container, which has a default
value of `/dev/termination-log`. By customizing this field, you can tell Kubernetes
to use a different file. Kubernetes uses the contents from the specified file to
populate the Container's status message on both success and failure.

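A minimal sketch of a Pod spec that overrides the default path (the Pod name and the file `/tmp/my-termination-log` are illustrative assumptions, not values from this page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: msg-path-demo   # hypothetical name, for illustration
spec:
  containers:
  - name: msg-path-demo-container
    image: debian
    # the kubelet reads this file instead of /dev/termination-log
    terminationMessagePath: "/tmp/my-termination-log"
```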
@ -41,7 +41,7 @@ The API requires metrics server to be deployed in the cluster. Otherwise it will

### CPU

CPU is reported as the average usage, in [CPU cores](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu), over a period of time. This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The kubelet chooses the window for the rate calculation.

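The rate calculation described above can be sketched with two samples of the cumulative counter (the counter values and the 30-second window below are made-up numbers, not taken from a real kubelet):

```shell
# Two samples of a cumulative CPU-time counter, in nanoseconds,
# taken 30 seconds apart (illustrative values).
t0_ns=120000000000
t1_ns=121500000000
window_s=30
# cores = delta(counter) / (window * 1e9 ns per second)
awk -v a="$t0_ns" -v b="$t1_ns" -v w="$window_s" \
  'BEGIN { printf "%.3f cores\n", (b - a) / (w * 1e9) }'
```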
### Memory

@ -60,5 +60,3 @@ Metrics Server is registered with the main API server through
[Kubernetes aggregator](/docs/concepts/api-extension/apiserver-aggregation/).

Learn more about the metrics server in [the design doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md).

@ -1,52 +0,0 @@
---
title: Example Task Template
reviewers:
- chenopis
content_type: task
toc_hide: true
---

<!-- overview -->

{{< note >}}
Be sure to also [create an entry in the table of contents](/docs/contribute/style/write-new-topic/#placing-your-topic-in-the-table-of-contents) for your new document.
{{< /note >}}

This page shows how to ...

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
* Do this.
* Do this too.

<!-- steps -->

## Doing ...

1. Do this.
1. Do this next. Possibly read this [related explanation](#).

<!-- discussion -->

## Understanding ...
**[Optional Section]**

Here's an interesting thing to know about the steps you just did.

## {{% heading "whatsnext" %}}

**[Optional Section]**

* Learn more about [Writing a New Topic](/docs/home/contribute/write-new-topic/).
* Learn about [Page Content Types - Task](/docs/home/contribute/style/page-content-types/#task).

@ -0,0 +1,78 @@
---
title: Define Dependent Environment Variables
content_type: task
weight: 20
---

<!-- overview -->

This page shows how to define dependent environment variables for a container
in a Kubernetes Pod.

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}}

<!-- steps -->

## Define an environment dependent variable for a container

When you create a Pod, you can set dependent environment variables for the containers that run in the Pod. To set dependent environment variables, you can use `$(VAR_NAME)` in the `value` of `env` in the configuration file.

In this exercise, you create a Pod that runs one container. The configuration
file for the Pod defines a dependent environment variable with common usage. Here is the configuration manifest for the
Pod:

{{< codenew file="pods/inject/dependent-envars.yaml" >}}

1. Create a Pod based on that manifest:

   ```shell
   kubectl apply -f https://k8s.io/examples/pods/inject/dependent-envars.yaml
   ```
   ```
   pod/dependent-envars-demo created
   ```

2. List the running Pods:

   ```shell
   kubectl get pods dependent-envars-demo
   ```
   ```
   NAME                    READY   STATUS    RESTARTS   AGE
   dependent-envars-demo   1/1     Running   0          9s
   ```

3. Check the logs for the container running in your Pod:

   ```shell
   kubectl logs pod/dependent-envars-demo
   ```
   ```
   UNCHANGED_REFERENCE=$(PROTOCOL)://172.17.0.1:80
   SERVICE_ADDRESS=https://172.17.0.1:80
   ESCAPED_REFERENCE=$(PROTOCOL)://172.17.0.1:80
   ```

As shown above, you have defined a correct dependency reference in `SERVICE_ADDRESS`, an incorrect dependency reference in `UNCHANGED_REFERENCE`, and an escaped (skipped) reference in `ESCAPED_REFERENCE`.

When an environment variable is already defined when being referenced,
the reference can be correctly resolved, such as in the `SERVICE_ADDRESS` case.

When the environment variable is undefined or only includes some variables, the undefined environment variable is treated as a normal string, such as `UNCHANGED_REFERENCE`. Note that incorrectly parsed environment variables, in general, will not block the container from starting.

The `$(VAR_NAME)` syntax can be escaped with a double `$`, i.e. `$$(VAR_NAME)`.
Escaped references are never expanded, regardless of whether the referenced variable
is defined or not. This can be seen from the `ESCAPED_REFERENCE` case above.
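For reference, a manifest along the following lines reproduces the three cases above (a minimal sketch; the helper variables and their values are illustrative assumptions, not necessarily the exact contents of `dependent-envars.yaml`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dependent-envars-demo
spec:
  containers:
    - name: dependent-envars-demo
      image: busybox
      command: ["sh", "-c", "env | grep -E 'REFERENCE|ADDRESS' && sleep 3600"]
      env:
        - name: SERVICE_PORT          # assumed helper variable
          value: "80"
        - name: SERVICE_IP            # assumed helper variable
          value: "172.17.0.1"
        - name: UNCHANGED_REFERENCE   # $(PROTOCOL) is not defined yet, so it stays literal
          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
        - name: PROTOCOL
          value: "https"
        - name: SERVICE_ADDRESS       # every referenced variable is defined above
          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
        - name: ESCAPED_REFERENCE     # $$ escapes the reference; it is never expanded
          value: "$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
```

References resolve only against variables defined earlier in the `env` list, which is why the order of `PROTOCOL` relative to the two references matters here.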

## {{% heading "whatsnext" %}}

* Learn more about [environment variables](/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
* See [EnvVarSource](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#envvarsource-v1-core).

@ -140,7 +140,7 @@ verify that the preset has been applied.

## ReplicaSet with Pod spec example

This is an example to show that only Pod specs are modified by Pod presets. Other workload types
like ReplicaSets or Deployments are unaffected.

Here is the manifest for the PodPreset for this example:

@ -290,7 +290,7 @@ kubectl get pod website -o yaml
You can see there is no preset annotation (`podpreset.admission.kubernetes.io`). Seeing no annotation tells you that no preset has been applied to the Pod.

However, the
[PodPreset admission controller](/docs/reference/access-authn-authz/admission-controllers/#podpreset)
logs a warning containing details of the conflict.
You can view the warning using `kubectl`:

@ -301,7 +301,7 @@ kubectl -n kube-system logs -l=component=kube-apiserver
The output should look similar to:

```
W1214 13:00:12.987884 1 admission.go:147] conflict occurred while applying podpresets: allow-database on pod: err: merging volume mounts for allow-database has a conflict on mount path /cache:
v1.VolumeMount{Name:"other-volume", ReadOnly:false, MountPath:"/cache", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}
does not match
core.VolumeMount{Name:"cache-volume", ReadOnly:false, MountPath:"/cache", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}

@ -321,5 +321,3 @@ The output shows that the PodPreset was deleted:
```
podpreset "allow-database" deleted
```

@ -1,7 +1,7 @@
---
title: Manual Rotation of CA Certificates
min-kubernetes-server-version: v1.13
content_type: task
---

<!-- overview -->

@ -206,7 +206,7 @@ To confirm successful installation of both a hypervisor and Minikube, you can ru

{{< note >}}

For setting the `--driver` with `minikube start`, enter the name of the hypervisor you installed in lowercase letters where `<driver_name>` is mentioned below. A full list of `--driver` values is available in [specifying the VM driver documentation](/docs/setup/learning-environment/minikube/#specifying-the-vm-driver).

{{< /note >}}

@ -235,7 +235,6 @@ link target in parentheses. [Link to Kubernetes.io](https://kubernetes.io/) or
You can also use HTML, but it is not preferred.
<a href="https://kubernetes.io/">Link to Kubernetes.io</a>

## Images

To format an image, use similar syntax to [links](#links), but add a leading `!`

@ -1,6 +1,7 @@
---
title: Tutorials
main_menu: true
no_list: true
weight: 60
content_type: concept
---

@ -14,8 +15,6 @@ each of which has a sequence of steps.
Before walking through each tutorial, you may want to bookmark the
[Standardized Glossary](/docs/reference/glossary/) page for later references.

<!-- body -->

## Basics

@ -64,13 +63,8 @@ Before walking through each tutorial, you may want to bookmark the

* [Using Source IP](/docs/tutorials/services/source-ip/)

## {{% heading "whatsnext" %}}

If you would like to write a tutorial, see
[Content Page Types](/docs/contribute/style/page-content-types/)
for information about the tutorial page type.

@ -447,6 +447,4 @@ kubectl delete deployment source-ip-app
## {{% heading "whatsnext" %}}

* Learn more about [connecting applications via services](/docs/concepts/services-networking/connect-applications-service/)
* Read how to [Create an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)

@ -1,13 +1,12 @@
To run the tests for a localization, use the following command:

```
go test k8s.io/website/content/<lang>/examples
```

where `<lang>` is the two character representation of a language. For example:

```
go test k8s.io/website/content/en/examples
```

@ -22,9 +22,7 @@ spec:
        cpu: 500m
      requests:
        cpu: 200m
---
apiVersion: v1
kind: Service
metadata:

@ -36,4 +34,3 @@ spec:
  - port: 80
  selector:
    run: php-apache

@ -28,34 +28,104 @@ import (
    "testing"

    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/apimachinery/pkg/util/validation/field"
    "k8s.io/apimachinery/pkg/util/yaml"
    // "k8s.io/apiserver/pkg/util/feature"
    "k8s.io/kubernetes/pkg/api/legacyscheme"

    "k8s.io/kubernetes/pkg/apis/apps"
    apps_validation "k8s.io/kubernetes/pkg/apis/apps/validation"

    "k8s.io/kubernetes/pkg/apis/autoscaling"
    autoscaling_validation "k8s.io/kubernetes/pkg/apis/autoscaling/validation"

    "k8s.io/kubernetes/pkg/apis/batch"
    batch_validation "k8s.io/kubernetes/pkg/apis/batch/validation"

    api "k8s.io/kubernetes/pkg/apis/core"
    "k8s.io/kubernetes/pkg/apis/core/validation"
    "k8s.io/kubernetes/pkg/apis/extensions"
    ext_validation "k8s.io/kubernetes/pkg/apis/extensions/validation"

    "k8s.io/kubernetes/pkg/apis/networking"
    networking_validation "k8s.io/kubernetes/pkg/apis/networking/validation"

    "k8s.io/kubernetes/pkg/apis/policy"
    policy_validation "k8s.io/kubernetes/pkg/apis/policy/validation"

    "k8s.io/kubernetes/pkg/apis/rbac"
    rbac_validation "k8s.io/kubernetes/pkg/apis/rbac/validation"

    "k8s.io/kubernetes/pkg/apis/settings"
    settings_validation "k8s.io/kubernetes/pkg/apis/settings/validation"

    "k8s.io/kubernetes/pkg/apis/storage"
    storage_validation "k8s.io/kubernetes/pkg/apis/storage/validation"

    "k8s.io/kubernetes/pkg/capabilities"
    "k8s.io/kubernetes/pkg/registry/batch/job"

    // initialize install packages
    _ "k8s.io/kubernetes/pkg/apis/apps/install"
    _ "k8s.io/kubernetes/pkg/apis/autoscaling/install"
    _ "k8s.io/kubernetes/pkg/apis/batch/install"
    _ "k8s.io/kubernetes/pkg/apis/core/install"
    _ "k8s.io/kubernetes/pkg/apis/networking/install"
    _ "k8s.io/kubernetes/pkg/apis/policy/install"
    _ "k8s.io/kubernetes/pkg/apis/rbac/install"
    _ "k8s.io/kubernetes/pkg/apis/settings/install"
    _ "k8s.io/kubernetes/pkg/apis/storage/install"
)

var (
    Groups     map[string]TestGroup
    serializer runtime.SerializerInfo
)

// TestGroup contains GroupVersion to uniquely identify the API
type TestGroup struct {
    externalGroupVersion schema.GroupVersion
}

// GroupVersion makes copy of schema.GroupVersion
func (g TestGroup) GroupVersion() *schema.GroupVersion {
    copyOfGroupVersion := g.externalGroupVersion
    return &copyOfGroupVersion
}

// Codec returns the codec for the API version to test against
func (g TestGroup) Codec() runtime.Codec {
    if serializer.Serializer == nil {
        return legacyscheme.Codecs.LegacyCodec(g.externalGroupVersion)
    }
    return legacyscheme.Codecs.CodecForVersions(serializer.Serializer, legacyscheme.Codecs.UniversalDeserializer(), schema.GroupVersions{g.externalGroupVersion}, nil)
}

func initGroups() {
    Groups = make(map[string]TestGroup)
    groupNames := []string{
        api.GroupName,
        apps.GroupName,
        autoscaling.GroupName,
        batch.GroupName,
        networking.GroupName,
        policy.GroupName,
        rbac.GroupName,
        settings.GroupName,
        storage.GroupName,
    }

    for _, gn := range groupNames {
        versions := legacyscheme.Scheme.PrioritizedVersionsForGroup(gn)
        Groups[gn] = TestGroup{
            externalGroupVersion: schema.GroupVersion{
                Group:   gn,
                Version: versions[0].Version,
            },
        }
    }
}

func getCodecForObject(obj runtime.Object) (runtime.Codec, error) {
    kinds, _, err := legacyscheme.Scheme.ObjectKinds(obj)
    if err != nil {

@ -63,7 +133,7 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) {
    }
    kind := kinds[0]

    for _, group := range Groups {
        if group.GroupVersion().Group != kind.Group {
            continue
        }

@ -85,7 +155,7 @@ func getCodecForObject(obj runtime.Object) (runtime.Codec, error) {

func validateObject(obj runtime.Object) (errors field.ErrorList) {
    // Enable CustomPodDNS for testing
    // feature.DefaultFeatureGate.Set("CustomPodDNS=true")
    switch t := obj.(type) {
    case *api.ConfigMap:
        if t.Namespace == "" {

@ -96,7 +166,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault
        }
        errors = validation.ValidateEndpointsCreate(t)
    case *api.LimitRange:
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault

@ -115,7 +185,10 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault
        }
        opts := validation.PodValidationOptions{
            AllowMultipleHugePageResources: true,
        }
        errors = validation.ValidatePod(t, opts)
    case *api.PodList:
        for i := range t.Items {
            errors = append(errors, validateObject(&t.Items[i])...)

@ -148,7 +221,7 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault
        }
        errors = validation.ValidateService(t, true)
    case *api.ServiceAccount:
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault

@ -189,11 +262,15 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
            t.Namespace = api.NamespaceDefault
        }
        errors = apps_validation.ValidateDeployment(t)
    case *networking.Ingress:
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault
        }
        gv := schema.GroupVersion{
            Group:   networking.GroupName,
            Version: legacyscheme.Scheme.PrioritizedVersionsForGroup(networking.GroupName)[0].Version,
        }
        errors = networking_validation.ValidateIngressCreate(t, gv)
    case *policy.PodSecurityPolicy:
        errors = policy_validation.ValidatePodSecurityPolicy(t)
    case *apps.ReplicaSet:

@ -206,6 +283,11 @@ func validateObject(obj runtime.Object) (errors field.ErrorList) {
            t.Namespace = api.NamespaceDefault
        }
        errors = batch_validation.ValidateCronJob(t)
    case *networking.NetworkPolicy:
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault
        }
        errors = networking_validation.ValidateNetworkPolicy(t)
    case *policy.PodDisruptionBudget:
        if t.Namespace == "" {
            t.Namespace = api.NamespaceDefault

@ -247,10 +329,6 @@ func walkConfigFiles(inDir string, t *testing.T, fn func(name, path string, data
        if err != nil {
            return err
        }
        name := strings.TrimSuffix(file, ext)

        var docs [][]byte

@ -286,11 +364,14 @@ func walkConfigFiles(inDir string, t *testing.T, fn func(name, path string, data
}

func TestExampleObjectSchemas(t *testing.T) {
    initGroups()

    // Please help maintain the alphabetical order in the map
    cases := map[string]map[string][]runtime.Object{
        "admin": {
            "namespace-dev":        {&api.Namespace{}},
            "namespace-prod":       {&api.Namespace{}},
            "snowflake-deployment": {&apps.Deployment{}},
        },
        "admin/cloud": {
            "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &apps.DaemonSet{}},

@ -298,6 +379,7 @@ func TestExampleObjectSchemas(t *testing.T) {
        "admin/dns": {
            "busybox":                   {&api.Pod{}},
            "dns-horizontal-autoscaler": {&apps.Deployment{}},
            "dnsutils":                  {&api.Pod{}},
        },
        "admin/logging": {
            "fluentd-sidecar-config": {&api.ConfigMap{}},

@ -343,21 +425,23 @@ func TestExampleObjectSchemas(t *testing.T) {
            "storagelimits": {&api.LimitRange{}},
        },
        "admin/sched": {
            "my-scheduler": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &rbac.ClusterRoleBinding{}, &apps.Deployment{}},
            "pod1":         {&api.Pod{}},
            "pod2":         {&api.Pod{}},
            "pod3":         {&api.Pod{}},
        },
        "application": {
            "deployment":            {&apps.Deployment{}},
            "deployment-patch":      {&apps.Deployment{}},
            "deployment-retainkeys": {&apps.Deployment{}},
            "deployment-scale":      {&apps.Deployment{}},
            "deployment-update":     {&apps.Deployment{}},
            "nginx-app":             {&api.Service{}, &apps.Deployment{}},
            "nginx-with-request":    {&apps.Deployment{}},
            "php-apache":            {&apps.Deployment{}, &api.Service{}},
            "shell-demo":            {&api.Pod{}},
            "simple_deployment":     {&apps.Deployment{}},
            "update_deployment":     {&apps.Deployment{}},
        },
        "application/cassandra": {
            "cassandra-service": {&api.Service{}},

@ -413,15 +497,17 @@ func TestExampleObjectSchemas(t *testing.T) {
            "configmap-multikeys": {&api.ConfigMap{}},
        },
        "controllers": {
            "daemonset":                {&apps.DaemonSet{}},
            "fluentd-daemonset":        {&apps.DaemonSet{}},
            "fluentd-daemonset-update": {&apps.DaemonSet{}},
            "frontend":                 {&apps.ReplicaSet{}},
            "hpa-rs":                   {&autoscaling.HorizontalPodAutoscaler{}},
            "job":                      {&batch.Job{}},
            "replicaset":               {&apps.ReplicaSet{}},
            "replication":              {&api.ReplicationController{}},
            "replication-nginx-1.14.2": {&api.ReplicationController{}},
            "replication-nginx-1.16.1": {&api.ReplicationController{}},
            "nginx-deployment":         {&apps.Deployment{}},
        },
        "debug": {
            "counter-pod": {&api.Pod{}},

@ -455,6 +541,8 @@ func TestExampleObjectSchemas(t *testing.T) {
            "pod-configmap-volume":                {&api.Pod{}},
            "pod-configmap-volume-specific-key":   {&api.Pod{}},
            "pod-multiple-configmap-env-variable": {&api.Pod{}},
            "pod-nginx-preferred-affinity":        {&api.Pod{}},
            "pod-nginx-required-affinity":         {&api.Pod{}},
            "pod-nginx-specific-node":             {&api.Pod{}},
            "pod-nginx":                           {&api.Pod{}},
            "pod-projected-svc-token":             {&api.Pod{}},

@ -462,6 +550,7 @@ func TestExampleObjectSchemas(t *testing.T) {
            "pod-single-configmap-env-variable": {&api.Pod{}},
            "pod-with-node-affinity":            {&api.Pod{}},
            "pod-with-pod-affinity":             {&api.Pod{}},
            "pod-with-toleration":               {&api.Pod{}},
            "private-reg-pod":                   {&api.Pod{}},
            "share-process-namespace":           {&api.Pod{}},
            "simple-pod":                        {&api.Pod{}},

@ -471,14 +560,17 @@ func TestExampleObjectSchemas(t *testing.T) {
            "redis-pod": {&api.Pod{}},
        },
        "pods/inject": {
            "dapi-envars-container":            {&api.Pod{}},
            "dapi-envars-pod":                  {&api.Pod{}},
            "dapi-volume":                      {&api.Pod{}},
            "dapi-volume-resources":            {&api.Pod{}},
            "envars":                           {&api.Pod{}},
            "pod-multiple-secret-env-variable": {&api.Pod{}},
            "pod-secret-envFrom":               {&api.Pod{}},
            "pod-single-secret-env-variable":   {&api.Pod{}},
            "secret":                           {&api.Secret{}},
            "secret-envars-pod":                {&api.Pod{}},
            "secret-pod":                       {&api.Pod{}},
        },
        "pods/probe": {
            "exec-liveness": {&api.Pod{}},

@ -517,38 +609,53 @@ func TestExampleObjectSchemas(t *testing.T) {
            "redis": {&api.Pod{}},
        },
        "policy": {
            "baseline-psp":   {&policy.PodSecurityPolicy{}},
            "example-psp":    {&policy.PodSecurityPolicy{}},
            "privileged-psp": {&policy.PodSecurityPolicy{}},
            "restricted-psp": {&policy.PodSecurityPolicy{}},
            "zookeeper-pod-disruption-budget-maxunavailable": {&policy.PodDisruptionBudget{}},
            "zookeeper-pod-disruption-budget-minavailable":   {&policy.PodDisruptionBudget{}},
        },
        "service": {
            "nginx-service":         {&api.Service{}},
            "load-balancer-example": {&apps.Deployment{}},
        },
        "service/access": {
            "frontend":          {&api.Service{}, &apps.Deployment{}},
            "hello-application": {&apps.Deployment{}},
            "hello-service":     {&api.Service{}},
            "hello":             {&apps.Deployment{}},
        },
        "service/networking": {
            "curlpod":                            {&apps.Deployment{}},
            "custom-dns":                         {&api.Pod{}},
            "dual-stack-default-svc":             {&api.Service{}},
            "dual-stack-ipv4-svc":                {&api.Service{}},
            "dual-stack-ipv6-lb-svc":             {&api.Service{}},
            "dual-stack-ipv6-svc":                {&api.Service{}},
            "hostaliases-pod":                    {&api.Pod{}},
            "ingress":                            {&networking.Ingress{}},
            "network-policy-allow-all-egress":    {&networking.NetworkPolicy{}},
            "network-policy-allow-all-ingress":   {&networking.NetworkPolicy{}},
            "network-policy-default-deny-egress": {&networking.NetworkPolicy{}},
|
||||
"network-policy-default-deny-ingress": {&networking.NetworkPolicy{}},
|
||||
"network-policy-default-deny-all": {&networking.NetworkPolicy{}},
|
||||
"nginx-policy": {&networking.NetworkPolicy{}},
|
||||
"nginx-secure-app": {&api.Service{}, &apps.Deployment{}},
|
||||
"nginx-svc": {&api.Service{}},
|
||||
"run-my-nginx": {&apps.Deployment{}},
|
||||
},
|
||||
"windows": {
|
||||
"configmap-pod": {&api.ConfigMap{}, &api.Pod{}},
|
||||
"daemonset": {&apps.DaemonSet{}},
|
||||
"deploy-hyperv": {&apps.Deployment{}},
|
||||
"deploy-resource": {&apps.Deployment{}},
|
||||
"emptydir-pod": {&api.Pod{}},
|
||||
"hostpath-volume-pod": {&api.Pod{}},
|
||||
"secret-pod": {&api.Secret{}, &api.Pod{}},
|
||||
"simple-pod": {&api.Pod{}},
|
||||
"configmap-pod": {&api.ConfigMap{}, &api.Pod{}},
|
||||
"daemonset": {&apps.DaemonSet{}},
|
||||
"deploy-hyperv": {&apps.Deployment{}},
|
||||
"deploy-resource": {&apps.Deployment{}},
|
||||
"emptydir-pod": {&api.Pod{}},
|
||||
"hostpath-volume-pod": {&api.Pod{}},
|
||||
"run-as-username-container": {&api.Pod{}},
|
||||
"run-as-username-pod": {&api.Pod{}},
|
||||
"secret-pod": {&api.Secret{}, &api.Pod{}},
|
||||
"simple-pod": {&api.Pod{}},
|
||||
},
|
||||
}
|
||||
|
||||
|
|
|
@@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: dependent-envars-demo
spec:
  containers:
    - name: dependent-envars-demo
      args:
        - while true; do echo -en '\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\n'; sleep 30; done;
      command:
        - sh
        - -c
      image: busybox
      env:
        - name: SERVICE_PORT
          value: "80"
        - name: SERVICE_IP
          value: "172.17.0.1"
        - name: UNCHANGED_REFERENCE
          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
        - name: PROTOCOL
          value: "https"
        - name: SERVICE_ADDRESS
          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
        - name: ESCAPED_REFERENCE
          value: "$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
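For illustration, the values this Pod would print to its log can be sketched locally. This is a hypothetical walk-through of Kubernetes' `$(VAR)` substitution rules, not output captured from a real cluster: `SERVICE_ADDRESS` resolves because `PROTOCOL` is declared before it in the `env` list, `UNCHANGED_REFERENCE` stays literal because `PROTOCOL` is declared after it, and `$$` escapes the reference entirely.

```shell
# Hypothetical sketch of the values the container would print (not real kubectl output).
# PROTOCOL is declared *after* UNCHANGED_REFERENCE, so that reference is left literal.
printf 'UNCHANGED_REFERENCE=%s\n' '$(PROTOCOL)://172.17.0.1:80'
# PROTOCOL is declared *before* SERVICE_ADDRESS, so that reference resolves.
printf 'SERVICE_ADDRESS=%s\n' 'https://172.17.0.1:80'
# The leading $$ escapes the substitution, leaving a literal $(PROTOCOL).
printf 'ESCAPED_REFERENCE=%s\n' '$(PROTOCOL)://172.17.0.1:80'
```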
File diff suppressed because it is too large
@@ -95,8 +95,4 @@ cid: partners

<style>
{{< include "partner-style.css" >}}
</style>

<script>
{{< include "partner-script.js" >}}
</script>
</style>
@@ -5,11 +5,11 @@ feature:
  description: >
    Kubernetes despliega los cambios a tu aplicación o su configuración de forma progresiva mientras monitoriza la salud de la aplicación para asegurarse que no elimina todas tus instancias al mismo tiempo. Si algo sale mal, Kubernetes revertirá el cambio por ti. Aprovéchate del creciente ecosistema de soluciones de despliegue.

content_template: templates/concept
content_type: concept
weight: 30
---

{{% capture overview %}}
<!-- overview -->

Un controlador de _Deployment_ proporciona actualizaciones declarativas para los [Pods](/docs/concepts/workloads/pods/pod/) y los
[ReplicaSets](/docs/concepts/workloads/controllers/replicaset/).

@@ -23,10 +23,10 @@ Todos los casos de uso deberían cubrirse manipulando el objeto Deployment.
Considera la posibilidad de abrir un incidente en el repositorio principal de Kubernetes si tu caso de uso no está soportado por el motivo que sea.
{{< /note >}}

{{% /capture %}}

{{% capture body %}}

<!-- body -->

## Casos de uso

@@ -1107,4 +1107,4 @@ no generará nuevos despliegues mientras esté pausado. Un Deployment se pausa d
de forma similar. Pero se recomienda el uso de Deployments porque se declaran del lado del servidor, y proporcionan características adicionales
como la posibilidad de retroceder a revisiones anteriores incluso después de haber terminado una actualización continua.

{{% /capture %}}
@@ -1,18 +1,15 @@
---
title: Recolección de Basura
content_template: templates/concept
content_type: concept
weight: 60
---

{{% capture overview %}}
<!-- overview -->

El papel del recolector de basura de Kubernetes es el de eliminar determinados objetos
que en algún momento tuvieron un propietario, pero que ahora ya no.

{{% /capture %}}

{{% capture body %}}
<!-- body -->

## Propietarios y subordinados

@@ -168,16 +165,12 @@ Ver [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment

Seguimiento en [#26120](https://github.com/kubernetes/kubernetes/issues/26120)

{{% /capture %}}

{{% capture whatsnext %}}
## {{% heading "whatsnext" %}}

[Documento de Diseño 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)

[Documento de Diseño 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)

{{% /capture %}}
@@ -1,6 +1,6 @@
---
title: Jobs - Ejecución hasta el final
content_template: templates/concept
content_type: concept
feature:
  title: Ejecución en lotes
  description: >

@@ -8,7 +8,7 @@ feature:
weight: 70
---

{{% capture overview %}}
<!-- overview -->

Un Job crea uno o más Pods y se asegura de que un número específico de ellos termina de forma satisfactoria.
Conforme los pods terminan satisfactoriamente, el Job realiza el seguimiento de las ejecuciones satisfactorias.

@@ -21,10 +21,10 @@ como consecuencia de un fallo de hardware o un reinicio en un nodo).

También se puede usar un Job para ejecutar múltiples Pods en paralelo.

{{% /capture %}}

{{% capture body %}}

<!-- body -->

## Ejecutar un Job de ejemplo

@@ -454,4 +454,4 @@ además del control completo de los Pods que se crean y cómo se les asigna trab
Puedes utilizar un [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) para crear un Job que se ejecute en una hora/fecha determinadas, de forma similar
a la herramienta `cron` de Unix.

{{% /capture %}}
@@ -1,19 +1,19 @@
---
title: ReplicaSet
content_template: templates/concept
content_type: concept
weight: 10
---

{{% capture overview %}}
<!-- overview -->

El objeto de un ReplicaSet es el de mantener un conjunto estable de réplicas de Pods ejecutándose
en todo momento. Así, se usa en numerosas ocasiones para garantizar la disponibilidad de un
número específico de Pods idénticos.

{{% /capture %}}

{{% capture body %}}

<!-- body -->

## Cómo funciona un ReplicaSet

@@ -367,4 +367,4 @@ Los dos sirven al mismo propósito, y se comportan de forma similar, excepto por
no soporta los requisitos del selector basado en conjunto, como se describe en la [guía de usuario de etiquetas](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
Por ello, se prefieren los ReplicaSets a los ReplicationControllers.

{{% /capture %}}
@@ -281,7 +281,7 @@ Incluso se plantea excluir el mecanismo de creación de pods a granel ([#170](ht
El ReplicationController está pensado para ser una primitiva componible sobre la que construir. Esperamos que en el futuro se construyan APIs de más alto nivel y/o herramientas sobre él y otras primitivas complementarias para la comodidad del usuario. Las operaciones "macro" que kubectl soporta actualmente (run, scale, rolling-update) son ejemplos de prueba de concepto de ello. Por ejemplo, podríamos imaginar algo como [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) gestionando ReplicationControllers, auto-escaladores, services, políticas de planificación, canarios, etc.

## Obejto API
## Objeto API

El ReplicationController es un recurso de alto nivel en la API REST de Kubernetes. Más detalles acerca del
objeto API se pueden encontrar aquí:
@@ -1,18 +1,18 @@
---
reviewers:
title: Pods
content_template: templates/concept
content_type: concept
weight: 20
---

{{% capture overview %}}
<!-- overview -->

Los _Pods_ son las unidades de computación desplegables más pequeñas que se pueden crear y gestionar en Kubernetes.

{{% /capture %}}

{{% capture body %}}

<!-- body -->

## ¿Qué es un Pod?

@@ -151,4 +151,4 @@ Pod es un recurso de nivel superior en la API REST de Kubernetes.
La definición de [objeto de API Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
describe el objeto en detalle.

{{% /capture %}}
@@ -0,0 +1,207 @@
---
title: Empieza a contribuir
slug: start
content_template: templates/concept
weight: 10
card:
  name: contribute
  weight: 10
---

<!-- overview -->

Si quieres empezar a contribuir a la documentación de Kubernetes, esta página y sus temas enlazados pueden ayudarte a empezar. ¡No necesitas ser desarrollador ni saber de escritura técnica para tener un gran impacto en la documentación y la experiencia de usuario de Kubernetes! Todo lo que necesitas para los temas de esta página es una [cuenta en GitHub](https://github.com/join) y un navegador web.

Si estás buscando información sobre cómo comenzar a contribuir a los repositorios de código de Kubernetes, dirígete a [las guías de la comunidad de Kubernetes](https://github.com/kubernetes/community/blob/master/governance.md).

<!-- body -->

## Lo básico sobre nuestra documentación

La documentación de Kubernetes está escrita usando Markdown, y se procesa y
despliega usando Hugo. El código fuente está en GitHub, accesible en [git.k8s.io/website/](https://github.com/kubernetes/website).
La mayoría de la documentación en castellano está en `/content/es/docs`. Parte de
la documentación de referencia se genera automáticamente con los scripts del
directorio `/update-imported-docs`.
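Como esbozo mínimo para obtener el código fuente y previsualizar el sitio en local (suponiendo que `git` y la versión extendida de Hugo ya están instalados; la URL es la del repositorio oficial citado arriba):

```shell
# Clona el repositorio del sitio web y arranca una vista previa local.
git clone https://github.com/kubernetes/website.git
cd website
# --buildFuture incluye también el contenido fechado en el futuro.
hugo server --buildFuture
```

Por defecto, Hugo sirve la vista previa en `http://localhost:1313`.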
Puedes clasificar incidencias, editar contenido y revisar los cambios de otros, todo ello
desde la página de GitHub. También puedes usar el historial embebido de GitHub y
sus herramientas de búsqueda.

No todas las tareas se pueden realizar desde la interfaz web de GitHub; estas
también se discuten en las guías de contribución a la documentación
[intermedia](/docs/contribute/intermediate/) y
[avanzada](/docs/contribute/advanced/).

### Participar en la documentación de los SIG

La documentación de Kubernetes es mantenida por el {{< glossary_tooltip text="Special Interest Group" term_id="sig" >}} (SIG) denominado SIG Docs. Nos comunicamos usando un canal de Slack, una lista de correo
y una reunión semanal por videoconferencia. Siempre son bienvenidos nuevos
participantes al grupo. Para más información, ver
[Participar en SIG Docs](/docs/contribute/participating/).

### Guías de estilo

Se mantienen unas [guías de estilo](/docs/contribute/style/style-guide/) con la información sobre las elecciones que la comunidad de SIG Docs ha realizado referentes a gramática, sintaxis, formato del código fuente y convenciones tipográficas. Revisa la guía de estilo antes de hacer tu primera contribución y úsala para resolver tus dudas.

Los cambios en la guía de estilo los hace SIG Docs como grupo. Para añadir o proponer un cambio, [añade tus comentarios en la agenda](https://docs.google.com/document/d/1Ds87eRiNZeXwRBEbFr6Z7ukjbTow5RQcNZLaSvWWQsE/edit#) de una de las próximas reuniones de SIG Docs y participa en las discusiones durante la reunión. Revisa el apartado [avanzado](/docs/contribute/advanced/) para más información.

### Plantillas para páginas

Se usan plantillas para las páginas de documentación con el objeto de que todas tengan la misma presentación. Asegúrate de entender cómo funcionan estas plantillas revisando el apartado [Uso de plantillas para páginas](/docs/contribute/style/page-templates/). Si tienes alguna consulta, no dudes en ponerte en contacto con el resto del equipo en Slack.

### Hugo shortcodes

La documentación de Kubernetes se transforma de Markdown a HTML usando Hugo. Conviene conocer los shortcodes estándar de Hugo, así como algunos personalizados para la documentación de Kubernetes. Para más información sobre cómo usarlos, revisa [Hugo shortcodes personalizados](/docs/contribute/style/hugo-shortcodes/).

### Múltiples idiomas

La documentación está disponible en múltiples idiomas en `/content/`. Cada idioma tiene su propia carpeta con el código de dos letras determinado por el [estándar ISO 639-1](https://www.loc.gov/standards/iso639-2/php/code_list.php). Por ejemplo, la documentación original en inglés se encuentra en `/content/en/docs/`.

Para más información sobre cómo contribuir a la documentación en múltiples idiomas, revisa ["Localizar contenido"](/docs/contribute/intermediate#localize-content).

Si te interesa empezar una nueva localización, revisa ["Localization"](/docs/contribute/localization/).

## Registro de incidencias

Cualquier persona con una cuenta de GitHub puede reportar una incidencia en la documentación de Kubernetes. Si ves algo erróneo, aunque no sepas cómo resolverlo, [reporta una incidencia](#cómo-reportar-una-incidencia). La única excepción a la regla es si se trata de un pequeño error que puedes resolver por ti mismo. En ese caso, puedes tratar de [resolverlo](#mejorar-contenido-existente) sin necesidad de reportar una incidencia primero.

### Cómo reportar una incidencia

- **En una página existente**

  Si ves un problema en una página existente de la [documentación de Kubernetes](/docs/), ve al final de la página y haz clic en el botón **Abrir un Issue**. Si no estás autenticado en GitHub, te pedirá que te identifiques y, a continuación, aparecerá un formulario de nueva incidencia con contenido precargado.

  Utilizando formato Markdown, completa todos los detalles que te sea posible. En los lugares donde haya corchetes (`[ ]`), pon una `x` entre los corchetes para marcar la opción elegida. Si tienes una posible solución al problema, añádela.

- **Solicitar una nueva página**

  Si crees que un contenido debería añadirse, pero no estás seguro de dónde debería ir o si crees que no encaja en las páginas que ya existen, puedes crear una incidencia. Puedes elegir una página existente donde pienses que pudiera encajar y crear la incidencia desde esa página, o ir directamente a [https://github.com/kubernetes/website/issues/new/](https://github.com/kubernetes/website/issues/new/) y crearla desde allí.

### Cómo reportar correctamente incidencias

Para asegurarte de que tu incidencia se entiende y se puede procesar, ten en cuenta esta guía:

- Usa la plantilla de incidencia y aporta detalles: cuantos más, mejor.
- Explica de forma clara el impacto de la incidencia en los usuarios.
- Mantén el alcance de cada incidencia en una cantidad de trabajo razonable. Para problemas con un alcance muy amplio, divídelos en incidencias más pequeñas.

  Por ejemplo, "Arreglar la documentación de seguridad" no es una incidencia procesable, pero "Añadir detalles en el tema 'Restringir acceso a la red'" sí lo es.
- Si la incidencia está relacionada con otra o con una petición de cambio, puedes referirte a ella tanto por su URL como por su número precedido del carácter `#`. Por ejemplo, `Introducido por #987654`.
- Sé respetuoso y evita desahogarte. Por ejemplo, "La documentación sobre X apesta" no es útil ni una crítica constructiva. El [Código de conducta](/community/code-of-conduct/) también se aplica a las interacciones en los repositorios de Kubernetes en GitHub.

## Participa en las discusiones de SIG Docs

El equipo de SIG Docs se comunica por las siguientes vías:

- [Únete al Slack de Kubernetes](http://slack.k8s.io/) y entra en el canal `#sig-docs`, o en `#kubernetes-docs-es` para la documentación en castellano. En Slack discutimos las incidencias de documentación en tiempo real, nos coordinamos y hablamos de temas relacionados con la documentación. ¡No olvides presentarte cuando entres en el canal para que podamos saber un poco más de ti!
- [Únete a la lista de correo `kubernetes-sig-docs`](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), donde tienen lugar las discusiones más amplias y se registran las decisiones oficiales.
- Participa en la [videoconferencia semanal de SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs), que se anuncia en el canal de Slack y en la lista de correo. Actualmente esta reunión tiene lugar en Zoom, por lo que necesitas descargar el [cliente Zoom](https://zoom.us/download) o llamar usando un teléfono.

{{< note >}}
Puedes consultar la reunión semanal de SIG Docs en el [Calendario de reuniones de la comunidad Kubernetes](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles).
{{< /note >}}

## Mejorar contenido existente

Para mejorar contenido existente, crea una _pull request_ (PR) después de crear un _fork_. Estos términos son [específicos de GitHub](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/). No es necesario conocerlo todo sobre ellos porque todo se realiza a través del navegador web. Cuando continúes con la [guía de contribución de documentación intermedia](/docs/contribute/intermediate/), necesitarás algo más de conocimiento de la metodología de Git.

{{< note >}}
**Desarrolladores de código de Kubernetes**: si estás documentando una nueva característica para una versión futura de Kubernetes, el proceso es un poco diferente. Mira el proceso y las pautas en [Documentar una característica](/docs/contribute/intermediate/#sig-members-documenting-new-features), así como la información sobre plazos.
{{< /note >}}

### Firma el CNCF CLA {#firma-el-cla}

Antes de poder contribuir código o documentación a Kubernetes, **es necesario** leer la [Guía del contribuidor](https://github.com/kubernetes/community/blob/master/contributors/guide/README.md) y [firmar el `Contributor License Agreement` (CLA)](https://github.com/kubernetes/community/blob/master/CLA.md). No te preocupes, ¡no lleva mucho tiempo!

### Busca algo con lo que trabajar

Si ves algo que quieras arreglar directamente, simplemente sigue las instrucciones de más abajo. No es necesario que [reportes una incidencia](#registro-de-incidencias) (aunque puedes hacerlo de todas formas).

Si prefieres empezar por buscar una incidencia existente en la que trabajar, ve a [https://github.com/kubernetes/website/issues](https://github.com/kubernetes/website/issues) y busca una incidencia con la etiqueta `good first issue` (puedes usar [este](https://github.com/kubernetes/website/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) atajo). Lee los comentarios y asegúrate de que no hay una petición de cambio abierta para esa incidencia y de que nadie ha dejado un comentario indicando que está trabajando en ella recientemente (3 días es una buena regla). Deja un comentario indicando que te gustaría trabajar en la incidencia.

### Elige qué rama de Git usar

El aspecto más importante a la hora de mandar una petición de cambio es qué rama usar como base para trabajar. Usa estas pautas para tomar la decisión:

- Utiliza `master` para arreglar problemas en contenido ya publicado o para hacer mejoras en contenido ya existente.
- Utiliza una rama de versión (como `dev-{{< release-branch >}}` para la versión {{< release-branch >}}) para documentar características o cambios de futuras versiones que todavía no se han publicado.
- Utiliza una rama de característica que haya sido acordada por SIG Docs para colaborar en grandes mejoras o cambios en la documentación existente, incluida la reorganización de contenido o cambios en la apariencia del sitio web.

Si todavía no estás seguro de qué rama utilizar, pregunta en `#sig-docs` en Slack o asiste a una reunión semanal de SIG Docs para aclarar tus dudas.
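Como esbozo de este flujo, la creación de una rama de trabajo puede ensayarse incluso en un repositorio local temporal (el nombre de la rama es un ejemplo hipotético; en el repositorio real la rama base sería `master` o una rama `dev-*`, según las pautas anteriores):

```shell
# Demostración autocontenida en un repositorio temporal (nombre de rama ilustrativo).
cd "$(mktemp -d)"
git init -q
# Commit vacío inicial para que exista la rama por defecto.
git -c user.name=ejemplo -c user.email=ejemplo@example.com \
    commit -q --allow-empty -m "commit inicial"
# Crea la rama de trabajo a partir de la rama actual.
git checkout -q -b arregla-enlace-docs
git branch --show-current   # imprime: arregla-enlace-docs
```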
|
||||
### Enviar una petición de cambio
|
||||
|
||||
Sigue estos pasos para enviar una petición de cambio y mejorar la documentación de Kubernetes.
|
||||
|
||||
1. En la página que hayas visto una incidencia haz clic en el icono del lápiz arriba a la derecha.
|
||||
Una nueva página de GitHub aparecerá con algunos textos de ayuda.
|
||||
2. Si nunca has creado un copia del repositorio de documentación de Kubernetes te pedirá que lo haga.
|
||||
Crea la copia bajo tu usuario de GitHub en lugar de otra organización de la que seas miembro. La copia generalmente tiene una URL como `https://github.com/<username>/website`, a menos que ya tengas un repositorio con un nombre en conflicto con este.
|
||||
|
||||
La razón por la que se pide crear una copia del repositorio es porque no tienes permisos para subir cambios directamente a rama en el repositorio original de Kubernetes.
|
||||
3. Aparecerá el editor Markdown de GitHub con el fichero Markdown fuente cargado. Realiza tus cambios. Debajo del editor completa el formulario **Propose file change**. El primer campo es el resumen del mensaje de tu commit y no debe ser más largo de 50 caracteres. El segundo campo es opcional, pero puede incluir más información y detalles si procede.
|
||||
|
||||
{{< note >}}
|
||||
No incluyas referencias a otras incidencias o peticiones de cambio de GitHub en el mensaje de los commits. Esto lo puedes añadir después en la descripción de la petición de cambio.
|
||||
{{< /note >}}
|
||||
|
||||
Haz clic en **Propose file change**. El cambio se guarda como un commit en una nueva rama de tu copia, automáticamente se le asignará un nombre estilo `patch-1`.
|
||||
|
||||
4. La siguiente pantalla resume los cambios que has hecho pudiendo comparar la nueva rama (la **head fork** y cajas de selección **compare**) con el estado actual del **base fork** y la rama **base** (`master` en el repositorio por defecto `kubernetes/website`). Puedes cambiar cualquiera de las cajas de selección, pero no lo hagas ahora. Hecha un vistazo a las distintas vistas en la parte baja de la pantalla y si todo parece correcto haz clic en **Create pull request**.
|
||||
|
||||
{{< note >}}
|
||||
Si no deseas crear una petición de cambio puedes hacerlo más delante, solo basta con navegar a la URL principal del repositorio de Kubernetes website o de tu copia. La página de GitHub te mostrará un mensaje para crear una petición de cambio si detecta que has subido una nueva rama a tu repositorio copia.
|
||||
{{< /note >}}
|
||||
|
||||
5. La pantalla **Open a pull request** aparece. El tema de una petición de cambio es el resumen del commit, pero puedes cambiarlo si lo necesitas. El cuerpo está pre-cargado con el mensaje del commit extendido (si lo hay) junto con una plantilla. Lee la plantilla y llena los detalles requeridos, entonces borra el texto extra de la plantilla. Deja la casilla **Allow edits from maintainers** seleccionada. Haz clic en **Create pull request**.
|
||||
|
||||
Enhorabuena! Tu petición de cambio está disponible en [Pull requests](https://github.com/kubernetes/website/pulls).
|
||||
|
||||
Después de unos minutos ya podrás pre-visualizar la página con los cambios de tu PR aplicados. Ve a la pestaña de **Conversation** en tu PR y haz clic en el enlace **Details** para ver el test `deploy/netlify`, localizado casi al final de la página. Se abrirá en la misma ventana del navegado por defecto.
|
||||
|
||||
6. Espera una revisión. Generalmente `k8s-ci-robot` sugiere unos revisores. Si un revisor te pide que hagas cambios puedes ir a la pestaña **FilesChanged** y hacer clic en el icono del lápiz para hacer tus cambios en cualquiera de los ficheros en la petición de cambio. Cuando guardes los cambios se creará un commit en la rama asociada a la petición de cambio.
|
||||
|
||||
7. Si tu cambio es aceptado, un revisor fusionará tu petición de cambio y tus cambios serán visibles en pocos minutos en la web de [kubernetes.io](https://kubernetes.io).
|
||||
|
||||
Esta es solo una forma de mandar una petición de cambio. Si eres un usuario de Git y GitHub avanzado puedes usar una aplicación GUI local o la linea de comandos con el cliente Git en lugar de usar la UI de GitHub. Algunos conceptos básicos sobre el uso de la línea de comandos Git
|
||||
cliente se discuten en la guía de documentación [intermedia](/docs/contribute/intermediate/).
|
||||
|
||||
## Revisar peticiones de cambio de documentación
|
||||
|
||||
Las personas que aún no son aprobadores o revisores todavía pueden revisar peticiones de cambio. Las revisiones no se consideran "vinculantes", lo que significa que su revisión por sí sola no hará que se fusionen las peticiones de cambio. Sin embargo, aún puede ser útil. Incluso si no deja ningún comentario de revisión, puede tener una idea de las convenciones y etiquetas en una petición de cambio y acostumbrarse al flujo de trabajo.
|
||||
|
||||
1. Ve a [https://github.com/kubernetes/website/pulls](https://github.com/kubernetes/website/pulls). Desde ahí podrás ver una lista de todas las peticiones de cambio en la documentación del website de Kubernetes.
|
||||
|
||||
2. Por defecto el único filtro que se aplica es `open`, por lo que no puedes ver las que ya se han cerrado o fusionado. Es una buena idea aplicar el filtro `cncf-cla: yes` y para tu primera revisión es una buena idea añadir `size/S` o `size/XS`. La etiqueta `size` se aplica automáticamente basada en el número de lineas modificadas en la PR. Puedes aplicar filtros con las cajas de selección al principio de la página, o usar [estos atajos](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+yes%22+label%3Asize%2FS) solo para PRs pequeñas. Los filtros son aplicados con `AND` todos juntos, por lo que no se puede buscar a la vez `size/S` y `size/XS` en la misma consulta.
|
||||
|
||||
3. Ve a la pestaña **Files changed**. Mira los cambios introducidos en la PR, y si aplica, mira también los incidentes enlazados. Si ves un algún problema o posibilidad de mejora pasa el cursor sobre la línea y haz click en el símbolo `+` que aparece.
|
||||
|
||||
Puedes entonces dejar un comentario seleccionando **Add single comment** o **Start a review**. Normalmente empezar una revisión es la forma recomendada, ya que te permite hacer varios comentarios y avisar a propietario de la PR solo cuando tu revisión este completada, en lugar de notificar cada comentario.
|
||||
|
||||
4. Cuando hayas acabado de revisar, haz clic en **Review changes** en la parte superior de la página. Puedes ver un resumen de la revisión y puedes elegir entre comentar, aprobar o solicitar cambios. Los nuevos contribuidores siempre deben elegir **Comment**.
|
||||
|
||||
Gracias por revisar una petición de cambio! Cuando eres nuevo en un proyecto es buena idea solicitar comentarios y opiniones en las revisiones de una petición de cambio. Otro buen lugar para solicitar comentarios es en el canal de Slack `#sig-docs`.
|
||||
|
||||
## Escribir un artículo en el blog
|
||||
|
||||
Cualquiera puede escribir un articulo en el blog y enviarlo para revisión. Los artículos del blog no deben ser comerciales y deben consistir en contenido que se pueda aplicar de la forma más amplia posible a la comunidad de Kubernetes.
|
||||
|
||||
Para enviar un artículo al blog puedes hacerlo también usando el formulario [Kubernetes blog submission form](https://docs.google.com/forms/d/e/1FAIpQLSch_phFYMTYlrTDuYziURP6nLMijoXx_f7sLABEU5gWBtxJHQ/viewform), o puedes seguir los siguientes pasos.
|
||||
|
||||
1. [Firma el CLA](#sign-the-cla) si no lo has hecho ya.
|
||||
2. Revisa el formato Markdown en los artículos del blog existentes en el [repositorio website](https://github.com/kubernetes/website/tree/master/content/en/blog/_posts).
|
||||
3. Escribe tu artículo usando el editor de texto que prefieras.
|
||||
4. En el mismo enlace que el paso 2 haz clic en botón **Create new file**. Pega el contenido de tu editor. Nombra el fichero para que coincida con el título del artículo, pero no pongas la fecha en el nombre. Los revisores del blog trabajarán contigo en el nombre final del fichero y la fecha en la que será publicado.
|
||||
5. Cuando guardes el fichero, GitHub te guiará en el proceso de petición de cambio.
|
||||
6. Un revisor de artículos del blog revisará tu envío y trabajará contigo aportando comentarios y los detalles finales. Cuando el artículo sea aprobado, se establecerá una fecha de publicación.
|
||||
|
||||
## Envía un caso de estudio
|
||||
|
||||
Un caso de estudio destaca cómo las organizaciones usan Kubernetes para resolver problemas del mundo real. Estos casos se escriben en colaboración con el equipo de marketing de Kubernetes, que está dirigido por la {{< glossary_tooltip text="CNCF" term_id="cncf" >}}.
|
||||
|
||||
Revisa el código fuente para ver los [casos de estudio existentes](https://github.com/kubernetes/website/tree/master/content/en/case-studies). Usa el formulario [Kubernetes case study submission form](https://www.cncf.io/people/end-user-community/) para enviar tu propuesta.
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
Cuando entiendas mejor las tareas mostradas en este tema y quieras formar parte del equipo de documentación de Kubernetes de una forma más activa lee la [guía intermedia de contribución](/docs/contribute/intermediate/).
|
|
@ -71,17 +71,14 @@ Realiza tareas comunes de gestión de un DaemonSet, como llevar a cabo una actua
|
|||
|
||||
## Gestionar GPUs
|
||||
|
||||
COnfigura y planifica GPUs de NVIDIA para hacerlas disponibles como recursos a los nodos de un clúster.
|
||||
Configura y planifica GPUs de NVIDIA para hacerlas disponibles como recursos a los nodos de un clúster.
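A modo de ilustración, un Pod solicita una GPU a través del recurso extendido `nvidia.com/gpu` (boceto mínimo; asume que el device plugin de NVIDIA ya está instalado en el clúster, y el nombre del Pod y la imagen son solo de ejemplo):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod            # nombre hipotético
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:10.0-base   # imagen de ejemplo
    resources:
      limits:
        nvidia.com/gpu: 1  # solicita 1 GPU
```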
|
||||
|
||||
## Gestionar HugePages
|
||||
|
||||
Configura y planifica HugePages como un recurso planificado en un clúster.
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}
|
||||
|
||||
|
||||
Si quisieras escribir una página de Tareas, echa un vistazo a
|
||||
[Crear una Petición de Subida de Documentación](/docs/home/contribute/create-pull-request/).
|
||||
|
||||
|
|
|
@ -63,10 +63,8 @@ du tableau de PodCondition a six champs possibles :
|
|||
* `PodScheduled` : le Pod a été affecté à un nœud ;
|
||||
* `Ready` : le Pod est prêt à servir des requêtes et doit être rajouté aux équilibreurs
|
||||
de charge de tous les Services correspondants ;
|
||||
* `Initialized` : tous les [init containers](/docs/concepts/workloads/pods/init-containers)
|
||||
* `Initialized` : tous les [init containers](/fr/docs/concepts/workloads/pods/init-containers)
|
||||
ont démarré correctement ;
|
||||
* `Unschedulable` : le scheduler ne peut pas affecter le Pod pour l'instant, par exemple
|
||||
par manque de ressources ou en raison d'autres contraintes ;
|
||||
* `ContainersReady` : tous les conteneurs du Pod sont prêts.
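À titre d'illustration (extrait hypothétique), ces conditions apparaissent dans le champ `status.conditions` d'un Pod, par exemple via `kubectl get pod <nom> -o yaml` :

```yaml
status:
  conditions:
  - type: PodScheduled
    status: "True"
  - type: Initialized
    status: "True"
  - type: ContainersReady
    status: "True"
  - type: Ready
    status: "True"
```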
|
||||
|
||||
|
||||
|
@ -98,12 +96,12 @@ Chaque sonde a un résultat parmi ces trois :
|
|||
* Failure: Le Conteneur a échoué au diagnostic.
|
||||
* Unknown: L'exécution du diagnostic a échoué, et donc aucune action ne peut être prise.
|
||||
|
||||
kubelet peut optionnellement exécuter et réagir à deux types de sondes sur des conteneurs
|
||||
kubelet peut optionnellement exécuter et réagir à trois types de sondes sur des conteneurs
|
||||
en cours d'exécution :
|
||||
|
||||
* `livenessProbe` : Indique si le Conteneur est en cours d'exécution. Si
|
||||
la liveness probe échoue, kubelet tue le Conteneur et le Conteneur
|
||||
est soumis à sa [politique de redémarrage](#restart-policy) (restart policy).
|
||||
est soumis à sa [politique de redémarrage](#politique-de-redemarrage) (restart policy).
|
||||
Si un Conteneur ne fournit pas de liveness probe, l'état par défaut est `Success`.
|
||||
|
||||
* `readinessProbe` : Indique si le Conteneur est prêt à servir des requêtes.
|
||||
|
@ -113,7 +111,13 @@ en cours d'exécution :
|
|||
`Failure`. Si le Conteneur ne fournit pas de readiness probe, l'état par
|
||||
défaut est `Success`.
|
||||
|
||||
### Quand devez-vous utiliser une liveness ou une readiness probe ?
|
||||
* `startupProbe`: Indique si l'application à l'intérieur du conteneur a démarré.
|
||||
Toutes les autres probes sont désactivées si une startup probe est fournie,
|
||||
jusqu'à ce qu'elle réponde avec succès. Si la startup probe échoue, le kubelet
|
||||
tue le conteneur, et le conteneur est assujetti à sa [politique de redémarrage](#politique-de-redemarrage).
|
||||
Si un conteneur ne fournit pas de startup probe, l'état par défaut est `Success`.
|
||||
|
||||
### Quand devez-vous utiliser une liveness probe ?
|
||||
|
||||
Si le process de votre Conteneur est capable de crasher de lui-même lorsqu'il
|
||||
rencontre un problème ou devient inopérant, vous n'avez pas forcément besoin
|
||||
|
@ -124,6 +128,10 @@ Si vous désirez que votre Conteneur soit tué et redémarré si une sonde écho
|
|||
spécifiez une liveness probe et indiquez une valeur pour `restartPolicy` à Always
|
||||
ou OnFailure.
|
||||
|
||||
### Quand devez-vous utiliser une readiness probe ?
|
||||
|
||||
{{< feature-state for_k8s_version="v1.0" state="stable" >}}
|
||||
|
||||
Si vous voulez commencer à envoyer du trafic à un Pod seulement lorsqu'une sonde
|
||||
réussit, spécifiez une readiness probe. Dans ce cas, la readiness probe peut être
|
||||
la même que la liveness probe, mais l'existence de la readiness probe dans la spec
|
||||
|
@ -142,8 +150,16 @@ de sa suppression, le Pod se met automatiquement dans un état non prêt, que la
|
|||
readiness probe existe ou non.
|
||||
Le Pod reste dans le statut non prêt le temps que les Conteneurs du Pod s'arrêtent.
|
||||
|
||||
Pour plus d'informations sur la manière de mettre en place une liveness ou readiness probe,
|
||||
voir [Configurer des Liveness et Readiness Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
|
||||
### Quand devez-vous utiliser une startup probe ?
|
||||
|
||||
{{< feature-state for_k8s_version="v1.16" state="alpha" >}}
|
||||
|
||||
Si votre conteneur a habituellement besoin de plus de `initialDelaySeconds + failureThreshold × periodSeconds` pour démarrer,
|
||||
vous devriez spécifier une startup probe qui vérifie le même point de terminaison que la liveness probe. La valeur par défaut pour `periodSeconds` est 30s.
|
||||
Vous devriez alors mettre sa valeur `failureThreshold` suffisamment haute pour permettre au conteneur de démarrer, sans changer les valeurs par défaut de la liveness probe. Ceci aide à se protéger de deadlocks.
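Par exemple (valeurs indicatives), cette startup probe laisse jusqu'à 30 × 10 = 300 secondes au conteneur pour démarrer, sans toucher aux valeurs par défaut de la liveness probe :

```yaml
startupProbe:
  httpGet:
    path: /healthz   # même point de terminaison que la liveness probe
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
```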
|
||||
|
||||
Pour plus d'informations sur la manière de mettre en place une liveness, readiness ou startup probe,
|
||||
voir [Configurer des Liveness, Readiness et Startup Probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
|
||||
|
||||
## Statut d'un Pod et d'un Conteneur
|
||||
|
||||
|
@ -172,9 +188,7 @@ d'informations.
|
|||
...
|
||||
```
|
||||
|
||||
* `Running` : Indique que le conteneur s'exécute sans problème. Une fois qu'un centeneur est
|
||||
dans l'état Running, le hook `postStart` est exécuté (s'il existe). Cet état affiche aussi
|
||||
le moment auquel le conteneur est entré dans l'état Running.
|
||||
* `Running` : Indique que le conteneur s'exécute sans problème. Le hook `postStart` (s'il existe) est exécuté avant que le conteneur entre dans l'état Running. Cet état affiche aussi le moment auquel le conteneur est entré dans l'état Running.
|
||||
|
||||
```yaml
|
||||
...
|
||||
|
@ -199,27 +213,30 @@ dans l'état Terminated, le hook `preStop` est exécuté (s'il existe).
|
|||
...
|
||||
```
|
||||
|
||||
## Pod readiness gate
|
||||
## Pod readiness {#pod-readiness-gate}
|
||||
|
||||
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
|
||||
|
||||
Afin d'étendre la readiness d'un Pod en autorisant l'injection de données
|
||||
supplémentaires ou des signaux dans `PodStatus`, Kubernetes 1.11 a introduit
|
||||
une fonctionnalité appelée [Pod ready++](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md).
|
||||
Vous pouvez utiliser le nouveau champ `ReadinessGate` dans `PodSpec`
|
||||
pour spécifier des conditions additionnelles à évaluer pour la readiness d'un Pod.
|
||||
Si Kubernetes ne peut pas trouver une telle condition dans le champ `status.conditions`
|
||||
d'un Pod, le statut de la condition est "`False`" par défaut. Voici un exemple :
|
||||
Votre application peut injecter des données dans `PodStatus`.
|
||||
|
||||
_Pod readiness_. Pour utiliser cette fonctionnalité, remplissez `readinessGates` dans le PodSpec avec
|
||||
une liste de conditions supplémentaires que le kubelet évalue pour la disponibilité du Pod.
|
||||
|
||||
Les Readiness gates sont déterminées par l'état courant des champs `status.condition` du Pod.
|
||||
Si Kubernetes ne peut pas trouver une telle condition dans le champ `status.conditions` d'un Pod, le statut de la condition
|
||||
est mis par défaut à "`False`".
|
||||
|
||||
Voici un exemple :
|
||||
|
||||
```yaml
|
||||
Kind: Pod
|
||||
kind: Pod
|
||||
...
|
||||
spec:
|
||||
readinessGates:
|
||||
- conditionType: "www.example.com/feature-1"
|
||||
status:
|
||||
conditions:
|
||||
- type: Ready # ceci est une builtin PodCondition
|
||||
- type: Ready # une PodCondition intégrée
|
||||
status: "False"
|
||||
lastProbeTime: null
|
||||
lastTransitionTime: 2018-01-01T00:00:00Z
|
||||
|
@ -233,27 +250,26 @@ status:
|
|||
...
|
||||
```
|
||||
|
||||
Les nouvelles conditions du Pod doivent être conformes au [format des étiquettes](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) de Kubernetes.
|
||||
La commande `kubectl patch` ne prenant pas encore en charge la modifictaion du statut
|
||||
des objets, les nouvelles conditions du Pod doivent être injectées avec
|
||||
l'action `PATCH` en utilisant une des [bibliothèques KubeClient](/docs/reference/using-api/client-libraries/).
|
||||
Les conditions du Pod que vous ajoutez doivent avoir des noms qui sont conformes au [format des étiquettes](/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) de Kubernetes.
|
||||
|
||||
Avec l'introduction de nouvelles conditions d'un Pod, un Pod est considéré comme prêt
|
||||
**seulement** lorsque les deux déclarations suivantes sont vraies :
|
||||
### Statut de la disponibilité d'un Pod {#statut-pod-disponibilité}
|
||||
|
||||
La commande `kubectl patch` ne peut pas patcher le statut d'un objet.
|
||||
Pour renseigner ces `status.conditions` pour le pod, les applications et
|
||||
{{< glossary_tooltip term_id="operator-pattern" text="operators">}} doivent utiliser l'action `PATCH`.
|
||||
Vous pouvez utiliser une [bibliothèque client Kubernetes](/docs/reference/using-api/client-libraries/) pour
|
||||
écrire du code qui renseigne les conditions particulières pour la disponibilité d'un Pod.
|
||||
|
||||
Pour un Pod utilisant des conditions particulières, ce Pod est considéré prêt **seulement**
|
||||
lorsque les deux déclarations ci-dessous sont vraies :
|
||||
|
||||
* Tous les conteneurs du Pod sont prêts.
|
||||
* Toutes les conditions spécifiées dans `ReadinessGates` sont à "`True`".
|
||||
* Toutes les conditions spécifiées dans `ReadinessGates` sont `True`.
|
||||
|
||||
Pour faciliter le changement de l'évaluation de la readiness d'un Pod,
|
||||
une nouvelle condition de Pod `ContainersReady` est introduite pour capturer
|
||||
l'ancienne condition `Ready` d'un Pod.
|
||||
Lorsque les conteneurs d'un Pod sont prêts mais qu'au moins une condition particulière
|
||||
est manquante ou `False`, le kubelet renseigne la condition du Pod à `ContainersReady`.
|
||||
|
||||
Avec K8s 1.11, en tant que fonctionnalité alpha, "Pod Ready++" doit être explicitement activé en mettant la [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `PodReadinessGates`
|
||||
à true.
|
||||
|
||||
Avec K8s 1.12, la fonctionnalité est activée par défaut.
|
||||
|
||||
## Restart policy
|
||||
## Politique de redémarrage
|
||||
|
||||
La structure PodSpec a un champ `restartPolicy` avec comme valeur possible
|
||||
Always, OnFailure et Never. La valeur par défaut est Always.
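Par exemple (manifeste indicatif, nom hypothétique), pour un Pod qui ne doit être redémarré qu'en cas d'échec :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-restart         # nom hypothétique
spec:
  restartPolicy: OnFailure   # valeurs possibles : Always (défaut), OnFailure, Never
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'exit 0']
```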
|
||||
|
@ -267,33 +283,30 @@ une fois attaché à un nœud, un Pod ne sera jamais rattaché à un autre nœud
|
|||
|
||||
## Durée de vie d'un Pod
|
||||
|
||||
En général, un Pod ne disparaît pas avant que quelqu'un le détruise. Ceci peut être
|
||||
un humain ou un contrôleur. La seule exception à cette règle est pour les Pods ayant
|
||||
une `phase` Succeeded ou Failed depuis une durée donnée (déterminée
|
||||
par `terminated-pod-gc-threshold` sur le master), qui expireront et seront
|
||||
automatiquement détruits.
|
||||
En général, les Pods restent jusqu'à ce qu'un humain ou un process de
|
||||
{{< glossary_tooltip term_id="controller" text="contrôleur" >}} les supprime explicitement.
|
||||
|
||||
Trois types de contrôleurs sont disponibles :
|
||||
Le plan de contrôle nettoie les Pods terminés (avec une phase à `Succeeded` ou
|
||||
`Failed`), lorsque le nombre de Pods excède le seuil configuré
|
||||
(déterminé par `terminated-pod-gc-threshold` dans le kube-controller-manager).
|
||||
Ceci empêche une fuite de ressources lorsque les Pods sont créés et supprimés au fil du temps.
|
||||
|
||||
- Utilisez un [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/) pour des
|
||||
Pods qui doivent se terminer, par exemple des calculs par batch. Les Jobs sont appropriés
|
||||
Il y a différents types de ressources pour créer des Pods :
|
||||
|
||||
- Utilisez un {{< glossary_tooltip term_id="deployment" >}},
|
||||
{{< glossary_tooltip term_id="replica-set" >}} ou {{< glossary_tooltip term_id="statefulset" >}}
|
||||
pour les Pods qui ne sont pas censés terminer, par exemple des serveurs web.
|
||||
|
||||
- Utilisez un {{< glossary_tooltip term_id="job" >}}
|
||||
pour les Pods qui sont censés se terminer une fois leur tâche accomplie. Les Jobs sont appropriés
|
||||
seulement pour des Pods ayant `restartPolicy` égal à OnFailure ou Never.
|
||||
|
||||
- Utilisez un [ReplicationController](/docs/concepts/workloads/controllers/replicationcontroller/),
|
||||
[ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) ou
|
||||
[Deployment](/docs/concepts/workloads/controllers/deployment/)
|
||||
pour des Pods qui ne doivent pas s'arrêter, par exemple des serveurs web.
|
||||
ReplicationControllers sont appropriés pour des Pods ayant `restartPolicy` égal à
|
||||
Always.
|
||||
- Utilisez un {{< glossary_tooltip term_id="daemonset" >}}
|
||||
pour les Pods qui doivent s'exécuter sur chaque noeud éligible.
|
||||
|
||||
- Utilisez un [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) pour des Pods
|
||||
qui doivent s'exécuter une fois par machine, car ils fournissent un service système
|
||||
au niveau de la machine.
|
||||
|
||||
Les trois types de contrôleurs contiennent un PodTemplate. Il est recommandé
|
||||
de créer le contrôleur approprié et de le laisser créer les Pods, plutôt que de
|
||||
créer directement les Pods vous-même. Ceci car les Pods seuls ne sont pas résilients
|
||||
aux pannes machines, alors que les contrôleurs le sont.
|
||||
Toutes les ressources de charges de travail contiennent une PodSpec. Il est recommandé de créer
|
||||
la ressource de charges de travail appropriée et laisser le contrôleur de la ressource créer les Pods
|
||||
pour vous, plutôt que de créer directement les Pods vous-même.
|
||||
|
||||
Si un nœud meurt ou est déconnecté du reste du cluster, Kubernetes applique
|
||||
une politique pour mettre la `phase` de tous les Pods du nœud perdu à Failed.
|
||||
|
@ -391,7 +404,7 @@ spec:
|
|||
[attacher des handlers à des événements de cycle de vie d'un conteneur](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
|
||||
|
||||
* Apprenez par la pratique
|
||||
[configurer des liveness et readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/).
|
||||
[configurer des liveness, readiness et startup probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
|
||||
|
||||
* En apprendre plus sur les [hooks de cycle de vie d'un Conteneur](/docs/concepts/containers/container-lifecycle-hooks/).
|
||||
|
||||
|
|
|
@ -16,23 +16,18 @@ Cette page fournit un aperçu du `Pod`, l'objet déployable le plus petit dans l
|
|||
|
||||
## Comprendre les Pods
|
||||
|
||||
Un *Pod* est l'unité d'exécution de base d'une application Kubernetes--l'unité la plus petite et la plus simple dans le modèle d'objets de Kubernetes--que vous créez ou déployez. Un Pod représente des process en cours d'exécution dans votre {{< glossary_tooltip term_id="cluster" >}}.
|
||||
Un *Pod* est l'unité d'exécution de base d'une application Kubernetes--l'unité la plus petite et la plus simple dans le modèle d'objets de Kubernetes--que vous créez ou déployez. Un Pod représente des process en cours d'exécution dans votre {{< glossary_tooltip term_id="cluster" text="cluster" >}}.
|
||||
|
||||
Un Pod encapsule un conteneur applicatif (ou, dans certains cas, plusieurs conteneurs), des ressources de stockage, une IP réseau unique, et des options qui contrôlent comment le ou les conteneurs doivent s'exécuter. Un Pod représente une unité de déploiement : *une instance unique d'une application dans Kubernetes*, qui peut consister soit en un unique {{< glossary_tooltip text="container" term_id="container" >}} soit en un petit nombre de conteneurs qui sont étroitement liés et qui partagent des ressources.
|
||||
Un Pod encapsule un conteneur applicatif (ou, dans certains cas, plusieurs conteneurs), des ressources de stockage, une identité réseau (adresse IP) unique, ainsi que des options qui contrôlent comment le ou les conteneurs doivent s'exécuter. Un Pod représente une unité de déploiement : *une instance unique d'une application dans Kubernetes*, qui peut consister soit en un unique {{< glossary_tooltip text="container" term_id="container" >}} soit en un petit nombre de conteneurs qui sont étroitement liés et qui partagent des ressources.
|
||||
|
||||
> [Docker](https://www.docker.com) est le runtime de conteneurs le plus courant utilisé dans un Pod Kubernetes, mais les Pods prennent également en charge d'autres [runtimes de conteneurs](https://kubernetes.io/docs/setup/production-environment/container-runtimes/).
|
||||
> [Docker](https://www.docker.com) est le runtime de conteneurs le plus courant utilisé dans un Pod Kubernetes, mais les Pods prennent également en charge d'autres [runtimes de conteneurs](/docs/setup/production-environment/container-runtimes/).
|
||||
|
||||
Les Pods dans un cluster Kubernetes peuvent être utilisés de deux manières différentes :
|
||||
|
||||
* **les Pods exécutant un conteneur unique**. Le modèle "un-conteneur-par-Pod" est le cas d'utilisation Kubernetes le plus courant ; dans ce cas, vous pouvez voir un Pod comme un wrapper autour d'un conteneur unique, et Kubernetes gère les Pods plutôt que directement les conteneurs.
|
||||
* **les Pods exécutant plusieurs conteneurs devant travailler ensemble**. Un Pod peut encapsuler une application composée de plusieurs conteneurs co-localisés qui sont étroitement liés et qui doivent partager des ressources. Ces conteneurs co-localisés pourraient former une unique unité de service cohésive--un conteneur servant des fichiers d'un volume partagé au public, alors qu'un conteneur "sidecar" séparé rafraîchit ou met à jour ces fichiers. Le Pod enveloppe ensemble ces conteneurs et ressources de stockage en une entité maniable de base.
|
||||
|
||||
Le [Blog Kubernetes](http://kubernetes.io/blog) contient quelques informations supplémentaires sur les cas d'utilisation des Pods. Pour plus d'informations, voir :
|
||||
|
||||
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns)
|
||||
* [Container Design Patterns](https://kubernetes.io/blog/2016/06/container-design-patterns)
|
||||
|
||||
Chaque Pod est destiné à exécuter une instance unique d'une application donnée. Si vous désirez mettre à l'échelle votre application horizontalement, (par ex., exécuter plusieurs instances), vous devez utiliser plusieurs Pods, un pour chaque instance. Dans Kubernetes, on parle généralement de _réplication_. Des Pods répliqués sont en général créés et gérés comme un groupe par une abstraction appelée Controller. Voir [Pods et Controllers](#pods-and-controllers) pour plus d'informations.
|
||||
Chaque Pod est destiné à exécuter une instance unique d'une application donnée. Si vous désirez mettre à l'échelle votre application horizontalement, (pour fournir plus de ressources au global en exécutant plus d'instances), vous devez utiliser plusieurs Pods, un pour chaque instance. Dans Kubernetes, on parle typiquement de _réplication_. Des Pods répliqués sont en général créés et gérés en tant que groupe par une ressource de charge de travail et son {{< glossary_tooltip text="_contrôleur_" term_id="controller" >}}. Voir [Pods et contrôleurs](#pods-et-controleurs) pour plus d'informations.
|
||||
|
||||
### Comment les Pods gèrent plusieurs conteneurs
|
||||
|
||||
|
@ -48,61 +43,76 @@ Les Pods fournissent deux types de ressources partagées pour leurs conteneurs :
|
|||
|
||||
#### Réseau
|
||||
|
||||
Chaque Pod se voit assigner une adresse IP unique. Tous les conteneurs d'un Pod partagent le même namespace réseau, y compris l'adresse IP et les ports réseau. Les conteneurs *à l'intérieur d'un Pod* peuvent communiquer entre eux en utilisant `localhost`. Lorsque les conteneurs dans un Pod communiquent avec des entités *en dehors du Pod*, ils doivent coordonner comment ils utilisent les ressources réseau partagées (comme les ports).
|
||||
Chaque Pod se voit assigner une adresse IP unique pour chaque famille d'adresses. Tous les conteneurs d'un Pod partagent le même namespace réseau, y compris l'adresse IP et les ports réseau. Les conteneurs *à l'intérieur d'un Pod* peuvent communiquer entre eux en utilisant `localhost`. Lorsque les conteneurs dans un Pod communiquent avec des entités *en dehors du Pod*, ils doivent coordonner comment ils utilisent les ressources réseau partagées (comme les ports).
|
||||
|
||||
#### Stockage
|
||||
|
||||
Un Pod peut spécifier un jeu de {{< glossary_tooltip text="Volumes" term_id="volume" >}} de stockage partagés. Tous les conteneurs dans le Pod peuvent accéder aux volumes partagés, permettant à ces conteneurs de partager des données. Les volumes permettent aussi les données persistantes d'un Pod de survivre au cas où un des conteneurs doit être redémarré. Voir [Volumes](/docs/concepts/storage/volumes/) pour plus d'informations sur la façon dont Kubernetes implémente le stockage partagé dans un Pod.
|
||||
Un Pod peut spécifier un jeu de {{< glossary_tooltip text="volumes" term_id="volume" >}} de stockage partagés. Tous les conteneurs dans le Pod peuvent accéder aux volumes partagés, permettant à ces conteneurs de partager des données. Les volumes permettent aussi les données persistantes d'un Pod de survivre au cas où un des conteneurs doit être redémarré. Voir [Volumes](/docs/concepts/storage/volumes/) pour plus d'informations sur la façon dont Kubernetes implémente le stockage partagé dans un Pod.
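Esquisse (noms hypothétiques) d'un Pod où un conteneur "sidecar" rafraîchit les fichiers servis par un conteneur web, les deux partageant un volume `emptyDir` ainsi que le même namespace réseau :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-avec-sidecar     # nom hypothétique
spec:
  volumes:
  - name: contenu
    emptyDir: {}
  containers:
  - name: web                # sert les fichiers du volume partagé
    image: nginx
    volumeMounts:
    - name: contenu
      mountPath: /usr/share/nginx/html
  - name: sidecar            # met à jour les fichiers ; peut joindre web via localhost
    image: busybox
    command: ['sh', '-c', 'while true; do date > /data/index.html; sleep 10; done']
    volumeMounts:
    - name: contenu
      mountPath: /data
```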
|
||||
|
||||
## Travailler avec des Pods
|
||||
|
||||
Vous aurez rarement à créer directement des Pods individuels dans Kubernetes--même des Pods à un seul conteneur. Ceci est dû au fait que les Pods sont conçus comme des entités relativement éphémères et jetables. Lorsqu'un Pod est créé (directement par vous ou indirectement par un Controller), il est programmé pour s'exécuter sur un {{< glossary_tooltip term_id="node" >}} dans votre cluster. Le Pod reste sur ce Nœud jusqu'à ce que le process se termine, l'objet pod soit supprimé, le pod soit *expulsé* par manque de ressources, ou le Nœud soit en échec.
|
||||
Vous aurez rarement à créer directement des Pods individuels dans Kubernetes--même des Pods à un seul conteneur. Ceci est dû au fait que les Pods sont conçus comme des entités relativement éphémères et jetables. Lorsqu'un Pod est créé (directement par vous ou indirectement par un {{< glossary_tooltip text="_contrôleur_" term_id="controller" >}}), il est programmé pour s'exécuter sur un {{< glossary_tooltip term_id="node" >}} dans votre cluster. Le Pod reste sur ce nœud jusqu'à ce que le process se termine, l'objet pod soit supprimé, le pod soit *expulsé* par manque de ressources, ou le nœud soit en échec.
|
||||
|
||||
{{< note >}}
|
||||
Redémarrer un conteneur dans un Pod ne doit pas être confondu avec redémarrer le Pod. Le Pod lui-même ne s'exécute pas, mais est un environnement dans lequel les conteneurs s'exécutent, et persiste jusqu'à ce qu'il soit supprimé.
|
||||
Redémarrer un conteneur dans un Pod ne doit pas être confondu avec redémarrer un Pod. Un Pod n'est pas un process, mais un environnement pour exécuter un conteneur. Un Pod persiste jusqu'à ce qu'il soit supprimé.
|
||||
{{< /note >}}
|
||||
|
||||
Les Pods ne se guérissent pas par eux-mêmes. Si un Pod est programmé sur un Nœud qui échoue, ou si l'opération de programmation elle-même échoue, le Pod est supprimé ; de plus, un Pod ne survivra pas à une expulsion due à un manque de ressources ou une mise en maintenance du Nœud. Kubernetes utilise une abstraction de plus haut niveau, appelée un *Controller*, qui s'occupe de gérer les instances de Pods relativement jetables. Ainsi, même s'il est possible d'utiliser des Pods directement, il est beaucoup plus courant dans Kubernetes de gérer vos Pods en utilisant un Controller. Voir [Pods et Controllers](#pods-and-controllers) pour plus d'informations sur la façon dont Kubernetes utilise des Controllers pour implémenter la mise à l'échelle et la guérison des Pods.
|
||||
Les Pods ne se guérissent pas par eux-mêmes. Si un Pod est programmé sur un Nœud qui échoue, ou si l'opération de programmation elle-même échoue, le Pod est supprimé ; de plus, un Pod ne survivra pas à une expulsion due à un manque de ressources ou une mise en maintenance du Nœud. Kubernetes utilise une abstraction de plus haut niveau, appelée un *contrôleur*, qui s'occupe de gérer les instances de Pods relativement jetables. Ainsi, même s'il est possible d'utiliser des Pods directement, il est beaucoup plus courant dans Kubernetes de gérer vos Pods en utilisant un contrôleur.
|
||||
|
||||
### Pods et Controllers
|
||||
### Pods et contrôleurs
|
||||
|
||||
Un Controller peut créer et gérer plusieurs Pods pour vous, s'occupant de la réplication et du déploiement et fournissant des capacités d'auto-guérison au niveau du cluster. Par exemple, si un Nœud échoue, le Controller peut automatiquement remplacer le Pod en programmant un remplaçant identique sur un Nœud différent.
|
||||
Vous pouvez utiliser des ressources de charges de travail pour créer et gérer plusieurs Pods pour vous. Un contrôleur pour la ressource gère la réplication,
|
||||
le plan de déploiement et la guérison automatique en cas de problèmes du Pod. Par exemple, si un noeud est en échec, un contrôleur note que les Pods de ce noeud
|
||||
ont arrêté de fonctionner et crée des Pods pour les remplacer. L'ordonnanceur place le Pod de remplacement sur un noeud en fonctionnement.
|
||||
|
||||
Quelques exemples de Controllers qui contiennent un ou plusieurs pods :
|
||||
Voici quelques exemples de ressources de charges de travail qui gèrent un ou plusieurs Pods :
|
||||
|
||||
* [Deployment](/docs/concepts/workloads/controllers/deployment/)
|
||||
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
|
||||
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
|
||||
|
||||
En général, les Controllers utilisent des Templates de Pod que vous lui fournissez pour créer les Pods dont il est responsable.
|
||||
* {{< glossary_tooltip text="Deployment" term_id="deployment" >}}
|
||||
* {{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}}
|
||||
* {{< glossary_tooltip text="DaemonSet" term_id="daemonset" >}}
|
||||
|
||||
## Templates de Pod
|
||||
|
||||
Les Templates de Pod sont des spécifications de pod qui sont inclus dans d'autres objets, comme les
|
||||
[Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/), [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/), et
|
||||
[DaemonSets](/docs/concepts/workloads/controllers/daemonset/). Les Controllers utilisent les Templates de Pod pour créer réellement les pods.
|
||||
L'exemple ci-dessous est un manifeste simple pour un Pod d'un conteneur affichant un message.
|
||||
Les Templates de Pod sont des spécifications pour créer des Pods, et sont inclus dans les ressources de charges de travail comme
|
||||
les [Deployments](/fr/docs/concepts/workloads/controllers/deployment/), les [Jobs](/docs/concepts/jobs/run-to-completion-finite-workloads/) et
|
||||
les [DaemonSets](/docs/concepts/workloads/controllers/daemonset/).
|
||||
|
||||
Chaque contrôleur pour une ressource de charges de travail utilise le template de pod à l'intérieur de l'objet pour créer les Pods. Le template de pod fait partie de l'état désiré de la ressource de charges de travail que vous avez utilisé pour exécuter votre application.
|
||||
|
||||
L'exemple ci-dessous est un manifest pour un Job simple avec un `template` qui démarre un conteneur. Le conteneur dans ce Pod affiche un message puis se met en pause.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
apiVersion: batch/v1
|
||||
kind: Job
|
||||
metadata:
|
||||
name: myapp-pod
|
||||
labels:
|
||||
app: myapp
|
||||
name: hello
|
||||
spec:
|
||||
containers:
|
||||
- name: myapp-container
|
||||
image: busybox
|
||||
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
|
||||
template:
|
||||
# Ceci est un template de pod
|
||||
spec:
|
||||
containers:
|
||||
- name: hello
|
||||
image: busybox
|
||||
command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
|
||||
restartPolicy: OnFailure
|
||||
# Le template de pod se termine ici
|
||||
```
|
||||
Plutôt que de spécifier tous les états désirés courants de tous les réplicas, les templates de pod sont comme des emporte-pièces. Une fois qu'une pièce a été coupée, la pièce n'a plus de relation avec l'outil. Il n'y a pas de lien qui persiste dans le temps entre le template et le pod. Un changement à venir dans le template ou même le changement pour un nouveau template n'a pas d'effet direct sur les pods déjà créés. De manière similaire, les pods créés par un replication controller peuvent par la suite être modifiés directement. C'est en contraste délibéré avec les pods, qui spécifient l'état désiré courant de tous les conteneurs appartenant au pod. Cette approche simplifie radicalement la sémantique système et augmente la flexibilité de la primitive.
|
||||
|
||||
Modifier le template de pod ou changer pour un nouveau template de pod n'a pas d'effet sur les pods déjà existants. Les Pods ne reçoivent pas une mise à jour
|
||||
du template directement ; au lieu de cela, un nouveau Pod est créé pour correspondre au nouveau template de pod.
|
||||
|
||||
Par exemple, un contrôleur de Deployment s'assure que les Pods en cours d'exécution correspondent au template de pod en cours. Si le template est mis à jour,
|
||||
le contrôleur doit supprimer les pods existants et créer de nouveaux Pods avec le nouveau template. Chaque contrôleur de charges de travail implémente ses propres
|
||||
règles pour gérer les changements du template de Pod.
|
||||
|
||||
Sur les noeuds, le {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} n'observe ni ne gère directement les détails concernant les templates de pods et leurs mises à jour ; ces détails sont abstraits. Cette abstraction et cette séparation des préoccupations simplifient la sémantique du système, et rendent possible l'extension du comportement du cluster sans changer le code existant.
|
||||
|
||||
|
||||
|
||||
## {{% heading "whatsnext" %}}

* En savoir plus sur les [Pods](/docs/concepts/workloads/pods/pod/)
* [The Distributed System Toolkit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns) explique les dispositions courantes pour des Pods avec plusieurs conteneurs
* En savoir plus sur le comportement des Pods :
  * [Terminaison d'un Pod](/docs/concepts/workloads/pods/pod/#termination-of-pods)
  * [Cycle de vie d'un Pod](/docs/concepts/workloads/pods/pod-lifecycle/)

@ -164,7 +164,7 @@ Un exemple de déroulement :

1. Le Pod dans l'API server est mis à jour avec le temps au-delà duquel le Pod est considéré "mort" ainsi que la période de grâce.
1. Le Pod est affiché comme "Terminating" dans les listes des commandes client.
1. (en même temps que 3) Lorsque Kubelet voit qu'un Pod a été marqué "Terminating", le temps ayant été mis en 2, il commence le processus de suppression du pod.
1. Si un des conteneurs du Pod a défini un [preStop hook](/fr/docs/concepts/containers/container-lifecycle-hooks/#hook-details), il est exécuté à l'intérieur du conteneur. Si le `preStop` hook est toujours en cours d'exécution à la fin de la période de grâce, l'étape 2 est invoquée une seule fois avec une courte période de grâce supplémentaire (2 secondes). Vous devez modifier `terminationGracePeriodSeconds` si le hook `preStop` a besoin de plus de temps pour se terminer.
1. Le signal TERM est envoyé aux conteneurs. Notez que tous les conteneurs du Pod ne recevront pas le signal TERM en même temps, et il peut être nécessaire de définir des `preStop` hooks si l'ordre d'arrêt est important.
1. (en même temps que 3) Le Pod est supprimé des listes d'endpoints des services, et n'est plus considéré comme faisant partie des pods en cours d'exécution pour les contrôleurs de réplication. Les Pods s'arrêtant lentement ne peuvent pas continuer à servir du trafic, les load balancers (comme le service proxy) les supprimant de leurs rotations.
1. Lorsque la période de grâce expire, les processus s'exécutant toujours dans le Pod sont tués avec SIGKILL.
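Pour illustrer les étapes ci-dessus, voici une esquisse de manifeste (nom et valeurs hypothétiques) combinant un hook `preStop` et un `terminationGracePeriodSeconds` allongé :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo        # nom hypothétique, pour l'exemple
spec:
  terminationGracePeriodSeconds: 60   # période de grâce allongée (la valeur par défaut est 30)
  containers:
  - name: app
    image: nginx:1.7.9
    lifecycle:
      preStop:
        exec:
          # laisse 10 secondes aux requêtes en cours avant l'envoi du signal TERM
          command: ["/bin/sh", "-c", "sleep 10"]
```

La période de grâce doit couvrir la durée du hook `preStop` plus le temps d'arrêt normal du conteneur ; sinon, l'étape SIGKILL interviendra avant la fin de l'arrêt propre.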
@ -186,7 +186,6 @@ Si le master exécute Kubernetes v1.1 ou supérieur, et les nœuds exécutent un

Si l'utilisateur appelle `kubectl describe pod FooPodName`, l'utilisateur peut voir la raison pour laquelle le pod est en état "pending". La table d'événements dans la sortie de la commande "describe" indiquera :

`Error validating pod "FooPodName"."FooPodNamespace" from api, ignoring: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc2089d3248)true'`

Si le master exécute une version antérieure à v1.1, les pods privilégiés ne peuvent alors pas être créés. Si l'utilisateur tente de créer un pod ayant un conteneur privilégié, l'utilisateur obtiendra l'erreur suivante :

`The Pod "FooPodName" is invalid.
spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20b222db0)true'`
@ -196,4 +195,4 @@ spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20b222db0)true'

Le Pod est une ressource au plus haut niveau dans l'API REST Kubernetes. Plus de détails sur l'objet de l'API peuvent être trouvés à :
[Objet de l'API Pod](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).

Lorsque vous créez un manifest pour un objet Pod, soyez certain que le nom spécifié est un [nom de sous-domaine DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) valide.
@ -1,6 +1,6 @@

---
title: kubectl
content_type: tool-reference
description: Référence kubectl
notitle: true
---

@ -1,10 +1,10 @@

---
title: Configurer les comptes de service pour les pods
content_type: task
weight: 90
---

<!-- overview -->

Un ServiceAccount (compte de service) fournit une identité pour les processus qui s'exécutent dans un Pod.

*Ceci est une introduction aux comptes de service pour les utilisateurs. Voir aussi
@ -18,16 +18,17 @@ Lorsque vous (un humain) accédez au cluster (par exemple, en utilisant `kubectl

authentifié par l'apiserver en tant que compte d'utilisateur particulier (actuellement, il s'agit
généralement de l'utilisateur `admin`, à moins que votre administrateur de cluster n'ait personnalisé votre cluster). Les processus dans les conteneurs dans les Pods peuvent également contacter l'apiserver. Dans ce cas, ils sont authentifiés en tant que compte de service particulier (par exemple, `default`).

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

<!-- steps -->

## Utiliser le compte de service par défaut pour accéder au serveur API

@ -279,5 +280,3 @@ kubectl create -f https://k8s.io/examples/pods/pod-projected-svc-token.yaml

Kubelet demandera et stockera le token à la place du Pod, rendra le token disponible pour le Pod à un chemin d'accès configurable, et rafraîchira le token à l'approche de son expiration. Kubelet fait tourner le token de manière proactive s'il est plus vieux que 80% de son TTL total, ou si le token est plus vieux que 24 heures.

L'application est responsable du rechargement du token lorsque celui-ci est renouvelé. Un rechargement périodique (par ex. toutes les 5 minutes) est suffisant pour la plupart des cas d'utilisation.
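À titre indicatif, un manifeste de Pod utilisant un volume projeté de token de compte de service, comme celui référencé par la commande `kubectl create` ci-dessus, ressemble à ceci (esquisse simplifiée ; l'audience `vault`, le chemin et les durées sont des valeurs d'exemple) :

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: build-robot
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: vault-token
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200   # le kubelet renouvelle le token avant cette échéance
          audience: vault           # audience attendue par le consommateur du token
```

Le conteneur lit le token à `/var/run/secrets/tokens/vault-token` ; comme indiqué plus haut, c'est à l'application de relire ce fichier périodiquement pour obtenir le token renouvelé.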
@ -1,12 +1,12 @@

---
title: Komunikasi antara Control Plane dan Node
content_type: concept
weight: 20
---

<!-- overview -->

Dokumen ini menjelaskan tentang jalur-jalur komunikasi di antara klaster Kubernetes dan control plane yang sebenarnya hanya berhubungan dengan apiserver saja.
Kenapa ada dokumen ini? Supaya kamu, para pengguna Kubernetes, punya gambaran bagaimana mengatur instalasi untuk memperketat konfigurasi jaringan di dalam klaster.
Hal ini cukup penting, karena klaster bisa saja berjalan pada jaringan tak terpercaya (<i>untrusted network</i>), ataupun melalui alamat-alamat IP publik pada penyedia cloud.

@ -15,31 +15,24 @@ Hal ini cukup penting, karena klaster bisa saja berjalan pada jaringan tak terpe

<!-- body -->

## Node Menuju Control Plane

Kubernetes memiliki sebuah pola API "hub-and-spoke". Semua penggunaan API dari Node (atau Pod di mana Pod-Pod tersebut dijalankan) akan diterminasi pada apiserver (tidak ada satu komponen _control plane_ apa pun yang didesain untuk diekspos pada servis _remote_).
Apiserver dikonfigurasi untuk mendengarkan koneksi aman _remote_ yang pada umumnya terdapat pada porta HTTPS (443) dengan satu atau lebih bentuk [autentikasi](/docs/reference/access-authn-authz/authentication/) klien yang dipasang.
Sebaiknya, satu atau beberapa metode [otorisasi](/docs/reference/access-authn-authz/authorization/) juga dipasang, terutama jika kamu memperbolehkan [permintaan anonim (<i>anonymous request</i>)](/docs/reference/access-authn-authz/authentication/#anonymous-requests) ataupun [service account token](/docs/reference/access-authn-authz/authentication/#service-account-tokens).

Node-node seharusnya disediakan dengan <i>public root certificate</i> untuk klaster, sehingga node-node tersebut bisa terhubung secara aman ke apiserver dengan kredensial <i>client</i> yang valid.
Contohnya, untuk instalasi GKE dengan standar konfigurasi, kredensial <i>client</i> harus diberikan kepada kubelet dalam bentuk <i>client certificate</i>.
Lihat [menghidupkan TLS kubelet](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) untuk menyediakan <i>client certificate</i> untuk kubelet secara otomatis.

Jika diperlukan, Pod-Pod dapat terhubung pada apiserver secara aman dengan menggunakan ServiceAccount.
Dengan ini, Kubernetes memasukkan _public root certificate_ dan _bearer token_ yang valid ke dalam Pod, secara otomatis saat Pod mulai dijalankan.
Kubernetes Service (di dalam semua Namespace) diatur dengan sebuah alamat IP virtual. Semua yang mengakses alamat IP ini akan dialihkan (melalui kube-proxy) menuju _endpoint_ HTTPS dari apiserver.

Komponen-komponen juga melakukan koneksi pada apiserver klaster melalui porta yang aman.

Akibatnya, untuk konfigurasi yang umum dan standar, semua koneksi dari klaster (node-node dan pod-pod yang berjalan di atas node tersebut) menuju control plane sudah terhubung dengan aman.
Dan juga, klaster dan control plane bisa terhubung melalui jaringan publik dan/atau yang tak terpercaya (<i>untrusted</i>).

## Control Plane menuju Node

Ada dua jalur komunikasi utama dari _control plane_ (apiserver) menuju klaster. Pertama, dari apiserver ke proses kubelet yang berjalan pada setiap Node di dalam klaster. Kedua, dari apiserver ke setiap Node, Pod, ataupun Service melalui fungsi proksi pada apiserver.

### Apiserver menuju kubelet

@ -67,11 +60,9 @@ Koneksi ini **tidak aman** untuk dilalui pada jaringan publik dan/atau tak terpe

### Tunnel SSH

Kubernetes menyediakan tunnel SSH untuk mengamankan jalur komunikasi control plane -> Klaster.
Dengan ini, apiserver menginisiasi sebuah <i>tunnel</i> SSH untuk setiap node di dalam klaster (terhubung ke server SSH di porta 22) dan membuat semua trafik menuju kubelet, node, pod, atau service dilewatkan melalui <i>tunnel</i> tersebut.
<i>Tunnel</i> ini memastikan trafik tidak terekspos keluar jaringan di mana node-node berada.

<i>Tunnel</i> SSH saat ini sudah usang (<i>deprecated</i>), jadi sebaiknya jangan digunakan, kecuali kamu tahu pasti apa yang kamu lakukan.
Sebuah desain baru untuk mengganti kanal komunikasi ini sedang disiapkan.

@ -124,9 +124,7 @@ Kamu juga dapat mengimplementasikan Operator (yaitu, _Controller_) dengan

menggunakan bahasa / _runtime_ yang dapat bertindak sebagai
[klien dari API Kubernetes](/docs/reference/using-api/client-libraries/).

## {{% heading "whatsnext" %}}

* Memahami lebih lanjut tentang [_custom resources_](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Temukan _operator_ siap pakai ("ready-made") di [OperatorHub.io](https://operatorhub.io/)

@ -1,6 +1,6 @@

---
title: LimitRange
content_type: concept
weight: 10
---

@ -51,14 +51,14 @@ Dalam contoh ini:

Dalam kasus ini, kamu hanya perlu memilih sebuah label yang didefinisikan pada templat Pod (`app: nginx`).
Namun, aturan pemilihan yang lebih canggih mungkin dilakukan asal templat Pod-nya memenuhi aturan.

{{< note >}}
Kolom `matchLabels` berbentuk pasangan {key,value}. Sebuah {key,value} dalam _map_ `matchLabels` ekuivalen dengan
elemen pada `matchExpressions`, yang mana kolom key adalah "key", operator adalah "In", dan larik values hanya berisi "value".
Semua prasyarat dari `matchLabels` maupun `matchExpressions` harus dipenuhi agar dapat dicocokkan.
{{< /note >}}

* Kolom `template` berisi sub kolom berikut:
  * Pod dilabeli `app: nginx` dengan kolom `labels`.
  * Spesifikasi templat Pod atau kolom `.template.spec` menandakan bahwa Pod menjalankan satu kontainer `nginx`,
    yang menjalankan image `nginx` versi 1.7.9 dari [Docker Hub](https://hub.docker.com/).
  * Membuat satu kontainer bernama `nginx` sesuai kolom `name`.

@ -123,8 +123,8 @@ Dalam contoh ini:

ReplicaSet yang dibuat menjamin bahwa ada tiga Pod `nginx`.

{{< note >}}
Kamu harus memasukkan selektor dan label templat Pod yang benar pada Deployment (dalam kasus ini, `app: nginx`).
Jangan membuat label atau selektor yang beririsan dengan kontroler lain (termasuk Deployment dan StatefulSet lainnya). Kubernetes tidak akan mencegah adanya label yang beririsan.
Namun, jika beberapa kontroler memiliki selektor yang beririsan, kontroler itu mungkin akan konflik dan berjalan dengan tidak semestinya.
{{< /note >}}
@ -144,7 +144,7 @@ Label ini menjamin anak-anak ReplicaSet milik Deployment tidak tumpang tindih. D

Rilis Deployment hanya dapat dipicu oleh perubahan templat Pod Deployment (yaitu, `.spec.template`), contohnya perubahan kolom label atau image kontainer. Yang lain, seperti jumlah replika, tidak akan memicu rilis.
{{< /note >}}

Ikuti langkah-langkah berikut untuk membarui Deployment:

1. Ganti Pod nginx menjadi image `nginx:1.9.1` dari image `nginx:1.7.9`.

@ -191,7 +191,7 @@ Untuk menampilkan detail lain dari Deployment yang terbaru:

nginx-deployment 3 3 3 3 36s
```

* Jalankan `kubectl get rs` untuk melihat bahwa Deployment membarui Pod dengan membuat ReplicaSet baru dan
menggandakannya menjadi 3 replika, sembari menyusutkan ReplicaSet lama menjadi 0 replika.

```shell
|
@ -228,7 +228,7 @@ menggandakannya menjadi 3 replika, sembari menghapus ReplicaSet menjadi 0 replik
|
|||
Umumnya, dia memastikan paling banyak ada 125% jumlah Pod yang diinginkan menyala (25% tambahan maksimal).
|
||||
|
||||
Misalnya, jika kamu lihat Deployment diatas lebih jauh, kamu akan melihat bahwa pertama-tama dia membuat Pod baru,
|
||||
kemudian menghapus beberapa Pod lama, dan membuat yang baru. Dia tidak akan menghapus Pod lama sampai ada cukup
|
||||
kemudian menghapus beberapa Pod lama, dan membuat yang baru. Dia tidak akan menghapus Pod lama sampai ada cukup
|
||||
Pod baru menyala, dan pula tidak membuat Pod baru sampai ada cukup Pod lama telah mati.
|
||||
Dia memastikan paling sedikit 2 Pod menyala dan paling banyak total 4 Pod menyala.
|
||||
|
||||
|
@ -236,7 +236,7 @@ menggandakannya menjadi 3 replika, sembari menghapus ReplicaSet menjadi 0 replik

```shell
kubectl describe deployments
```

Keluaran akan tampil seperti berikut:
```
Name: nginx-deployment
Namespace: default
|
@ -277,15 +277,15 @@ menggandakannya menjadi 3 replika, sembari menghapus ReplicaSet menjadi 0 replik
|
|||
```
|
||||
Disini bisa dilihat ketika pertama Deployment dibuat, dia membuat ReplicaSet (nginx-deployment-2035384211)
|
||||
dan langsung menggandakannya menjadi 3 replika. Saat Deployment diperbarui, dia membuat ReplicaSet baru
|
||||
(nginx-deployment-1564180365) dan menambah 1 replika kemudian mengecilkan ReplicaSet lama menjadi 2,
|
||||
(nginx-deployment-1564180365) dan menambah 1 replika kemudian mengecilkan ReplicaSet lama menjadi 2,
|
||||
sehingga paling sedikit 2 Pod menyala dan paling banyak 4 Pod dibuat setiap saat. Dia kemudian lanjut menaik-turunkan
|
||||
ReplicaSet baru dan ReplicaSet lama, dengan strategi pembaruan rolling yang sama.
|
||||
ReplicaSet baru dan ReplicaSet lama, dengan strategi pembaruan rolling yang sama.
|
||||
Terakhir, kamu akan dapat 3 replika di ReplicaSet baru telah menyala, dan ReplicaSet lama akan hilang (berisi 0).
|
||||
|
||||
### Perpanjangan (alias banyak pembaruan secara langsung)
|
||||
|
||||
Setiap kali Deployment baru is teramati oleh Deployment kontroler, ReplicaSet dibuat untuk membangkitkan Pod sesuai keinginan.
|
||||
Jika Deployment diperbarui, ReplicaSet yang terkait Pod dengan label `.spec.selector` yang cocok,
|
||||
Setiap kali Deployment baru is teramati oleh Deployment kontroler, ReplicaSet dibuat untuk membangkitkan Pod sesuai keinginan.
|
||||
Jika Deployment diperbarui, ReplicaSet yang terkait Pod dengan label `.spec.selector` yang cocok,
|
||||
namun kolom `.spec.template` pada templat tidak cocok akan dihapus. Kemudian, ReplicaSet baru akan
|
||||
digandakan sebanyak `.spec.replicas` dan semua ReplicaSet lama dihapus.
|
||||
|
||||
|
@ -294,7 +294,7 @@ tiap perubahan dan memulai penggandaan. Lalu, dia akan mengganti ReplicaSet yang

-- mereka ditambahkan ke dalam daftar ReplicaSet lama dan akan mulai dihapus.

Contohnya, ketika kamu membuat Deployment untuk membangkitkan 5 replika `nginx:1.7.9`,
kemudian membarui Deployment dengan versi `nginx:1.9.1` ketika baru ada 3 replika `nginx:1.7.9` yang dibuat.
Dalam kasus ini, Deployment akan segera mulai menghapus 3 replika Pod `nginx:1.7.9` yang telah dibuat, dan mulai membuat
Pod `nginx:1.9.1`. Dia tidak akan menunggu kelima replika `nginx:1.7.9` selesai dibuat sebelum menjalankan perubahan.

@ -310,8 +310,8 @@ Pada versi API `apps/v1`, selektor label Deployment tidak bisa diubah ketika sel

* Penambahan selektor mensyaratkan label templat Pod di spek Deployment untuk diganti dengan label baru juga.
Jika tidak, galat validasi akan muncul. Perubahan haruslah tidak tumpang-tindih, dengan kata lain selektor baru tidak mencakup ReplicaSet dan Pod yang dibuat dengan selektor lama. Sehingga, semua ReplicaSet lama akan menggantung sedangkan ReplicaSet baru tetap dibuat.
* Pengubahan selektor mengubah nilai pada kunci selektor -- menghasilkan perilaku yang sama dengan penambahan.
* Penghapusan selektor menghilangkan kunci yang ada pada selektor Deployment -- tidak mensyaratkan perubahan apapun pada label templat Pod.
ReplicaSet yang ada tidak menggantung dan ReplicaSet baru tidak dibuat.
Tapi perhatikan bahwa label yang dihapus masih ada pada Pod dan ReplicaSet masing-masing.

## Membalikkan Deployment

@ -321,10 +321,10 @@ Umumnya, semua riwayat rilis Deployment disimpan oleh sistem sehingga kamu dapat

(kamu dapat mengubahnya dengan mengubah batas riwayat revisi).

{{< note >}}
Revisi Deployment dibuat saat rilis Deployment dipicu. Ini berarti revisi baru dibuat jika dan hanya jika
templat Pod Deployment (`.spec.template`) berubah, misalnya jika kamu membarui label atau image kontainer pada templat.
Pembaruan lain, seperti penggantian skala Deployment, tidak membuat revisi Deployment, jadi kamu dapat memfasilitasi
penggantian skala secara manual atau otomatis secara simultan. Artinya saat kamu membalikkan ke versi sebelumnya,
hanya bagian templat Pod Deployment yang dibalikkan.
{{< /note >}}
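Sebagai gambaran, batas riwayat revisi yang disebut di atas diatur lewat kolom `.spec.revisionHistoryLimit` pada Deployment (cuplikan hipotetis, nilai hanya contoh):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  revisionHistoryLimit: 10   # jumlah ReplicaSet lama yang disimpan untuk keperluan pembalikan (rollback)
  # ... kolom lain (replicas, selector, template) seperti biasa
```

Menyetel nilai ini ke 0 berarti tidak ada riwayat yang disimpan, sehingga pembalikan ke revisi sebelumnya tidak lagi mungkin.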
@ -350,7 +350,7 @@ hanya bagian templat Pod Deployment yang dibalikkan.

Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
```

* Tekan Ctrl-C untuk menghentikan pemeriksaan status rilis di atas. Untuk info lebih lanjut
tentang rilis tersendat, [baca disini](#status-deployment).

* Kamu lihat bahwa jumlah replika lama (`nginx-deployment-1564180365` dan `nginx-deployment-2035384211`) adalah 2, dan replika baru (nginx-deployment-3066724191) adalah 1.

@ -383,17 +383,17 @@ tentang rilis tersendat, [baca disini](#status-deployment).

```

{{< note >}}
Kontroler Deployment menghentikan rilis yang buruk secara otomatis dan berhenti meningkatkan ReplicaSet baru.
Ini tergantung pada parameter rollingUpdate (secara khusus `maxUnavailable`) yang dimasukkan.
Kubernetes umumnya mengatur jumlahnya menjadi 25%.
{{< /note >}}

* Tampilkan deskripsi Deployment:
```shell
kubectl describe deployment
```

Keluaran akan tampil seperti berikut:
```
Name: nginx-deployment
Namespace: default
@ -440,11 +440,11 @@ tentang rilis tersendat, [baca disini](#status-deployment).

Ikuti langkah-langkah berikut untuk mengecek riwayat rilis:

1. Pertama, cek revisi Deployment sekarang:
```shell
kubectl rollout history deployment.v1.apps/nginx-deployment
```
Keluaran akan tampil seperti berikut:
```
deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
@ -464,7 +464,7 @@ Ikuti langkah-langkah berikut untuk mengecek riwayat rilis:

kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
```

Keluaran akan tampil seperti berikut:
```
deployments "nginx-deployment" revision 2
Labels: app=nginx
@ -489,7 +489,7 @@ Ikuti langkah-langkah berikut untuk membalikkan Deployment dari versi sekarang k

kubectl rollout undo deployment.v1.apps/nginx-deployment
```

Keluaran akan tampil seperti berikut:
```
deployment.apps/nginx-deployment
```

@ -499,7 +499,7 @@ Ikuti langkah-langkah berikut untuk membalikkan Deployment dari versi sekarang k

kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
```

Keluaran akan tampil seperti berikut:
```
deployment.apps/nginx-deployment
```

@ -514,16 +514,16 @@ Ikuti langkah-langkah berikut untuk membalikkan Deployment dari versi sekarang k

kubectl get deployment nginx-deployment
```

Keluaran akan tampil seperti berikut:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 30m
```
3. Tampilkan deskripsi Deployment:
```shell
kubectl describe deployment nginx-deployment
```
Keluaran akan tampil seperti berikut:
```
Name: nginx-deployment
Namespace: default
@ -594,9 +594,9 @@ deployment.apps/nginx-deployment scaled

### Pengaturan skala proporsional

Deployment RollingUpdate mendukung beberapa versi aplikasi berjalan secara bersamaan. Ketika kamu atau autoscaler
mengubah skala Deployment RollingUpdate yang ada di tengah rilis (yang sedang berjalan maupun terjeda),
kontroler Deployment menyeimbangkan replika tambahan dalam ReplicaSet aktif (ReplicaSet dengan Pod) untuk mengurangi risiko.
Ini disebut *pengaturan skala proporsional*.

Sebagai contoh, kamu menjalankan Deployment dengan 10 replika, [maxSurge](#max-surge)=3, dan [maxUnavailable](#max-unavailable)=2.
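Nilai `maxSurge` dan `maxUnavailable` pada contoh di atas dituliskan pada strategi Deployment seperti cuplikan berikut (hanya sketsa):

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # maksimal 3 Pod tambahan di atas jumlah replika yang diinginkan
      maxUnavailable: 2    # maksimal 2 Pod boleh tidak tersedia selama rilis berlangsung
```

Kedua kolom juga dapat berupa persentase (misalnya `25%`); nilai mutlak dipakai di sini agar cocok dengan angka pada contoh.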
@ -636,20 +636,20 @@ persyaratan `maxUnavailable` yang disebut di atas. Cek status rilis:

* Kemudian, permintaan peningkatan untuk Deployment akan masuk. Autoscaler menambah replika Deployment
menjadi 15. Kontroler Deployment perlu menentukan di mana 5 replika baru ini ditambahkan. Jika kamu tidak memakai
pengaturan skala proporsional, kelima replika akan ditambahkan ke ReplicaSet baru. Dengan pengaturan skala proporsional,
kamu menyebarkan replika tambahan ke semua ReplicaSet. Proporsi terbesar ada pada ReplicaSet dengan
replika terbanyak dan proporsi yang lebih kecil untuk ReplicaSet dengan replika yang lebih sedikit.
Sisanya akan diberikan ke ReplicaSet dengan replika terbanyak. ReplicaSet tanpa replika tidak akan ditingkatkan.

Dalam kasus kita di atas, 3 replika ditambahkan ke ReplicaSet lama dan 2 replika ditambahkan ke ReplicaSet baru.
Proses rilis akan segera memindahkan semua replika ke ReplicaSet baru, dengan asumsi semua replika dalam kondisi sehat.
Untuk memastikannya, jalankan:

```shell
kubectl get deploy
```

Keluaran akan tampil seperti berikut:
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 15 18 7 8 7m
@ -668,7 +668,7 @@ nginx-deployment-618515232 11 11 11 7m

## Menjeda dan Melanjutkan Deployment

Kamu dapat menjeda Deployment sebelum memicu satu atau lebih pembaruan kemudian meneruskannya.
Hal ini memungkinkanmu menerapkan beberapa perbaikan selama selang jeda tanpa melakukan rilis yang tidak perlu.

* Sebagai contoh, Deployment yang baru dibuat:
@ -743,7 +743,7 @@ Hal ini memungkinkanmu menerapkan beberapa perbaikan selama selang jeda tanpa me

deployment.apps/nginx-deployment resource requirements updated
```

Keadaan awal Deployment sebelum jeda akan melanjutkan fungsinya, tapi perubahan
Deployment tidak akan berefek apapun selama Deployment masih terjeda.

* Kemudian, mulai kembali Deployment dan perhatikan ReplicaSet baru akan muncul dengan semua perubahan baru:
@ -795,7 +795,7 @@ Kamu tidak bisa membalikkan Deployment yang terjeda sampai dia diteruskan.

## Status Deployment

Deployment melalui berbagai state dalam daur hidupnya. Dia dapat [berlangsung](#deployment-berlangsung) selagi merilis ReplicaSet baru, bisa juga [selesai](#deployment-selesai),
atau juga [gagal](#deployment-gagal).

### Deployment Berlangsung
@ -817,7 +817,7 @@ Kubernetes menandai Deployment sebagai _complete_ saat memiliki karakteristik be

* Semua replika terkait Deployment dapat diakses.
* Tidak ada replika lama untuk Deployment yang berjalan.

Kamu dapat mengecek apakah Deployment telah selesai dengan `kubectl rollout status`.
Jika rilis selesai, `kubectl rollout status` akan mengembalikan nilai balik nol.

```shell
@ -833,7 +833,7 @@ $ echo $?

### Deployment Gagal

Deployment-mu bisa saja terhenti saat mencoba deploy ReplicaSet terbaru tanpa pernah selesai.
Ini dapat terjadi karena faktor berikut:

* Kuota tidak mencukupi
@ -868,7 +868,7 @@ berikut ke `.status.conditions` milik Deployment:

Lihat [konvensi Kubernetes API](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) untuk info lebih lanjut tentang kondisi status.

{{< note >}}
Kubernetes tidak melakukan apapun pada Deployment yang tersendat selain melaporkannya sebagai `Reason=ProgressDeadlineExceeded`.
Orkestrator yang lebih tinggi dapat memanfaatkannya untuk melakukan tindak lanjut. Misalnya, mengembalikan Deployment ke versi sebelumnya.
{{< /note >}}
|
Jika Deployment terjeda, Kubernetes tidak akan mengecek kemajuan pada selang waktu itu.
Kamu dapat menjeda Deployment di tengah rilis dan melanjutkannya dengan aman tanpa memicu kondisi tenggat yang telah lewat.
{{< /note >}}
Kamu dapat mengalami galat sesaat pada Deployment, misalnya karena timeout yang dipasang terlalu kecil atau
karena hal-hal lain yang bersifat sementara, seperti kuota yang tidak mencukupi. Jika kamu mendeskripsikan Deployment,
kamu akan menjumpai bagian berikut:
Conditions:
  ReplicaFailure   True    FailedCreate
```
Kamu dapat menangani isu keterbatasan kuota dengan menurunkan jumlah Deployment, dengan menghapus kontroler
yang sedang berjalan, atau dengan meningkatkan kuota pada namespace. Jika kuota tersedia, kontroler Deployment
akan dapat menyelesaikan rilis Deployment. Kamu akan melihat bahwa status Deployment berubah menjadi kondisi sukses (`Status=True` dan `Reason=NewReplicaSetAvailable`).
`Type=Available` dengan `Status=True` artinya Deployment-mu punya ketersediaan minimum. Ketersediaan minimum diatur
oleh parameter yang dibuat pada strategi deployment. `Type=Progressing` dengan `Status=True` berarti Deployment
sedang dalam rilis dan masih berjalan, atau sudah selesai berjalan dan jumlah minimum replika tersedia
(lihat bagian Alasan untuk kondisi tertentu - dalam kasus ini `Reason=NewReplicaSetAvailable` berarti Deployment telah selesai).
Kamu dapat mengecek apakah Deployment gagal berkembang dengan perintah `kubectl rollout status`. `kubectl rollout status`
akan mengembalikan nilai balik selain nol jika Deployment telah melewati tenggat kemajuannya.
## Kebijakan Pembersihan

Kamu dapat mengisi kolom `.spec.revisionHistoryLimit` di Deployment untuk menentukan banyaknya ReplicaSet lama
pada Deployment yang ingin dipertahankan. Sisanya akan dibersihkan (_garbage collected_) di balik layar. Umumnya, nilai kolom ini berisi 10.
{{< note >}}
Mengisi kolom ini secara eksplisit dengan 0 akan membersihkan semua riwayat rilis Deployment,
sehingga Deployment tidak akan dapat dikembalikan.
{{< /note >}}
## Deployment Canary

Jika kamu ingin merilis ke sebagian pengguna atau server menggunakan Deployment,
kamu dapat membuat beberapa Deployment, satu tiap rilis, dengan mengikuti pola canary yang dideskripsikan pada
[mengelola sumber daya](/id/docs/concepts/cluster-administration/manage-deployment/#deploy-dengan-canary).
Dalam `.spec` hanya ada kolom `.spec.template` dan `.spec.selector` yang wajib diisi.

`.spec.template` adalah [templat Pod](/id/docs/concepts/workloads/pods/pod-overview/#templat-pod). Dia memiliki skema yang sama dengan [Pod](/id/docs/concepts/workloads/pods/pod/). Bedanya dia bersarang dan tidak punya `apiVersion` atau `kind`.

Selain kolom wajib untuk Pod, templat Pod pada Deployment harus menentukan label dan aturan menjalankan ulang yang tepat.
Untuk label, pastikan tidak bertumpang tindih dengan kontroler lainnya. Lihat [selektor](#selektor).
[`.spec.template.spec.restartPolicy`](/id/docs/concepts/workloads/pods/pod-lifecycle/#aturan-menjalankan-ulang) hanya boleh berisi `Always`,
yang juga merupakan nilai bawaan jika tidak ditentukan.
`.spec.selector` harus sesuai dengan `.spec.template.metadata.labels`, atau akan ditolak oleh API.

Di versi API `apps/v1`, `.spec.selector` dan `.metadata.labels` tidak berisi `.spec.template.metadata.labels` jika tidak disetel.
Jadi keduanya harus disetel secara eksplisit. Perhatikan juga `.spec.selector` tidak dapat diubah setelah Deployment dibuat pada `apps/v1`.

Deployment dapat mematikan Pod yang labelnya cocok dengan selektor jika templatnya berbeda
dari `.spec.template` atau jika total jumlah Pod melebihi `.spec.replicas`. Dia akan membuat Pod baru
dengan `.spec.template` jika jumlah Pod kurang dari yang diinginkan.

{{< note >}}
Kamu sebaiknya tidak membuat Pod lain yang labelnya cocok dengan selektor ini, baik secara langsung,
melalui Deployment lain, maupun dengan membuat kontroler lain seperti ReplicaSet atau ReplicationController.
Kalau kamu melakukannya, Deployment pertama akan mengira dia yang membuat Pod-pod tersebut.
Kubernetes tidak akan mencegahmu melakukannya.
{{< /note >}}

Jika kamu punya beberapa kontroler dengan selektor yang saling bertindihan, mereka akan saling bertikai
dan tidak akan berjalan semestinya.
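Sebagai gambaran, selektor yang sesuai dengan label templat terlihat seperti cuplikan berikut (nama `nginx-deployment` dan label `app: nginx` hanyalah contoh):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx        # harus sesuai dengan label templat di bawah
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```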
### Strategi
Semua Pod yang ada dimatikan sebelum yang baru dibuat ketika nilai `.spec.strategy.type==Recreate`.
#### Membarui Deployment secara Bergulir

Deployment membarui Pod secara bergulir
saat `.spec.strategy.type==RollingUpdate`. Kamu dapat menentukan `maxUnavailable` dan `maxSurge` untuk mengatur
proses pembaruan bergulir.
##### Ketidaktersediaan Maksimum

`.spec.strategy.rollingUpdate.maxUnavailable` adalah kolom opsional yang mengatur jumlah Pod maksimal
yang tidak tersedia selama proses pembaruan. Nilainya bisa berupa angka mutlak (contohnya 5)
atau persentase dari Pod yang diinginkan (contohnya 10%). Angka mutlak dihitung dari persentase
dengan pembulatan ke bawah. Nilainya tidak bisa nol jika `.spec.strategy.rollingUpdate.maxSurge` juga nol.
Nilai bawaannya yaitu 25%.
Sebagai contoh, ketika nilai berisi 30%, ReplicaSet lama dapat segera diperkecil menjadi 70% dari Pod
yang diinginkan saat pembaruan bergulir dimulai. Seketika Pod baru siap, ReplicaSet lama dapat lebih diperkecil lagi,
diikuti dengan pembesaran ReplicaSet baru, menjamin bahwa total jumlah Pod yang siap kapanpun selama pembaruan
paling sedikit 70% dari Pod yang diinginkan.
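Pembulatan ke bawah tersebut dapat disketsakan dengan aritmetika bilangan bulat shell; angka 10 replika dan 35% di bawah ini hanyalah contoh:

```shell
# Menghitung maxUnavailable mutlak dari persentase, dibulatkan ke bawah
replicas=10
percent=35
max_unavailable=$(( replicas * percent / 100 ))  # aritmetika bulat = pembulatan ke bawah
echo "$max_unavailable"   # mencetak 3 (3,5 dibulatkan ke bawah)
```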
##### Kelebihan Maksimum

`.spec.strategy.rollingUpdate.maxSurge` adalah kolom opsional yang mengatur jumlah Pod maksimal yang
dapat dibuat melebihi jumlah Pod yang diinginkan. Nilainya bisa berupa angka mutlak (contohnya 5) atau persentase
dari Pod yang diinginkan (contohnya 10%). Nilainya tidak bisa nol jika `MaxUnavailable` juga nol. Angka mutlak
dihitung dari persentase dengan pembulatan ke atas. Nilai bawaannya yaitu 25%.
Sebagai contoh, ketika nilai berisi 30%, ReplicaSet baru dapat segera diperbesar saat pembaruan bergulir dimulai,
sehingga total jumlah Pod baru dan lama tidak melebihi 130% dari Pod yang diinginkan.
Saat Pod lama dimatikan, ReplicaSet baru dapat lebih diperbesar lagi, menjamin bahwa total jumlah Pod yang siap
kapanpun selama pembaruan paling banyak 130% dari Pod yang diinginkan.
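Kedua kolom di atas ditentukan di bawah `.spec.strategy`. Cuplikan berikut hanyalah sketsa; nilai 30% hanyalah contoh:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # dibulatkan ke bawah jika berupa persentase
      maxSurge: 30%         # dibulatkan ke atas jika berupa persentase
```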
### Tenggat Kemajuan dalam Detik

`.spec.progressDeadlineSeconds` adalah kolom opsional yang mengatur lama tunggu dalam detik untuk Deployment-mu berjalan
sebelum sistem melaporkan bahwa Deployment [gagal](#deployment-gagal) - ditunjukkan dengan kondisi `Type=Progressing`, `Status=False`,
dan `Reason=ProgressDeadlineExceeded` pada status sumber daya. Kontroler Deployment akan tetap mencoba ulang Deployment.
Nantinya begitu pengembalian otomatis diimplementasikan, kontroler Deployment akan membalikkan Deployment segera
saat dia menjumpai kondisi tersebut.

Jika ditentukan, kolom ini harus lebih besar dari `.spec.minReadySeconds`.
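Sebagai sketsa, kedua kolom waktu tersebut ditentukan langsung pada `.spec` Deployment; nilai 600 dan 10 di bawah ini hanyalah contoh:

```yaml
spec:
  progressDeadlineSeconds: 600  # tenggat kemajuan, dalam detik
  minReadySeconds: 10           # harus lebih kecil dari progressDeadlineSeconds
```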
### Lama Minimum untuk Siap dalam Detik

`.spec.minReadySeconds` adalah kolom opsional yang mengatur lama minimal, dalam detik, sebuah Pod yang baru dibuat
seharusnya siap tanpa ada kontainer yang rusak, untuk dianggap tersedia.
Nilai bawaannya yaitu 0 (Pod akan dianggap tersedia segera setelah siap). Untuk mempelajari lebih lanjut
kapan Pod dianggap siap, lihat [Pemeriksaan Kontainer](/id/docs/concepts/workloads/pods/pod-lifecycle/#pemeriksaan-kontainer).
### Kembali Ke

Kolom `.spec.rollbackTo` telah ditinggalkan pada versi API `extensions/v1beta1` dan `apps/v1beta1`, dan sudah tidak didukung mulai versi API `apps/v1beta2`.
Sebagai gantinya, disarankan untuk menggunakan `kubectl rollout undo` sebagaimana diperkenalkan dalam [Kembali ke Revisi Sebelumnya](#kembali-ke-revisi-sebelumnya).
### Batas Riwayat Revisi

Riwayat revisi Deployment disimpan dalam ReplicaSet yang dia kendalikan.
`.spec.revisionHistoryLimit` adalah kolom opsional yang mengatur jumlah ReplicaSet lama yang dipertahankan
untuk memungkinkan pengembalian. ReplicaSet lama ini memakan sumber daya di `etcd` dan memenuhi keluaran
dari `kubectl get rs`. Konfigurasi tiap revisi Deployment disimpan pada ReplicaSet-nya; sehingga, begitu ReplicaSet lama dihapus,
kamu tidak dapat lagi membalikkan ke revisi Deployment tersebut. Umumnya, 10 ReplicaSet lama akan dipertahankan,
namun nilai idealnya tergantung pada frekuensi dan stabilitas Deployment-Deployment baru.

Lebih spesifik, mengisi kolom ini dengan nol berarti semua ReplicaSet lama dengan 0 replika akan dibersihkan.
Dalam kasus ini, rilis Deployment baru tidak dapat dibalikkan, sebab riwayat revisinya telah dibersihkan.

### Terjeda
`.spec.paused` adalah kolom boolean opsional untuk menjeda dan melanjutkan Deployment. Satu-satunya perbedaan antara Deployment yang terjeda
dan yang tidak adalah perubahan apapun pada PodTemplateSpec Deployment terjeda tidak akan memicu rilis baru selama masih terjeda.
Deployment umumnya tidak terjeda saat dibuat.
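Kolom ini biasanya diubah lewat `kubectl rollout pause` dan `kubectl rollout resume`. Sketsa berikut meniru `kubectl` dengan fungsi shell agar dapat dijalankan tanpa klaster; di klaster nyata, hapus fungsi tiruan itu, dan nama `nginx-deployment` hanyalah contoh:

```shell
# Fungsi tiruan: mencetak keluaran bergaya kubectl untuk pause/resume.
# Di klaster nyata, hapus baris ini agar kubectl sungguhan yang dipakai.
kubectl() { echo "deployment.apps/nginx-deployment ${2}d"; }

kubectl rollout pause deployment/nginx-deployment    # menyetel .spec.paused=true
kubectl rollout resume deployment/nginx-deployment   # menyetel .spec.paused=false
```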
## Alternatif untuk Deployment

### kubectl rolling update
[`kubectl rolling update`](/id/docs/reference/generated/kubectl/kubectl-commands#rolling-update) membarui Pod dan ReplicationController
dengan cara yang serupa. Namun, Deployment lebih disarankan karena bersifat deklaratif, berjalan di sisi server, dan punya fitur tambahan,
seperti pengembalian ke revisi manapun sebelumnya bahkan setelah pembaruan bergulir selesai.
---
title: Batasan Persebaran Topologi Pod
content_type: concept
weight: 50
---

<!-- overview -->

{{< feature-state for_k8s_version="v1.18" state="beta" >}}
pada klaster yang ditetapkan sebagai _failure-domains_, seperti wilayah, zona, Node dan domain
topologi yang ditentukan oleh pengguna. Ini akan membantu untuk mencapai ketersediaan yang tinggi
dan juga penggunaan sumber daya yang efisien.

<!-- body -->

## Persyaratan
Pada versi 1.18, di mana fitur ini masih Beta, beberapa limitasi yang sudah diketahui:

- Pengurangan jumlah Deployment akan membuat ketidakseimbangan pada persebaran Pod.
- Pod yang cocok pada Node yang _tainted_ akan tetap dihormati. Lihat [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
---
title: Berpartisipasi dalam SIG Docs
content_type: concept
weight: 60
card:
  name: contribute
  weight: 60
---

<!-- overview -->
SIG Docs merupakan salah satu
[kelompok peminatan khusus (_special interest groups_)](https://github.com/kubernetes/community/blob/master/sig-list.md)
dalam proyek Kubernetes, yang berfokus pada penulisan, pembaruan, dan pemeliharaan
dokumentasi untuk Kubernetes secara keseluruhan. Lihatlah
[SIG Docs dari repositori github komunitas](https://github.com/kubernetes/community/tree/master/sig-docs)
untuk informasi lebih lanjut tentang SIG.

SIG Docs menerima konten dan ulasan dari semua kontributor. Siapa pun dapat membuka
_pull request_ (PR), dan siapa pun boleh mengajukan isu tentang konten atau memberi komentar
pada _pull request_ yang sedang berjalan.

Kamu juga bisa menjadi [anggota (_member_)](/id/docs/contribute/participating/roles-and-responsibilities/#anggota),
[pengulas (_reviewer_)](/id/docs/contribute/participating/roles-and-responsibilities/#pengulas), atau [pemberi persetujuan (_approver_)](/id/docs/contribute/participating/roles-and-responsibilities/#approvers). Peran tersebut membutuhkan
akses dan mensyaratkan tanggung jawab tertentu untuk menyetujui dan melakukan perubahan.
Lihatlah [keanggotaan komunitas (_community membership_)](https://github.com/kubernetes/community/blob/master/community-membership.md)
untuk informasi lebih lanjut tentang cara kerja keanggotaan dalam komunitas Kubernetes.

Selebihnya dari dokumen ini menguraikan beberapa cara unik dari fungsi peranan tersebut dalam
SIG Docs, yang bertanggung jawab memelihara salah satu aspek Kubernetes yang paling berhadapan dengan publik:
situs web dan dokumentasi Kubernetes.

<!-- body -->
## Ketua umum (_chairperson_) SIG Docs {#ketua-umum-sig-docs}

Setiap SIG, termasuk SIG Docs, memilih satu atau lebih anggota SIG untuk bertindak sebagai
ketua umum. Mereka merupakan kontak utama antara SIG Docs dan bagian lain dari
organisasi Kubernetes. Mereka membutuhkan pengetahuan yang luas tentang struktur
proyek Kubernetes secara keseluruhan dan bagaimana SIG Docs bekerja di dalamnya. Lihatlah
[Kepemimpinan (_leadership_)](https://github.com/kubernetes/community/tree/master/sig-docs#leadership)
untuk daftar ketua umum saat ini.
## Tim dan automasi dalam SIG Docs

Automasi dalam SIG Docs bergantung pada dua mekanisme berbeda:
tim GitHub dan berkas OWNERS.
### Tim GitHub

Terdapat dua kategori [tim (_teams_)](https://github.com/orgs/kubernetes/teams?query=sig-docs) SIG Docs dalam GitHub:

- `@sig-docs-{language}-owners` merupakan pemberi persetujuan (_approver_) dan pemimpin (_lead_)
- `@sig-docs-{language}-reviewers` merupakan pengulas (_reviewer_)

Setiap tim dapat dirujuk dengan `@name` mereka dalam komentar GitHub untuk berkomunikasi dengan setiap orang di dalam grup.

Terkadang tim Prow dan GitHub tumpang tindih (_overlap_) tanpa kecocokan yang persis sama. Untuk penugasan isu, _pull request_, dan untuk mendukung persetujuan PR,
automasi menggunakan informasi dari berkas `OWNERS`.
### Berkas OWNERS dan bagian muka (_front matter_)

Proyek Kubernetes menggunakan perangkat automasi yang disebut prow untuk melakukan automasi
yang terkait dengan isu dan _pull request_ dalam GitHub.
[Repositori situs web Kubernetes](https://github.com/kubernetes/website) menggunakan
dua buah [_plugin_ prow](https://github.com/kubernetes/test-infra/tree/master/prow/plugins):

- blunderbuss
- approve

Kedua _plugin_ menggunakan berkas
[OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) dan
[OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES)
pada level teratas dari repositori GitHub `kubernetes/website` untuk mengontrol
bagaimana prow bekerja di dalam repositori.

Berkas OWNERS berisi daftar orang-orang yang menjadi pengulas dan pemberi persetujuan di dalam SIG Docs.
Berkas OWNERS juga bisa berada di dalam subdirektori, dan dapat menimpa siapa yang
dapat bertindak sebagai pengulas atau pemberi persetujuan berkas untuk subdirektori itu dan
apa saja yang ada di dalamnya. Untuk informasi lebih lanjut tentang berkas OWNERS secara umum, lihatlah
[OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md).

Selanjutnya, berkas _markdown_ individu dapat mencantumkan daftar pengulas dan pemberi persetujuan
pada bagian mukanya (_front matter_), baik dengan mencantumkan nama pengguna individu GitHub maupun grup GitHub.

Kombinasi dari berkas OWNERS dan bagian muka dalam berkas _markdown_ menentukan
saran yang didapat pemilik PR dari sistem automasi tentang kepada siapa meminta ulasan teknis
dan ulasan editorial untuk PR mereka.
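Sebagai ilustrasi, sebuah berkas OWNERS umumnya berbentuk seperti cuplikan berikut; nama-nama pengguna di dalamnya hanyalah contoh hipotetis:

```yaml
# Cuplikan hipotetis berkas OWNERS
reviewers:
- nama-pengguna-pengulas
approvers:
- nama-pengguna-penyetuju
labels:
- sig/docs
```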
## Cara menggabungkan pekerjaan

Ketika _pull request_ digabungkan ke cabang (_branch_) yang digunakan untuk mempublikasikan konten, konten itu dipublikasikan di http://kubernetes.io. Untuk memastikan bahwa
kualitas konten yang kita terbitkan bermutu tinggi, kita membatasi penggabungan _pull request_ hanya untuk para pemberi persetujuan
SIG Docs. Beginilah cara kerjanya.

- Ketika _pull request_ memiliki label `lgtm` dan `approve`, tidak memiliki label `hold`,
  dan telah lulus semua tes, _pull request_ akan digabungkan secara otomatis.
- Anggota organisasi Kubernetes dan pemberi persetujuan SIG Docs dapat menambahkan komentar
  untuk mencegah penggabungan otomatis dari suatu _pull request_ (dengan menambahkan komentar `/hold`
  atau menahan komentar `/lgtm`).
- Setiap anggota Kubernetes dapat menambahkan label `lgtm` dengan menambahkan komentar `/lgtm`.
- Hanya pemberi persetujuan SIG Docs yang bisa menggabungkan _pull request_
  dengan menambahkan komentar `/approve`. Beberapa pemberi persetujuan juga melakukan
  peran tambahan seperti [PR _Wrangler_](/id/docs/contribute/advanced#menjadi-pr-wrangler-untuk-seminggu) atau
  [Ketua Umum SIG Docs](#ketua-umum-sig-docs).
## {{% heading "whatsnext" %}}

Untuk informasi lebih lanjut tentang cara berkontribusi pada dokumentasi Kubernetes, lihatlah:

- [Berkontribusi konten baru](/id/docs/contribute/overview/)
- [Mengulas konten](/id/docs/contribute/review/reviewing-prs)
- [Panduan gaya dokumentasi](/id/docs/contribute/style/)
---
title: Menyarankan peningkatan kualitas konten
slug: suggest-improvements
content_type: concept
weight: 10
card:
  name: contribute
  weight: 20
---

<!-- overview -->
Jika kamu menemukan masalah pada dokumentasi Kubernetes, atau mempunyai ide untuk
konten baru, silakan membuat isu pada GitHub. Kamu hanya membutuhkan
sebuah [akun GitHub](https://github.com/join) dan sebuah _web browser_.

Pada kebanyakan kasus, pekerjaan baru dalam dokumentasi Kubernetes diawali dengan sebuah
isu pada GitHub. Kontributor Kubernetes akan mengkaji, mengkategorisasi, dan menandai isu
sesuai kebutuhan. Selanjutnya, kamu atau anggota lain dari komunitas Kubernetes dapat membuat
_pull request_ dengan perubahan yang akan menyelesaikan masalahnya.

<!-- body -->
## Membuka sebuah isu

Jika kamu ingin menyarankan peningkatan kualitas pada konten yang sudah ada, atau menemukan kesalahan,
silakan membuka sebuah isu.

1. Turun ke bagian bawah halaman dan klik tombol **Buat Isu**. Ini akan
   mengantarmu ke halaman isu GitHub dengan beberapa tajuk yang telah terisi.
2. Deskripsikan isu atau saran peningkatan kualitasnya. Sediakan detail sebanyak yang kamu bisa.
3. Klik **Submit new issue**.

Setelah dikirim, cek isu yang kamu buat secara berkala atau hidupkan notifikasi GitHub.
Pengulas (_reviewer_) atau anggota komunitas lainnya mungkin akan mengajukan pertanyaan
sebelum mereka mengambil tindakan terhadap isumu.
## Menyarankan konten baru

Jika kamu memiliki ide untuk konten baru, tapi tidak yakin di mana mengutarakannya,
kamu tetap dapat membuat sebuah isu. Caranya antara lain:

- Pilih halaman pada bagian yang menurutmu berhubungan dengan konten tersebut dan klik **Buat Isu**.
- Pergi ke [GitHub](https://github.com/kubernetes/website/issues/new/) dan langsung membuat isu.
## Bagaimana cara membuat isu yang bagus

Perhatikan hal-hal berikut ketika membuat sebuah isu:

- Berikan deskripsi isu yang jelas. Deskripsikan apa yang memang kurang, tertinggal,
  salah, atau konten mana yang memerlukan peningkatan kualitas.
- Jelaskan dampak spesifik dari isu tersebut terhadap pengguna.
- Batasi cakupan sebuah isu menjadi ukuran pekerjaan yang masuk akal.
  Untuk masalah dengan cakupan besar, pecahlah menjadi beberapa isu yang lebih kecil.
  Misal, "Membenahi dokumentasi keamanan" terlalu luas cakupannya, sedangkan "Penambahan
  detail pada topik 'Pembatasan akses jaringan'" lebih spesifik untuk dikerjakan.
- Cari isu yang sudah ada untuk melihat apakah ada yang berhubungan atau
  mirip dengan isu yang baru.
- Jika isu yang baru berhubungan dengan isu lain atau _pull request_, tambahkan rujukan
  dengan menuliskan URL lengkapnya atau dengan nomor isu atau _pull request_ yang diawali
  karakter `#`. Contohnya, `Diajukan oleh #987654`.
- Ikuti [Kode Etik Komunitas](/id/community/code-of-conduct/). Hargai kontributor lain.
  Misalnya, "Dokumentasi ini sangat jelek" adalah masukan yang tidak membantu dan tidak sopan.
---
title: Menggunakan Otorisasi RBAC
content_type: concept
aliases: [../../../rbac/]
weight: 70
---

<!-- overview -->
Kontrol akses berbasis peran (RBAC) adalah metode pengaturan akses ke sumber daya komputer
atau jaringan berdasarkan peran pengguna individu dalam organisasi kamu.

<!-- body -->

Otorisasi RBAC menggunakan kelompok API `rbac.authorization.k8s.io` untuk mengendalikan keputusan
otorisasi, yang memungkinkan kamu mengonfigurasi kebijakan secara dinamis melalui API Kubernetes.
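Sebagai gambaran awal, sebuah Role dalam kelompok API tersebut terlihat seperti cuplikan berikut; nama `pod-reader` dan namespace `default` hanyalah contoh:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]   # "" menandakan kelompok API inti
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```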
Setelah kamu beralih menggunakan RBAC, kamu sebaiknya menyesuaikan kontrol akses
untuk klastermu guna memastikan semuanya memenuhi kebutuhan keamanan informasimu.
---
title: Menjalankan klaster dalam beberapa zona
weight: 10
content_type: concept
---

<!-- overview -->
Laman ini menjelaskan tentang bagaimana menjalankan sebuah klaster dalam beberapa zona.

<!-- body -->

## Pendahuluan
KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
```
---
title: Runtime Container
content_type: concept
weight: 10
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.6" state="stable" >}}
Untuk menjalankan Container di dalam Pod, Kubernetes menggunakan _runtime_ Container (_container runtimes_). Berikut ini adalah
petunjuk instalasi untuk berbagai macam _runtime_.

<!-- body -->
{{< caution >}}
Sebuah kelemahan ditemukan dalam cara `runc` menangani pendeskripsi berkas sistem ketika menjalankan Container.
Container yang berbahaya dapat memanfaatkan kelemahan ini untuk menimpa konten biner `runc` dan
akibatnya dapat menjalankan perintah sembarang pada sistem host dari Container tersebut.

Silakan merujuk pada [CVE-2019-5736](https://access.redhat.com/security/cve/cve-2019-5736) untuk informasi lebih lanjut tentang masalah ini.
{{< /caution >}}
### Penerapan

{{< note >}}
Dokumen ini ditulis untuk pengguna yang memasang CRI (Container Runtime Interface) pada sistem operasi Linux. Untuk sistem operasi lain,
silakan cari dokumentasi khusus untuk platform kamu.
{{< /note >}}

Kamu harus menjalankan semua perintah dalam panduan ini sebagai `root`. Sebagai contoh, awali perintah
dengan `sudo`, atau masuk sebagai `root` dan kemudian jalankan perintah sebagai pengguna `root`.
### _Driver_ cgroup

Ketika systemd dipilih sebagai sistem init untuk sebuah distribusi Linux, proses init membuat
dan menggunakan grup kontrol root (`cgroup`) dan bertindak sebagai manajer cgroup. Systemd terintegrasi erat
dengan cgroup dan akan mengalokasikan cgroup untuk setiap proses. Kamu dapat mengonfigurasi
_runtime_ Container dan kubelet untuk menggunakan `cgroupfs`. Menggunakan `cgroupfs` bersama dengan systemd berarti
akan ada dua manajer cgroup yang berbeda.

Cgroup digunakan untuk membatasi sumber daya yang dialokasikan untuk proses.
Manajer cgroup tunggal menyederhanakan pandangan tentang sumber daya apa yang sedang dialokasikan
dan secara bawaan (_default_) memiliki pandangan yang lebih konsisten tentang sumber daya yang tersedia dan yang sedang digunakan. Ketika kita memiliki
dua manajer, kita pun akan memiliki dua pandangan berbeda tentang sumber daya tersebut. Kita telah melihat kasus di lapangan
di mana Node yang dikonfigurasi menggunakan `cgroupfs` untuk kubelet dan Docker, dan `systemd`
untuk semua proses lain yang berjalan pada Node, menjadi tidak stabil di bawah tekanan sumber daya.

Mengubah pengaturan sedemikian rupa sehingga _runtime_ Container dan kubelet kamu menggunakan `systemd` sebagai _driver_ cgroup
akan menstabilkan sistem. Silakan perhatikan opsi `native.cgroupdriver=systemd` dalam pengaturan Docker di bawah ini.
{{< caution >}}
Mengubah _driver_ cgroup dari Node yang telah bergabung ke dalam sebuah klaster sangat tidak direkomendasikan.
Jika kubelet telah membuat Pod menggunakan semantik dari sebuah _driver_ cgroup, mengubah _runtime_ Container
ke _driver_ cgroup yang lain dapat mengakibatkan kesalahan saat percobaan pembuatan kembali PodSandbox
untuk Pod yang sudah ada. Menjalankan ulang (_restart_) kubelet mungkin tidak menyelesaikan kesalahan tersebut. Rekomendasinya
adalah menguras (_drain_) Node dari beban kerjanya, menghapusnya dari klaster, dan menggabungkannya kembali.
{{< /caution >}}
## Docker

On each of your machines, install Docker.
Version 19.03.11 is recommended, but versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are also known to work.
Keep track of the latest verified Docker version in the Kubernetes release notes.

Use the following commands to install Docker on your system:

{{< tabs name="tab-cri-docker-installation" >}}
{{% tab name="Ubuntu 16.04+" %}}
```shell
# (Install Docker CE)
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
apt-get update && apt-get install -y \
  apt-transport-https ca-certificates curl software-properties-common gnupg2
```

```shell
# Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
```

```shell
# Add the Docker apt repository:
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
```

```shell
# Install Docker CE
apt-get update && apt-get install -y \
  containerd.io=1.2.13-2 \
  docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
```

```shell
# Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
```
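A malformed `daemon.json` (for example, a trailing comma) is a common reason for Docker failing to restart. Before reloading the daemon you can validate the file as JSON; this sketch uses a temporary path and `python3` purely as a JSON checker — on a real node, point it at `/etc/docker/daemon.json` instead:

```shell
# Write the same daemon configuration to a throwaway path and check
# that it parses as JSON before handing it to Docker.
cat > /tmp/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```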
```shell
mkdir -p /etc/systemd/system/docker.service.d
```

```shell
# Restart Docker
systemctl daemon-reload
systemctl restart docker
```
{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}

```shell
# (Install Docker CE)
## Set up the repository
### Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
```

```shell
## Add the Docker repository
yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
```

```shell
# Install Docker CE
yum update -y && yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11
```

```shell
## Create the /etc/docker directory
mkdir /etc/docker
```

```shell
# Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
```

```shell
mkdir -p /etc/systemd/system/docker.service.d
```

```shell
# Restart Docker
systemctl daemon-reload
systemctl restart docker
```
{{% /tab %}}
{{< /tabs >}}

If you want the Docker service to start on boot, run the following command:

```shell
sudo systemctl enable docker
```

Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/)
for more information.

## CRI-O

This section covers the necessary steps to install CRI-O as a CRI runtime.

Use the following commands to install CRI-O on your system:

{{< note >}}
The CRI-O major and minor versions must match the Kubernetes major and minor versions.
For more information, see the [CRI-O compatibility matrix](https://github.com/cri-o/cri-o).
{{< /note >}}

### Prerequisites

```shell
modprobe overlay
modprobe br_netfilter

# Set up required sysctl parameters; these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system
```
{{< tabs name="tab-cri-cri-o-installation" >}}
{{% tab name="Debian" %}}

```shell
# Debian Unstable/Sid
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Unstable/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Unstable/Release.key -O- | sudo apt-key add -
```

```shell
# Debian Testing
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_Testing/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_Testing/Release.key -O- | sudo apt-key add -
```

```shell
# Debian 10
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Debian_10/Release.key -O- | sudo apt-key add -
```

```shell
# Raspbian 10
echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Raspbian_10/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/Raspbian_10/Release.key -O- | sudo apt-key add -
```

and then install CRI-O:

```shell
sudo apt-get install cri-o-1.17
```

{{% /tab %}}
{{% tab name="Ubuntu 18.04, 19.04 and 19.10" %}}

```shell
# Set up the package repository
. /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt-get update
```

```shell
# Install CRI-O
sudo apt-get install cri-o-1.17
```
{{% /tab %}}

{{% tab name="CentOS/RHEL 7.4+" %}}

```shell
# Install prerequisites
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_7/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}/CentOS_7/devel:kubic:libcontainers:stable:cri-o:{{< skew latestVersion >}}.repo
```

```shell
# Install CRI-O
yum install -y cri-o
```
{{% /tab %}}

{{% tab name="openSUSE Tumbleweed" %}}

```shell
sudo zypper install cri-o
```
{{% /tab %}}
{{< /tabs >}}
### Starting CRI-O

```shell
systemctl daemon-reload
systemctl start crio
```

Refer to the [CRI-O installation guide](https://github.com/kubernetes-sigs/cri-o#getting-started)
for more information.

## Containerd

This section contains the necessary steps to use `containerd` as a CRI runtime.

Use the following commands to install containerd on your system:

### Prerequisites

```shell
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Set up required sysctl parameters; these persist across reboots.
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system
```
### Installing containerd

{{< tabs name="tab-cri-containerd-installation" >}}
{{% tab name="Ubuntu 16.04" %}}

```shell
# (Install containerd)
## Set up the package repository
### Install packages to allow apt to use a repository over HTTPS
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```

```shell
## Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
```

```shell
## Set up the Docker package repository
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
```

```shell
## Install containerd
apt-get update && apt-get install -y containerd.io
```

```shell
# Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
```

```shell
# Restart containerd
systemctl restart containerd
```
{{% /tab %}}
{{% tab name="CentOS/RHEL 7.4+" %}}

```shell
# (Install containerd)
## Set up the repository
### Install prerequisite packages
yum install -y yum-utils device-mapper-persistent-data lvm2
```

```shell
## Add the Docker repository
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
```

```shell
## Install containerd
yum update -y && yum install -y containerd.io
```

```shell
## Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
```

```shell
# Restart containerd
systemctl restart containerd
```
{{% /tab %}}
{{< /tabs >}}
### systemd

To use the `systemd` cgroup driver, set `plugins.cri.systemd_cgroup = true` in `/etc/containerd/config.toml`.
When using kubeadm, manually configure the
[cgroup driver for the kubelet](/id/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#mengonfigurasi-cgroup-untuk-kubelet-pada-node-control-plane).
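If you generated the default configuration as shown earlier, the flag can also be flipped non-interactively. A minimal sketch, demonstrated on a throwaway copy (on a real node you would edit `/etc/containerd/config.toml` and then run `systemctl restart containerd`):

```shell
# Demonstrate the edit on a throwaway copy of a containerd config.
cat > /tmp/config.toml <<'EOF'
[plugins.cri]
  systemd_cgroup = false
EOF
sed -i 's/systemd_cgroup = false/systemd_cgroup = true/' /tmp/config.toml
grep systemd_cgroup /tmp/config.toml
```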
## Other CRI runtimes: Frakti

See the [Frakti QuickStart guide](https://github.com/kubernetes/frakti#quickstart) for more information.

@ -1,10 +1,10 @@
---
title: Creating a single control-plane cluster with kubeadm
content_template: templates/task
content_type: task
weight: 30
---

{{% capture overview %}}
<!-- overview -->

The <img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">`kubeadm` tool helps you create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports cluster lifecycle functions

@ -22,9 +22,10 @@ server di _cloud_, sebuah Raspberry Pi, dan lain-lain. Baik itu men-_deploy_ pad
the _cloud_ or on-premises, you can integrate `kubeadm` into provisioning systems such as
Ansible or Terraform.

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

To follow this guide, you need:

@ -51,9 +52,9 @@ sedikit seiring dengan berevolusinya kubeadm, namun secara umum implementasinya
All commands under `kubeadm alpha` are, by definition, supported on an alpha level.
{{< /note >}}

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Objectives

@ -559,9 +560,9 @@ Lihat dokumentasi referensi [`kubeadm reset`](/docs/reference/setup-tools/kubead
for more information about this subcommand and its
options.

{{% /capture %}}

{{% capture discussion %}}

<!-- discussion -->

## What's next

@ -635,4 +636,4 @@ mendukung platform pilihanmu.

If you are running into difficulties with kubeadm, please consult the [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).

{{% /capture %}}
@ -264,7 +264,7 @@ systemctl enable --now kubelet

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.

## Mengonfigurasi _driver_ cgroup yang digunakan oleh kubelet pada Node _control-plane_
## Mengonfigurasi _driver_ cgroup yang digunakan oleh kubelet pada Node _control-plane_ {#mengonfigurasi-cgroup-untuk-kubelet-pada-node-control-plane}

When using Docker, kubeadm automatically detects the cgroup driver for the kubelet
and sets it in the `/var/lib/kubelet/config.yaml` file at runtime.
@ -1,11 +1,11 @@
---
title: Creating an External Load Balancer
content_template: templates/task
content_type: task
weight: 80
---

{{% capture overview %}}
<!-- overview -->

This page explains how to create an external load balancer.

@ -21,15 +21,16 @@ Untuk informasi mengenai penyediaan dan penggunaan sumber daya Ingress yang dapa
services externally-reachable URLs, load balance the traffic, terminate SSL, etc.,
please check the [Ingress](/docs/concepts/services-networking/ingress/) documentation.

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

* {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Configuration file

@ -193,4 +194,4 @@ Sekali _load balancer_ eksternal menyediakan bobot, fungsionalitas ini dapat dit

Pod-internal traffic to other Pods should behave the same as for ClusterIP Services, with equal probability across all Pods.

{{% /capture %}}
@ -1,10 +1,10 @@
---
title: Configure Liveness, Readiness and Startup Probes
content_template: templates/task
content_type: task
weight: 110
---

{{% capture overview %}}
<!-- overview -->

This page shows how to configure liveness, readiness and startup
probes for containers.

@ -26,15 +26,16 @@ berhasil, kamu harus memastikan _probe_ tersebut tidak mengganggu _startup_ dari
This can be used to adopt liveness checks on slow-starting containers,
avoiding them getting killed by the kubelet before they are up and running.

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Define a liveness command

@ -358,9 +359,10 @@ Untuk _probe_ TCP, kubelet membuat koneksi _probe_ pada Node, tidak pada Pod, ya
you are not using a Service name in the `host` parameter since the kubelet is unable
to resolve it.

{{% /capture %}}

{{% capture whatsnext %}}

## {{% heading "whatsnext" %}}

* Learn more about
[Container Probes](/id/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).

@ -371,4 +373,4 @@ Kamu juga dapat membaca rujukan API untuk:
* [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core)
* [Probe](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#probe-v1-core)

{{% /capture %}}
@ -1,10 +1,10 @@
---
title: Configure a Pod to Use a PersistentVolume for Storage
content_template: templates/task
content_type: task
weight: 60
---

{{% capture overview %}}
<!-- overview -->

This page shows you how to configure a Pod to use a
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}

@ -19,9 +19,10 @@ PersistentVolumeClaim yang secara otomatis terikat dengan PersistentVolume yang

3. You create a Pod that uses the PersistentVolumeClaim above for storage.

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

* You need a Kubernetes cluster that has only one Node, and
{{< glossary_tooltip text="kubectl" term_id="kubectl" >}}

@ -32,9 +33,9 @@ tidak memiliki sebuah klaster dengan Node tunggal, kamu dapat membuatnya dengan
* Familiarity with the material in
[Persistent Volumes](/id/docs/concepts/storage/persistent-volumes/).

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Create an index.html file on your Node

@ -235,10 +236,10 @@ sudo rmdir /mnt/data

You can now close the shell to your Node.

{{% /capture %}}

{{% capture discussion %}}

<!-- discussion -->

## Access control

@ -266,10 +267,11 @@ Ketika sebuah Pod mengkonsumsi PersistentVolume, GID yang terkait dengan Persist
is not present on the Pod resource itself.
{{< /note >}}

{{% /capture %}}

{{% capture whatsnext %}}

## {{% heading "whatsnext" %}}

* Learn more about [PersistentVolumes](/id/docs/concepts/storage/persistent-volumes/).
* Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md).

@ -281,4 +283,4 @@ tidak ada di dalam sumberdaya Pod itu sendiri.
* [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)

{{% /capture %}}
@ -1,10 +1,10 @@
---
title: Configure a Security Context for a Pod or Container
content_template: templates/task
content_type: task
weight: 80
---

{{% capture overview %}}
<!-- overview -->

A security context defines privilege and access control settings for a Pod
or container. Security context settings include, but are not limited to:

@ -31,15 +31,16 @@ Poin-poin di atas bukanlah sekumpulan lengkap dari aturan konteks keamanan - sil
For more information about security mechanisms in Linux, see
[Overview of Linux Kernel Security Features](https://www.linux.com/learn/overview-linux-kernel-security-features)

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Set the security context for a Pod

@ -401,9 +402,10 @@ kubectl delete pod security-context-demo-3
kubectl delete pod security-context-demo-4
```

{{% /capture %}}

{{% capture whatsnext %}}

## {{% heading "whatsnext" %}}

* [PodSecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritycontext-v1-core)
* [SecurityContext](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#securitycontext-v1-core)

@ -413,4 +415,4 @@ kubectl delete pod security-context-demo-4
* [Pod Security Policies](/docs/concepts/policy/pod-security-policy/)
* [AllowPrivilegeEscalation design document](https://git.k8s.io/community/contributors/design-proposals/auth/no-new-privs.md)

{{% /capture %}}
@ -1,24 +1,25 @@
---
title: Get a Shell to a Running Container
content_template: templates/task
content_type: task
---

{{% capture overview %}}
<!-- overview -->

This page shows how to use `kubectl exec` to get a shell to a running container.

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Get a shell to a container

@ -118,9 +119,9 @@ kubectl exec shell-demo ls /
kubectl exec shell-demo cat /proc/1/mounts
```

{{% /capture %}}

{{% capture discussion %}}

<!-- discussion -->

## Open a shell when a Pod has more than one container

@ -134,14 +135,15 @@ _shell_ ke Container dengan nama main-app.
kubectl exec -it my-pod --container main-app -- /bin/bash
```

{{% /capture %}}

{{% capture whatsnext %}}

## {{% heading "whatsnext" %}}

* [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec)

{{% /capture %}}
@ -1,26 +1,27 @@
---
title: Define a Command and Arguments for a Container
content_template: templates/task
content_type: task
weight: 10
---

{{% capture overview %}}
<!-- overview -->

This page shows how to define commands and arguments when you run
a container in a {{< glossary_tooltip term_id="Pod" >}}.

{{% /capture %}}

{{% capture prerequisites %}}

## {{% heading "prerequisites" %}}

{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

{{% /capture %}}

{{% capture steps %}}

<!-- steps -->

## Define a command and arguments when you create a Pod

@ -145,12 +146,13 @@ Berikut ini beberapa contoh:
| `[/ep-1]` | `[foo bar]` | `[/ep-2]` | `[zoo boo]` | `[ep-2 zoo boo]` |

{{% /capture %}}

{{% capture whatsnext %}}

## {{% heading "whatsnext" %}}

* Learn more about [configuring Pods and containers](/id/docs/tasks/).
* Learn more about [running commands in a container](/id/docs/tasks/debug-application-cluster/get-shell-running-container/).
* See [Container](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#container-v1-core).

{{% /capture %}}
@ -4,11 +4,11 @@ feature:
  title: Horizontal scaling
  description: >
    Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
content_template: templates/concept
content_type: concept
weight: 90
---

{{% capture overview %}}
<!-- overview -->

The HorizontalPodAutoscaler automatically scales the number of Pods in a ReplicationController, Deployment,
ReplicaSet or StatefulSet based on observed CPU utilization (or, with

@ -20,10 +20,10 @@ HorizontalPodAutoscaler diimplementasikan sebagai Kubernetes API *resource* dan
The controller adjusts the number of replicas in a ReplicationController or Deployment to match the observed
average CPU utilization to the target specified by the user.

{{% /capture %}}

{{% capture body %}}

<!-- body -->

## How does the HorizontalPodAutoscaler work?

@ -441,12 +441,13 @@ behavior:
  selectPolicy: Disabled
```

{{% /capture %}}

{{% capture whatsnext %}}

## {{% heading "whatsnext" %}}

* Design documentation: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
* kubectl autoscale command reference: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
* Usage example of a [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/).

{{% /capture %}}
@ -0,0 +1,5 @@
---
title: "TLS"
weight: 100
---
@ -0,0 +1,214 @@
|
|||
---
|
||||
title: Kelola Sertifikat TLS Pada Klaster
|
||||
content_template: templates/task
|
||||
---
|
||||
|
||||
<!-- overview -->
|
||||
|
||||
Kubernetes menyediakan API `certificates.k8s.io` yang memungkinkan kamu membuat sertifikat
|
||||
TLS yang ditandatangani oleh Otoritas Sertifikat (CA) yang kamu kendalikan. CA dan sertifikat ini
|
||||
bisa digunakan oleh _workload_ untuk membangun kepercayaan.
|
||||
|
||||
API `certificates.k8s.io` menggunakan protokol yang mirip dengan [konsep ACME](https://github.com/ietf-wg-acme/acme/).
|
||||
|
||||
{{< note >}}
|
||||
Sertifikat yang dibuat menggunakan API `certificates.k8s.io` ditandatangani oleh CA
|
||||
khusus. Ini memungkinkan untuk mengkonfigurasi klaster kamu agar menggunakan CA _root_ klaster untuk tujuan ini,
|
||||
namun jangan pernah mengandalkan ini. Jangan berasumsi bahwa sertifikat ini akan melakukan validasi
|
||||
dengan CA _root_ klaster
|
||||
{{< /note >}}
|
||||
|
||||
|
||||
|
||||
|
||||
## {{% heading "prerequisites" %}}
|
||||
|
||||
|
||||
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
|
||||
|
||||
|
||||
|
||||
<!-- steps -->
|
||||
|
||||
## Mempercayai TLS dalam Klaster
|
||||
|
||||
Mempercayai CA khusus dari aplikasi yang berjalan sebagai Pod biasanya memerlukan
|
||||
beberapa tambahan konfigurasi aplikasi. Kamu harus menambahkan bundel sertifikat CA
|
||||
ke daftar sertifikat CA yang dipercaya klien atau server TLS.
|
||||
Misalnya, kamu akan melakukan ini dengan konfigurasi TLS golang dengan mengurai rantai sertifikat
|
||||
dan menambahkan sertifikat yang diurai ke `RootCAs` di _struct_
|
||||
[`tls.Config`](https://godoc.org/crypto/tls#Config).
|
||||
|
||||
Kamu bisa mendistribusikan sertifikat CA sebagai sebuah
|
||||
[ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap) yang bisa diakses oleh Pod kamu.
|
||||
|
||||
## Meminta Sertifikat
|
||||
|
||||
Bagian berikut mendemonstrasikan cara membuat sertifikat TLS untuk sebuah
|
||||
Service kubernetes yang diakses melalui DNS.
|
||||
|
||||
{{< note >}}
|
||||
Tutorial ini menggunakan CFSSL: PKI dan peralatan TLS dari Cloudflare [klik disini](https://blog.cloudflare.com/introducing-cfssl/) untuk mengetahui lebih jauh.
|
||||
{{< /note >}}
|
||||
|
||||
## Unduh dan Pasang CFSSL
|
||||
|
||||
Contoh ini menggunakan cfssl yang dapat diunduh pada
|
||||
[https://pkg.cfssl.org/](https://pkg.cfssl.org/).
|
||||
|
||||
## Membuat CertificateSigningRequest
|
||||
|
||||
Buat kunci pribadi dan CertificateSigningRequest (CSR) dengan menggunakan perintah berikut:
|
||||
|
||||
```shell
|
||||
cat <<EOF | cfssl genkey - | cfssljson -bare server
|
||||
{
|
||||
"hosts": [
|
||||
"my-svc.my-namespace.svc.cluster.local",
|
||||
"my-pod.my-namespace.pod.cluster.local",
|
||||
"192.0.2.24",
|
||||
"10.0.34.2"
|
||||
],
|
||||
"CN": "my-pod.my-namespace.pod.cluster.local",
|
||||
"key": {
|
||||
"algo": "ecdsa",
|
||||
"size": 256
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
`192.0.2.24` adalah klaster IP Service,
|
||||
`my-svc.my-namespace.svc.cluster.local` adalah nama DNS Service,
|
||||
`10.0.34.2` adalah IP Pod dan `my-pod.my-namespace.pod.cluster.local`
|
||||
adalah nama DNS Pod. Kamu akan melihat keluaran berikut:
|
||||
|
||||
```
|
||||
2017/03/21 06:48:17 [INFO] generate received request
|
||||
2017/03/21 06:48:17 [INFO] received CSR
|
||||
2017/03/21 06:48:17 [INFO] generating key: ecdsa-256
|
||||
2017/03/21 06:48:17 [INFO] encoded CSR
|
||||
```
|
||||
|
||||
Perintah ini menghasilkan dua berkas; Ini menghasilkan `server.csr` yang berisi permintaan sertifikasi PEM
|
||||
tersandi [pkcs#10](https://tools.ietf.org/html/rfc2986),
|
||||
dan `server-key.pem` yang berisi PEM kunci yang tersandi untuk sertifikat yang
|
||||
masih harus dibuat.
|
||||
|
||||
## Membuat objek CertificateSigningRequest untuk dikirim ke API Kubernetes
|
||||
Buat sebuah yaml CSR dan kirim ke API Server dengan menggunakan perintah berikut:
|
||||
|
||||
```shell
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
```

Notice that the `server.csr` file created in step 1 is base64 encoded
and stored in the `.spec.request` field. We are also requesting a
certificate with the "digital signature", "key encipherment", and "server
auth" key usages. All key usages and extended key usages listed
[here](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage)
are supported, so you can request client certificates and other
certificates using the same API.

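The `$(cat server.csr | base64 | tr -d '\n')` substitution exists because `.spec.request` must be a single-line base64 string. A sketch of assembling that manifest in Python (the helper name is illustrative, not a kubectl feature):

```python
import base64
import json

def build_csr_manifest(name, csr_pem, usages):
    """Assemble a CertificateSigningRequest manifest, with the PEM
    request base64-encoded into .spec.request as a single line."""
    request = base64.b64encode(csr_pem).decode("ascii")
    assert "\n" not in request  # b64encode emits no newlines
    return {
        "apiVersion": "certificates.k8s.io/v1beta1",
        "kind": "CertificateSigningRequest",
        "metadata": {"name": name},
        "spec": {"request": request, "usages": list(usages)},
    }

manifest = build_csr_manifest(
    "my-svc.my-namespace",
    b"-----BEGIN CERTIFICATE REQUEST-----\n...",  # placeholder PEM bytes
    ["digital signature", "key encipherment", "server auth"],
)
print(json.dumps(manifest, indent=2))
```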
The CSR should now be visible from the API in a Pending state. You can see it by running:

```shell
kubectl describe csr my-svc.my-namespace
```

```none
Name:                   my-svc.my-namespace
Labels:                 <none>
Annotations:            <none>
CreationTimestamp:      Tue, 21 Mar 2017 07:03:51 -0700
Requesting User:        yourname@example.com
Status:                 Pending
Subject:
        Common Name:    my-svc.my-namespace.svc.cluster.local
        Serial Number:
Subject Alternative Names:
        DNS Names:      my-svc.my-namespace.svc.cluster.local
        IP Addresses:   192.0.2.24
                        10.0.34.2
Events: <none>
```

## Get the CertificateSigningRequest Approved

Approving the CertificateSigningRequest is either done by an automated
approval process or on a one-off basis by a cluster administrator. More
detail on what this involves is covered below.

## Download the Certificate and Use It

Once the CSR is signed and approved, you should see the following:

```shell
kubectl get csr
```

```none
NAME                  AGE       REQUESTOR               CONDITION
my-svc.my-namespace   10m       yourname@example.com    Approved,Issued
```

You can download the issued certificate and save it to a `server.crt` file
by running the following:

```shell
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
    | base64 --decode > server.crt
```

Now you can use `server.crt` and `server-key.pem` as the key pair
to start your HTTPS server.

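As a sketch of that final step, Python's standard library can serve HTTPS with the pair produced above (the file paths are assumed to be in the current directory; this helper is illustrative, not part of the tutorial's tooling):

```python
import http.server
import ssl

def make_https_server(certfile, keyfile, port=8443):
    """Serve the current directory over HTTPS using the issued
    certificate (server.crt) and the cfssl-generated key
    (server-key.pem)."""
    httpd = http.server.HTTPServer(
        ("0.0.0.0", port), http.server.SimpleHTTPRequestHandler)
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=certfile, keyfile=keyfile)
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
    return httpd

# Usage (requires server.crt and server-key.pem in the current directory):
#     make_https_server("server.crt", "server-key.pem").serve_forever()
```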

## Approving CertificateSigningRequests

A Kubernetes administrator (with appropriate permissions) can manually
approve (or deny) CertificateSigningRequests by using the `kubectl certificate
approve` and `kubectl certificate deny` commands. However, if you intend
to make heavy use of this API, you might consider writing an automated
certificates controller.

Whether it is a machine or a human using kubectl as above, the role of the
approver is to verify that the CSR satisfies two requirements:

1. The subject of the CSR controls the private key used to sign the CSR. This
   addresses the threat of a third party masquerading as an authorized subject.
   In the above example, this step would be to verify that the Pod controls the
   private key used to generate the CSR.
2. The subject of the CSR is authorized to act in the requested context. This
   addresses the threat of an undesired subject joining the cluster. In the
   above example, this step would be to verify that the Pod is allowed to
   participate in the requested Service.

If and only if these two requirements are met, the approver should approve
the CSR; otherwise, it should deny the CSR.

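That "if and only if" rule can be written down as a tiny decision function (purely illustrative):

```python
def approval_decision(subject_controls_key, subject_authorized):
    """Approve a CSR if and only if the subject controls the private
    key AND is authorized for the requested context; otherwise deny."""
    if subject_controls_key and subject_authorized:
        return "approve"
    return "deny"

print(approval_decision(True, True))   # approve
print(approval_decision(True, False))  # deny
```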
## A Word of Warning on the Approval Permission

The ability to approve CSRs decides who trusts whom within your environment.
This ability should not be granted broadly or lightly. The requirements of
the challenges described in the previous section, and the repercussions of
issuing a specific certificate, should be fully understood before granting
this permission.

## A Note to Cluster Administrators

This tutorial assumes that a signer is set up to serve the certificates API.
The Kubernetes controller manager provides a default implementation of a
signer. To enable it, pass the `--cluster-signing-cert-file` and
`--cluster-signing-key-file` parameters to the controller manager with paths
to your CA's key pair.
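A sketch of the corresponding kube-controller-manager invocation (the `/etc/kubernetes/pki/...` paths are placeholders; point them at wherever your CA pair actually lives):

```shell
kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
```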

---
title: Create a Cluster
weight: 10
---

---
title: Interactive Tutorial - Creating a Cluster
weight: 20
---

<!DOCTYPE html>

<html lang="id">

<body>

<link href="/docs/tutorials/kubernetes-basics/public/css/styles.css" rel="stylesheet">
<link href="/docs/tutorials/kubernetes-basics/public/css/overrides.css" rel="stylesheet">
<script src="https://katacoda.com/embed.js"></script>

<div class="layout" id="top">

    <main class="content katacoda-content">

        <div class="katacoda">
            <div class="katacoda__alert">
                The screen is too small to interact with the Terminal; please use a desktop or tablet.
            </div>
            <div class="katacoda__box" id="inline-terminal-1" data-katacoda-lang="id" data-katacoda-id="kubernetes-bootcamp/1" data-katacoda-color="326de6" data-katacoda-secondary="273d6d" data-katacoda-hideintro="false" data-katacoda-font="Roboto" data-katacoda-fontheader="Roboto Slab" data-katacoda-prompt="Kubernetes Bootcamp Terminal" style="height: 600px;"></div>
        </div>
        <div class="row">
            <div class="col-md-12">
                <a class="btn btn-lg btn-success" href="/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/" role="button">Continue to Module 2<span class="btn__next">›</span></a>
            </div>
        </div>

    </main>

</div>

</body>
</html>