Merge remote-tracking branch 'upstream/master' into dev-1.19

pull/22932/head
Savitha Raghunathan 2020-08-03 16:09:39 -04:00
commit 53c71ad3f9
170 changed files with 10327 additions and 3906 deletions


@@ -4,37 +4,61 @@
This repository contains the assets required to build the [Kubernetes website and documentation](https://kubernetes.io/). We're glad that you want to contribute!
## Running the website locally using Hugo
# Using this repository
See the [official Hugo documentation](https://gohugo.io/getting-started/installing/) for Hugo installation instructions. Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
You can run the website locally using Hugo, or you can run it in a container runtime. We strongly recommend using the container runtime, as it gives deployment consistency with the live website.
Before building the site, clone the Kubernetes website repository:
## Prerequisites
```bash
To use this repository, you need the following installed locally:
- [yarn](https://yarnpkg.com/)
- [npm](https://www.npmjs.com/)
- [Go](https://golang.org/)
- [Hugo](https://gohugo.io/)
- A container runtime, like [Docker](https://www.docker.com/).
Before you start, install the dependencies. Clone the repository and navigate to the directory:
```
git clone https://github.com/kubernetes/website.git
cd website
```
The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following:
```
# install dependencies
yarn
# pull in the Docsy submodule
git submodule update --init --recursive --depth 1
```
**Note:** The Kubernetes website deploys the [Docsy Hugo theme](https://github.com/google/docsy#readme).
If you have not updated your website repository, the `website/themes/docsy` directory is empty. The site cannot build
without a local copy of the theme.
## Running the website using a container
Update the website theme:
To build the site in a container, run the following to build the container image and run it:
```bash
git submodule update --init --recursive --depth 1
```
make container-image
make container-serve
```
Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
## Running the website locally using Hugo
Make sure to install the Hugo extended version specified by the `HUGO_VERSION` environment variable in the [`netlify.toml`](netlify.toml#L10) file.
To build and test the site locally, run:
```bash
hugo server --buildFuture
make serve
```
This will start the local Hugo server on port 1313. Open up your browser to http://localhost:1313 to view the website. As you make changes to the source files, Hugo updates the website and forces a browser refresh.
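Since the required Hugo version is pinned in `netlify.toml`, a quick way to confirm your local install matches is to compare the two. A minimal sketch, assuming `HUGO_VERSION` is defined on a single line:

```bash
# Show the Hugo version the live site is built with
grep HUGO_VERSION netlify.toml

# Compare against the locally installed Hugo (must be the extended build)
hugo version
```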
## Get involved with SIG Docs
# Get involved with SIG Docs
Learn more about SIG Docs Kubernetes community and meetings on the [community page](https://github.com/kubernetes/community/tree/master/sig-docs#meetings).
@@ -43,7 +67,7 @@ You can also reach the maintainers of this project at:
- [Slack](https://kubernetes.slack.com/messages/sig-docs)
- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
## Contributing to the docs
# Contributing to the docs
You can click the **Fork** button in the upper-right area of the screen to create a copy of this repository in your GitHub account. This copy is called a *fork*. Make any changes you want in your fork, and when you are ready to send those changes to us, go to your fork and create a new pull request to let us know about it.
@@ -60,7 +84,7 @@ For more information about contributing to the Kubernetes documentation, see:
* [Documentation Style Guide](https://kubernetes.io/docs/contribute/style/style-guide/)
* [Localizing Kubernetes Documentation](https://kubernetes.io/docs/contribute/localization/)
## Localization `README.md`'s
# Localization `README.md`'s
| Language | Language |
|---|---|
@@ -72,10 +96,10 @@ For more information about contributing to the Kubernetes documentation, see:
|[Italian](README-it.md)|[Ukrainian](README-uk.md)|
|[Japanese](README-ja.md)|[Vietnamese](README-vi.md)|
## Code of conduct
# Code of conduct
Participation in the Kubernetes community is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
## Thank you!
# Thank you!
Kubernetes thrives on community participation, and we appreciate your contributions to our website and our documentation!

SECURITY.md

@@ -0,0 +1,22 @@
# Security Policy
## Security Announcements
Join the [kubernetes-security-announce] group for security and vulnerability announcements.
You can also subscribe to an RSS feed of the above using [this link][kubernetes-security-announce-rss].
## Reporting a Vulnerability
Instructions for reporting a vulnerability can be found on the
[Kubernetes Security and Disclosure Information] page.
## Supported Versions
Information about supported Kubernetes versions can be found on the
[Kubernetes version and version skew support policy] page on the Kubernetes website.
[kubernetes-security-announce]: https://groups.google.com/forum/#!forum/kubernetes-security-announce
[kubernetes-security-announce-rss]: https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50
[Kubernetes version and version skew support policy]: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions
[Kubernetes Security and Disclosure Information]: https://kubernetes.io/docs/reference/issues-security/security/#report-a-vulnerability


@@ -231,7 +231,7 @@ no = 'Sorry to hear that. Please <a href="https://github.com/USERNAME/REPOSITORY
[[params.links.user]]
name = "Calendar"
url = "https://calendar.google.com/calendar/embed?src=nt2tcnbtbied3l6gi2h29slvc0%40group.calendar.google.com"
url = "https://calendar.google.com/calendar/embed?src=calendar%40kubernetes.io"
icon = "fas fa-calendar-alt"
desc = "Google Calendar for Kubernetes"


@@ -123,7 +123,7 @@ For example, if you try to create a Node from the following content:
```
Internally, Kubernetes creates a Node oject (the representation) and validates the Node with a health check based on the `metadata.name` field.
Internally, Kubernetes creates a Node object (the representation) and validates the Node with a health check based on the `metadata.name` field.
If the Node is valid, that is, if all necessary services are running, it is eligible to run a Pod.
Otherwise, it is ignored for all cluster activity until it becomes valid.


@@ -428,11 +428,11 @@ More information about Minikube can be found in the [proposal](https://git.k8s.io/c
## Additional links
* **Goals and non-goals**: The goals and non-goals of the Minikube project can be found in our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
* **Goals and non-goals**: The goals and non-goals of the Minikube project can be found in our [roadmap](https://minikube.sigs.k8s.io/docs/contrib/roadmap/).
* **Development guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
* **Building Minikube**: Instructions for building and testing Minikube from source can be found in the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md).
* **Building Minikube**: Instructions for building and testing Minikube from source can be found in the [build guide](https://minikube.sigs.k8s.io/docs/contrib/building/).
* **Adding a new dependency**: Instructions for adding a new dependency to Minikube can be found in the [adding dependencies guide](https://minikube.sigs.k8s.io/docs/drivers/).
* **Adding a new add-on**: Instructions for adding a new add-on for Minikube can be found in the [adding an add-on guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md).
* **Adding a new add-on**: Instructions for adding a new add-on for Minikube can be found in the [adding an add-on guide](https://minikube.sigs.k8s.io/docs/handbook/addons/).
* **MicroK8s**: Linux users who want to avoid running a virtual machine should consider [MicroK8s](https://microk8s.io/) as an alternative.
## Community


@@ -5,7 +5,6 @@ date: 2020-06-30
slug: sig-windows-spotlight-2020
---
# SIG-Windows Spotlight
_This post tells the story of how Kubernetes contributors work together to provide a container orchestrator that works for both Linux and Windows._
<img alt="Image of a computer with Kubernetes logo" width="30%" src="KubernetesComputer_transparent.png">


@@ -0,0 +1,215 @@
---
layout: blog
title: "Physics, politics and Pull Requests: the Kubernetes 1.18 release interview"
date: 2020-08-03
---
**Author**: Craig Box (Google)
The start of the COVID-19 pandemic couldn't delay the release of Kubernetes 1.18, but unfortunately [a small bug](https://github.com/kubernetes/utils/issues/141) could — thankfully only by a day. This was the last cat that needed to be herded by 1.18 release lead [Jorge Alarcón](https://twitter.com/alejandrox135) before the [release on March 25](https://kubernetes.io/blog/2020/03/25/kubernetes-1-18-release-announcement/).
One of the best parts about co-hosting the weekly [Kubernetes Podcast from Google](https://kubernetespodcast.com/) is the conversations we have with the people who help bring Kubernetes releases together. [Jorge was our guest on episode 96](https://kubernetespodcast.com/episode/096-kubernetes-1.18/) back in March, and [just like last week](https://kubernetes.io/blog/2020/07/27/music-and-math-the-kubernetes-1.17-release-interview/) we are delighted to bring you the transcript of this interview.
If you'd rather enjoy the "audiobook version", including another interview when 1.19 is released later this month, [subscribe to the show](https://kubernetespodcast.com/subscribe/) wherever you get your podcasts.
In the last few weeks, we've talked to long-time Kubernetes contributors and SIG leads [David Oppenheimer](https://kubernetespodcast.com/episode/114-scheduling/), [David Ashpole](https://kubernetespodcast.com/episode/113-instrumentation-and-cadvisor/) and [Wojciech Tyczynski](https://kubernetespodcast.com/episode/111-scalability/). All are worth taking the dog for a longer walk to listen to!
---
**ADAM GLICK: You're a former physicist. I have to ask, what kind of physics did you work on?**
JORGE ALARCÓN: Back in my days of math and all that, I used to work in [computational biology](https://en.wikipedia.org/wiki/Computational_biology) and a little bit of high energy physics. Computational biology was, for the most part, what I spent most of my time on. And it was essentially exploring the big idea of we have the structure of proteins. We know what they're made of. Now, based on that structure, we want to be able to predict [how they're going to fold](https://en.wikipedia.org/wiki/Protein_folding) and how they're going to behave, which essentially translates into the whole idea of designing pharmaceuticals, designing vaccines, or anything that you can possibly think of that has any connection whatsoever to a living organism.
**ADAM GLICK: That would seem to ladder itself well into maybe going to something like bioinformatics. Did you take a tour into that, or did you decide to go elsewhere directly?**
JORGE ALARCÓN: It is related, and I worked a little bit with some people that did focus on bioinformatics on the field specifically, but I never took a detour into it. Really, my big idea with computational biology, to be honest, it wasn't even the biology. That's usually what sells it, what people are really interested in, because protein engineering, all the cool and amazing things that you can do.
Which is definitely good, and I don't want to take away from it. But my big thing is because biology is such a real thing, it is amazingly complicated. And the math— the models that you have to design to study those systems, to be able to predict something that people can actually experiment and measure, it just captivated me. The level of complexity, the beauty, the mechanisms, all the structures that you see once you got through the math and look at things, it just kind of got to me.
**ADAM GLICK: How did you go from that world into the world of Kubernetes?**
JORGE ALARCÓN: That's both a really boring story and an interesting one.
[LAUGHING]
I did my thing with physics, and it was good. It was fun. But at some point, I wanted— working in academia— at least my feeling for it is that generally all the people that you're surrounded with are usually academics. Just another bunch of physicists, a bunch of mathematicians.
But very seldom do you actually get the opportunity to take what you're working on and give it to someone else to use. Even with the mathematicians and physicists, the things that we're working on are super specialized, and you can probably find three, four, five people that can actually understand everything that you're saying. A lot of people are going to get the gist of it, but understanding the details, it's somewhat rare.
One of the things that I absolutely love about tech, about software engineering, coding, all that, is how open and transparent everything is. You can write your library in Python, you can publish it, and suddenly the world is going to actually use it, actually consume it. And because normally, I've seen that it has a large avenue where you can work in something really complicated, you can communicate it, and people can actually go ahead and take it and run with it in their given direction. And that is kind of what happened.
At some point, by pure accident and chance, I came across this group of people on the internet, and they were in the stages of making up this new group that's called [Data for Democracy](https://datafordemocracy.org/), a non-profit. And the whole idea was the internet, especially Twitter— that's how we congregated— Twitter, the internet. We have a ton of data scientists, people who work as software engineers, and the like. What if we all come together and try to solve some issues that actually affect the daily lives of people. And there were a ton of projects. Helping the ACLU gather data for something interesting that they were doing, gather data and analyze it for local governments— where do you have potholes, how much water is being consumed.
Try to apply all the science that we knew, combined with all the code that we could write, and offer a good and digestible idea for people to say, OK, this makes sense, let's do something about it— policy, action, whatever. And I started working with this group, Data for Democracy— wonderful set of people. And the person who I believe we can blame for Data for Democracy— the one who got the idea and got it up and running, his name is Jonathan Morgan. And eventually, we got to work together. He started a startup, and I went to work with the startup. And that was essentially the thing that took me away from physics and into the world of software engineering— Data for Democracy, definitely.
**ADAM GLICK: Were you using Kubernetes as part of that work there?**
JORGE ALARCÓN: No, it was simple as it gets. You just try to get some data. You create a couple [IPython notebooks](https://ipython.org/), some setting up of really simple MySQL databases, and that was it.
**ADAM GLICK: Where did you get started using Kubernetes? And was it before you started contributing to it and being a part, or did you decide to jump right in?**
JORGE ALARCÓN: When I first started using Kubernetes, it was also on my first job. So there wasn't a lot of specific training in regards to software engineering or anything of the sort that I did before I actually started working as a software engineer. I just went from physicist to engineer. And in my days of physics, at least on the computer side, I was completely trained in the super old school system administrator, where you have your 10, 20 computers. You know physically where they are, and you have to connect the cables.
**ADAM GLICK: All pets— all pets all the time.**
JORGE ALARCÓN: [LAUGHING] You have to have your huge Python, bash scripts, three, five major versions, all because doing an upgrade will break something really important and you have no idea how to work on it. And that was my training. That was the way that I learned how to do things. Those were the kind of things that I knew how to do.
And when I got to this company— startup— we were pretty much starting from scratch. We were building a couple applications. We were testing them, we were deploying them on a couple of managed instances. But like everything, there was a lot of toil that we wanted to automate. The whole issue of, OK, after days of work, we finally managed to get this version of the application up and running in these machines.
It's open to the internet. People can test it out. But it turns out that it is now two weeks behind the latest on all the master branches for this repo, so now we want to update. And we have to go through the process of bringing it back up, creating new machines, do that whole thing. And I had no idea what Kubernetes was, to be honest. My boss at the moment mentioned it to me like, hey, we should use Kubernetes because apparently, Kubernetes is something that might be able to help us here. And we did some— I want to call it research and development.
It was actually just making— again, startup, small company, small team, so really me just playing around with Kubernetes trying to get it to work, trying to get it to run. I was so lost. I had no idea what I was doing— not enough. I didn't have an idea of how Kubernetes was supposed to help me. And at that point, I did the best Googling that I could manage. Didn't really find a lot of examples. Didn't find a lot of blog posts. It was early.
**ADAM GLICK: What time frame was this?**
JORGE ALARCÓN: Three, four years ago, so definitely not 1.13. That's the best guesstimate that I can give at this point. But I wasn't able to find any good examples, any tutorials. The only book that I was able to get my hands on was the one written by Joe Beda, Kelsey Hightower, and I forget the other author. But what is it? "[Kubernetes— Up and Running](http://shop.oreilly.com/product/0636920223788.do)"?
And in general, right now I use it as reference— it's really good. But as a beginner, I still was lost. They give all these amazing examples, they provide the applications, but I had no idea why someone might need a Pod, why someone might need a Deployment. So my last resort was to try and find someone who actually knew Kubernetes.
By accident, during my eternal Googling, I actually found a link to the [Kubernetes Slack](http://slack.kubernetes.io/). I jumped into the Kubernetes Slack hoping that someone might be able to help me out. And that was my entry point into the Kubernetes community. I just kept on exploring the Slack, tried to see what people were talking about, what they were asking to try to make sense of it, and just kept on iterating. And at some point, I think I got the hang of it.
**ADAM GLICK: What made you decide to be a release lead?**
JORGE ALARCÓN: The answer to this is my answer to why I have been contributing to Kubernetes. I really just want to be able to help out the community. Kubernetes is something that I absolutely adore.
Comparing Kubernetes to old school system administration, a handful of years ago, it took me like a week to create a node for an application to run. It took me months to get something that vaguely looked like an Ingress resource— just setting up the Nginx, and allowing someone else to actually use my application. And the fact that I could do all of that in five minutes, it really captivated me. Plus I've got to blame it on the physics. The whole idea with physics, I really like the patterns, and I really like the design of Kubernetes.
Once I actually got the hang of it, I loved the idea of how everything was designed, and I just wanted to learn a lot more about it. And I wanted to help the contributors. I wanted to help the people who actually build it. I wanted to help maintain it, and help provide the information for new contributors or new users. So instead of taking months for them to be up and running, let's just chat about what your issue is, and let's try to get a fix within the next hour or so.
**ADAM GLICK: You work for a stealth startup right now. Is it fair to assume that they're using Kubernetes?**
JORGE ALARCÓN: Yes—
[LAUGHING]
—for everything.
**ADAM GLICK: Are you able to say what [Searchable](https://www.searchable.ai/) does?**
JORGE ALARCÓN: The thing that we are trying to build is kind of like a search engine for your documents. Usually, if people have a question, they jump on Google. And for the most part, you're going to be able to get a good answer. You can ask something really random, like 'what is the weight of an elephant?'
Which, if you think about it, it's kind of random, but Google is going to give you an answer. And the thing that we are trying to build is something similar to that, but for files. So essentially, a search engine for your files. And most people, you have your local machine loaded up with— at least mine, I have a couple tens of gigabytes of different files.
I have Google Drive. I have a lot of documents that live in my email and the like. So the idea is to kind of build a search engine that is going to be able to connect all of those pieces. And besides doing simple word searches— for example, 'Kubernetes interview', and bring me the documents that we're looking at with all the questions— I can also ask things like what issue did I find last week while testing Prometheus. And it's going to be able to read my files, like through natural language processing, understand it, and be able to give me an answer.
**ADAM GLICK: It is a Google for your personal and non-public information, essentially?**
JORGE ALARCÓN: Hopefully.
**ADAM GLICK: Is the work that you do with Kubernetes as the release lead— is that part of your day job, or is that something that you're doing kind of nights and weekends separate from your day job?**
JORGE ALARCÓN: Both. Strictly speaking, my day job is just keep working on the application, build the things that it needs, maintain the infrastructure, and all that. When I started working at the company— which by the way, the person who brought me into the company was also someone that I met from my days in Data for Democracy— we started talking about the work.
I mentioned that I do a lot of work with the Kubernetes community and if it was OK that I continue doing it. And to my surprise, the answer was not only a yes, but yeah, you can do it during your day work. And at least for the time being, I just balance— I try to keep things organized.
Some days I just focus on Kubernetes. Some mornings I do Kubernetes. And then afternoon, I do Searchable, vice-versa, or just go back and forth, and try to balance the work as much as possible. But being release lead, definitely, it is a lot, so nights and weekends.
**ADAM GLICK: How much time does it take to be the release lead?**
JORGE ALARCÓN: It varies, but probably, if I had to give an estimate, at the very least you have to be able to dedicate four hours most days.
**ADAM GLICK: Four hours a day?**
JORGE ALARCÓN: Yeah, most days. It varies a lot. For example, at the beginning of the release cycle, you don't need to put in that much work because essentially, you're just waiting and helping people get set up, and people are writing their [Kubernetes Enhancement Proposals](https://github.com/kubernetes/enhancements/tree/master/keps), they are implementing it, and you can answer some questions. It's relatively easy, but for the most part, a lot of the time the four hours go into talking with people, just making sure that, hey, are people actually writing their enhancements, do we have all the enhancements that we want. And most of those four hours are spent going around, chatting with people, and making sure that things are being done. And if, for some reason, someone needs help, just directing them to the right place to get their answer.
**ADAM GLICK: What does Searchable get out of you doing this work?**
JORGE ALARCÓN: Physically, nothing. The thing that we're striving for is to give back to the community. My manager/boss/homeslice— I told him I was going to call him my homeslice— both of us have experience working in open source. At some point, he was also working on a project that I'm probably going to mispronounce, but Mahout with Apache.
And he also has had this experience. And both of us have this general idea and strive to build something for Searchable that's going to be useful for people, but also build knowledge, build guides, build applications that are going to be useful for the community. And at least one of the things that I was able to do right now is be the lead for the Kubernetes team. And this is a way of giving back to the community. We're using Kubernetes to run our things, so let's try to balance how things work.
**ADAM GLICK: Lachlan Evenson was the release lead on 1.16 as well as [our guest back in episode 72](https://kubernetespodcast.com/episode/072-kubernetes-1.16/), and he's returned on this release as the [emeritus advisor](https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks/emeritus-adviser). What did you learn from him?**
JORGE ALARCÓN: Oh, everything. And it actually all started back on 1.16. So like you said, an amazing person— he's an amazing individual. And it's truly an opportunity to be able to work with him. During 1.16, I was the CI Signal lead, and Lachie is very hands on.
He's not the kind of person to just give you a list of things and say, do them. He actually comes to you, has a conversation, and he works with you more than anything. And when we were working together on 1.16, I got to learn a lot from him in terms of CI Signal. And especially because we talked about everything just to make sure that 1.16 was ready to go, I also got to pick up a couple of things that a release lead has to know, has to be able to do, has to work on to get a release out the door.
And now, during this release, there is a lot of information that's really useful, and there's a lot of advice and general wisdom that comes in handy. For most of the things that impact a lot of things, we are always in communication. Like, I'm doing this, you're doing that, advice. And essentially, every single thing that we do is pretty much a code review. You do it, and then you wait for someone else to give you comments. And that's been a strong part of our relationship working.
**ADAM GLICK: What would you say the theme for this release is?**
JORGE ALARCÓN: I think one of the themes is "fit and finish". There are a lot of features that we are bumping from alpha to beta, from beta to stable. And we want to make sure that people have a good user experience. Operators and developers alike just want to get rid of as many bugs as possible, improve the flow of things.
But the other really cool thing is we have about an equal distribution between alpha, beta, and stable. We are also bringing up a lot of new features. So besides making Kubernetes more stable for all the users that are already using it, we are working on bringing up new things that people can try out for the next release and see how it goes in the future.
**ADAM GLICK: Did you have a release team mascot?**
JORGE ALARCÓN: Kind of.
**ADAM GLICK: Who/what was it?**
JORGE ALARCÓN: [LAUGHING] I say kind of because I'm using the mascot in the [logo](https://twitter.com/KubernetesPod/status/1242953121380392963), and the logo is inspired by the Large Hadron Collider.
**ADAM GLICK: Oh, fantastic.**
JORGE ALARCÓN: Being the release lead, I really had to take a chance on this opportunity to use the LHC as the mascot.
**ADAM GLICK: We've had [some of the folks from the LHC on the show](https://kubernetespodcast.com/episode/062-cern/), and I know they listen, and they will be thrilled with that.**
JORGE ALARCÓN: [LAUGHING] Hopefully, they like the logo.
**ADAM GLICK: If you look at this release, what part of this release, what thing that has been added to it are you personally most excited about?**
JORGE ALARCÓN: Like a parent can't choose which child is his or her favorite, you really can't choose a specific thing.
**ADAM GLICK: We have been following online and in the issues an enhancement that's called [sidecar containers](https://github.com/kubernetes/enhancements/issues/753). You'd be able to mark the order of containers starting in a pod. Tim Hockin posted [a long comment on behalf of a number of SIG Node contributors](https://github.com/kubernetes/enhancements/issues/753#issuecomment-597372056) citing social, procedural, and technical concerns about what's going on with that— in particular, that it moved out of 1.18 and is now moving to 1.19. Did you have any thoughts on that?**
JORGE ALARCÓN: The sidecar enhancement has definitely been an interesting one. First off, thank you very much to Joseph Irving, the author of the KEP. And thank you very much to Tim Hockin, who voiced out the point of view of the approvers, maintainers of SIG Node. And I guess a little bit of context before we move on is, in the Kubernetes community, we have contributors, we have reviewers, and we have approvers.
Contributors are people who write PRs, who file issues, who troubleshoot issues. Reviewers are contributors who focus on one or multiple specific areas within the project, and then approvers are maintainers for the specific area, for one or multiple specific areas, of the project. So you can think of approvers as people who have write access in a repo or someplace within a repo.
The issue with the sidecar enhancement is that it has been deferred for multiple releases now, and that's been because there hasn't been a lot of collaboration between the KEP authors and the approvers for specific parts of the project. Something worthwhile to mention— and this was brought up during the original discussion— is this can obviously be frustrating for both contributors and for approvers. From the contributor's side of things, you are working on something. You are doing your best to make sure that it works.
And to build something that's going to be used by people, both from the approver side of things and, I think, for the most part, every single person in the Kubernetes community, we are all really excited to see this project grow. We want to help improve it, and we love when new people come in and work on new enhancements, bug fixes, and the like.
But one of the limitations is the day only has so many hours, and there are only so many things that we can work on at a time. So people prioritize in whatever way works best, and some things just fall behind. And a lot of the time, the things that fall behind are not because people don't want them to continue moving forward, but it's just a limited amount of resources, a limited amount of people.
And I think this discussion around the sidecar enhancement proposal has been very useful, and it points us to the need for more standardized mentoring programs. This is something that multiple SIGs are working on. For example, SIG Contribex, SIG Cluster Lifecycle, SIG Release. The idea is to standardize some sort of mentoring experience so that we can better prepare new contributors to become reviewers and ultimately approvers.
Because ultimately at the end of the day, if we have more people who are knowledgeable about Kubernetes, or even some specific area of Kubernetes, we can better distribute the load, and we can better collaborate on whatever new things come up. I think the sidecar enhancement has shown us mentoring is something worthwhile, and we need a lot more of it. Because as much work as we do, more things are going to continue popping in throughout the project. And the more people we have who are comfortable working in these really complicated areas of Kubernetes, the better off that we are going to be.
**ADAM GLICK: Was there any talk of delaying 1.18 due to the current worldwide health situation?**
JORGE ALARCÓN: We thought about it, and the plan was to just wait and see how people felt. We tried to make sure that people were comfortable continuing to work: the people landing new enhancements, or fixing tests, or the members of the release team who were making sure that things were happening. We wanted to see that people were comfortable, that they could continue doing their job. And for a moment, I actually thought about delaying just outright— we're going to give it more time, and hopefully at some point, things are going to work out.
But people just continue doing their amazing work. There was no delay. There was no hitch throughout the process. So at some point, I just figured we stay with the current timeline and see how we went. And at this point, things are more or less set.
**ADAM GLICK: Amazing power of a distributed team.**
JORGE ALARCÓN: Yeah, definitely.
[LAUGHING]
**ADAM GLICK: [Taylor Dolezal was announced as the 1.19 release lead](https://twitter.com/alejandrox135/status/1239629281766096898). Do you know how that choice was made, and by whom?**
JORGE ALARCÓN: I actually got to choose the lead. The practice is that the current release team lead looks at people and sees, first off, who's interested; out of the people interested, who can do the job; who's comfortable enough with the release team and the Kubernetes community at large; and who can actually commit the hours throughout the next, hopefully, three months.
And for one, I think Taylor has been part of my team. So there is the release team. Then the release team has multiple subgroups. One of those subgroups is actually just for me and my shadows. So for this release, it was mrbobbytables and Taylor. And Taylor volunteered to take over 1.19, and I'm sure that he will do an amazing job.
**ADAM GLICK: I am as well. What advice will you give Taylor?**
JORGE ALARCÓN: Over-communicate as much as possible. Normally, if you made it to the point that you are the lead for a release, or even the shadow for a release, you more or less are familiar with a lot of the work— CI Signal, enhancements, documentation, and the like. And a lot of people, if they know how to do their job, they might tell themselves, yeah, I could do it— no need to worry about it. I'm just going to go ahead and sign this PR, debug this test, whatever.
But one of the interesting aspects is whenever we are actually working in a release, 50% of the work has to go into actually making the release happen. The other 50% of the work has to go into mentoring people, and making sure the newcomers, new members are able to learn everything that they need to learn to do your job, you being in the lead for a subgroup or the entire team. And whenever you actually see that things need to happen, just over-communicate.
Try to provide the opportunity for someone else to do the work, and over-communicate with them as much as possible to make sure that they are learning whatever it is that they need to learn. If neither you nor the other person knows what's going on, then over-communicate anyway, so someone hopefully will see your messages and come to the rescue. That happens a lot. There are a lot of really nice and kind people who will come out and tell you how something works and help you fix it.
**ADAM GLICK: If you were to sum up your experience running this release, what would it be?**
JORGE ALARCÓN: It's been super fun and a little bit stressful, to be honest. Being the release lead is definitely amazing. You're kind of sitting at the center of Kubernetes.
You not only see the people who are working on things— the things that are broken, and the users filing issues, and saying what broke, and the like. But you also get the opportunity to work with a lot of people who do a lot of non-code related work. Docs is one of the most obvious things. There's a lot of work that goes into communications, contributor experience, public relations.
And being connected, getting to talk with those people mostly every other day, it's really fun. It's a really good experience in terms of becoming a better contributor to the community, but also taking some of that knowledge home with you and applying it somewhere else. If you are a software engineer, if you are a project manager, whatever, it's amazing how much you can learn.
**ADAM GLICK: I know the community likes to rotate around who are the release leads. But if you were given the opportunity to be a release lead for a future release of Kubernetes, would you do it again?**
JORGE ALARCÓN: Yeah, it's a fun job. To be honest, it can be really stressful. Especially, as I mentioned, at some point, most of that work is just going to be talking with people, and talking requires a lot more thought and effort than just sitting down and thinking about things sometimes. And some of that can be really stressful.
But the job itself, it is definitely fun. And at some distant point in the future, if for some reason it was a possibility, I will think about it. But definitely, as you mentioned, one thing that we try to do is cycle out, because I can have fun in it, and that's all good and nice. And hopefully I can help another release go out the door. But providing the opportunity for other people to learn I think is a lot more important than just being the lead itself.
---
_[Jorge Alarcón](https://twitter.com/alejandrox135) is a site reliability engineer with Searchable AI and served as the Kubernetes 1.18 release team lead._
_You can find the [Kubernetes Podcast from Google](http://www.kubernetespodcast.com/) at [@KubernetesPod](https://twitter.com/KubernetesPod) on Twitter, and you can [subscribe](https://kubernetespodcast.com/subscribe/) so you never miss an episode._


@@ -46,7 +46,7 @@ These connections terminate at the kubelet's HTTPS endpoint. By default, the api
To verify this connection, use the `--kubelet-certificate-authority` flag to provide the apiserver with a root certificate bundle to use to verify the kubelet's serving certificate.
If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an
If that is not possible, use [SSH tunneling](#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an
untrusted or public network.
Finally, [Kubelet authentication and/or authorization](/docs/admin/kubelet-authentication-authorization/) should be enabled to secure the kubelet API.
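As a sketch of the first option, `--kubelet-certificate-authority` simply points the API server at a CA bundle that signed the kubelets' serving certificates. The bundle path below is hypothetical, and a real invocation carries many more flags:

```bash
# Hypothetical CA bundle path; the apiserver verifies kubelet serving certs against it
kube-apiserver --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt
```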


@@ -23,8 +23,6 @@ The [components](/docs/concepts/overview/components/#node-components) on a node
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}, and the
{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}.
<!-- body -->
## Management
@@ -195,7 +193,7 @@ The node lifecycle controller automatically creates
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
Pods can also have tolerations which let them tolerate a Node's taints.
See [Taint Nodes by Condition](/docs/concepts/configuration/taint-and-toleration/#taint-nodes-by-condition)
See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
for more details.
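As a rough illustration of that interaction, the taint below keeps ordinary Pods off a node, while a Pod carrying a matching toleration can still land there (the node name, taint key, and values are hypothetical):

```bash
# Taint a node: only Pods tolerating dedicated=experimental may schedule here
kubectl taint nodes node-1 dedicated=experimental:NoSchedule

# A Pod that tolerates the taint, applied from an inline manifest
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: experiment
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "experimental"
    effect: "NoSchedule"
EOF
```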
### Capacity and Allocatable {#capacity}
@@ -339,6 +337,6 @@ for more information.
* Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* Read the [Node](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)
section of the architecture design document.
* Read about [taints and tolerations](/docs/concepts/configuration/taint-and-toleration/).
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
* Read about [cluster autoscaling](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).


@@ -34,14 +34,14 @@ Before choosing a guide, here are some considerations:
- Do you **just want to run a cluster**, or do you expect to do **active development of Kubernetes project code**? If the
latter, choose an actively-developed distro. Some distros only use binary releases, but
offer a greater variety of choices.
- Familiarize yourself with the [components](/docs/admin/cluster-components/) needed to run a cluster.
- Familiarize yourself with the [components](/docs/concepts/overview/components/) needed to run a cluster.
## Managing a cluster
* [Managing a cluster](/docs/tasks/administer-cluster/cluster-management/) describes several topics related to the lifecycle of a cluster: creating a new cluster, upgrading your cluster's master and worker nodes, performing node maintenance (e.g. kernel upgrades), and upgrading the Kubernetes API version of a running cluster.
* Learn how to [manage nodes](/docs/concepts/nodes/node/).
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
@@ -59,14 +59,14 @@ Before choosing a guide, here are some considerations:
* [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins that intercept requests to the Kubernetes API server after authentication and authorization.
* [Using Sysctls in a Kubernetes Cluster](/docs/concepts/cluster-administration/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters.
* [Using Sysctls in a Kubernetes Cluster](/docs/tasks/administer-cluster/sysctl-cluster/) describes to an administrator how to use the `sysctl` command-line tool to set kernel parameters (see the sketch after this list).
* [Auditing](/docs/tasks/debug-application-cluster/audit/) describes how to interact with Kubernetes' audit logs.
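As a minimal sketch of the kind of configuration the sysctl page covers, a Pod can set a sysctl from Kubernetes' default safe set through its security context (the pod name and value are illustrative):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # in the default "safe" sysctl set
      value: "1"
  containers:
  - name: app
    image: nginx
EOF
```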
### Securing the kubelet
* [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/)
* [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
* [Kubelet authentication/authorization](/docs/admin/kubelet-authentication-authorization/)
* [Kubelet authentication/authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
## Optional Cluster Services


@@ -5,35 +5,30 @@ content_type: concept
<!-- overview -->
Add-ons extend the functionality of Kubernetes.
This page lists some of the available add-ons and links to their respective installation instructions.
Add-ons in each section are sorted alphabetically - the ordering does not imply any preferential status.
<!-- body -->
## Networking and Network Policy
* [ACI](https://www.github.com/noironetworks/aci-containers) provides integrated container networking and network security with Cisco ACI.
* [Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy.
* [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins.
* [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave.
* [Contiv](http://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](http://github.com/contiv). The [installer](http://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options.
* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads.
* [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes.
* [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod.
* [Multus](https://github.com/Intel-Corp/multus-cni) is a Multi plugin for multiple network support in Kubernetes to support all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads in Kubernetes.
* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is an OVN-based CNI controller plugin that provides cloud-native service function chaining (SFC), multiple OVN overlay networks, dynamic subnet creation, dynamic creation of virtual networks, VLAN provider networks, and direct provider networks; it is pluggable with other multi-network plugins and well suited to edge-based, cloud-native workloads in multi-cluster networking.
* [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift.
* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments with visibility and security monitoring.
* [Romana](http://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Romana](https://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details are available [here](https://github.com/romana/romana/tree/master/containerize).
* [Weave Net](https://www.weave.works/docs/net/latest/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database.
## Service Discovery


@@ -8,8 +8,6 @@ weight: 30
This page explains how to manage Kubernetes running on a specific
cloud provider.
<!-- body -->
### kubeadm
[kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/) is a popular option for creating Kubernetes clusters.
@@ -46,8 +44,10 @@ controllerManager:
```
The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
for the [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and the
[kubelet](/docs/admin/kubelet/). The contents of the file specified in `--cloud-config` for each provider are documented below as well.
for the [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/),
[kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) and the
[kubelet](/docs/reference/command-line-tools-reference/kubelet/).
The contents of the file specified in `--cloud-config` for each provider are documented below as well.
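As a sketch of how those two flags fit together for an in-tree provider (the provider name and config path are illustrative, and the same pair of flags is passed to kube-apiserver and kube-controller-manager):

```bash
# Illustrative kubelet invocation for the in-tree AWS provider;
# /etc/kubernetes/cloud-config is a hypothetical path
kubelet \
  --cloud-provider=aws \
  --cloud-config=/etc/kubernetes/cloud-config
```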
For all external cloud providers, please follow the instructions on the individual repositories,
which are listed under their headings below, or one may view [the list of all repositories](https://github.com/kubernetes?q=cloud-provider-&type=&language=)
@@ -94,7 +94,7 @@ Different settings can be applied to a load balancer service in AWS using _annot
* `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`: Used to specify access log s3 bucket prefix.
* `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags`: Used on the service to specify a comma-separated list of key-value pairs which will be recorded as additional tags in the ELB. For example: `"Key1=Val1,Key2=Val2,KeyNoVal1=,KeyNoVal2"`.
* `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If `http` (default) or `https`, an HTTPS listener that terminates the connection and parses headers is created. If set to `ssl` or `tcp`, a "raw" SSL listener is used. If set to `http` and `aws-load-balancer-ssl-cert` is not used, then an HTTP listener is used.
* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html). CertARN is an IAM or ACM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
* `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`: Used on the service to request a secure listener. Value is a valid certificate ARN. For more, see [ELB Listener Config](https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html). CertARN is an IAM or ACM certificate ARN, for example `arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012`.
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`: Used on the service to enable or disable connection draining.
* `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout`: Used on the service to specify a connection draining timeout.
* `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout`: Used on the service to specify the idle connection timeout.
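A minimal Service manifest wiring a few of the annotations above together might look like the following sketch; the certificate ARN is the placeholder from the list above, and the ports and selector are hypothetical:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8080
EOF
```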
@@ -358,13 +358,10 @@ Kubernetes network plugin and should appear in the `[Route]` section of the
the `extraroutes` extension then use `router-id` to specify a router to add
routes to. The router chosen must span the private networks containing your
cluster nodes (typically there is only one node network, and this value should be
the default router for the node network). This value is required to use [kubenet]
the default router for the node network). This value is required to use
[kubenet](/docs/concepts/cluster-administration/network-plugins/#kubenet)
on OpenStack.
[kubenet]: /docs/concepts/cluster-administration/network-plugins/#kubenet
## OVirt
### Node Name


@@ -311,10 +311,12 @@ exports additional metrics. Monitoring these can help you determine whether your
configuration is inappropriately throttling important traffic, or find
poorly-behaved workloads that may be harming system health.
* `apiserver_flowcontrol_rejected_requests_total` counts requests that
were rejected, grouped by the name of the assigned priority level,
the name of the assigned FlowSchema, and the reason for rejection.
The reason will be one of the following:
* `apiserver_flowcontrol_rejected_requests_total` is a counter vector
(cumulative since server start) of requests that were rejected,
broken down by the labels `flowSchema` (indicating the one that
matched the request), `priorityLevel` (indicating the one to which
the request was assigned), and `reason`. The `reason` label will
have one of the following values:
* `queue-full`, indicating that too many requests were already
queued,
* `concurrency-limit`, indicating that the
@@ -323,23 +325,72 @@ poorly-behaved workloads that may be harming system health.
* `time-out`, indicating that the request was still in the queue
when its queuing time limit expired.
* `apiserver_flowcontrol_dispatched_requests_total` counts requests
that began executing, grouped by the name of the assigned priority
level and the name of the assigned FlowSchema.
* `apiserver_flowcontrol_dispatched_requests_total` is a counter
vector (cumulative since server start) of requests that began
executing, broken down by the labels `flowSchema` (indicating the
one that matched the request) and `priorityLevel` (indicating the
one to which the request was assigned).
* `apiserver_flowcontrol_current_inqueue_requests` gives the
instantaneous total number of queued (not executing) requests,
grouped by priority level and FlowSchema.
* `apiserver_current_inqueue_requests` is a gauge vector of recent
high water marks of the number of queued requests, grouped by a
label named `request_kind` whose value is `mutating` or `readOnly`.
These high water marks describe the largest number seen in the one
second window most recently completed. These complement the older
`apiserver_current_inflight_requests` gauge vector that holds the
last window's high water mark of the number of requests actively being
served.
* `apiserver_flowcontrol_current_executing_requests` gives the instantaneous
total number of executing requests, grouped by priority level and FlowSchema.
* `apiserver_flowcontrol_read_vs_write_request_count_samples` is a
histogram vector of observations of the then-current number of
requests, broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `request_kind` (which takes on
the values `mutating` and `readOnly`). The observations are made
periodically at a high rate.
* `apiserver_flowcontrol_request_queue_length_after_enqueue` gives a
histogram of queue lengths for the queues, grouped by priority level
and FlowSchema, as sampled by the enqueued requests. Each request
that gets queued contributes one sample to its histogram, reporting
the length of the queue just after the request was added. Note that
this produces different statistics than an unbiased survey would.
* `apiserver_flowcontrol_read_vs_write_request_count_watermarks` is a
histogram vector of high or low water marks of the number of
requests broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `request_kind` (which takes on
the values `mutating` and `readOnly`); the label `mark` takes on
values `high` and `low`. The water marks are accumulated over
windows bounded by the times when an observation was added to
`apiserver_flowcontrol_read_vs_write_request_count_samples`. These
water marks show the range of values that occurred between samples.
* `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector
holding the instantaneous number of queued (not executing) requests,
broken down by the labels `priorityLevel` and `flowSchema`.
* `apiserver_flowcontrol_current_executing_requests` is a gauge vector
holding the instantaneous number of executing (not waiting in a
queue) requests, broken down by the labels `priorityLevel` and
`flowSchema`.
* `apiserver_flowcontrol_priority_level_request_count_samples` is a
histogram vector of observations of the then-current number of
requests broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `priorityLevel`. Each
histogram gets observations taken periodically, up through the last
activity of the relevant sort. The observations are made at a high
rate.
* `apiserver_flowcontrol_priority_level_request_count_watermarks` is a
histogram vector of high or low water marks of the number of
requests broken down by the labels `phase` (which takes on the
values `waiting` and `executing`) and `priorityLevel`; the label
`mark` takes on values `high` and `low`. The water marks are
accumulated over windows bounded by the times when an observation
was added to
`apiserver_flowcontrol_priority_level_request_count_samples`. These
water marks show the range of values that occurred between samples.
* `apiserver_flowcontrol_request_queue_length_after_enqueue` is a
histogram vector of queue lengths for the queues, broken down by
the labels `priorityLevel` and `flowSchema`, as sampled by the
enqueued requests. Each request that gets queued contributes one
sample to its histogram, reporting the length of the queue just
after the request was added. Note that this produces different
statistics than an unbiased survey would.
{{< note >}}
An outlier value in a histogram here means it is likely that a single flow
(i.e., requests by one user or for one namespace, depending on
@ -349,14 +400,17 @@ poorly-behaved workloads that may be harming system health.
to increase that PriorityLevelConfiguration's concurrency shares.
{{< /note >}}
* `apiserver_flowcontrol_request_concurrency_limit` gives the computed
concurrency limit (based on the API server's total concurrency limit and PriorityLevelConfigurations'
concurrency shares) for each PriorityLevelConfiguration.
* `apiserver_flowcontrol_request_concurrency_limit` is a gauge vector
holding the computed concurrency limit (based on the API server's
total concurrency limit and PriorityLevelConfigurations' concurrency
shares), broken down by the label `priorityLevel`.
* `apiserver_flowcontrol_request_wait_duration_seconds` gives a histogram of how
long requests spent queued, grouped by the FlowSchema that matched the
request, the PriorityLevel to which it was assigned, and whether or not the
request successfully executed.
* `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram
vector of how long requests spent queued, broken down by the labels
`flowSchema` (indicating which one matched the request),
`priorityLevel` (indicating the one to which the request was
assigned), and `execute` (indicating whether the request started
executing).
{{< note >}}
Since each FlowSchema always assigns requests to a single
PriorityLevelConfiguration, you can add the histograms for all the
@ -364,9 +418,11 @@ poorly-behaved workloads that may be harming system health.
requests assigned to that priority level.
{{< /note >}}
* `apiserver_flowcontrol_request_execution_seconds` gives a histogram of how
long requests took to actually execute, grouped by the FlowSchema that matched the
request and the PriorityLevel to which it was assigned.
* `apiserver_flowcontrol_request_execution_seconds` is a histogram
vector of how long requests took to actually execute, broken down by
the labels `flowSchema` (indicating which one matched the request)
and `priorityLevel` (indicating the one to which the request was
assigned).
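If you just want a quick look at these metrics without setting up a
monitoring stack, you can read them straight from the API server. A
minimal sketch, assuming your kubeconfig user is authorized to read the
`/metrics` endpoint:

```bash
# Dump the raw Prometheus metrics exposed by the kube-apiserver and keep
# only the priority-and-fairness series described above:
kubectl get --raw /metrics | grep apiserver_flowcontrol
```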
### Debug endpoints

View File

@ -133,7 +133,7 @@ Because the logging agent must run on every node, it's common to implement it as
Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging _only works for applications' standard output and standard error_.
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) for use with Google Cloud Platform, and [Elasticsearch](/docs/user-guide/logging/elasticsearch). You can find more information and instructions in the dedicated documents. Both use [fluentd](http://www.fluentd.org/) with custom configuration as an agent on the node.
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: [Stackdriver Logging](/docs/tasks/debug-application-cluster/logging-stackdriver/) for use with Google Cloud Platform, and [Elasticsearch](/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/). You can find more information and instructions in the dedicated documents. Both use [fluentd](https://www.fluentd.org/) with custom configuration as an agent on the node.
### Using a sidecar container with the logging agent
@ -240,7 +240,7 @@ a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to c
{{< note >}}
The configuration of fluentd is beyond the scope of this article. For
information about configuring fluentd, see the
[official fluentd documentation](http://docs.fluentd.org/).
[official fluentd documentation](https://docs.fluentd.org/).
{{< /note >}}
The second file describes a pod that has a sidecar container running fluentd.
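As a rough sketch of that shape of pod (the image tag, file paths, and
volume names here are illustrative assumptions, not the exact manifest
used by the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count                 # the application container writing log files
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i" >> /var/log/count.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-agent           # sidecar running the logging agent
    image: k8s.gcr.io/fluentd-gcp:1.30
    env:
    - name: FLUENTD_ARGS
      value: -c /etc/fluentd-config/fluentd.conf
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: fluentd-config      # the ConfigMap holding fluentd.conf
```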

View File

@ -10,9 +10,6 @@ weight: 40
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features that we will discuss in more depth are [configuration files](/docs/concepts/configuration/overview/) and [labels](/docs/concepts/overview/working-with-objects/labels/).
<!-- body -->
## Organizing resource configurations
@ -356,7 +353,8 @@ Sometimes it's necessary to make narrow, non-disruptive updates to resources you
### kubectl apply
It is suggested to maintain a set of configuration files in source control (see [configuration as code](http://martinfowler.com/bliki/InfrastructureAsCode.html)),
It is suggested to maintain a set of configuration files in source control
(see [configuration as code](https://martinfowler.com/bliki/InfrastructureAsCode.html)),
so that they can be maintained and versioned along with the code for the resources they configure.
Then, you can use [`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands/#apply) to push your configuration changes to the cluster.
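For example, assuming your manifests live in a `configs/` directory of that
repository:

```bash
# Apply every manifest under configs/, recursing into subdirectories;
# kubectl records the applied configuration for later three-way merges.
kubectl apply -R -f configs/
```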

View File

@ -17,9 +17,6 @@ problems to address:
3. Pod-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
4. External-to-Service communications: this is covered by [services](/docs/concepts/services-networking/service/).
<!-- body -->
Kubernetes is all about sharing machines between applications. Typically,
@ -93,7 +90,7 @@ Thanks to the "programmable" characteristic of Open vSwitch, Antrea is able to i
### AOS from Apstra
[AOS](http://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
[AOS](https://www.apstra.com/products/aos/) is an Intent-Based Networking system that creates and manages complex datacenter environments from a simple integrated platform. AOS leverages a highly scalable distributed design to eliminate network outages while minimizing costs.
The AOS Reference Design currently supports Layer-3 connected hosts that eliminate legacy Layer-2 switching problems. These Layer-3 hosts can be Linux servers (Debian, Ubuntu, CentOS) that create BGP neighbor relationships directly with the top of rack switches (TORs). AOS automates the routing adjacencies and then provides fine grained control over the route health injections (RHI) that are common in a Kubernetes deployment.
@ -101,7 +98,7 @@ AOS has a rich set of REST API endpoints that enable Kubernetes to quickly chang
AOS supports the use of common vendor equipment from manufacturers including Cisco, Arista, Dell, Mellanox, HPE, and a large number of white-box systems and open network operating systems like Microsoft SONiC, Dell OPX, and Cumulus Linux.
Details on how the AOS system works can be accessed here: http://www.apstra.com/products/how-it-works/
Details on how the AOS system works can be accessed here: https://www.apstra.com/products/how-it-works/
### AWS VPC CNI for Kubernetes
@ -123,7 +120,7 @@ Azure CNI is available natively in the [Azure Kubernetes Service (AKS)] (https:/
With the help of the Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, RedHat OpenShift, Mesosphere DC/OS & Docker Swarm will be natively integrated alongside VM orchestration systems such as VMware, OpenStack & Nutanix. Customers will be able to securely inter-connect any number of these clusters and enable inter-tenant communication between them if needed.
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](https://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS & VMware running on multiple DCs across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).
### Cilium
@ -135,7 +132,7 @@ addressing, and it can be used in combination with other CNI plugins.
### CNI-Genie from Huawei
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/networking.md#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](http://docs.projectcalico.org/), [Romana](http://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
[CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a CNI plugin that enables Kubernetes to [simultaneously have access to different implementations](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-cni-plugins/README.md#what-cni-genie-feature-1-multiple-cni-plugins-enables) of the [Kubernetes network model](/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model) in runtime. This includes any implementation that runs as a [CNI plugin](https://github.com/containernetworking/cni#3rd-party-plugins), such as [Flannel](https://github.com/coreos/flannel#flannel), [Calico](https://docs.projectcalico.org/), [Romana](https://romana.io), [Weave-net](https://www.weave.works/products/weave-net/).
CNI-Genie also supports [assigning multiple IP addresses to a pod](https://github.com/Huawei-PaaS/CNI-Genie/blob/master/docs/multiple-ips/README.md#feature-2-extension-cni-genie-multiple-ip-addresses-per-pod), each from a different CNI plugin.
@ -157,11 +154,11 @@ network complexity required to deploy Kubernetes at scale within AWS.
### Contiv
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](http://contiv.io) is all open sourced.
[Contiv](https://github.com/contiv/netplugin) provides configurable networking (native l3 using BGP, overlay using vxlan, classic l2, or Cisco-SDN/ACI) for various use cases. [Contiv](https://contiv.io) is all open sourced.
### Contrail / Tungsten Fabric
[Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
[Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is a truly open, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with various orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide different isolation modes for virtual machines, containers/pods and bare metal workloads.
### DANM
@ -242,7 +239,7 @@ traffic to the internet.
### Kube-router
[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](http://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
[Kube-router](https://github.com/cloudnativelabs/kube-router) is a purpose-built networking solution for Kubernetes that aims to provide high performance and operational simplicity. Kube-router provides a Linux [LVS/IPVS](https://www.linuxvirtualserver.org/software/ipvs.html)-based service proxy, a Linux kernel forwarding-based pod-to-pod networking solution with no overlays, and iptables/ipset-based network policy enforcer.
### L2 networks and linux bridging
@ -252,8 +249,8 @@ Note that these instructions have only been tried very casually - it seems to
work, but has not been thoroughly tested. If you use this technique and
perfect the process, please let us know.
Follow the "With Linux Bridge devices" section of [this very nice
tutorial](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
Follow the "With Linux Bridge devices" section of
[this very nice tutorial](https://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/) from
Lars Kellogg-Stedman.
### Multus (a Multi Network plugin)
@ -274,7 +271,7 @@ Multus supports all [reference plugins](https://github.com/containernetworking/p
### Nuage Networks VCS (Virtualized Cloud Services)
[Nuage](http://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
[Nuage](https://www.nuagenetworks.net) provides a highly scalable policy-based Software-Defined Networking (SDN) platform. Nuage uses the open source Open vSwitch for the data plane along with a feature rich SDN Controller built on open standards.
The Nuage platform uses overlays to provide seamless policy-based networking between Kubernetes Pods and non-Kubernetes environments (VMs and bare metal servers). Nuage's policy abstraction model is designed with applications in mind and makes it easy to declare fine-grained policies for applications. The platform's real-time analytics engine enables visibility and security monitoring for Kubernetes applications.
@ -294,7 +291,7 @@ at [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes).
### Project Calico
[Project Calico](http://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
[Project Calico](https://docs.projectcalico.org/) is an open source container networking provider and network policy engine.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet, for both Linux (open source) and Windows (proprietary - available from [Tigera](https://www.tigera.io/essentials/)). Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
@ -302,7 +299,7 @@ Calico can also be run in policy enforcement mode in conjunction with other netw
### Romana
[Romana](http://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
[Romana](https://romana.io) is an open source network and security automation solution that lets you deploy Kubernetes without an overlay network. Romana supports Kubernetes [Network Policy](/docs/concepts/services-networking/network-policies/) to provide isolation across network namespaces.
### Weave Net from Weaveworks
@ -312,13 +309,9 @@ Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-pl
or stand-alone. In either version, it doesn't require any configuration or extra code
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.
## {{% heading "whatsnext" %}}
The early design of the networking model and its rationale, and some future
plans are described in more detail in the [networking design
document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).
plans are described in more detail in the
[networking design document](https://git.k8s.io/community/contributors/design-proposals/network/networking.md).

View File

@ -752,4 +752,4 @@ You can see that the Container was terminated because of `reason:OOM Killed`, wh
* Read the [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core) API reference
* Read about [project quotas](http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS
* Read about [project quotas](https://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html) in XFS

View File

@ -73,7 +73,7 @@ A desired state of an object is described by a Deployment, and if changes to tha
## Container Images
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/admin/kubelet/) attempts to pull the specified image.
The [imagePullPolicy](/docs/concepts/containers/images/#updating-images) and the tag of the image affect when the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) attempts to pull the specified image.
- `imagePullPolicy: IfNotPresent`: the image is pulled only if it is not already present locally.
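A minimal sketch of where the two settings live (the image name is
illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1.2.3  # pin a specific tag, not :latest
    imagePullPolicy: IfNotPresent           # pull only if the image is absent locally
```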

View File

@ -11,7 +11,7 @@ weight: 70
{{< feature-state for_k8s_version="v1.14" state="stable" >}}
[Pods](/docs/user-guide/pods) can have _priority_. Priority indicates the
[Pods](/docs/concepts/workloads/pods/pod/) can have _priority_. Priority indicates the
importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the
scheduler tries to preempt (evict) lower priority Pods to make scheduling of the
pending Pod possible.
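As a hedged sketch, priority is expressed through a PriorityClass that Pods
reference by name (the names and value below are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000                         # larger values mean higher importance
globalDefault: false
description: "Use only for critical service Pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: high-priority     # binds this Pod to the class above
  containers:
  - name: app
    image: nginx
```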

View File

@ -28,7 +28,7 @@ The Kubernetes Container environment provides several important resources to Con
The *hostname* of a Container is the name of the Pod in which the Container is running.
It is available through the `hostname` command or the
[`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html)
[`gethostname`](https://man7.org/linux/man-pages/man2/gethostname.2.html)
function call in libc.
The Pod name and namespace are available as environment variables through the
@ -51,7 +51,7 @@ FOO_SERVICE_PORT=<the port the service is running on>
```
Services have dedicated IP addresses and are available to the Container via DNS,
if [DNS addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled. 
if [DNS addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) is enabled. 
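One way to observe both mechanisms, assuming a running Pod named `mypod` and
a Service named `foo` in the same namespace (both names are hypothetical):

```bash
# Environment-variable form; these variables are only set for Services
# that already existed when the Pod started:
kubectl exec mypod -- printenv | grep FOO_SERVICE

# DNS form; works for Services created at any time, provided the DNS
# addon is enabled and the Pod's image ships nslookup:
kubectl exec mypod -- nslookup foo
```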

View File

@ -19,8 +19,6 @@ before referring to it in a
This page provides an outline of the container image concept.
<!-- body -->
## Image names
@ -261,7 +259,7 @@ EOF
This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets
in a [ServiceAccount](/docs/user-guide/service-accounts) resource.
in a [ServiceAccount](/docs/tasks/configure-pod-container/configure-service-accounts/) resource.
Check [Add ImagePullSecrets to a Service Account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for detailed instructions.
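As a minimal sketch, assuming a secret named `myregistrykey` already exists
in the namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
- name: myregistrykey   # every Pod that uses this ServiceAccount inherits the secret
```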

View File

@ -24,9 +24,6 @@ their work environment. Developers who are prospective {{< glossary_tooltip text
useful as an introduction to what extension points and patterns
exist, and their trade-offs and limitations.
<!-- body -->
## Overview
@ -37,14 +34,14 @@ Customization approaches can be broadly divided into *configuration*, which only
*Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary:
* [kubelet](/docs/admin/kubelet/)
* [kube-apiserver](/docs/admin/kube-apiserver/)
* [kube-controller-manager](/docs/admin/kube-controller-manager/)
* [kube-scheduler](/docs/admin/kube-scheduler/).
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).
Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
## Extensions
@ -75,10 +72,9 @@ failure.
In the webhook model, Kubernetes makes a network request to a remote service.
In the *Binary Plugin* model, Kubernetes executes a binary (program).
Binary plugins are used by the kubelet (e.g. [Flex Volume
Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)
and [Network
Plugins](/docs/concepts/cluster-administration/network-plugins/))
Binary plugins are used by the kubelet (e.g.
[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexVolume)
and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))
and by kubectl.
Below is a diagram showing how the extension points interact with the
@ -98,12 +94,12 @@ This diagram shows the extension points in a Kubernetes system.
<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins).
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](#storage-plugins).
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
@ -119,7 +115,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro
Do not use a Custom Resource as data storage for application, user, or monitoring data.
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/).
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
### Combining New APIs with Automation
@ -172,20 +168,21 @@ Kubelet call a Binary Plugin to mount the volume.
### Device Plugins
Device plugins allow a node to discover new Node resources (in addition to the
builtin ones like cpu and memory) via a [Device
Plugin](/docs/concepts/cluster-administration/device-plugins/).
builtin ones like cpu and memory) via a
[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
### Network Plugins
Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/).
Different networking fabrics can be supported via node-level
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
### Scheduler Extensions
The scheduler is a special type of controller that watches pods, and assigns
pods to nodes. The default scheduler can be replaced entirely, while
continuing to use other Kubernetes components, or [multiple
schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
continuing to use other Kubernetes components, or
[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
can run at the same time.
This is a significant undertaking, and almost all Kubernetes users find they
@ -196,18 +193,14 @@ The scheduler also supports a
that permits a webhook backend (scheduler extension) to filter and prioritize
the nodes chosen for a pod.
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/)
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
* Learn more about Infrastructure extensions
* [Network Plugins](/docs/concepts/cluster-administration/network-plugins/)
* [Device Plugins](/docs/concepts/cluster-administration/device-plugins/)
* [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
* [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)

View File

@ -15,8 +15,6 @@ The additional APIs can either be ready-made solutions such as [service-catalog]
The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object.
<!-- body -->
## Aggregation layer
@ -34,11 +32,8 @@ If your extension API server cannot achieve that latency requirement, consider m
`EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver
to disable the timeout restriction. This deprecated feature gate will be removed in a future release.
## {{% heading "whatsnext" %}}
* To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/).
* Then, [setup an extension api-server](/docs/tasks/extend-kubernetes/setup-extension-api-server/) to work with the aggregation layer.
* Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/).

View File

@ -13,8 +13,6 @@ weight: 10
resource to your Kubernetes cluster and when to use a standalone service. It describes the two
methods for adding custom resources and how to choose between them.
<!-- body -->
## Custom resources
@ -28,7 +26,7 @@ many core Kubernetes functions are now built using custom resources, making Kube
Custom resources can appear and disappear in a running cluster through dynamic registration,
and cluster admins can update custom resources independently of the cluster itself.
Once a custom resource is installed, users can create and access its objects using
[kubectl](/docs/user-guide/kubectl-overview/), just as they do for built-in resources like
[kubectl](/docs/reference/kubectl/overview/), just as they do for built-in resources like
*Pods*.
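For example, if a hypothetical custom resource named `crontabs` has been
installed, it behaves like any built-in resource:

```bash
# List objects of the custom resource, just as you would list Pods:
kubectl get crontabs

# Inspect a single object in YAML form:
kubectl get crontab my-new-cron-object -o yaml
```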
## Custom controllers
@ -52,7 +50,9 @@ for specific applications into an extension of the Kubernetes API.
## Should I add a custom resource to my Kubernetes Cluster?
When creating a new API, consider whether to [aggregate your API with the Kubernetes cluster APIs](/docs/concepts/api-extension/apiserver-aggregation/) or let your API stand alone.
When creating a new API, consider whether to
[aggregate your API with the Kubernetes cluster APIs](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
or let your API stand alone.
| Consider API aggregation if: | Prefer a stand-alone API if: |
| ---------------------------- | ---------------------------- |

View File

@ -19,8 +19,6 @@ The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adap
and other similar computing resources that may require vendor specific initialization
and setup.
<!-- body -->
## Device plugin registration
@ -39,7 +37,7 @@ During the registration, the device plugin needs to send:
* The name of its Unix socket.
* The Device Plugin API version against which it was built.
* The `ResourceName` it wants to advertise. Here `ResourceName` needs to follow the
[extended resource naming scheme](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)
[extended resource naming scheme](/docs/concepts/configuration/manage-resources-container/#extended-resources)
as `vendor-domain/resourcetype`.
(For example, an NVIDIA GPU is advertised as `nvidia.com/gpu`.)
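Once advertised, the extended resource can be requested in a container spec
much like `cpu` or `memory`. A sketch building on the NVIDIA example (the
image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:10.0-base
    resources:
      limits:
        nvidia.com/gpu: 1   # request one device via the plugin's ResourceName
```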

View File

@ -15,14 +15,14 @@ Kubernetes is highly configurable and extensible. As a result,
there is rarely a need to fork or submit patches to the Kubernetes
project code.
This guide describes the options for customizing a Kubernetes
cluster. It is aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}} who want to
understand how to adapt their Kubernetes cluster to the needs of
their work environment. Developers who are prospective {{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}} or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}} will also find it
useful as an introduction to what extension points and patterns
exist, and their trade-offs and limitations.
This guide describes the options for customizing a Kubernetes cluster. It is
aimed at {{< glossary_tooltip text="cluster operators" term_id="cluster-operator" >}}
who want to understand how to adapt their
Kubernetes cluster to the needs of their work environment. Developers who are prospective
{{< glossary_tooltip text="Platform Developers" term_id="platform-developer" >}}
or Kubernetes Project {{< glossary_tooltip text="Contributors" term_id="contributor" >}}
will also find it useful as an introduction to what extension points and
patterns exist, and their trade-offs and limitations.
<!-- body -->
@ -35,14 +35,14 @@ Customization approaches can be broadly divided into *configuration*, which only
*Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary:
* [kubelet](/docs/admin/kubelet/)
* [kube-apiserver](/docs/admin/kube-apiserver/)
* [kube-controller-manager](/docs/admin/kube-controller-manager/)
* [kube-scheduler](/docs/admin/kube-scheduler/).
* [kubelet](/docs/reference/command-line-tools-reference/kubelet/)
* [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/)
* [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/)
* [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/).
Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator. Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
*Built-in Policy APIs*, such as [ResourceQuota](/docs/concepts/policy/resource-quotas/), [PodSecurityPolicies](/docs/concepts/policy/pod-security-policy/), [NetworkPolicy](/docs/concepts/services-networking/network-policies/) and Role-based Access Control ([RBAC](/docs/reference/access-authn-authz/rbac/)), are built-in Kubernetes APIs. APIs are typically used with hosted Kubernetes services and with managed Kubernetes installations. They are declarative and use the same conventions as other Kubernetes resources like pods, so new cluster configuration can be repeatable and be managed the same way as applications. And, where they are stable, they enjoy a [defined support policy](/docs/reference/using-api/deprecation-policy/) like other Kubernetes APIs. For these reasons, they are preferred over *configuration files* and *flags* where suitable.
## Extensions
@ -73,10 +73,9 @@ failure.
In the webhook model, Kubernetes makes a network request to a remote service.
In the *Binary Plugin* model, Kubernetes executes a binary (program).
Binary plugins are used by the kubelet (e.g. [Flex Volume
Plugins](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md)
and [Network
Plugins](/docs/concepts/cluster-administration/network-plugins/))
Binary plugins are used by the kubelet (e.g.
[Flex Volume Plugins](/docs/concepts/storage/volumes/#flexVolume)
and [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/))
and by kubectl.
Below is a diagram showing how the extension points interact with the
@ -95,13 +94,13 @@ This diagram shows the extension points in a Kubernetes system.
<!-- image source diagrams: https://docs.google.com/drawings/d/1k2YdJgNTtNfW7_A8moIIkij-DmVgEhNrn3y2OODwqQQ/view -->
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/overview/extending#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/overview/extending#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/overview/extending#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/overview/extending#storage-plugins).
1. Users often interact with the Kubernetes API using `kubectl`. [Kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/) extend the kubectl binary. They only affect the individual user's local environment, and so cannot enforce site-wide policies.
2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the [API Access Extensions](/docs/concepts/extend-kubernetes/#api-access-extensions) section.
3. The apiserver serves various kinds of *resources*. *Built-in resource kinds*, like `pods`, are defined by the Kubernetes project and can't be changed. You can also add resources that you define, or that other projects have defined, called *Custom Resources*, as explained in the [Custom Resources](/docs/concepts/extend-kubernetes/#user-defined-types) section. Custom Resources are often used with API Access Extensions.
4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the [Scheduler Extensions](/docs/concepts/overview/extending#scheduler-extensions) section.
5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. [Network Plugins](/docs/concepts/extend-kubernetes/#network-plugins) allow for different implementations of pod networking.
7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via [Storage Plugins](/docs/concepts/extend-kubernetes/#storage-plugins).
If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.
@ -117,7 +116,7 @@ Consider adding a Custom Resource to Kubernetes if you want to define new contro
Do not use a Custom Resource as data storage for application, user, or monitoring data.
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/api-extension/custom-resources/).
For more about Custom Resources, see the [Custom Resources concept guide](/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
### Combining New APIs with Automation
@ -162,28 +161,28 @@ After a request is authorized, if it is a write operation, it also goes through
### Storage Plugins
[Flex Volumes](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/flexvolume-deployment.md
) allow users to mount volume types without built-in support by having the
[Flex Volumes](/docs/concepts/storage/volumes/#flexVolume)
allow users to mount volume types without built-in support by having the
Kubelet call a Binary Plugin to mount the volume.
### Device Plugins
Device plugins allow a node to discover new Node resources (in addition to the
builtin ones like cpu and memory) via a [Device
Plugin](/docs/concepts/cluster-administration/device-plugins/).
builtin ones like cpu and memory) via a
[Device Plugin](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/).
### Network Plugins
Different networking fabrics can be supported via node-level [Network Plugins](/docs/admin/network-plugins/).
Different networking fabrics can be supported via node-level
[Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
### Scheduler Extensions
The scheduler is a special type of controller that watches pods, and assigns
pods to nodes. The default scheduler can be replaced entirely, while
continuing to use other Kubernetes components, or [multiple
schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
continuing to use other Kubernetes components, or
[multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
can run at the same time.
This is a significant undertaking, and almost all Kubernetes users find they
@ -195,16 +194,13 @@ that permits a webhook backend (scheduler extension) to filter and prioritize
the nodes chosen for a pod.
## {{% heading "whatsnext" %}}
* Learn more about [Custom Resources](/docs/concepts/api-extension/custom-resources/)
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
* Learn about [Dynamic admission control](/docs/reference/access-authn-authz/extensible-admission-controllers/)
* Learn more about Infrastructure extensions
* [Network Plugins](/docs/concepts/cluster-administration/network-plugins/)
* [Device Plugins](/docs/concepts/cluster-administration/device-plugins/)
* [Network Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
* [Device Plugins](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/)
* Learn about [kubectl plugins](/docs/tasks/extend-kubectl/kubectl-plugins/)
* Learn about the [Operator pattern](/docs/concepts/extend-kubernetes/operator/)

View File

@ -6,14 +6,11 @@ weight: 30
<!-- overview -->
Operators are software extensions to Kubernetes that make use of [custom
resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
Operators are software extensions to Kubernetes that make use of
[custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
to manage applications and their components. Operators follow
Kubernetes principles, notably the [control loop](/docs/concepts/#kubernetes-control-plane).
<!-- body -->
## Motivation

View File

@ -24,7 +24,6 @@ The Kubernetes API lets you query and manipulate the state of objects in the Kub
API endpoints, resource types and samples are described in the [API Reference](/docs/reference/kubernetes-api/).
<!-- body -->
## API changes
@ -135,7 +134,7 @@ There are several API groups in a cluster:
(e.g. `apiVersion: batch/v1`). The Kubernetes [API reference](/docs/reference/kubernetes-api/) has a
full list of available API groups.
There are two paths to extending the API with [custom resources](/docs/concepts/api-extension/custom-resources/):
There are two paths to extending the API with [custom resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/):
1. [CustomResourceDefinition](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
lets you declaratively define how the API server should provide your chosen resource API.

View File

@ -22,10 +22,9 @@ Each object can have a set of key/value labels defined. Each Key must be unique
}
```
Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using [annotations](/docs/concepts/overview/working-with-objects/annotations/).
Labels allow for efficient queries and watches and are ideal for use in UIs
and CLIs. Non-identifying information should be recorded using
[annotations](/docs/concepts/overview/working-with-objects/annotations/).
<!-- body -->
@ -77,7 +76,7 @@ spec:
## Label selectors
Unlike [names and UIDs](/docs/user-guide/identifiers), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
Unlike [names and UIDs](/docs/concepts/overview/working-with-objects/names/), labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
Via a _label selector_, the client/user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
@ -186,7 +185,10 @@ kubectl get pods -l 'environment,environment notin (frontend)'
### Set references in API objects
Some Kubernetes objects, such as [`services`](/docs/user-guide/services) and [`replicationcontrollers`](/docs/user-guide/replication-controller), also use label selectors to specify sets of other resources, such as [pods](/docs/user-guide/pods).
Some Kubernetes objects, such as [`services`](/docs/concepts/services-networking/service/)
and [`replicationcontrollers`](/docs/concepts/workloads/controllers/replicationcontroller/),
also use label selectors to specify sets of other resources, such as
[pods](/docs/concepts/workloads/pods/pod/).
#### Service and ReplicationController
@ -210,7 +212,11 @@ this selector (respectively in `json` or `yaml` format) is equivalent to `compon
#### Resources that support set-based requirements
Newer resources, such as [`Job`](/docs/concepts/workloads/controllers/jobs-run-to-completion/), [`Deployment`](/docs/concepts/workloads/controllers/deployment/), [`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/), and [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/), support _set-based_ requirements as well.
Newer resources, such as [`Job`](/docs/concepts/workloads/controllers/job/),
[`Deployment`](/docs/concepts/workloads/controllers/deployment/),
[`ReplicaSet`](/docs/concepts/workloads/controllers/replicaset/), and
[`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/),
support _set-based_ requirements as well.
```yaml
selector:
@ -228,4 +234,3 @@ selector:
One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule.
See the documentation on [node selection](/docs/concepts/scheduling-eviction/assign-pod-node/) for more information.
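A minimal sketch, assuming some nodes carry the hypothetical label
`disktype=ssd`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeSelector:
    disktype: ssd   # schedule only onto nodes carrying this label
  containers:
  - name: nginx
    image: nginx
```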

View File

@ -13,9 +13,6 @@ weight: 30
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
These virtual clusters are called namespaces.
<!-- body -->
## When to Use Multiple Namespaces
@ -35,13 +32,14 @@ In future versions of Kubernetes, objects in the same namespace will have the sa
access control policies by default.
It is not necessary to use multiple namespaces just to separate slightly different
resources, such as different versions of the same software: use [labels](/docs/user-guide/labels) to distinguish
resources, such as different versions of the same software: use
[labels](/docs/concepts/overview/working-with-objects/labels) to distinguish
resources within the same namespace.
## Working with Namespaces
Creation and deletion of namespaces are described in the [Admin Guide documentation
for namespaces](/docs/admin/namespaces).
Creation and deletion of namespaces are described in the
[Admin Guide documentation for namespaces](/docs/tasks/administer-cluster/namespaces).
{{< note >}}
Avoid creating namespaces with the prefix `kube-`, since it is reserved for Kubernetes system namespaces.
@ -93,7 +91,8 @@ kubectl config view --minify | grep namespace:
## Namespaces and DNS
When you create a [Service](/docs/user-guide/services), it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
When you create a [Service](/docs/concepts/services-networking/service/),
it creates a corresponding [DNS entry](/docs/concepts/services-networking/dns-pod-service/).
This entry is of the form `<service-name>.<namespace-name>.svc.cluster.local`, which means
that if a container just uses `<service-name>`, it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
@ -104,7 +103,8 @@ across namespaces, you need to use the fully qualified domain name (FQDN).
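For example, from a shell inside a Pod in the hypothetical namespace `prod`,
with a Service named `db` (assuming the Pod's image ships nslookup):

```bash
# The short name resolves within the Pod's own namespace:
nslookup db

# The fully qualified name works from any namespace:
nslookup db.prod.svc.cluster.local
```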
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are
in some namespaces. However, namespace resources are not themselves in a namespace.
And low-level resources, such as [nodes](/docs/admin/node) and
And low-level resources, such as
[nodes](/docs/concepts/architecture/nodes/) and
persistentVolumes, are not in any namespace.
To see which Kubernetes resources are and aren't in a namespace:
@ -117,12 +117,8 @@ kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
```
## {{% heading "whatsnext" %}}
* Learn more about [creating a new namespace](/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace).
* Learn more about [deleting a namespace](/docs/tasks/administer-cluster/namespaces/#deleting-a-namespace).

View File

@ -10,7 +10,6 @@ Kubernetes objects. This document provides an overview of the different
approaches. Read the [Kubectl book](https://kubectl.docs.kubernetes.io) for
details of managing objects by Kubectl.
<!-- body -->
## Management techniques
@ -167,11 +166,8 @@ Disadvantages compared to imperative object configuration:
- Declarative object configuration is harder to debug and understand results when they are unexpected.
- Partial updates using diffs create complex merge and patch operations.
## {{% heading "whatsnext" %}}
- [Managing Kubernetes Objects Using Imperative Commands](/docs/tasks/manage-kubernetes-objects/imperative-command/)
- [Managing Kubernetes Objects Using Object Configuration (Imperative)](/docs/tasks/manage-kubernetes-objects/imperative-config/)
- [Managing Kubernetes Objects Using Object Configuration (Declarative)](/docs/tasks/manage-kubernetes-objects/declarative-config/)
@ -180,4 +176,3 @@ Disadvantages compared to imperative object configuration:
- [Kubectl Book](https://kubectl.docs.kubernetes.io)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)

View File

@ -8,13 +8,10 @@ weight: 10
<!-- overview -->
By default, containers run with unbounded [compute resources](/docs/user-guide/compute-resources) on a Kubernetes cluster.
By default, containers run with unbounded [compute resources](/docs/concepts/configuration/manage-resources-containers/) on a Kubernetes cluster.
With resource quotas, cluster administrators can restrict resource consumption and creation on a {{< glossary_tooltip text="namespace" term_id="namespace" >}} basis.
Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
<!-- body -->
A _LimitRange_ provides constraints that can:
@ -54,11 +51,8 @@ there may be contention for resources. In this case, the Containers or Pods will
Neither contention nor changes to a LimitRange will affect already created resources.
## {{% heading "whatsnext" %}}
Refer to the [LimitRanger design document](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_limit_range.md) for more information.
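As a hedged sketch of the object itself (all values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limits
spec:
  limits:
  - type: Container
    default:            # applied as the limit when a Container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:     # applied as the request when a Container sets none
      cpu: 250m
      memory: 128Mi
    max:                # hard ceiling for any single Container
      cpu: "1"
      memory: 512Mi
```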
For examples on using limits, see:
@ -68,7 +62,5 @@ For examples on using limits, see:
- [how to configure default CPU Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/).
- [how to configure default Memory Requests and Limits per namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).
- [how to configure minimum and maximum Storage consumption per namespace](/docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage).
- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/).
- a [detailed example on configuring quota per namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/).

View File

@ -14,9 +14,6 @@ weight: 20
Pod Security Policies enable fine-grained authorization of pod creation and
updates.
<!-- body -->
## What is a Pod Security Policy?
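A Pod Security Policy is a cluster-level resource that controls security-sensitive aspects of the pod specification. As a rough sketch (the values below are illustrative, not a recommended policy), a PodSecurityPolicy object looks like this:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example            # illustrative name
spec:
  privileged: false        # disallow privileged pods
  # The four rule sections below are required fields.
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```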
@ -143,13 +140,13 @@ For a complete example of authorizing a PodSecurityPolicy, see
### Troubleshooting
- The [Controller Manager](/docs/admin/kube-controller-manager/) must be run
- The [Controller Manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) must be run
against [the secured API port](/docs/reference/access-authn-authz/controlling-access/),
and must not have superuser permissions. Otherwise requests would bypass
authentication and authorization modules, all PodSecurityPolicy objects would be
allowed, and users would be able to create privileged containers. For more details
on configuring Controller Manager authorization, see [Controller
Roles](/docs/reference/access-authn-authz/rbac/#controller-roles).
on configuring Controller Manager authorization, see
[Controller Roles](/docs/reference/access-authn-authz/rbac/#controller-roles).
## Policy Order
@ -639,15 +636,12 @@ By default, all safe sysctls are allowed.
- `allowedUnsafeSysctls` - allows specific sysctls that have been disallowed by the default list, so long as these are not listed in `forbiddenSysctls`.
Refer to the [Sysctl documentation](
/docs/concepts/cluster-administration/sysctl-cluster/#podsecuritypolicy).
/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy).
## {{% heading "whatsnext" %}}
- See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
See [Pod Security Standards](/docs/concepts/security/pod-security-standards/) for policy recommendations.
Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
- Refer to [Pod Security Policy Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicy-v1beta1-policy) for the api details.
View File
@ -13,9 +13,6 @@ there is a concern that one team could use more than its fair share of resources
Resource quotas are a tool for administrators to address this concern.
<!-- body -->
A resource quota, defined by a `ResourceQuota` object, provides constraints that limit
@ -27,15 +24,21 @@ Resource quotas work like this:
- Different teams work in different namespaces. Currently this is voluntary, but
support for making this mandatory via ACLs is planned.
- The administrator creates one `ResourceQuota` for each namespace.
- Users create resources (pods, services, etc.) in the namespace, and the quota system
tracks usage to ensure it does not exceed hard resource limits defined in a `ResourceQuota`.
- If creating or updating a resource violates a quota constraint, the request will fail with HTTP
status code `403 FORBIDDEN` with a message explaining the constraint that would have been violated.
- If quota is enabled in a namespace for compute resources like `cpu` and `memory`, users must specify
requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
See the [walkthrough](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
for an example of how to avoid this problem.
The name of a `ResourceQuota` object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
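For example, a minimal ResourceQuota sketch (the name, namespace, and amounts are illustrative) that caps aggregate compute requests and limits in a namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota      # illustrative name
  namespace: team-a        # illustrative namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requested across all Pods may not exceed 4 cores
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "10"             # at most 10 Pods may exist in the namespace
```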
@ -63,7 +66,7 @@ A resource quota is enforced in a particular namespace when there is a
## Compute Resource Quota
You can limit the total sum of [compute resources](/docs/user-guide/compute-resources) that can be requested in a given namespace.
You can limit the total sum of [compute resources](/docs/concepts/configuration/manage-resources-containers/) that can be requested in a given namespace.
The following resource types are supported:
@ -77,7 +80,7 @@ The following resource types are supported:
### Resource Quota For Extended Resources
In addition to the resources mentioned above, in release 1.10, quota support for
[extended resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources) was added.
[extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) was added.
As overcommit is not allowed for extended resources, it makes no sense to specify both `requests`
and `limits` for the same extended resource in a quota. So for extended resources, only quota items
@ -553,7 +556,7 @@ plugins:
limitedResources:
- resource: pods
matchScopes:
- scopeName: PriorityClass
- scopeName: PriorityClass
operator: In
values: ["cluster-services"]
```
@ -572,7 +575,7 @@ plugins:
limitedResources:
- resource: pods
matchScopes:
- scopeName: PriorityClass
- scopeName: PriorityClass
operator: In
values: ["cluster-services"]
```
@ -595,11 +598,7 @@ See [LimitedResources](https://github.com/kubernetes/kubernetes/pull/36765) and
See a [detailed example for how to use resource quota](/docs/tasks/administer-cluster/quota-api-object/).
## {{% heading "whatsnext" %}}
See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
- See [ResourceQuota design doc](https://git.k8s.io/community/contributors/design-proposals/resource-management/admission_control_resource_quota.md) for more information.
View File
@ -10,8 +10,6 @@ In Kubernetes, _scheduling_ refers to making sure that {{< glossary_tooltip text
are matched to {{< glossary_tooltip text="Nodes" term_id="node" >}} so that
{{< glossary_tooltip term_id="kubelet" >}} can run them.
<!-- body -->
## Scheduling overview {#scheduling}
@ -92,7 +90,7 @@ of the scheduler:
* Read about [scheduler performance tuning](/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* Read about [Pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
* Read the [reference documentation](/docs/reference/command-line-tools-reference/kube-scheduler/) for kube-scheduler
* Learn about [configuring multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/)
* Learn about [configuring multiple schedulers](/docs/tasks/extend-kubernetes/configure-multiple-schedulers/)
* Learn about [topology management policies](/docs/tasks/administer-cluster/topology-manager/)
* Learn about [Pod Overhead](/docs/concepts/configuration/pod-overhead/)
* Learn about scheduling of Pods that use volumes in:
View File
@ -175,7 +175,7 @@ toleration to pods that use the special hardware. As in the dedicated nodes use
it is probably easiest to apply the tolerations using a custom
[admission controller](/docs/reference/access-authn-authz/admission-controllers/).
For example, it is recommended to use [Extended
Resources](/docs/concepts/configuration/manage-compute-resources-container/#extended-resources)
Resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources)
to represent the special hardware, taint your special hardware nodes with the
extended resource name and run the
[ExtendedResourceToleration](/docs/reference/access-authn-authz/admission-controllers/#extendedresourcetoleration)
View File
@ -133,7 +133,7 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip
Kubernetes supports two primary modes of finding a Service: environment variables
and DNS. The former works out of the box while the latter requires the
[CoreDNS cluster addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns).
[CoreDNS cluster addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns).
{{< note >}}
If the service environment variables are not desired (because they may clash with environment
variables your programs expect, there are too many variables to process, you only use DNS, and so on),
you can disable this mode by setting the `enableServiceLinks`
View File
@ -68,10 +68,19 @@ of the form `auto-generated-name.my-svc.my-namespace.svc.cluster-domain.example`
### A/AAAA records
Any pods created by a Deployment or DaemonSet have the following
DNS resolution available:
In general, a pod has the following DNS resolution:
`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example.`
`pod-ip-address.my-namespace.pod.cluster-domain.example`.
For example, if a pod in the `default` namespace has the IP address 172.17.0.3,
and the domain name for your cluster is `cluster.local`, then the Pod has a DNS name:
`172-17-0-3.default.pod.cluster.local`.
Any pods created by a Deployment or DaemonSet exposed by a Service have the
following DNS resolution available:
`pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`.
### Pod's hostname and subdomain fields
@ -294,4 +303,3 @@ The availability of Pod DNS Config and DNS Policy "`None`" is shown as below.
For guidance on administering DNS configurations, check
[Configure DNS Service](/docs/tasks/administer-cluster/dns-custom-nameservers/)
View File
@ -26,7 +26,7 @@ Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.i
* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress
controller with [community](https://www.getambassador.io/docs) or
[commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/).
* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager).
* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](https://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager).
* [AWS ALB Ingress Controller](https://github.com/kubernetes-sigs/aws-alb-ingress-controller) enables ingress using the [AWS Application Load Balancer](https://aws.amazon.com/elasticloadbalancing/).
* [Contour](https://projectcontour.io/) is an [Envoy](https://www.envoyproxy.io/) based ingress controller
provided and supported by VMware.
View File
@ -393,7 +393,7 @@ a Service.
It's also worth noting that even though health checks are not exposed directly
through the Ingress, there exist parallel concepts in Kubernetes such as
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
that allow you to achieve the same end result. Please review the
controller-specific documentation to see how they handle health checks (for example:
[nginx](https://git.k8s.io/ingress-nginx/README.md), or
View File
@ -20,8 +20,6 @@ With Kubernetes you don't need to modify your application to use an unfamiliar s
Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods,
and can load-balance across them.
<!-- body -->
## Motivation
@ -387,7 +385,7 @@ variables and DNS.
When a Pod is run on a Node, the kubelet adds a set of environment variables
for each active Service. It supports both [Docker links
compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
[makeLinkVariables](http://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
[makeLinkVariables](https://releases.k8s.io/{{< param "githubbranch" >}}/pkg/kubelet/envvars/envvars.go#L49))
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
@ -751,7 +749,7 @@ In the above example, if the Service contained three ports, `80`, `443`, and
`8443`, then `443` and `8443` would use the SSL certificate, but `80` would just
be proxied HTTP.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
From Kubernetes v1.9 onwards you can use [predefined AWS SSL policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) with HTTPS or SSL listeners for your Services.
To see which policies are available for use, you can use the `aws` command line tool:
```bash
@ -891,7 +889,7 @@ To use a Network Load Balancer on AWS, use the annotation `service.beta.kubernet
```
{{< note >}}
NLB only works with certain instance classes; see the [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
NLB only works with certain instance classes; see the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets)
on Elastic Load Balancing for a list of supported instance types.
{{< /note >}}
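Put together, a Service sketch using that annotation might look like the following (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service         # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app            # illustrative selector
  ports:
  - port: 80
    targetPort: 8080
```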
@ -1048,9 +1046,9 @@ spec:
## Shortcomings
Using the userspace proxy for VIPs works at small to medium scale, but will
not scale to very large clusters with thousands of Services. The [original
design proposal for portals](http://issue.k8s.io/1107) has more details on
this.
not scale to very large clusters with thousands of Services. The
[original design proposal for portals](https://github.com/kubernetes/kubernetes/issues/1107)
has more details on this.
Using the userspace proxy obscures the source IP address of a packet accessing
a Service.
View File
@ -19,9 +19,6 @@ weight: 20
This document describes the current state of _persistent volumes_ in Kubernetes. Familiarity with [volumes](/docs/concepts/storage/volumes/) is suggested.
<!-- body -->
## Introduction
@ -148,7 +145,11 @@ The `Recycle` reclaim policy is deprecated. Instead, the recommended approach is
If supported by the underlying volume plugin, the `Recycle` reclaim policy performs a basic scrub (`rm -rf /thevolume/*`) on the volume and makes it available again for a new claim.
However, an administrator can configure a custom recycler Pod template using the Kubernetes controller manager command line arguments as described [here](/docs/admin/kube-controller-manager/). The custom recycler Pod template must contain a `volumes` specification, as shown in the example below:
However, an administrator can configure a custom recycler Pod template using
the Kubernetes controller manager command line arguments as described in the
[reference](/docs/reference/command-line-tools-reference/kube-controller-manager/).
The custom recycler Pod template must contain a `volumes` specification, as
shown in the example below:
```yaml
apiVersion: v1
View File
@ -15,8 +15,6 @@ This document describes the concept of a StorageClass in Kubernetes. Familiarity
with [volumes](/docs/concepts/storage/volumes/) and
[persistent volumes](/docs/concepts/storage/persistent-volumes) is suggested.
<!-- body -->
## Introduction
@ -168,11 +166,11 @@ A cluster administrator can address this issue by specifying the `WaitForFirstCo
will delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created.
PersistentVolumes will be selected or provisioned conforming to the topology that is
specified by the Pod's scheduling constraints. These include, but are not limited to, [resource
requirements](/docs/concepts/configuration/manage-compute-resources-container),
requirements](/docs/concepts/configuration/manage-resources-containers/),
[node selectors](/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector),
[pod affinity and
anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
and [taints and tolerations](/docs/concepts/configuration/taint-and-toleration).
and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration).
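For illustration, here is a StorageClass sketch with delayed binding (the provisioner and parameters are illustrative; substitute ones appropriate for your cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware              # illustrative name
provisioner: kubernetes.io/gce-pd   # illustrative provisioner
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer   # delay binding until a Pod consumes the claim
```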
The following plugins support `WaitForFirstConsumer` with dynamic provisioning:
@ -244,7 +242,7 @@ parameters:
```
* `type`: `io1`, `gp2`, `sc1`, `st1`. See
[AWS docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
[AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)
for details. Default: `gp2`.
* `zone` (Deprecated): AWS zone. If neither `zone` nor `zones` is specified, volumes are
generally round-robin-ed across all active zones where Kubernetes cluster
@ -256,7 +254,7 @@ parameters:
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. The AWS
  volume plugin multiplies this by the size of the requested volume to compute the IOPS
  of the volume, capping it at 20,000 IOPS (the maximum supported by AWS; see the
  [AWS docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)).
  [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html)).
  A string is expected here, for example `"10"`, not `10`.
* `fsType`: fsType that is supported by kubernetes. Default: `"ext4"`.
* `encrypted`: denotes whether the EBS volume should be encrypted or not.
View File
@ -18,10 +18,7 @@ Container starts with a clean state. Second, when running Containers together
in a `Pod` it is often necessary to share files between those Containers. The
Kubernetes `Volume` abstraction solves both of these problems.
Familiarity with [Pods](/docs/user-guide/pods) is suggested.
Familiarity with [Pods](/docs/concepts/workloads/pods/pod/) is suggested.
<!-- body -->
@ -100,7 +97,7 @@ We welcome additional contributions.
### awsElasticBlockStore {#awselasticblockstore}
An `awsElasticBlockStore` volume mounts an Amazon Web Services (AWS) [EBS
Volume](http://aws.amazon.com/ebs/) into your Pod. Unlike
Volume](https://aws.amazon.com/ebs/) into your Pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of an EBS
volume are preserved and the volume is merely unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be "handed off"
@ -401,8 +398,8 @@ See the [Flocker example](https://github.com/kubernetes/examples/tree/{{< param
### gcePersistentDisk {#gcepersistentdisk}
A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE) [Persistent
Disk](http://cloud.google.com/compute/docs/disks) into your Pod. Unlike
A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE)
[Persistent Disk](https://cloud.google.com/compute/docs/disks) into your Pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be "handed off" between Pods.
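As a sketch, mounting an existing PD into a Pod might look like this (assumes a PD named `my-data-disk` already exists in the same zone as the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd            # illustrative name
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-pd
  volumes:
  - name: test-volume
    gcePersistentDisk:
      pdName: my-data-disk # the PD must be created beforehand
      fsType: ext4
```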
@ -537,7 +534,7 @@ spec:
### glusterfs {#glusterfs}
A `glusterfs` volume allows a [Glusterfs](http://www.gluster.org) (an open
A `glusterfs` volume allows a [Glusterfs](https://www.gluster.org) (an open
source networked filesystem) volume to be mounted into your Pod. Unlike
`emptyDir`, which is erased when a Pod is removed, the contents of a
`glusterfs` volume are preserved and the volume is merely unmounted. This
@ -589,7 +586,7 @@ Watch out when using this type of volume, because:
able to account for resources used by a `hostPath`
* the files or directories created on the underlying hosts are only writable by root. You
either need to run your process as root in a
[privileged Container](/docs/user-guide/security-context) or modify the file
[privileged Container](/docs/tasks/configure-pod-container/security-context/) or modify the file
permissions on the host to be able to write to a `hostPath` volume
#### Example Pod
@ -952,7 +949,7 @@ More details and examples can be found [here](https://github.com/kubernetes/exam
### quobyte {#quobyte}
A `quobyte` volume allows an existing [Quobyte](http://www.quobyte.com) volume to
A `quobyte` volume allows an existing [Quobyte](https://www.quobyte.com) volume to
be mounted into your Pod.
{{< caution >}}
@ -966,8 +963,8 @@ GitHub project has [instructions](https://github.com/quobyte/quobyte-csi#quobyte
### rbd {#rbd}
An `rbd` volume allows a [Rados Block
Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
An `rbd` volume allows a
[Rados Block Device](https://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
Pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents of
an `rbd` volume are preserved and the volume is merely unmounted. This
means that an RBD volume can be pre-populated with data, and that data can
@ -1044,7 +1041,7 @@ A Container using a Secret as a [subPath](#using-subpath) volume mount will not
receive Secret updates.
{{< /note >}}
Secrets are described in more detail [here](/docs/user-guide/secrets).
Secrets are described in more detail [here](/docs/concepts/configuration/secret/).
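A brief sketch of mounting a Secret as a volume (assumes a Secret named `mysecret` already exists in the namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod    # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret # must exist before the Pod is created
```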
### storageOS {#storageos}
@ -1274,11 +1271,12 @@ medium of the filesystem holding the kubelet root dir (typically
Pods.
In the future, we expect that `emptyDir` and `hostPath` volumes will be able to
request a certain amount of space using a [resource](/docs/user-guide/compute-resources)
request a certain amount of space using a [resource](/docs/concepts/configuration/manage-resources-containers/)
specification, and to select the type of media to use, for clusters that have
several media types.
## Out-of-Tree Volume Plugins
The Out-of-tree volume plugins include the Container Storage Interface (CSI)
and FlexVolume. They enable storage vendors to create custom storage plugins
without adding them to the Kubernetes repository.
View File
@ -26,9 +26,6 @@ In a simple case, one DaemonSet, covering all nodes, would be used for each type
A more complex setup might use multiple DaemonSets for a single type of daemon, but with
different flags and/or different memory and cpu requests for different hardware types.
<!-- body -->
## Writing a DaemonSet Spec
@ -48,7 +45,8 @@ kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
### Required Fields
As with all other Kubernetes config, a DaemonSet needs `apiVersion`, `kind`, and `metadata` fields. For
general information about working with config files, see [deploying applications](/docs/user-guide/deploying-applications/),
general information about working with config files, see
[running stateless applications](/docs/tasks/run-application/run-stateless-application-deployment/),
[configuring containers](/docs/tasks/), and [object management using kubectl](/docs/concepts/overview/working-with-objects/object-management/) documents.
The name of a DaemonSet object must be a valid
@ -71,7 +69,7 @@ A Pod Template in a DaemonSet must have a [`RestartPolicy`](/docs/concepts/workl
### Pod Selector
The `.spec.selector` field is a pod selector. It works the same as the `.spec.selector` of
a [Job](/docs/concepts/jobs/run-to-completion-finite-workloads/).
a [Job](/docs/concepts/workloads/controllers/job/).
As of Kubernetes 1.8, you must specify a pod selector that matches the labels of the
`.spec.template`. The pod selector will no longer be defaulted when left empty. Selector
@ -147,7 +145,7 @@ automatically to DaemonSet Pods. The default scheduler ignores
### Taints and Tolerations
Although Daemon Pods respect
[taints and tolerations](/docs/concepts/configuration/taint-and-toleration),
[taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/),
the following tolerations are added to DaemonSet Pods automatically according to
the related features.
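You can also add your own tolerations to the DaemonSet's Pod template. For instance, here is a sketch of a toleration that lets daemon Pods schedule onto control-plane nodes tainted with `node-role.kubernetes.io/master:NoSchedule` (whether that taint exists depends on how your cluster was set up):

```yaml
# Excerpt from a DaemonSet manifest (sketch)
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```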
@ -213,7 +211,7 @@ use a DaemonSet rather than creating individual Pods.
### Static Pods
It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These
are called [static pods](/docs/concepts/cluster-administration/static-pod/).
are called [static pods](/docs/tasks/configure-pod-container/static-pod/).
Unlike DaemonSet, static Pods cannot be managed with kubectl
or other Kubernetes API clients. Static Pods do not depend on the apiserver, making them useful
in cluster bootstrapping cases. Also, static Pods may be deprecated in the future.
View File
@ -22,7 +22,6 @@ You describe a _desired state_ in a Deployment, and the Deployment {{< glossary_
Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.
{{< /note >}}
<!-- body -->
## Use Case
@ -1040,7 +1039,8 @@ can create multiple Deployments, one for each release, following the canary patt
## Writing a Deployment Spec
As with all other Kubernetes configs, a Deployment needs `.apiVersion`, `.kind`, and `.metadata` fields.
For general information about working with config files, see [deploying applications](/docs/tutorials/stateless-application/run-stateless-application-deployment/),
For general information about working with config files, see
[deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/),
configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents.
The name of a Deployment object must be a valid
[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
View File
@ -24,9 +24,6 @@ due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
<!-- body -->
## Running an example Job
@ -122,6 +119,7 @@ A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/d
The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/#pod-templates). It has exactly the same schema as a {{< glossary_tooltip text="Pod" term_id="pod" >}}, except it is nested and does not have an `apiVersion` or `kind`.
In addition to required fields for a Pod, a pod template in a Job must specify appropriate
@ -450,7 +448,7 @@ requires only a single Pod.
### Replication Controller
Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller).
Jobs are complementary to [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/).
A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job
manages Pods that are expected to terminate (e.g. batch tasks).
View File
@ -23,9 +23,6 @@ A _ReplicationController_ ensures that a specified number of pod replicas are ru
time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is
always up and available.
<!-- body -->
## How a ReplicationController Works
@ -134,7 +131,7 @@ labels and an appropriate restart policy. For labels, make sure not to overlap w
Only a [`.spec.template.spec.restartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Always` is allowed, which is the default if not specified.
For local container restarts, ReplicationControllers delegate to an agent on the node,
for example the [Kubelet](/docs/admin/kubelet/) or Docker.
for example the [Kubelet](/docs/reference/command-line-tools-reference/kubelet/) or Docker.
### Labels on the ReplicationController
@ -214,7 +211,7 @@ The ReplicationController makes it easy to scale the number of replicas up or do
The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.
As explained in [#1353](http://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
As explained in [#1353](https://issue.k8s.io/1353), the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
@ -239,11 +236,11 @@ Pods created by a ReplicationController are intended to be fungible and semantic
## Responsibilities of the ReplicationController
The ReplicationController simply ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](http://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController simply ensures that the desired number of pods matches its label selector and that those pods are operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://issue.k8s.io/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.
The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](http://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](http://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](http://issue.k8s.io/170)).
The ReplicationController is forever constrained to this narrow responsibility. It does not itself perform readiness or liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://issue.k8s.io/492)), which would change its `replicas` field. We will not add scheduling policies (for example, [spreading](https://issue.k8s.io/367#issuecomment-48428019)) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://issue.k8s.io/170)).
The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](https://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.
## API Object
@ -271,7 +268,7 @@ Unlike in the case where a user directly created pods, a ReplicationController r
### Job
Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicationController for pods that are expected to terminate on their own
Use a [`Job`](/docs/concepts/workloads/controllers/job/) instead of a ReplicationController for pods that are expected to terminate on their own
(that is, batch jobs).
### DaemonSet
@ -283,6 +280,6 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown.
## For more information
Read [Run Stateless AP Replication Controller](/docs/tutorials/stateless-application/run-stateless-ap-replication-controller/).
Read [Run Stateless Application Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
View File
@ -200,7 +200,7 @@ The StatefulSet should not specify a `pod.Spec.TerminationGracePeriodSeconds` of
When the nginx example above is created, three Pods will be deployed in the order
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
[Running and Ready](/docs/user-guide/pod-states/), and web-2 will not be deployed until
[Running and Ready](/docs/concepts/workloads/pods/pod-lifecycle/), and web-2 will not be deployed until
web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
becomes Running and Ready.
View File
@ -20,12 +20,6 @@ Alpha Disclaimer: this feature is currently alpha, and can be enabled with both
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
`TTLAfterFinished`.
<!-- body -->
## TTL Controller
@ -82,9 +76,7 @@ very small. Please be aware of this risk when setting a non-zero TTL.
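As a sketch, a Job that is cleaned up 100 seconds after it finishes (the workload itself is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl        # illustrative name
spec:
  ttlSecondsAfterFinished: 100   # delete this Job 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```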
## {{% heading "whatsnext" %}}
* [Clean up Jobs automatically](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
[Clean up Jobs automatically](/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically)
[Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
* [Design doc](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/0026-ttl-after-finish.md)
View File
@ -16,7 +16,6 @@ what types of disruptions can happen to Pods.
It is also for cluster administrators who want to perform automated
cluster actions, like upgrading and autoscaling clusters.
<!-- body -->
## Voluntary and involuntary disruptions
@ -70,15 +69,15 @@ deleting deployments or pods bypasses Pod Disruption Budgets.
Here are some ways to mitigate involuntary disruptions:
- Ensure your pod [requests the resources](/docs/tasks/configure-pod-container/assign-cpu-ram-container) it needs.
- Ensure your pod [requests the resources](/docs/tasks/configure-pod-container/assign-memory-resource) it needs.
- Replicate your application if you need higher availability. (Learn about running replicated
[stateless](/docs/tasks/run-application/run-stateless-application-deployment/)
and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.)
[stateless](/docs/tasks/run-application/run-stateless-application-deployment/)
and [stateful](/docs/tasks/run-application/run-replicated-stateful-application/) applications.)
- For even higher availability when running replicated applications,
spread applications across racks (using
[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature))
or across zones (if using a
[multi-zone cluster](/docs/setup/multiple-zones).)
spread applications across racks (using
[anti-affinity](/docs/user-guide/node-selection/#inter-pod-affinity-and-anti-affinity-beta-feature))
or across zones (if using a
[multi-zone cluster](/docs/setup/multiple-zones).)
The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are
no voluntary disruptions at all. However, your cluster administrator or hosting provider
View File
@ -75,7 +75,7 @@ For an example of adding a label, see the PR for adding the [Italian language la
Let Kubernetes SIG Docs know you're interested in creating a localization! Join the [SIG Docs Slack channel](https://kubernetes.slack.com/messages/C1J0BPD2M/). Other localization teams are happy to help you get started and answer any questions you have.
You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding channels for Indonesian and Portuguese](https://github.com/kubernetes/community/pull/3605).
You can also create a Slack channel for your localization in the `kubernetes/community` repository. For an example of adding a Slack channel, see the PR for [adding a channel for Persian](https://github.com/kubernetes/community/pull/4980).
## Minimum required content
View File
@ -105,7 +105,7 @@ SIG Docs approvers. Here's how it works.
- Any Kubernetes member can add the `lgtm` label by adding a `/lgtm` comment.
- Only SIG Docs approvers can merge a pull request
by adding an `/approve` comment. Some approvers also perform additional
specific roles, such as [PR Wrangler](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week) or
specific roles, such as [PR Wrangler](/docs/contribute/participate/pr-wranglers/) or
[SIG Docs chairperson](#sig-docs-chairperson).
View File
@ -47,6 +47,19 @@ These queries exclude localization PRs. All queries are against the main branch
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]).
- [Not against the main branch](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3Alanguage%2Fen+-base%3Amaster): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @<manager's_github-username>`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch.
### Helpful Prow commands for wranglers
```
# add English label
/language en
# add squash label to PR if more than one commit
/label tide/merge-method-squash
# retitle a PR via Prow (for example, to mark it work-in-progress [WIP] or to describe it better)
/retitle [WIP] <TITLE>
```
### When to close Pull Requests
Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure.
@ -66,4 +79,4 @@ To close a pull request, leave a `/close` comment on the PR.
The [`fejta-bot`](https://github.com/fejta-bot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity.
{{< /note >}}
View File
@ -49,7 +49,7 @@ with the request:
* Username: a string which identifies the end user. Common values might be `kube-admin` or `jane@example.com`.
* UID: a string which identifies the end user and attempts to be more consistent and unique than username.
* Groups: a set of strings which associate users with a set of commonly grouped users.
* Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users. Common values might be `system:masters` or `devops-team`.
* Extra fields: a map of strings to list of strings which holds additional information authorizers may find useful.
All values are opaque to the authentication system and only hold significance
View File
@ -600,7 +600,11 @@ more information about how an object's schema is used to make decisions when
merging, see
[sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff).
A number of markers were added in Kubernetes 1.16 and 1.17, to allow API developers to describe the merge strategy supported by lists, maps, and structs. These markers can be applied to objects of the respective type, in Go files or OpenAPI specs.
A number of markers were added in Kubernetes 1.16 and 1.17, to allow API
developers to describe the merge strategy supported by lists, maps, and
structs. These markers can be applied to objects of the respective type,
in Go files or in the [OpenAPI schema definition of the
CRD](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io):
| Golang marker | OpenAPI extension | Accepted values | Description | Introduced in |
|---|---|---|---|---|
@ -613,8 +617,12 @@ A number of markers were added in Kubernetes 1.16 and 1.17, to allow API develop
By default, Server Side Apply treats custom resources as unstructured data. All
keys are treated the same as struct fields, and all lists are considered atomic.
If the validation field is specified in the Custom Resource Definition, it is
used when merging objects of this type.
If the Custom Resource Definition defines a
[schema](/docs/reference/generated/kubernetes-api/{{< param "version" >}}#jsonschemaprops-v1-apiextensions-k8s-io)
that contains annotations as defined in the previous "Merge Strategy"
section, these annotations will be used when merging objects of this
type.
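For instance, a sketch of a CRD schema excerpt that declares a list to be merged as a map keyed by `name` (the field names are illustrative):

```yaml
# Excerpt from a CustomResourceDefinition's openAPIV3Schema (sketch)
ports:
  type: array
  x-kubernetes-list-type: map    # merge list entries instead of replacing the whole list
  x-kubernetes-list-map-keys:    # entries are identified by their "name" field
  - name
  items:
    type: object
    properties:
      name:
        type: string
      port:
        type: integer
```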
### Using Server-Side Apply in a controller
View File
@ -274,7 +274,7 @@ kubeadm to tell it what to do.
When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet
and set it in the `/var/lib/kubelet/config.yaml` file during runtime.
If you are using a different CRI, you have to modify the file with your `cgroupDriver` value, like so:
If you are using a different CRI, you must pass your `cgroupDriver` value to `kubeadm init`, like so:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
@ -282,6 +282,8 @@ kind: KubeletConfiguration
cgroupDriver: <value>
```
For further details, please read [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
Note that you **only** need to do this if the cgroup driver of your CRI
is not `cgroupfs`, because that is already the default value in the kubelet.
View File
@ -403,4 +403,8 @@ nodeRegistration:
Alternatively, you can modify `/etc/fstab` to make the `/usr` mount writeable, but please
be advised that this is modifying a design principle of the Linux distribution.
## `kubeadm upgrade plan` prints out `context deadline exceeded` error message
This error message is shown when upgrading a Kubernetes cluster with `kubeadm` if you are running an external etcd cluster. This is not a critical bug; it happens because older versions of kubeadm perform a version check on the external etcd cluster. You can proceed with `kubeadm upgrade apply ...`.
This issue is fixed as of version 1.19.
View File
@ -202,8 +202,8 @@ kubectl describe node <your-node-name> | grep dongle
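For reference, a sketch of a Pod requesting the advertised extended resource (the resource name `example.com/dongle` follows the example in this task; adjust it to whatever your node actually advertises):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo   # illustrative name
spec:
  containers:
  - name: demo-container
    image: nginx
    resources:
      requests:
        example.com/dongle: 3    # must match the resource name the node advertises
      limits:
        example.com/dongle: 3
```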
### For cluster administrators
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
View File
@ -8,7 +8,7 @@ content_type: task
This example demonstrates an easy way to limit the amount of storage consumed in a namespace.
The following resources are used in the demonstration: [ResourceQuota](/docs/concepts/policy/resource-quotas/),
[LimitRange](/docs/tasks/administer-cluster/memory-default-namespace/),
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/),
and [PersistentVolumeClaim](/docs/concepts/storage/persistent-volumes/).
View File
@ -202,7 +202,7 @@ resources:
```
Because your Container did not specify its own CPU request and limit, it was given the
[default CPU request and limit](/docs/tasks/administer-cluster/cpu-default-namespace/)
[default CPU request and limit](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite for this task is that your cluster must have at least 1 CPU available for use. If each of your Nodes has only 1 CPU, then there might not be enough allocatable CPU on any Node to accommodate a request of 800 millicpu. If you happen to be using Nodes with 2 CPU, then you probably have enough CPU to accommodate the 800 millicpu request.
@ -247,15 +247,15 @@ kubectl delete namespace constraints-cpu-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
View File
@ -171,15 +171,15 @@ kubectl delete namespace default-cpu-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
View File
@ -198,7 +198,7 @@ resources:
```
Because your Container did not specify its own memory request and limit, it was given the
[default memory request and limit](/docs/tasks/administer-cluster/memory-default-namespace/)
[default memory request and limit](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite
@ -247,15 +247,15 @@ kubectl delete namespace constraints-mem-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
View File
@ -178,15 +178,15 @@ kubectl delete namespace default-mem-example
### For cluster administrators
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
View File
@ -137,7 +137,7 @@ the memory request total for all Containers running in a namespace.
You can also restrict the totals for memory limit, cpu request, and cpu limit.
If you want to restrict individual Containers, instead of totals for all Containers, use a
[LimitRange](/docs/tasks/administer-cluster/memory-constraint-namespace/).
[LimitRange](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/).
## Clean up
@ -154,15 +154,15 @@ kubectl delete namespace quota-mem-cpu-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
View File
@ -115,15 +115,15 @@ kubectl delete namespace quota-pod-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)
View File
@ -148,17 +148,17 @@ kubectl delete namespace quota-object-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
### For app developers
View File
@ -32,17 +32,17 @@ they are allowed to perform is the first line of defense.
### Use Transport Layer Security (TLS) for all API traffic
Kubernetes expects that all API communication in the cluster is encrypted by default with TLS, and the
majority of installation methods will allow the necessary certificates to be created and distributed to
the cluster components. Note that some components and installation methods may enable local ports over
HTTP and administrators should familiarize themselves with the settings of each component to identify
potentially unsecured traffic.
### API Authentication
Choose an authentication mechanism for the API servers to use that matches the common access patterns
when you install a cluster. For instance, small single-user clusters may wish to use a simple certificate
or static Bearer token approach. Larger clusters may wish to integrate an existing OIDC or LDAP server that
allows users to be subdivided into groups.
All API clients must be authenticated, even those that are part of the infrastructure like nodes,
proxies, the scheduler, and volume plugins. These clients are typically [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/) or use x509 client certificates, and they are created automatically at cluster startup or are set up as part of the cluster installation.
@ -63,10 +63,10 @@ As with authentication, simple and broad roles may be appropriate for smaller cl
more users interact with the cluster, it may become necessary to separate teams into separate
namespaces with more limited roles.
With authorization, it is important to understand how updates on one object may cause actions in
other places. For instance, a user may not be able to create pods directly, but allowing them to
create a deployment, which creates pods on their behalf, will let them create those pods
indirectly. Likewise, deleting a node from the API will result in the pods scheduled to that node
being terminated and recreated on other nodes. The out of the box roles represent a balance
between flexibility and the common use cases, but more limited roles should be carefully reviewed
to prevent accidental escalation. You can make roles specific to your use case if the out-of-box ones don't meet your needs.
@ -84,7 +84,7 @@ Consult the [Kubelet authentication/authorization reference](/docs/admin/kubelet
## Controlling the capabilities of a workload or user at runtime
Authorization in Kubernetes is intentionally high level, focused on coarse actions on resources.
More powerful controls exist as **policies** to limit by use case how those objects act on the
cluster, themselves, and other resources.
### Limiting resource usage on a cluster
@ -92,9 +92,9 @@ cluster, themselves, and other resources.
[Resource quota](/docs/concepts/policy/resource-quotas/) limits the number or capacity of
resources granted to a namespace. This is most often used to limit the amount of CPU, memory,
or persistent disk a namespace can allocate, but can also control how many pods, services, or
volumes exist in each namespace.
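As a sketch of what such a quota can look like (the namespace name and the specific limits are illustrative assumptions, not values from this page):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # illustrative name
  namespace: team-a          # assumed namespace
spec:
  hard:
    requests.cpu: "4"        # total CPU all pods in the namespace may request
    requests.memory: 8Gi     # total memory all pods in the namespace may request
    pods: "10"               # cap on the number of pods
```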
[Limit ranges](/docs/tasks/administer-cluster/memory-default-namespace/) restrict the maximum or minimum size of some of the
[Limit ranges](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) restrict the maximum or minimum size of some of the
resources above, to prevent users from requesting unreasonably high or low values for commonly
reserved resources like memory, or to provide default limits when none are specified.
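A sketch of a limit range providing such defaults; the values are illustrative assumptions:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range      # illustrative name
spec:
  limits:
  - default:                 # limit applied when a container specifies none
      memory: 512Mi
    defaultRequest:          # request applied when a container specifies none
      memory: 256Mi
    type: Container
```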
@ -104,14 +104,14 @@ reserved resources like memory, or to provide default limits when none are speci
A pod definition contains a [security context](/docs/tasks/configure-pod-container/security-context/)
that allows it to request access to running as a specific Linux user on a node (like root),
access to run privileged or access the host network, and other controls that would otherwise
allow it to run unfettered on a hosting node. [Pod security policies](/docs/concepts/policy/pod-security-policy/)
can limit which users or service accounts can provide dangerous security context settings. For example, pod security policies can limit volume mounts, especially `hostPath`, which are aspects of a pod that should be controlled.
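A hedged sketch of the pod-level settings this refers to; the pod name, image, and values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example                           # illustrative name
spec:
  securityContext:
    runAsUser: 1000                       # run as an unprivileged user instead of root
    runAsNonRoot: true                    # refuse to start if the image would run as root
  containers:
  - name: app
    image: registry.example/app:latest    # assumed image
    securityContext:
      allowPrivilegeEscalation: false     # block setuid-style privilege gains
```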
Generally, most application workloads need limited access to host resources so they can
successfully run as a root process (uid 0) without access to host information. However,
considering the privileges associated with the root user, you should write application
containers to run as a non-root user. Similarly, administrators who wish to prevent
client applications from escaping their containers should use a restrictive pod security
policy.
@ -147,8 +147,8 @@ kernel on behalf of some more-privileged process.)
### Restricting network access
The [network policies](/docs/tasks/administer-cluster/declare-network-policy/) for a namespace
allow application authors to restrict which pods in other namespaces may access pods and ports
within their namespaces. Many of the supported [Kubernetes networking providers](/docs/concepts/cluster-administration/networking/)
now respect network policy.
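A sketch of such a policy, assuming an illustrative namespace; it admits traffic only from pods in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # illustrative name
  namespace: team-a            # assumed namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}          # only pods from this same namespace may connect
```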
@ -157,7 +157,7 @@ load balanced services, which on many clusters can control whether those users a
are visible outside of the cluster.
Additional protections may be available that control network rules on a per plugin or per
environment basis, such as per-node firewalls, physically separating cluster nodes to
prevent cross talk, or advanced networking policy.
### Restricting cloud metadata API access
@ -173,14 +173,14 @@ to the metadata API, and avoid using provisioning data to deliver secrets.
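As a hedged illustration of the idea (the manifest below is an assumption, not taken from this page), an egress rule can keep pods away from the link-local metadata endpoint that several clouds serve at 169.254.169.254:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access   # illustrative name
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # block the link-local metadata endpoint
```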
### Controlling which nodes pods may access
By default, there are no restrictions on which nodes may run a pod. Kubernetes offers a
[rich set of policies for controlling placement of pods onto nodes](/docs/concepts/scheduling-eviction/assign-pod-node/)
and the [taint based pod placement and eviction](/docs/concepts/scheduling-eviction/taint-and-toleration/)
that are available to end users. For many clusters, use of these policies to separate workloads
can be a convention that authors adopt or enforce via tooling.
As an administrator, you can use the beta admission plugin `PodNodeSelector` to force pods
within a namespace to default to or require a specific node selector, and if end users cannot
alter namespaces, this can strongly limit the placement of all of the pods in a specific workload.
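A sketch of the namespace annotation this plugin consumes, assuming the plugin is enabled on the API server; the namespace name and label are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                 # assumed namespace
  annotations:
    # pods in this namespace only schedule onto matching nodes
    scheduler.alpha.kubernetes.io/node-selector: "env=restricted"
```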
@ -194,7 +194,7 @@ Write access to the etcd backend for the API is equivalent to gaining root on th
and read access can be used to escalate fairly quickly. Administrators should always use strong
credentials from the API servers to their etcd server, such as mutual auth via TLS client certificates,
and it is often recommended to isolate the etcd servers behind a firewall that only the API servers
may access.
{{< caution >}}
Allowing other components within the cluster to access the master etcd instance with
@ -206,7 +206,7 @@ access to a subset of the keyspace is strongly recommended.
### Enable audit logging
The [audit logger](/docs/tasks/debug-application-cluster/audit/) is a beta feature that records actions taken by the
API for later analysis in the event of a compromise. It is recommended to enable audit logging
and archive the audit file on a secure server.
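A minimal sketch of an audit policy file, assuming the API server is started with `--audit-policy-file` pointing at it; the single rule shown is illustrative:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata              # record request metadata for every request
```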
### Restrict access to alpha or beta features
@ -229,8 +229,8 @@ rotate those tokens frequently. For example, once the bootstrap phase is complet
Many third party integrations to Kubernetes may alter the security profile of your cluster. When
enabling an integration, always review the permissions that an extension requests before granting
it access. For example, many security integrations may request access to view all secrets on
your cluster, which effectively makes that component a cluster admin. When in doubt,
restrict the integration to functioning in a single namespace if possible.
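One hedged way to express that restriction is a namespaced Role rather than a ClusterRole; the names and resource list are illustrative assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: integration-reader              # illustrative name
  namespace: team-a                     # the single namespace the integration may see
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps"]     # deliberately does not include "secrets"
  verbs: ["get", "list", "watch"]
```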
Components that create pods may also be unexpectedly powerful if they can do so inside namespaces
like the `kube-system` namespace, because those pods can gain access to service account secrets
@ -251,7 +251,7 @@ are not encrypted or an attacker gains read access to etcd.
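As a hedged sketch of this idea, the API server can encrypt secrets at rest when pointed at an encryption configuration via its `--encryption-provider-config` flag; the provider choice and key name below are illustrative assumptions:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1                          # illustrative key name
        secret: <base64-encoded 32-byte key> # placeholder, supply your own key
  - identity: {}                            # fallback for reading unencrypted data
```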
### Receiving alerts for security updates and reporting vulnerabilities
Join the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce)
group for emails about security announcements. See the [security reporting](/security/)
page for more on how to report vulnerabilities.

View File

@ -254,17 +254,17 @@ kubectl delete namespace cpu-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)

View File

@ -43,7 +43,7 @@ If the resource metrics API is available, the output includes a
reference to `metrics.k8s.io`.
```shell
NAME
v1beta1.metrics.k8s.io
```
@ -344,17 +344,17 @@ kubectl delete namespace mem-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)

View File

@ -250,17 +250,17 @@ kubectl delete namespace qos-example
### For cluster administrators
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/memory-default-namespace/)
* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/cpu-default-namespace/)
* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/memory-constraint-namespace/)
* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/cpu-constraint-namespace/)
* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/)
* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/quota-pod-namespace/)
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)

View File

@ -1,7 +1,6 @@
---
title: Docs smoke test page
main_menu: false
mermaid: true
---
This page serves two purposes:
@ -297,7 +296,8 @@ tables, use HTML instead.
## Visualizations with Mermaid
Add `mermaid: true` to the [front matter](https://gohugo.io/content-management/front-matter/) of any page to enable [Mermaid JS](https://mermaidjs.github.io) visualizations. The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html)
You can use [Mermaid JS](https://mermaidjs.github.io) visualizations.
The Mermaid JS version is specified in [/layouts/partials/head.html](https://github.com/kubernetes/website/blob/master/layouts/partials/head.html)
```
{{</* mermaid */>}}

View File

@ -1,7 +1,6 @@
---
title: Using Source IP
content_type: tutorial
mermaid: true
min-kubernetes-server-version: v1.5
---

View File

@ -51,7 +51,7 @@ After this tutorial, you will know the following.
- How to consistently configure the ensemble using ConfigMaps.
- How to spread the deployment of ZooKeeper servers in the ensemble.
- How to use PodDisruptionBudgets to ensure service availability during planned maintenance.
<!-- lessoncontent -->
@ -91,7 +91,7 @@ kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
This creates the `zk-hs` Headless Service, the `zk-cs` Service,
the `zk-pdb` PodDisruptionBudget, and the `zk` StatefulSet.
```shell
```
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
@ -107,7 +107,7 @@ kubectl get pods -w -l app=zk
Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl.
```shell
```
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
@ -143,7 +143,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of `<statefulset name>-<ordinal index>`. Because the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and
`zk-2`.
```shell
```
zk-0
zk-1
zk-2
@ -159,7 +159,7 @@ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeepe
Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.
```shell
```
myid zk-0
1
myid zk-1
@ -177,7 +177,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
The `zk-hs` Service creates a domain for all of the Pods,
`zk-hs.default.svc.cluster.local`.
```shell
```
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
@ -196,7 +196,7 @@ the file, the `1`, `2`, and `3` correspond to the identifiers in the
ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in
the `zk` StatefulSet.
```shell
```
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
@ -219,7 +219,9 @@ Consensus protocols require that the identifiers of each participant be unique.
```shell
kubectl get pods -w -l app=zk
```
```
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
@ -243,7 +245,7 @@ the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that
endpoint will be the unique ZooKeeper server claiming the identity configured
in its `myid` file.
```shell
```
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
@ -252,7 +254,7 @@ zk-2.zk-hs.default.svc.cluster.local
This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files
represent a correctly configured ensemble.
```shell
```
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
@ -269,7 +271,8 @@ The command below executes the `zkCli.sh` script to write `world` to the path `/
```shell
kubectl exec zk-0 zkCli.sh create /hello world
```
```
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
@ -285,7 +288,7 @@ kubectl exec zk-1 zkCli.sh get /hello
The data that you created on `zk-0` is available on all the servers in the
ensemble.
```shell
```
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
@ -316,6 +319,9 @@ Use the [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#d
```shell
kubectl delete statefulset zk
```
```
statefulset.apps "zk" deleted
```
@ -327,7 +333,7 @@ kubectl get pods -w -l app=zk
When `zk-0` is fully terminated, use `CTRL-C` to terminate kubectl.
```shell
```
zk-2 1/1 Terminating 0 9m
zk-0 1/1 Terminating 0 11m
zk-1 1/1 Terminating 0 10m
@ -358,7 +364,7 @@ kubectl get pods -w -l app=zk
Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl.
```shell
```
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
@ -386,7 +392,7 @@ kubectl exec zk-2 zkCli.sh get /hello
Even though you terminated and recreated all of the Pods in the `zk` StatefulSet, the ensemble still serves the original value.
```shell
```
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
@ -430,7 +436,7 @@ kubectl get pvc -l app=zk
When the `StatefulSet` recreates its Pods, it remounts the Pods' PersistentVolumes.
```shell
```
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
datadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
@ -464,6 +470,8 @@ Get the `zk` StatefulSet.
```shell
kubectl get sts zk -o yaml
```
```
command:
- sh
@ -506,7 +514,7 @@ kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties
The logging configuration below will cause the ZooKeeper process to write all
of its logs to the standard output file stream.
```shell
```
zookeeper.root.logger=CONSOLE
zookeeper.console.threshold=INFO
log4j.rootLogger=${zookeeper.root.logger}
@ -526,7 +534,7 @@ kubectl logs zk-0 --tail 20
You can view application logs written to standard out or standard error using `kubectl logs` and from the Kubernetes Dashboard.
```shell
```
2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740
2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)
2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749
@ -583,7 +591,7 @@ kubectl exec zk-0 -- ps -elf
As the `runAsUser` field of the `securityContext` object is set to 1000,
instead of running as root, the ZooKeeper process runs as the zookeeper user.
```shell
```
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
@ -599,7 +607,7 @@ kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data
Because the `fsGroup` field of the `securityContext` object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write its data.
```shell
```
drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
```
@ -621,7 +629,8 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv
```shell
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
```
```
statefulset.apps/zk patched
```
@ -629,7 +638,8 @@ Use `kubectl rollout status` to watch the status of the update.
```shell
kubectl rollout status sts/zk
```
```
waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
Use the `kubectl rollout history` command to view a history of previous configurations.
```shell
kubectl rollout history sts/zk
```
```
statefulsets "zk"
REVISION
1
@ -659,7 +671,9 @@ Use the `kubectl rollout undo` command to roll back the modification.
```shell
kubectl rollout undo sts/zk
```
```
statefulset.apps/zk rolled back
```
@ -680,7 +694,7 @@ kubectl exec zk-0 -- ps -ef
The command used as the container's entry point has PID 1, and
the ZooKeeper process, a child of the entry point, has PID 27.
```shell
```
UID PID PPID C STIME TTY TIME CMD
zookeep+ 1 0 0 15:03 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
@ -700,7 +714,7 @@ kubectl exec zk-0 -- pkill java
The termination of the ZooKeeper process caused its parent process to terminate. Because the `RestartPolicy` of the container is Always, it restarted the parent process.
```shell
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 21m
zk-1 1/1 Running 0 20m
@ -740,7 +754,7 @@ The Pod `template` for the `zk` `StatefulSet` specifies a liveness probe.
The probe calls a bash script that uses the ZooKeeper `ruok` four letter
word to test the server's health.
```bash
```
OK=$(echo ruok | nc 127.0.0.1 $1)
if [ "$OK" == "imok" ]; then
exit 0
@ -767,7 +781,9 @@ the ensemble are restarted.
```shell
kubectl get pod -w -l app=zk
```
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 1h
zk-1 1/1 Running 0 1h
@ -832,7 +848,7 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo "";
All of the Pods in the `zk` `StatefulSet` are deployed on different nodes.
```shell
```
kubernetes-node-cxpk
kubernetes-node-a5aq
kubernetes-node-2g2d
@ -854,7 +870,7 @@ This is because the Pods in the `zk` `StatefulSet` have a `PodAntiAffinity` spec
```
The `requiredDuringSchedulingIgnoredDuringExecution` field tells the
Kubernetes Scheduler that it should never co-locate two Pods which have the `app` label
set to `zk` in the domain defined by the `topologyKey`. The `topologyKey`
`kubernetes.io/hostname` indicates that the domain is an individual node. Using
different rules, labels, and selectors, you can extend this technique to spread
@ -891,7 +907,7 @@ kubectl get pdb zk-pdb
The `max-unavailable` field indicates to Kubernetes that at most one Pod from
`zk` `StatefulSet` can be unavailable at any time.
```shell
```
NAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE
zk-pdb N/A 1 1
```
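For reference, a budget of this shape produces that output; this is a sketch assuming the manifest applied earlier in this tutorial, so treat the exact API version and fields as illustrative:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk              # matches the Pods of the zk StatefulSet
  maxUnavailable: 1        # at most one Pod may be down voluntarily
```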
@ -906,7 +922,9 @@ In another terminal, use this command to get the nodes that the Pods are current
```shell
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
```
```
kubernetes-node-pb41
kubernetes-node-ixsl
kubernetes-node-i4c4
@ -917,6 +935,9 @@ drain the node on which the `zk-0` Pod is scheduled.
```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```
```
node "kubernetes-node-pb41" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
@ -927,7 +948,7 @@ node "kubernetes-node-pb41" drained
As there are four nodes in your cluster, `kubectl drain` succeeds and the
`zk-0` Pod is rescheduled to another node.
```shell
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
zk-1 1/1 Running 0 1h
@ -949,7 +970,9 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
```shell
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
```
```
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
node "kubernetes-node-ixsl" drained
@ -959,7 +982,9 @@ The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `Po
```shell
kubectl get pods -w -l app=zk
```
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
zk-1 1/1 Running 0 1h
@ -987,6 +1012,8 @@ Continue to watch the Pods of the stateful set, and drain the node on which
```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```
```
node "kubernetes-node-i4c4" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
@ -1007,7 +1034,7 @@ kubectl exec zk-0 zkCli.sh get /hello
The service is still available because its `PodDisruptionBudget` is respected.
```shell
```
WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x200000002
@ -1027,7 +1054,8 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc
```shell
kubectl uncordon kubernetes-node-pb41
```
```
node "kubernetes-node-pb41" uncordoned
```
@ -1035,7 +1063,8 @@ node "kubernetes-node-pb41" uncordoned
```shell
kubectl get pods -w -l app=zk
```
```
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
zk-1 1/1 Running 0 1h
@ -1102,6 +1131,3 @@ You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure
used in this tutorial. Follow the necessary steps, based on your environment,
storage configuration, and provisioning method, to ensure that all storage is
reclaimed.

View File

@ -99,7 +99,7 @@ you must use the fully qualified domain name (FQDN).
Most Kubernetes resources (e.g. pods, services, replication controllers, and others) live
in some namespace. However, the resources that represent
namespaces themselves are not in any namespace.
Similarly, low-level resources, such as the nodes [nodes](/docs/admin/node) and
Similarly, low-level resources, such as the [nodes](/docs/admin/node) and
persistent volumes, are not in any namespace.
To check which Kubernetes resources are and are not in a namespace:

View File

@ -40,7 +40,7 @@ Note: not all distributions are actively maintained. Choose
* The [Certificates](/docs/concepts/cluster-administration/certificates/) section describes the steps to generate certificates using different tool chains.
* The [Container environment in Kubernetes](/docs/concepts/containers/container-environment-variables/) page describes the environment of containers managed by the kubelet on a Kubernetes node.
* The [Container environment in Kubernetes](/docs/concepts/containers/container-environment/) page describes the environment of containers managed by the kubelet on a Kubernetes node.
* [Controlling access to the Kubernetes API](/docs/reference/access-authn-authz/controlling-access/) explains how to configure permissions for users and service accounts.
@ -64,4 +64,3 @@ Note: not all distributions are actively maintained. Choose
* [DNS integration](/docs/concepts/services-networking/dns-pod-service/) describes how to resolve a DNS name directly to a Kubernetes service.
* [Logging and monitoring cluster activity](/docs/concepts/cluster-administration/logging/) explains how event logging in Kubernetes works and how it is implemented.

View File

@ -1,6 +1,6 @@
---
title: Container environment variables
description: Environment variables for Kubernetes containers
title: The container environment
description: The Kubernetes container environment
content_type: concept
weight: 20
---

View File

@ -0,0 +1,234 @@
---
title: Operating etcd clusters for Kubernetes
content_type: task
---
<!-- overview -->
{{< glossary_definition term_id="etcd" length="all" prepend="etcd is ">}}
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Prerequisites
* Run etcd as a cluster of odd members.
* etcd is a leader-based distributed system. Ensure that the leader periodically sends heartbeats on time to all followers to keep the cluster stable.
* Ensure that no resource starvation occurs.
Performance and stability of the cluster is sensitive to network and disk I/O. Any resource starvation can lead to heartbeat timeouts, causing instability of the cluster. An unstable etcd indicates that no leader is elected. Under such circumstances, a cluster cannot make any changes to its current state, which implies no new Pods can be scheduled.
* Keeping etcd clusters stable is critical to the stability of Kubernetes clusters. Therefore, run etcd clusters on dedicated machines or isolated environments for [guaranteed resource requirements](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#hardware-recommendations).
* The minimum recommended version of etcd to run in production is `3.2.10+`.
## Resource requirements
Operating etcd with limited resources is suitable only for testing purposes. For deploying in production, advanced hardware configuration is required. Before deploying etcd in production, see the [resource requirement reference documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations).
## Starting etcd clusters
This section covers starting single-node and multi-node etcd clusters.
### Single-node etcd cluster
Use a single-node etcd cluster only for testing purposes.
1. Run the following:
```sh
./etcd --listen-client-urls=http://$PRIVATE_IP:2379 --advertise-client-urls=http://$PRIVATE_IP:2379
```
2. Start the Kubernetes API server with the flag `--etcd-servers=$PRIVATE_IP:2379`.
Replace `PRIVATE_IP` with your etcd client IP.
### Multi-node etcd cluster
For durability and high availability, run etcd as a multi-node cluster in production and back it up periodically. A five-member cluster is recommended in production. For more information, see the [FAQ documentation](https://github.com/coreos/etcd/blob/master/Documentation/faq.md#what-is-failure-tolerance).
Configure an etcd cluster either by static member information or by dynamic discovery. For more information on clustering, see the [etcd clustering documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/clustering.md).
For example, consider a five-member etcd cluster running with the following client URLs: `http://$IP1:2379`, `http://$IP2:2379`, `http://$IP3:2379`, `http://$IP4:2379`, and `http://$IP5:2379`. To start a Kubernetes API server:
1. Run the following:
```sh
./etcd --listen-client-urls=http://$IP1:2379, http://$IP2:2379, http://$IP3:2379, http://$IP4:2379, http://$IP5:2379 --advertise-client-urls=http://$IP1:2379, http://$IP2:2379, http://$IP3:2379, http://$IP4:2379, http://$IP5:2379
```
2. Start the Kubernetes API server with the flag `--etcd-servers=$IP1:2379, $IP2:2379, $IP3:2379, $IP4:2379, $IP5:2379`.
Replace `IP` with your client IP addresses.
### Multi-node etcd cluster with load balancer
To run load balancing for an etcd cluster:
1. Set up an etcd cluster.
2. Configure a load balancer in front of the etcd cluster.
For example, let the address of the load balancer be `$LB`.
3. Start the Kubernetes API server with the flag `--etcd-servers=$LB:2379`.
## Securing etcd clusters
Access to etcd is equivalent to root permission in the cluster, so ideally only the API server should have access to it. Considering the sensitivity of the data, it is recommended to grant permission only to the nodes that require access to etcd clusters.
To secure etcd, either set up firewall rules or use the security features provided by etcd. etcd security features depend on the x509 Public Key Infrastructure (PKI). To begin, establish secure communication channels by generating a key and certificate pair. For example, use the key pair `peer.key` and `peer.cert` for securing communication between etcd members, and `client.key` and `client.cert` for securing communication between etcd and its clients. See the [example scripts](https://github.com/coreos/etcd/tree/master/hack/tls-setup) provided by the etcd project to generate key pairs and CA files for client authentication.
### Securing communication
To configure etcd with secure peer communication, specify the flags `--peer-key-file=peer.key` and `--peer-cert-file=peer.cert`, and use https as the URL scheme.
Similarly, to configure etcd with secure client communication, specify the flags `--key-file=k8sclient.key` and `--cert-file=k8sclient.cert`, and use https as the URL scheme.
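Taken together, a minimal sketch of a member started with both kinds of flags might look like the following; the member name and `$PRIVATE_IP` are illustrative assumptions:

```sh
# Sketch only: certificate files as named above, other flags omitted.
./etcd --name member1 \
  --peer-key-file=peer.key --peer-cert-file=peer.cert \
  --key-file=k8sclient.key --cert-file=k8sclient.cert \
  --listen-client-urls=https://$PRIVATE_IP:2379 \
  --advertise-client-urls=https://$PRIVATE_IP:2379
```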
### Limiting access of etcd clusters
After configuring secure communication, restrict the access of the etcd cluster to only the Kubernetes API server. Use TLS authentication to do so.
For example, consider key pairs `k8sclient.key` and `k8sclient.cert` that are trusted by the CA `etcd.ca`. When etcd is configured with `--client-cert-auth` along with TLS, it verifies certificates from clients by using the system CAs or the CA passed in by the `--trusted-ca-file` flag. Specifying the flags `--client-cert-auth=true` and `--trusted-ca-file=etcd.ca` will restrict access to clients with the certificate `k8sclient.cert`.
Once etcd is configured correctly, only clients with valid certificates can access it. To give the Kubernetes API server access, configure it with the flags `--etcd-certfile=k8sclient.cert`, `--etcd-keyfile=k8sclient.key` and `--etcd-cafile=ca.cert`.
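A minimal sketch of that API server configuration, assuming the files named above; all other kube-apiserver flags are omitted:

```sh
# Sketch only: connects the API server to a TLS-secured etcd.
kube-apiserver \
  --etcd-servers=https://$PRIVATE_IP:2379 \
  --etcd-certfile=k8sclient.cert \
  --etcd-keyfile=k8sclient.key \
  --etcd-cafile=ca.cert
```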
{{< note >}}
etcd authentication is not currently supported by Kubernetes. For more information, see the related issue [Support Basic Auth for Etcd v2](https://github.com/kubernetes/kubernetes/issues/23398).
{{< /note >}}
## Replacing a failed etcd member
etcd clusters achieve high availability by tolerating minor member failures. However, to improve the overall health of the cluster, replace failed members immediately. When multiple members fail, replace them one by one. Replacing a failed member involves two steps: removing the failed member and adding a new member.
Though etcd keeps unique member IDs internally, it is recommended to use a unique name for each member to avoid human errors. For example, consider a three-member etcd cluster. Let the URLs be member1=http://10.0.0.1, member2=http://10.0.0.2, and member3=http://10.0.0.3. When member1 fails, replace it with member4=http://10.0.0.4.
1. Get the member ID of the failed member1:
`etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list`
The following message is displayed:
8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379
91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379
2. Remove the failed member:
`etcdctl member remove 8211f1d0f64f3269`
The following message is displayed:
Removed member 8211f1d0f64f3269 from cluster
3. Add the new member:
`./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380`
The following message is displayed:
Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
4. Start the newly added member on a machine with the IP `10.0.0.4`:
export ETCD_NAME="member4"
export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
export ETCD_INITIAL_CLUSTER_STATE=existing
etcd [flags]
5. Do either of the following:
1. Update the `--etcd-servers` flag to make Kubernetes aware of the configuration change, then restart the Kubernetes API server.
2. Update the load balancer configuration if a load balancer is used in the deployment.
For more information on cluster reconfiguration, see the [etcd reconfiguration documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member).
## Backing up an etcd cluster
All Kubernetes objects are stored in etcd. Periodically backing up the etcd cluster data is important to recover Kubernetes clusters under disaster scenarios, such as losing all control plane nodes. The snapshot file contains all the Kubernetes state and critical information. In order to keep the sensitive Kubernetes data safe, encrypt the snapshot files.
Backing up an etcd cluster can be accomplished in two ways: etcd built-in snapshot and volume snapshot.
### Built-in snapshot
etcd supports built-in snapshots, so backing up an etcd cluster is easy. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir) that is not currently in use by an etcd process. Taking the snapshot will normally not affect the performance of the member.
Below is an example for taking a snapshot of the keyspace served by `$ENDPOINT` to the file `snapshotdb`:
```sh
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
# exit 0
# verify the snapshot
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
+----------+----------+------------+------------+
| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 | 10 | 7 | 2.1 MB |
+----------+----------+------------+------------+
```
### Volume snapshot
If etcd is running on a storage volume that supports backup, such as Amazon Elastic Block Store, back up etcd data by taking a snapshot of the storage volume.
## Scaling up etcd clusters
Scaling up etcd clusters increases availability by trading off performance. Scaling does not increase cluster performance nor capability. A general rule is not to scale up or down etcd clusters. Do not configure any auto scaling groups for etcd clusters. It is highly recommended to always run a static five-member etcd cluster for production Kubernetes clusters at any officially supported scale.
A reasonable scaling is to upgrade a three-member cluster to a five-member one, when more reliability is desired. See the [etcd reconfiguration documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member) for information on how to add members into an existing cluster.
## Restoring an etcd cluster
etcd supports restoring from snapshots that are taken from an etcd process of the [major.minor](http://semver.org/) version. Restoring from a different patch version of etcd is also supported. A restore operation is employed to recover the data of a failed cluster.
Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md#--data-dir). For more information and examples on restoring a cluster from a snapshot file, see the [etcd disaster recovery documentation](https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster).
If the access URLs of the restored cluster are changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart the Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of the etcd cluster, you might need to update the load balancer instead.
If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled Pods might continue to run, no new Pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure the Kubernetes API server to fix the issue.
## Upgrading and rolling back etcd clusters
As of Kubernetes v1.13.0, etcd2 is no longer supported as a storage backend for new or existing Kubernetes clusters. The timeline for Kubernetes support of etcd2 and etcd3 is as follows:
- Kubernetes v1.0: etcd2 only
- Kubernetes v1.5.1: etcd3 support added, new clusters still default to etcd2
- Kubernetes v1.6.0: new clusters created with `kube-up.sh` default to etcd3,
and `kube-apiserver` defaults to etcd3
- Kubernetes v1.9.0: deprecation of the etcd2 storage backend announced
- Kubernetes v1.13.0: the etcd2 storage backend is removed, `kube-apiserver` will
refuse to start with `--storage-backend=etcd2`, with the message
`etcd2 is no longer a supported storage backend`
Before upgrading a v1.12.x kube-apiserver using `--storage-backend=etcd2` to
v1.13.x, etcd v2 data must be migrated to the v3 storage backend and
kube-apiserver invocations must be changed to use `--storage-backend=etcd3`.
The process for migrating from etcd2 to etcd3 is highly dependent on how the
etcd cluster was deployed and configured, as well as how the Kubernetes cluster was deployed and configured. We recommend that you consult your cluster provider's documentation to see if there is a predefined solution.
If your cluster was created via `kube-up.sh` and is still using etcd2 as its storage backend, please consult the [Kubernetes v1.12 etcd cluster upgrade docs](https://v1-12.docs.kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#upgrading-and-rolling-back-etcd-clusters)
## Known issue: etcd client balancer with secure endpoints
The etcd v3 client, released in etcd v3.3.13 or earlier, has a [critical bug](https://github.com/kubernetes/kubernetes/issues/72102) which affects kube-apiserver and HA deployments. The etcd client balancer failover does not work well with secure endpoints. As a result, etcd servers may fail or disconnect briefly from the kube-apiserver. This affects kube-apiserver HA deployments.
The fix was made in [etcd v3.4](https://github.com/etcd-io/etcd/pull/10911) (and backported to v3.3.14 or later): the new client now creates its own credential bundle to correctly set authority targets in the dial function.
Because the fix requires a gRPC dependency upgrade (to v1.23.0), downstream Kubernetes [did not backport etcd upgrades](https://github.com/kubernetes/kubernetes/issues/72102#issuecomment-526645978), which means the [etcd fix in kube-apiserver](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab) is only available starting with Kubernetes 1.16.
To urgently fix this bug for Kubernetes 1.15 or earlier, build a custom kube-apiserver. You can make local changes to [`vendor/google.golang.org/grpc/credentials/credentials.go`](https://github.com/kubernetes/kubernetes/blob/7b85be021cd2943167cd3d6b7020f44735d9d90b/vendor/google.golang.org/grpc/credentials/credentials.go#L135) with [etcd@db61ee106](https://github.com/etcd-io/etcd/pull/10911/commits/db61ee106ca9363ba3f188ecf27d1a8843da33ab).
See ["kube-apiserver 1.13.x refuses to work when the first etcd server is not available"](https://github.com/kubernetes/kubernetes/issues/72102).

View File

@ -0,0 +1,262 @@
---
title: Customizing DNS Service
content_type: task
min-kubernetes-server-version: v1.12
---
<!-- overview -->
This page explains how to configure your DNS
{{< glossary_tooltip text="Pod" term_id="pod" >}} and customize the
DNS resolution process in your cluster.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
Your cluster must be running the CoreDNS add-on.
[Migrating to CoreDNS](/docs/tasks/administer-cluster/coredns/#migrasi-ke-coredns)
explains how to use `kubeadm` to migrate from `kube-dns`.
{{% version-check %}}
<!-- steps -->
## Introduction
DNS is a built-in Kubernetes service launched automatically
using the addon manager
[cluster add-on](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/README.md).
As of Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. If your cluster
originally used kube-dns, you may still have `kube-dns` deployed rather than CoreDNS.
{{< note >}}
Both the CoreDNS and kube-dns Service are named `kube-dns` in the `metadata.name` field.
This is so that there is greater interoperability with workloads that relied on the legacy `kube-dns` Service name to resolve addresses internal to the cluster. Using a Service named `kube-dns` abstracts away the implementation detail of which DNS provider is running behind that common name.
{{< /note >}}
If you are running CoreDNS as a Deployment, it will typically be exposed as a Kubernetes Service with a static IP address.
The kubelet passes DNS resolver information to each container with the `--cluster-dns=<dns-service-ip>` argument.
DNS names also need domains. You configure the local domain in the kubelet
with the argument `--cluster-domain=<default-local-domain>`.
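A minimal sketch of how those two arguments fit together on a kubelet command line; the Service IP shown is an assumption, not a value from this page:

```shell
# Sketch only: other kubelet flags omitted.
kubelet \
  --cluster-dns=10.96.0.10 \
  --cluster-domain=cluster.local
```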
The DNS server supports forward lookups (A and AAAA records), port lookups (SRV records), reverse IP address lookups (PTR records),
and more. For more information, see [DNS for Services and Pods](/id/docs/concepts/services-networking/dns-pod-service/).
If a Pod's `dnsPolicy` is set to `default`, it inherits the name resolution
configuration from the node that the Pod runs on. The Pod's DNS resolution
should behave the same as on that node.
But see [Known issues](/docs/tasks/debug-application-cluster/dns-debugging-resolution/#known-issues).
If you don't want this, or if you want a different DNS configuration for Pods, you can
use the kubelet's `--resolv-conf` argument. Set this argument to "" to prevent Pods from
inheriting the DNS configuration. Set it to a valid file path to use a file other than
`/etc/resolv.conf` for DNS inheritance.
## CoreDNS
CoreDNS is a general-purpose authoritative DNS server that can serve as cluster DNS, complying with the [dns specifications](https://github.com/kubernetes/dns/blob/master/docs/specification.md).
### CoreDNS ConfigMap options
CoreDNS is a DNS server that is modular and pluggable, and each plugin can add new functionality to CoreDNS.
This can be configured by maintaining a [Corefile](https://coredns.io/2017/07/23/corefile-explained/), which is the
CoreDNS configuration file. As a cluster administrator, you can modify the
{{< glossary_tooltip text="ConfigMap" term_id="configmap" >}} for the CoreDNS Corefile to change how DNS
service discovery behaves for that cluster.
In Kubernetes, CoreDNS is installed with the following default Corefile configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
```
The Corefile configuration includes the following [plugins](https://coredns.io/plugins/) of CoreDNS:
* [errors](https://coredns.io/plugins/errors/): Errors are logged to standard output (stdout).
* [health](https://coredns.io/plugins/health/): Health of CoreDNS is reported to `http://localhost:8080/health`. In this extended syntax `lameduck` will make the process unhealthy, then wait for 5 seconds before the process is shut down.
* [ready](https://coredns.io/plugins/ready/): An HTTP endpoint on port 8181 will return 200 OK, when all plugins that are able to signal readiness have done so.
* [kubernetes](https://coredns.io/plugins/kubernetes/): CoreDNS will reply to DNS queries based on the IPs of Kubernetes Services and Pods. You can find [more details](https://coredns.io/plugins/kubernetes/) about that plugin on the CoreDNS website. `ttl` allows you to set a custom TTL for responses to DNS queries. The default is 5 seconds. The minimum TTL allowed is 0 seconds, and the maximum is capped at 3600 seconds. Setting the TTL to 0 will prevent records from being cached.
The `pods insecure` option is provided for backward compatibility with the earlier _kube-dns_ Service. You can use the `pods verified` option, which returns an A record only if there exists a Pod in the same Namespace with a matching IP address. The `pods disabled` option can be used if you don't use Pod records.
* [prometheus](https://coredns.io/plugins/metrics/): Metrics of CoreDNS are available at `http://localhost:9153/metrics` in the [Prometheus](https://prometheus.io/) format (also known as OpenMetrics).
* [forward](https://coredns.io/plugins/forward/): Any queries that are not within the Kubernetes cluster domain are forwarded to the resolvers defined in the file (/etc/resolv.conf).
* [cache](https://coredns.io/plugins/cache/): This enables a frontend cache.
* [loop](https://coredns.io/plugins/loop/): Detects simple forwarding loops and halts the CoreDNS process if a loop is found.
* [reload](https://coredns.io/plugins/reload): Allows automatic reload of a changed Corefile. After you edit the ConfigMap configuration, allow about two minutes for your changes to take effect.
* [loadbalance](https://coredns.io/plugins/loadbalance): This is a round-robin DNS load balancer that randomizes the order of A, AAAA, and MX records in each response.
You can modify the default CoreDNS behavior by modifying the ConfigMap.
### Configuration of stub-domain and upstream nameserver using CoreDNS
CoreDNS has the ability to configure stub domains and upstream nameservers using the [forward plugin](https://coredns.io/plugins/forward/).
#### Example
If a cluster operator has a [Consul](https://www.consul.io/) domain server located at 10.150.0.1, and all Consul names have the suffix .consul.local, then to configure it in CoreDNS, the cluster administrator creates the following stanza in the CoreDNS ConfigMap.
```
consul.local:53 {
errors
cache 30
forward . 10.150.0.1
}
```
To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point `forward` to the nameserver instead of `/etc/resolv.conf`:
```
forward . 172.16.0.1
```
The final ConfigMap, together with the default `Corefile` configuration, looks like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 172.16.0.1
cache 30
loop
reload
loadbalance
}
consul.local:53 {
errors
cache 30
forward . 10.150.0.1
}
```
The `kubeadm` tool supports automatic translation from the kube-dns ConfigMap
to the equivalent CoreDNS ConfigMap.
{{< note >}}
While kube-dns accepts an FQDN for stub-domains and nameservers (e.g. ns.foo.com), CoreDNS does not support this feature yet.
During translation, all FQDN nameservers are omitted from the CoreDNS configuration.
{{< /note >}}
## CoreDNS configuration equivalent to kube-dns
CoreDNS supports the features of kube-dns and more.
A ConfigMap created for kube-dns to support `StubDomains` and `upstreamNameservers` translates to the `forward` plugin in CoreDNS.
Similarly, the `Federations` plugin in kube-dns translates to the `federation` plugin in CoreDNS.
### Example
This example ConfigMap for kube-dns specifies federations, stub-domains, and upstream nameservers:
```yaml
apiVersion: v1
data:
federations: |
{"foo" : "foo.feddomain.com"}
stubDomains: |
{"abc.com" : ["1.2.3.4"], "my.cluster.local" : ["2.3.4.5"]}
upstreamNameservers: |
["8.8.8.8", "8.8.4.4"]
kind: ConfigMap
```
For the equivalent configuration in CoreDNS, create the following Corefile:
* For federations:
```
federation cluster.local {
foo foo.feddomain.com
}
```
* For stubDomains:
```
abc.com:53 {
errors
cache 30
forward . 1.2.3.4
}
my.cluster.local:53 {
errors
cache 30
forward . 2.3.4.5
}
```
The complete Corefile with the default plugins:
```
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
federation cluster.local {
foo foo.feddomain.com
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
}
abc.com:53 {
errors
cache 30
forward . 1.2.3.4
}
my.cluster.local:53 {
errors
cache 30
forward . 2.3.4.5
}
```
## Migration to CoreDNS
To migrate from kube-dns to CoreDNS, a detailed
[blog post](https://coredns.io/2018/05/21/migration-from-kube-dns-to-coredns/)
is available to help users adapt CoreDNS in place of kube-dns.
You can also migrate using the official CoreDNS
[deploy script](https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh).
## {{% heading "whatsnext" %}}
- Read [Debugging DNS Resolution](/docs/tasks/administer-cluster/dns-debugging-resolution/)

@ -0,0 +1,6 @@
---
title: "Pemantauan, Pencatatan, and Debugging"
description: Mengatur pemantauan dan pencatatan untuk memecahkan masalah klaster, atau men-_debug_ aplikasi yang terkontainerisasi.
weight: 80
---

@ -0,0 +1,57 @@
---
content_type: concept
title: Tools for Monitoring Resources
---
<!-- overview -->
To scale an application and provide a reliable service, you need to
understand how the application behaves when it is deployed. You can examine
application performance in a Kubernetes cluster by examining the Containers,
[Pods](/docs/user-guide/pods), [Services](/docs/user-guide/services), and
the characteristics of the overall cluster. Kubernetes provides detailed
information about an application's resource usage at each of these levels.
This information allows you to evaluate your application's performance and
where bottlenecks can be removed to improve overall performance.
<!-- body -->
In Kubernetes, application monitoring does not depend on a single monitoring solution. On new clusters, you can use a [resource metrics](#resource-metrics-pipeline) or [full metrics](#full-metrics-pipeline) pipeline to collect monitoring statistics.
## Resource Metrics Pipeline
The resource metrics pipeline provides a limited set of metrics related to
cluster components such as the [HorizontalPodAutoscaler](/id/docs/tasks/run-application/horizontal-pod-autoscaler) controller, as well as the `kubectl top` utility.
These metrics are collected by the lightweight, short-term, in-memory
[metrics-server](https://github.com/kubernetes-incubator/metrics-server) and
are exposed via the `metrics.k8s.io` API.
The metrics-server discovers all Nodes on the cluster and
queries each Node's
[kubelet](/docs/reference/command-line-tools-reference/kubelet) for CPU and
memory usage. The kubelet acts as a bridge between the Kubernetes control plane and
the Nodes, managing the Pods and Containers running on a machine. The kubelet
translates each Pod into its constituent Containers and fetches individual
Container usage statistics from the Container runtime through the Container
runtime interface. The kubelet fetches this information from the integrated
cAdvisor for the legacy Docker integration. It then exposes the aggregated Pod
resource usage statistics through the metrics-server resource API.
This API is served at `/metrics/resource/v1beta1` on the kubelet's authenticated and
read-only ports.
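Once the metrics-server is running, you can query these metrics with `kubectl top`; a quick sketch (the namespace choice is illustrative):
```shell
# Show CPU and memory usage for every Node in the cluster
kubectl top nodes

# Show CPU and memory usage for the Pods in one namespace
kubectl top pods --namespace=kube-system
```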
## Full Metrics Pipeline
The full metrics pipeline gives you access to richer metrics. Kubernetes can
respond to these metrics by automatically scaling or adapting the cluster
based on its current state, using mechanisms such as the HorizontalPodAutoscaler.
The monitoring pipeline fetches metrics from the kubelet and
then exposes them to Kubernetes via an adapter by implementing either the
`custom.metrics.k8s.io` or `external.metrics.k8s.io` API.
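For instance, here is a sketch of a HorizontalPodAutoscaler that consumes a custom Pod metric served through `custom.metrics.k8s.io`; the metric name, target value, and workload name are illustrative assumptions, and an adapter must already expose the metric:
```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # Scale on an application metric exposed by a custom metrics adapter
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"
```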
[Prometheus](https://prometheus.io), a CNCF project, can natively monitor Kubernetes, Nodes, and Prometheus itself.
Full metrics pipeline projects that are not part of the CNCF are outside the scope of the Kubernetes documentation.

@ -0,0 +1,6 @@
---
title: "Mengelola Daemon Klaster"
description: Melakukan tugas-tugas umum untuk mengelola sebuah DaemonSet, misalnya _rolling update_.
weight: 130
---

@ -0,0 +1,140 @@
---
title: Perform a Rollback on a DaemonSet
content_type: task
weight: 20
min-kubernetes-server-version: 1.7
---
<!-- overview -->
This page shows how to perform a rollback on a {{< glossary_tooltip term_id="daemonset" >}}.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
Before proceeding, it is a good idea to know how
to [perform a rolling update on a DaemonSet](/docs/tasks/manage-daemon/update-daemon-set/).
<!-- steps -->
## Performing a rollback on a DaemonSet
### Step 1: Find the DaemonSet revision you want to roll back to
Skip this step if you only want to roll back to the last revision.
The following command lists all revisions of a DaemonSet:
```shell
kubectl rollout history daemonset <daemonset-name>
```
This command returns a list of DaemonSet revisions like this:
```
daemonsets "<daemonset-name>"
REVISION CHANGE-CAUSE
1 ...
2 ...
...
```
* The change cause column is copied from the DaemonSet annotation `kubernetes.io/change-cause` associated with each revision. You can pass the `--record=true` flag to `kubectl` so that the executed command is recorded in the change cause annotation, as sketched below.
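A quick sketch of recording a change cause when updating a DaemonSet (the manifest file name is an illustrative assumption):
```shell
# The applied command line is stored in the kubernetes.io/change-cause annotation
kubectl apply -f daemonset.yaml --record=true
```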
To see the details of a specific revision, run:
```shell
kubectl rollout history daemonset <daemonset-name> --revision=1
```
This returns the details of that revision:
```
daemonsets "<daemonset-name>" with revision #1
Pod Template:
Labels: foo=bar
Containers:
app:
Image: ...
Port: ...
Environment: ...
Mounts: ...
Volumes: ...
```
### Step 2: Roll back to a specific revision
```shell
# Specify the revision number you got from Step 1 in --to-revision
kubectl rollout undo daemonset <daemonset-name> --to-revision=<revision>
```
If it succeeds, the command returns:
```
daemonset "<daemonset-name>" rolled back
```
{{< note >}}
If the `--to-revision` flag is not specified, kubectl picks the most recent revision.
{{< /note >}}
### Step 3: Watch the progress of the DaemonSet rollback
`kubectl rollout undo daemonset` tells the server to start rolling back the DaemonSet.
The real rollback happens asynchronously inside the cluster {{< glossary_tooltip term_id="control-plane" text="control plane" >}}.
To watch the progress of the rollback, run:
```shell
kubectl rollout status ds/<daemonset-name>
```
When the rollback is complete, the output is similar to:
```
daemonset "<daemonset-name>" successfully rolled out
```
<!-- discussion -->
## Memahami revisi DaemonSet
Pada langkah `kubectl rollout history` sebelumnya, kamu telah mendapatkan
daftar revisi DaemonSet. Setiap revisi disimpan di dalam sumber daya bernama ControllerRevision.
Untuk melihat apa yang disimpan pada setiap revisi, dapatkan sumber daya mentah (_raw_) dari
revisi DaemonSet:
```shell
kubectl get controllerrevision -l <kunci-selektor-daemonset>=<nilai-selektor-daemonset>
```
This command returns a list of ControllerRevisions:
```
NAME                               CONTROLLER                   REVISION   AGE
<daemonset-name>-<revision-hash>   DaemonSet/<daemonset-name>   1          1h
<daemonset-name>-<revision-hash>   DaemonSet/<daemonset-name>   2          1h
```
Each ControllerRevision stores the annotations and template of a DaemonSet revision.
`kubectl rollout undo` takes a specific ControllerRevision and replaces the
DaemonSet template with the template stored in that ControllerRevision.
`kubectl rollout undo` is equivalent to updating the DaemonSet template to a
previous revision through other commands, such as `kubectl edit` or `kubectl apply`.
{{< note >}}
DaemonSet revisions only roll forward. That is to say, after a rollback completes,
the revision number (the `.revision` field) of the ControllerRevision being rolled back to is bumped.
For example, if you have revisions 1 and 2 in the system and roll back from revision 2 to revision 1,
the ControllerRevision with `.revision: 1` becomes `.revision: 3`.
{{< /note >}}
## Troubleshooting
* See how to [troubleshoot a DaemonSet rolling update](/docs/tasks/manage-daemon/update-daemon-set/#troubleshooting).

@ -0,0 +1,191 @@
---
title: ConfigMap
content_type: concept
weight: 20
---
<!-- overview -->
{{< glossary_definition term_id="configmap" length="all" >}}
{{< caution >}}
ConfigMap does not provide secrecy or encryption. If the data you want to store is confidential, use a {{< glossary_tooltip text="Secret" term_id="secret" >}} rather than a ConfigMap, or use additional (third party) tools to keep your data private.
{{< /caution >}}
<!-- body -->
## Motivation
Use a ConfigMap to set configuration data separately from application code.
For example, imagine that you are developing an application that you can run on your own computer (for development) and in the cloud (to handle real traffic). You write code that looks in an environment variable named `DATABASE_HOST`. Locally, you set that variable to `localhost`. In the cloud, you set it to refer to a Kubernetes {{< glossary_tooltip text="Service" term_id="service" >}} that exposes the database component to your cluster.
This lets you fetch a container image running in the cloud and debug the exact same code locally if needed.
## ConfigMap object
A ConfigMap is an API [object](/ja/docs/concepts/overview/working-with-objects/kubernetes-objects/) that lets you store configuration for other objects to use. Unlike most Kubernetes objects that have a `spec`, a ConfigMap has a `data` section to store items (keys) and their values.
The name of a ConfigMap must be a valid [DNS subdomain name](/ja/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
## ConfigMaps and Pods
You can write a Pod `spec` that refers to a ConfigMap and configures the container(s) in that Pod based on the data in the ConfigMap. The Pod and the ConfigMap must be in the same {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
Here's an example ConfigMap that has some keys with single values, and other keys where the value looks like a fragment of a configuration format:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  #
  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
```
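As a quick sketch, you could also create a similar ConfigMap imperatively with `kubectl`; the local property file names here are illustrative assumptions:
```shell
# Literal values become property-like keys; files become file-like keys
kubectl create configmap game-demo \
  --from-literal=player_initial_lives=3 \
  --from-literal=ui_properties_file_name=user-interface.properties \
  --from-file=game.properties \
  --from-file=user-interface.properties
```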
There are four different ways that you can use a ConfigMap to configure a container inside a Pod:
1. Pass command line arguments to the entrypoint of a container
1. Pass environment variables to a container
1. Add a file in a read-only volume, for the application to read
1. Write code to run inside the Pod that uses the Kubernetes API to read a ConfigMap
These different methods lend themselves to different ways of modeling the data being consumed. For the first three methods, the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} uses the data from the ConfigMap when it launches the container(s) for a Pod.
The fourth method means you have to write code to read the ConfigMap and its data. However, because you're using the Kubernetes API directly, your application can subscribe to get updates whenever the ConfigMap changes, and react when that happens. By accessing the Kubernetes API directly, this technique also lets you access a ConfigMap in a different namespace.
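As a rough illustration of that fourth approach, you can observe the same update stream from outside a Pod with `kubectl` (the ConfigMap name is taken from the example above):
```shell
# Print the ConfigMap, then keep streaming it again on every change
kubectl get configmap game-demo --watch --output yaml
```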
Here's an example Pod that uses values from `game-demo` to configure the Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: game.example/demo-game
      env:
        # Define the environment variables
        - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                                     # from the key name in the ConfigMap.
          valueFrom:
            configMapKeyRef:
              name: game-demo           # The ConfigMap this value comes from.
              key: player_initial_lives # The key to fetch.
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name
      volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
  volumes:
    # Set volumes at the Pod level, then mount them into containers inside that Pod.
    - name: config
      configMap:
        # Provide the name of the ConfigMap you want to mount.
        name: game-demo
        # An array of keys from the ConfigMap to create as files
        items:
          - key: "game.properties"
            path: "game.properties"
          - key: "user-interface.properties"
            path: "user-interface.properties"
```
A ConfigMap doesn't differentiate between single line property values and multi-line file-like values. What matters is how Pods and other objects consume those values.
For this example, defining a volume and mounting it inside the `demo` container as `/config` creates only two files, `/config/game.properties` and `/config/user-interface.properties`, even though there are four keys in the ConfigMap.
This is because the Pod definition specifies an `items` array in the `volumes` section. If you omit the `items` array entirely, every key in the ConfigMap becomes a file with the same name as the key, and you get 4 files.
## Using ConfigMaps
ConfigMaps can be mounted as data volumes. ConfigMaps can also be used by other parts of the system, without being directly exposed to the Pod. For example, ConfigMaps can hold data that other parts of the system should use for configuration.
{{< note >}}
The most common way to use ConfigMaps is to configure settings for containers running in a Pod in the same namespace. You can also use a ConfigMap separately.
For example, you might encounter {{< glossary_tooltip text="addons" term_id="addons" >}} or {{< glossary_tooltip text="operators" term_id="operator-pattern" >}} that adjust their behavior based on a ConfigMap.
{{< /note >}}
### Using ConfigMaps as files from a Pod
To consume a ConfigMap in a volume in a Pod:
1. Create a ConfigMap or use an existing one. Multiple Pods can reference the same ConfigMap.
1. Modify your Pod definition to add a volume under `.spec.volumes[]`. Name the volume anything, and set the `.spec.volumes[].configMap.name` field to reference your ConfigMap object.
1. Add a `.spec.containers[].volumeMounts[]` to each container that needs the ConfigMap. Specify `.spec.containers[].volumeMounts[].readOnly = true` and set `.spec.containers[].volumeMounts[].mountPath` to an unused directory name where you would like the ConfigMap to appear.
1. Modify your image or command line so that the program looks for files in that directory. Each key in the ConfigMap `data` map becomes a file name under `mountPath`.
Here is an example of a Pod that mounts a ConfigMap in a volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
configMap:
name: myconfigmap
```
Each ConfigMap you want to use needs to be referred to in `.spec.volumes`.
If there are multiple containers in the Pod, then each container needs its own `volumeMounts` block, but only one `.spec.volumes` entry is needed per ConfigMap.
#### Mounted ConfigMaps are updated automatically
When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the `ConfigMapAndSecretChangeDetectionStrategy` field in the [KubeletConfiguration struct](https://github.com/kubernetes/kubernetes/blob/{{< param "docsbranch" >}}/staging/src/k8s.io/kubelet/config/v1beta1/types.go). A ConfigMap can be propagated either by a watch (the default), by ttl-based polling, or by redirecting all requests directly to the API server. As a result, the total delay from the moment the ConfigMap is updated to the moment new keys are projected into the Pod can be as long as the kubelet sync period plus the cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals the watch propagation delay, the ttl of the cache, or zero, respectively).
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
The Kubernetes alpha feature _Immutable Secrets and ConfigMaps_ provides an option to set individual Secrets and ConfigMaps as immutable. For clusters that extensively use ConfigMaps (at least tens of thousands of ConfigMaps mounted into Pods), preventing changes to their data has the following advantages:
- protects you from accidental (or unwanted) updates that could cause application outages
- improves the performance of your cluster by significantly reducing the load on kube-apiserver, by closing watches for ConfigMaps marked as immutable
To use this feature, enable the `ImmutableEphemeralVolumes` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) and set your Secret or ConfigMap `immutable` field to `true`. For example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
...
data:
...
immutable: true
```
{{< note >}}
Once a ConfigMap or Secret is marked as immutable, it is *not* possible to revert this change nor to mutate the contents of the `data` field. Existing Pods maintain a mount point to the deleted ConfigMap, so it is recommended to recreate those Pods.
{{< /note >}}
## {{% heading "whatsnext" %}}
* Read about [Secrets](/docs/concepts/configuration/secret/).
* Read [Configure a Pod to Use a ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Read [The Twelve-Factor App](https://12factor.net/ja/) to understand the motivation for separating code from configuration.

@ -0,0 +1,144 @@
---
title: Service Topology
feature:
  title: Service Topology
  description: >
    Routing of Service traffic based upon cluster topology.
content_type: concept
weight: 10
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
*Service Topology* enables a Service to route traffic based upon the Node topology of the cluster. For example, a Service can specify that traffic be preferentially routed to endpoints that are on the same Node as the client, or in the same availability zone.
<!-- body -->
## Introduction
By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route "external" traffic to the Pods running on the Node that received the traffic, but this is not supported for `ClusterIP` Services, and more complex topologies, such as routing zonally, have not been possible. The *Service Topology* feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
Using Node label matching between the source and destination, the operator may designate groups of Nodes that are "closer" and "farther" from one another, using whatever metric makes sense for that operator's requirements. For many operators in public clouds, for example, there is a preference to keep service traffic within the same zone, because interzonal traffic has a cost associated with it, while intrazonal traffic does not. Other common needs include being able to route traffic to a local Pod managed by a DaemonSet, or keeping traffic to Nodes connected to the same top-of-rack switch for the lowest latency.
## Using Service Topology
If your cluster has Service Topology enabled, you can control Service traffic routing by specifying the `topologyKeys` field on the Service spec. This field is a preference-order list of Node labels which will be used to sort endpoints when accessing this Service. Traffic will be directed to a Node whose value for the first label matches the originating Node's value for that label. If there is no backend for the Service on a matching Node, then the second label will be considered, and so forth, until no labels remain.
If no match is found, the traffic will be rejected, just as if there were no backends for the Service at all. That is, endpoints are chosen based on the first topology key with available backends. If this field is specified and all entries have no backends that match the topology of the client, the connection fails as if there were no backends for that client at all. The special value `"*"` may be used to mean "any topology". This catch-all value, if used, only makes sense as the last value in the list.
If `topologyKeys` is not specified or empty, no topology constraints will be applied.
Consider a cluster with Nodes that are labeled with their hostname, zone name, and region name. Then you can set the `topologyKeys` values of a Service to direct traffic as follows.
* Only to endpoints on the same Node, failing if no endpoint exists on the Node: `["kubernetes.io/hostname"]`.
* Preferentially to endpoints on the same Node, falling back to endpoints in the same zone, followed by the same region, and failing otherwise: `["kubernetes.io/hostname", "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]`. This may be useful, for example, in cases where data locality is critical.
* Preferentially to the same zone, but fall back to any available endpoint if none are available within this zone: `["topology.kubernetes.io/zone", "*"]`.
## Constraints
* Service Topology is not compatible with `externalTrafficPolicy=Local`, so a Service cannot use both of these features. It is possible to use both features in the same cluster on different Services, just not on the same Service.
* Valid topology keys are currently limited to `kubernetes.io/hostname`, `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but they will be generalized to other Node labels in the future.
* Topology keys must be valid label keys, and at most 16 keys may be specified.
* The catch-all value, `"*"`, must be the last value in the topology keys, if it is used.
## Examples
The following are common examples of using the Service Topology feature.
### Only Node-local endpoints
A Service that only routes to Node-local endpoints. If no endpoints exist on the Node, traffic is dropped:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 9376
topologyKeys:
- "kubernetes.io/hostname"
```
### Prefer Node-local endpoints
A Service that prefers Node-local endpoints, but falls back to cluster-wide endpoints if Node-local endpoints do not exist:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 9376
topologyKeys:
- "kubernetes.io/hostname"
- "*"
```
### Only zonal or regional endpoints
A Service that prefers zonal, then regional endpoints. If no endpoints exist in either, traffic is dropped:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 9376
topologyKeys:
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
```
### Prefer Node-local, zonal, then regional endpoints
A Service that prefers Node-local, zonal, then regional endpoints, but falls back to cluster-wide endpoints:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- protocol: TCP
port: 80
targetPort: 9376
topologyKeys:
- "kubernetes.io/hostname"
- "topology.kubernetes.io/zone"
- "topology.kubernetes.io/region"
- "*"
```
## {{% heading "whatsnext" %}}
* Read about [enabling Service Topology](/docs/tasks/administer-cluster/enabling-service-topology).
* Read [Connecting Applications with Services](/ja/docs/concepts/services-networking/connect-applications-service/).

@ -0,0 +1,17 @@
---
title: ConfigMap
id: configmap
date: 2018-04-12
full_link: /ja/docs/concepts/configuration/configmap/
short_description: >
  An API object used to store non-confidential data in key-value pairs. Can be consumed as environment variables, command-line arguments, or configuration files in a volume.
aka:
tags:
- core-object
---
An API object used to store non-confidential data in key-value pairs. {{< glossary_tooltip text="Pods" term_id="pod" >}} can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a {{< glossary_tooltip text="volume" term_id="volume" >}}.
<!--more-->
A ConfigMap lets you decouple environment-specific configuration from your {{< glossary_tooltip text="container images" term_id="image" >}}, so that your applications are easily portable.

@ -0,0 +1,15 @@
---
title: Installing Kubernetes with Kind
weight: 40
content_type: concept
---
<!-- overview -->
Kind is a tool for running local Kubernetes clusters using Docker containers as nodes.
<!-- body -->
## Installation
See [Installing Kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
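Once installed, a minimal sketch of creating and tearing down a local cluster looks like this (the cluster name is an illustrative assumption):
```shell
# Create a local cluster whose nodes run as Docker containers
kind create cluster --name dev

# Delete it when you are done
kind delete cluster --name dev
```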

@ -0,0 +1,44 @@
---
title: Enabling Service Topology
content_type: task
---
<!-- overview -->
This page provides an overview of enabling Service Topology in Kubernetes.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
<!-- steps -->
## Introduction
*Service Topology* is a feature that enables a Service to route traffic based upon the Node topology of the cluster. For example, a Service can specify that traffic be preferentially routed to endpoints that are on the same Node as the client, or in the same availability zone.
## Prerequisites
The following prerequisites are needed in order to enable topology-aware Service routing:
* Kubernetes version 1.17 or later
* {{< glossary_tooltip text="Kube-proxy" term_id="kube-proxy" >}} running in iptables mode or IPVS mode
* [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/) enabled
## Enable Service Topology
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
To enable Service Topology, enable the `ServiceTopology` and `EndpointSlice` feature gates for all Kubernetes components:
```
--feature-gates="ServiceTopology=true,EndpointSlice=true"
```
## {{% heading "whatsnext" %}}
* Read about the [Service Topology](/ja/docs/concepts/services-networking/service-topology) concept
* Read about [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices)
* Read [Connecting Applications with Services](/ja/docs/concepts/services-networking/connect-applications-service/)

@ -0,0 +1,93 @@
---
title: Manage HugePages
content_type: task
description: Configure and manage huge pages as a schedulable resource in a cluster.
---
<!-- overview -->
{{< feature-state state="stable" >}}
Kubernetes supports the allocation and consumption of pre-allocated huge pages by applications in a Pod. This page describes how users can consume huge pages.
## {{% heading "prerequisites" %}}
1. Kubernetesのードがhuge pageのキャパシティを報告するためには、ード上でhuge pageを事前割り当てしておく必要があります。1つのードでは複数のサイズのhuge pageが事前割り当てできます。
ードは、すべてのhuge pageリソースを、スケジュール可能なリソースとして自動的に探索・報告してくれます。
<!-- steps -->
## API
Huge pages can be consumed via container-level resource requirements using the resource name `hugepages-<size>`, where `<size>` is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB and 1048576KiB page sizes, it exposes the two schedulable resources `hugepages-2Mi` and `hugepages-1Gi`. Unlike CPU or memory, huge pages do not support overcommit. Note that when requesting huge page resources, either memory or CPU resources must be requested as well.
A Pod may consume multiple huge page sizes in a single Pod spec. In this case it must use the `medium: HugePages-<hugepagesize>` notation for all volume mounts:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: huge-pages-example
spec:
containers:
- name: example
image: fedora:latest
command:
- sleep
- inf
volumeMounts:
- mountPath: /hugepages-2Mi
name: hugepage-2mi
- mountPath: /hugepages-1Gi
name: hugepage-1gi
resources:
limits:
hugepages-2Mi: 100Mi
hugepages-1Gi: 2Gi
memory: 100Mi
requests:
memory: 100Mi
volumes:
- name: hugepage-2mi
emptyDir:
medium: HugePages-2Mi
- name: hugepage-1gi
emptyDir:
medium: HugePages-1Gi
```
A Pod may use `medium: HugePages` only if it requests huge pages of one size:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: huge-pages-example
spec:
containers:
- name: example
image: fedora:latest
command:
- sleep
- inf
volumeMounts:
- mountPath: /hugepages
name: hugepage
resources:
limits:
hugepages-2Mi: 100Mi
memory: 100Mi
requests:
memory: 100Mi
volumes:
- name: hugepage
emptyDir:
medium: HugePages
```
- Huge page requests must equal the limits. This is the default if limits are specified, but requests are not.
- Huge pages are isolated at the container scope, so each container has its own limit on its cgroup sandbox as requested in the container spec.
- EmptyDir volumes backed by huge pages may not consume more huge page memory than the Pod requested.
- Applications that consume huge pages via `shmget()` with `SHM_HUGETLB` must run with a supplemental group that matches `/proc/sys/vm/hugetlb_shm_group`.
- Huge page usage in a namespace is controllable via ResourceQuota, similar to other compute resources like `cpu` or `memory`, using the `hugepages-<size>` token; see the sketch after this list.
- Support for multiple huge page sizes is feature gated. It can be enabled with the `HugePageStorageMediumSize` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) on the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} (`--feature-gates=HugePageStorageMediumSize=true`).
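A quick sketch of the ResourceQuota point above (the quota name and the quantity are illustrative assumptions):
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: hugepages-quota
spec:
  hard:
    # Cap the total amount of 2Mi huge pages requested in this namespace
    hugepages-2Mi: 100Mi
```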

@ -0,0 +1,6 @@
---
title: "TLS"
weight: 100
description: Understand how to protect traffic within your cluster using Transport Layer Security (TLS).
---

@ -0,0 +1,41 @@
---
title: Configure Certificate Rotation for the Kubelet
content_type: task
---
<!-- overview -->
This page shows how to enable and configure certificate rotation for the kubelet.
{{< feature-state for_k8s_version="v1.8" state="beta" >}}
## {{% heading "prerequisites" %}}
* Kubernetes version 1.8.0 or later is required
<!-- steps -->
## Overview
The kubelet uses certificates for authenticating to the Kubernetes API. By default, these certificates are issued with one year expiration so that they do not need to be renewed too frequently.
Kubernetes 1.8 contains [kubelet certificate rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/), a beta feature that will automatically generate a new key and request a new certificate from the Kubernetes API as the current certificate approaches expiration. Once the new certificate is available, it will be used for authenticating connections to the Kubernetes API.
## Enabling client certificate rotation
The `kubelet` process accepts an argument `--rotate-certificates` that controls whether the kubelet will automatically request a new certificate as the expiration of the certificate currently in use approaches. Since certificate rotation is a beta feature, the feature flag must also be enabled with `--feature-gates=RotateKubeletClientCertificate=true`.
The `kube-controller-manager` process accepts an argument `--experimental-cluster-signing-duration` that controls how long certificates will be issued for.
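Putting the two together, here is a sketch of the relevant flags; the kubeconfig path and signing duration are illustrative assumptions:
```shell
# On each node: let the kubelet rotate its client certificate
kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --rotate-certificates \
  --feature-gates=RotateKubeletClientCertificate=true

# On the control plane: issue certificates that are valid for 30 days
kube-controller-manager --experimental-cluster-signing-duration=720h
```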
## Understanding the certificate rotation configuration
When a kubelet starts up, if it is configured to bootstrap (using the `--bootstrap-kubeconfig` flag), it will use its initial certificate to connect to the Kubernetes API and issue a certificate signing request (CSR). You can view the status of certificate signing requests using the command:
```sh
kubectl get csr
```
Initially a certificate signing request from the kubelet on a node will have a status of `Pending`. If the certificate signing request meets specific criteria, it will be auto-approved by the controller manager, and then it will have a status of `Approved`. Next, the controller manager will sign a certificate, issued for the duration specified by the `--experimental-cluster-signing-duration` parameter, and the signed certificate will be attached to the certificate signing request.
The kubelet will retrieve the signed certificate from the Kubernetes API and write it to disk, in the location specified by `--cert-dir`. Then the kubelet will use the new certificate to connect to the Kubernetes API.
As the expiration of the signed certificate approaches, the kubelet will automatically issue a new certificate signing request, using the Kubernetes API. Again, the controller manager will automatically approve the certificate request and attach a signed certificate to the certificate signing request. The kubelet will retrieve the new signed certificate from the Kubernetes API and write it to disk. Then it will update the connections it has to the Kubernetes API to reconnect using the new certificate.

@ -0,0 +1,261 @@
---
title: "例: StatefulSetを使用したCassandraのデプロイ"
content_type: tutorial
weight: 30
---
<!-- overview -->
This tutorial shows you how to run [Apache Cassandra](http://cassandra.apache.org/) on Kubernetes. Cassandra, a database, needs persistent storage to provide data durability (application *state*). In this example, a custom Cassandra seed provider lets the database discover new Cassandra instances as they join the Cassandra cluster.
*StatefulSets* make it easier to deploy stateful applications into your Kubernetes cluster. For more information on the features used in this tutorial, see [StatefulSet](/ja/docs/concepts/workloads/controllers/statefulset/).
{{< note >}}
Cassandra and Kubernetes both use the term *node* to mean a member of a cluster. In this tutorial, the Pods that belong to the StatefulSet are Cassandra nodes and are members of the Cassandra cluster (called a *ring*). When those Pods run in your Kubernetes cluster, the Kubernetes control plane schedules those Pods onto Kubernetes {{< glossary_tooltip text="Nodes" term_id="node" >}}.
When a Cassandra node starts, it uses a *seed list* to bootstrap discovery of other nodes in the ring. This tutorial deploys a custom Cassandra seed provider that lets the database discover new Cassandra Pods as they appear inside your Kubernetes cluster.
{{< /note >}}
## {{% heading "objectives" %}}
* Create and validate a Cassandra headless {{< glossary_tooltip text="Service" term_id="service" >}}.
* Use a {{< glossary_tooltip term_id="StatefulSet" >}} to create a Cassandra ring.
* Validate the StatefulSet.
* Modify the StatefulSet.
* Delete the StatefulSet and its {{< glossary_tooltip text="Pods" term_id="pod" >}}.
## {{% heading "prerequisites" %}}
{{< include "task-tutorial-prereqs.md" >}}
To complete this tutorial, you should already have a basic familiarity with {{< glossary_tooltip text="Pods" term_id="pod" >}}, {{< glossary_tooltip text="Services" term_id="service" >}}, and {{< glossary_tooltip text="StatefulSets" term_id="StatefulSet" >}}.
### Additional Minikube setup instructions
{{< caution >}}
[Minikube](/ja/docs/getting-started-guides/minikube/) defaults to 1024MiB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings:
```shell
minikube start --memory 5120 --cpus=4
```
{{< /caution >}}
<!-- lessoncontent -->
## Creating a headless Service for Cassandra {#creating-a-cassandra-headless-service}
In Kubernetes, a {{< glossary_tooltip text="Service" term_id="service" >}} describes a set of {{< glossary_tooltip text="Pods" term_id="pod" >}} that perform the same task.
The following Service is used for DNS lookups between Cassandra Pods and clients within your cluster:
{{< codenew file="application/cassandra/cassandra-service.yaml" >}}
Create a Service to track all Cassandra StatefulSet members from the `cassandra-service.yaml` file:
```shell
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
```
### Validating (optional) {#validating}
Get the Cassandra Service:
```shell
kubectl get svc cassandra
```
The response is:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra ClusterIP None <none> 9042/TCP 45s
```
If you don't see a Service named `cassandra`, that means creation failed. Read [Debug Services](/ja/docs/tasks/debug-application-cluster/debug-service/) for help troubleshooting common issues.
## Using a StatefulSet to create a Cassandra ring
The StatefulSet manifest, included below, creates a Cassandra ring that consists of three Pods.
{{< note >}}
This example uses the default provisioner for Minikube. Please update the following StatefulSet for the cloud you are working with.
{{< /note >}}
{{< codenew file="application/cassandra/cassandra-statefulset.yaml" >}}
Create the Cassandra StatefulSet from the `cassandra-statefulset.yaml` file:
```shell
# Use this if you are able to apply cassandra-statefulset.yaml unmodified
kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
```
If you need to modify `cassandra-statefulset.yaml` to suit your cluster, download https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml and then apply that manifest from the folder where you saved the modified version:
```shell
# Use this if you needed to modify cassandra-statefulset.yaml locally
kubectl apply -f cassandra-statefulset.yaml
```
## Validating the Cassandra StatefulSet
1. Get the Cassandra StatefulSet:
```shell
kubectl get statefulset cassandra
```
The response should be similar to:
```
NAME DESIRED CURRENT AGE
cassandra 3 0 13s
```
The `StatefulSet` resource deploys Pods sequentially.
1. Get the Pods to see the ordered creation status:
```shell
kubectl get pods -l="app=cassandra"
```
The response should be similar to:
```shell
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 1m
cassandra-1 0/1 ContainerCreating 0 8s
```
It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns output similar to:
```
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 10m
cassandra-1 1/1 Running 0 9m
cassandra-2 1/1 Running 0 8m
```
3. Run the Cassandra [nodetool](https://cwiki.apache.org/confluence/display/CASSANDRA2/NodeTool) inside the first Pod, to display the status of the ring:
```shell
kubectl exec -it cassandra-0 -- nodetool status
```
The response should look something like:
```
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo
UN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo
UN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo
```
## Modifying the Cassandra StatefulSet
Use `kubectl edit` to modify the size of the Cassandra StatefulSet.
1. Run the following command:
```shell
kubectl edit statefulset cassandra
```
This command opens an editor in your terminal. The line you need to change is the `replicas` field. The following sample is an excerpt of the StatefulSet file:
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: 2016-08-13T18:40:58Z
generation: 1
labels:
app: cassandra
name: cassandra
namespace: default
resourceVersion: "323"
uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
replicas: 3
```
1. Change the number of replicas to 4, and then save the manifest.
The StatefulSet now scales to run with 4 Pods.
1. Get the Cassandra StatefulSet to verify your change:
```shell
kubectl get statefulset cassandra
```
The response should be similar to:
```
NAME DESIRED CURRENT AGE
cassandra 4 4 36m
```
## {{% heading "cleanup" %}}
Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet. This setting is for your safety because your data is more valuable than automatically purging all related StatefulSet resources.
{{< warning >}}
Depending on the storage class and reclaim policy, deleting the *PersistentVolumeClaims* may cause the associated volumes to also be deleted. Never assume you'll be able to access data if its volume claims are deleted.
{{< /warning >}}
1. Run the following commands (chained together into a single command) to delete everything in the Cassandra StatefulSet:
```shell
grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
&& kubectl delete statefulset -l app=cassandra \
&& echo "Sleeping ${grace} seconds" 1>&2 \
&& sleep $grace \
&& kubectl delete persistentvolumeclaim -l app=cassandra
```
1. Run the following command to delete the Service you set up for Cassandra:
```shell
kubectl delete service -l app=cassandra
```
## Cassandra container environment variables
The Pods in this tutorial use the [`gcr.io/google-samples/cassandra:v13`](https://github.com/kubernetes/examples/blob/master/cassandra/image/Dockerfile) image from Google's [container registry](https://cloud.google.com/container-registry/docs/). The Docker image above is based on [debian-base](https://github.com/kubernetes/kubernetes/tree/master/build/debian-base) and includes OpenJDK 8.
This image includes a standard Cassandra installation from the Apache Debian repo. By using environment variables you can change values that are inserted into `cassandra.yaml`:

| Environment variable     | Default value    |
| ------------------------ |:----------------:|
| `CASSANDRA_CLUSTER_NAME` | `'Test Cluster'` |
| `CASSANDRA_NUM_TOKENS`   | `32`             |
| `CASSANDRA_RPC_ADDRESS`  | `0.0.0.0`        |
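For example, here is a sketch of overriding one of these defaults in the StatefulSet's container spec (the cluster name value is an illustrative assumption):
```yaml
env:
  # Overrides the 'Test Cluster' default baked into the image
  - name: CASSANDRA_CLUSTER_NAME
    value: "K8Demo"
```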
## {{% heading "whatsnext" %}}
* Learn how to [Scale a StatefulSet](/ja/docs/tasks/run-application/scale-stateful-set/).
* Learn more about the [*KubernetesSeedProvider*](https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java).
* See more custom [Seed Provider Configurations](https://git.k8s.io/examples/cassandra/java/README.md).

@ -41,7 +41,6 @@ that allows Google to run billions of containers a week
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<br>
<a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?utm_source=kubernetes.io&utm_medium=nav&utm_campaign=kccnceu20" button id="desktopKCButton">Attend KubeCon in Amsterdam on August 13-16, 2020</a>
<br>
<br>

@ -67,7 +67,6 @@ Kubernetes is open source, giving you the freedom to deploy it on-premises, in a private cloud,
<button id="desktopShowVideoButton" onclick="kub.showVideo()">Watch Video</button>
<br>
<br>
<br>
<!-- <a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">Attend KubeCon in Shanghai on Nov. 13-15, 2018</a> -->
<a href="https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2018/" button id="desktopKCButton">参加11月13日到15日的上海 KubeCon</a>
<br>

@ -0,0 +1,696 @@
---
title: " Scaling Stateful Applications using Kubernetes Pet Sets and FlexVolumes with Datera Elastic Data Fabric "
date: 2016-08-29
slug: stateful-applications-using-kubernetes-datera
url: /zh/blog/2016/08/Stateful-Applications-Using-Kubernetes-Datera
---
_Editor's note: today's guest post is by Shailesh Mittal, Software Architect, and Ashok Rajagopalan, Sr Director Product at Datera Inc, talking about Stateful Application provisioning with Kubernetes on Datera Elastic Data Fabric._
**Introduction**
Persistent volumes in Kubernetes are foundational as customers move beyond stateless workloads to run stateful applications. While Kubernetes has supported stateful applications such as MySQL, Kafka, Cassandra, and Couchbase for a while, the introduction of Pet Sets has significantly improved this support. In particular, the procedure to sequence the provisioning and startup, and the ability to scale and associate durably by [Pet Sets](/docs/user-guide/petset/), have provided the ability to automate the scaling of "Pets" (applications that require consistent handling and durable placement).
Datera, elastic block storage for cloud deployments, has [seamlessly integrated with Kubernetes](http://datera.io/blog-library/8/19/datera-simplifies-stateful-containers-on-kubernetes-13) through the [FlexVolume](/docs/user-guide/volumes/#flexvolume) framework. Based on the first principles of containers, Datera allows application resource provisioning to be decoupled from the underlying physical infrastructure. This brings clean contracts (aka, no dependency on or direct knowledge of the underlying physical infrastructure), declarative formats, and eventually portability to stateful applications.
While Kubernetes allows for great flexibility to define the underlying application infrastructure through yaml configurations, Datera allows for that configuration to be passed to the storage infrastructure to provide persistence. Through the notion of Datera AppTemplates, in a Kubernetes environment, stateful applications can be automated to scale.
**Deploying Persistent Storage**
Persistent storage is defined using the Kubernetes [PersistentVolume](/docs/user-guide/persistent-volumes/#persistent-volumes) subsystem. PersistentVolumes are volume plugins and define volumes that live independently of the lifecycle of the pod that is using them. They are implemented as NFS, iSCSI, or by cloud provider specific storage systems. Datera has developed a volume plugin for PersistentVolumes that can provision iSCSI block storage on the Datera Data Fabric for Kubernetes pods.
The Datera volume plugin gets invoked by kubelets on minion nodes and relays the calls to the Datera Data Fabric over its REST API. Below is a sample deployment of a PersistentVolume with the Datera plugin:
```
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-datera-0
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
flexVolume:
driver: "datera/iscsi"
fsType: "xfs"
options:
volumeID: "kube-pv-datera-0"
size: "100"
replica: "3"
backstoreServer: "tlx170.tlx.daterainc.com:7717"
```
This manifest defines a PersistentVolume of 100 GB to be provisioned in the Datera Data Fabric, should a pod request the persistent storage.
```
[root@tlx241 /]# kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv-datera-0 100Gi RWO Available 8s
pv-datera-1 100Gi RWO Available 2s
pv-datera-2 100Gi RWO Available 7s
pv-datera-3 100Gi RWO Available 4s
```
**Configuration**
The Datera PersistentVolume plugin is installed on all minion nodes. When a pod lands on a minion node with a valid claim bound to the persistent storage provisioned earlier, the Datera plugin forwards the request to create the volume on the Datera Data Fabric. All the options that are specified in the PersistentVolume manifest are sent to the plugin upon the provisioning request.
Once a volume is provisioned in the Datera Data Fabric, volumes are presented as an iSCSI block device to the minion node, and kubelet mounts this device for the containers (in the pod) to access it.
![](https://lh4.googleusercontent.com/ILlUm1HrWhGa8uTt97dQ786Gn20FHFZkavfucz05NHv6moZWiGDG7GlELM6o4CSzANWvZckoAVug5o4jMg17a-PbrfD1FRbDPeUCIc8fKVmVBNUsUPshWanXYkBa3gIJy5BnhLmZ)
**Using Persistent Storage**
Kubernetes PersistentVolumes are used along with a pod using PersistentVolume Claims. Once a claim is defined, it is bound to a PersistentVolume matching the claim's specification. A typical claim for the PersistentVolume defined above would look like below:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pv-claim-test-petset-0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
```
When this claim is defined and it is bound to a PersistentVolume, resources can be used with the pod specification:
```
[root@tlx241 /]# kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv-datera-0 100Gi RWO Bound default/pv-claim-test-petset-0 6m
pv-datera-1 100Gi RWO Bound default/pv-claim-test-petset-1 6m
pv-datera-2 100Gi RWO Available 7s
pv-datera-3 100Gi RWO Available 4s
[root@tlx241 /]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pv-claim-test-petset-0 Bound pv-datera-0 0 3m
pv-claim-test-petset-1 Bound pv-datera-1 0 3m
```
A pod can use a PersistentVolume Claim like below:
```
apiVersion: v1
kind: Pod
metadata:
name: kube-pv-demo
spec:
containers:
- name: data-pv-demo
image: nginx
volumeMounts:
- name: test-kube-pv1
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: test-kube-pv1
persistentVolumeClaim:
claimName: pv-claim-test-petset-0
```
The result is a pod using a PersistentVolume Claim as a volume. It in turn sends the request to the Datera volume plugin to provision storage in the Datera Data Fabric.
```
[root@tlx241 /]# kubectl describe pods kube-pv-demo
Name: kube-pv-demo
Namespace: default
Node: tlx243/172.19.1.243
Start Time: Sun, 14 Aug 2016 19:17:31 -0700
Labels: <none>
Status: Running
IP: 10.40.0.3
Controllers: <none>
Containers:
data-pv-demo:
Container ID: docker://ae2a50c25e03143d0dd721cafdcc6543fac85a301531110e938a8e0433f74447
Image: nginx
Image ID: docker://sha256:0d409d33b27e47423b049f7f863faa08655a8c901749c2b25b93ca67d01a470d
Port: 80/TCP
State: Running
Started: Sun, 14 Aug 2016 19:17:34 -0700
Ready: True
Restart Count: 0
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
test-kube-pv1:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pv-claim-test-petset-0
ReadOnly: false
default-token-q3eva:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q3eva
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
43s 43s 1 {default-scheduler } Normal Scheduled Successfully assigned kube-pv-demo to tlx243
42s 42s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Pulling pulling image "nginx"
40s 40s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Pulled Successfully pulled image "nginx"
40s 40s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Created Created container with docker id ae2a50c25e03
40s 40s 1 {kubelet tlx243} spec.containers{data-pv-demo} Normal Started Started container with docker id ae2a50c25e03
```
The persistent volume is presented as an iSCSI device at the minion node (tlx243 in this case):
```
[root@tlx243 ~]# lsscsi
[0:2:0:0] disk SMC SMC2208 3.24 /dev/sda
[11:0:0:0] disk DATERA IBLOCK 4.0 /dev/sdb
[root@tlx243 datera~iscsi]# mount | grep sdb
/dev/sdb on /var/lib/kubelet/pods/6b99bd2a-628e-11e6-8463-0cc47ab41442/volumes/datera~iscsi/pv-datera-0 type xfs (rw,relatime,attr2,inode64,noquota)
```
Containers running in the pod see this device mounted at /data as specified in the manifest:
```
[root@tlx241 /]# kubectl exec kube-pv-demo -c data-pv-demo -it bash
root@kube-pv-demo:/# mount | grep data
/dev/sdb on /data type xfs (rw,relatime,attr2,inode64,noquota)
```
**Using Pet Sets**
Typically, pods are treated as stateless units, so if one of them is unhealthy or gets superseded, Kubernetes just disposes of it. In contrast, a PetSet is a group of stateful pods that have a stronger notion of identity. The goal of a PetSet is to decouple this dependency by assigning identities to individual instances of an application that are not anchored to the underlying physical infrastructure.
A PetSet requires {0..n-1} Pets. Each Pet has a deterministic name, PetSetName-Ordinal, and a unique identity. Each Pet has at most one pod, and each PetSet has at most one Pet with a given identity. A PetSet ensures that a specified number of "pets" with unique identities are running at any given time. The identity of a Pet is comprised of:
- a stable hostname, available in DNS
- an ordinal index
- stable storage: linked to the ordinal & hostname
A typical PetSet definition using a PersistentVolume Claim looks like below:
```
# A headless service to create DNS records
apiVersion: v1
kind: Service
metadata:
name: test-service
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
name: test-petset
spec:
serviceName: "test-service"
replicas: 2
template:
metadata:
labels:
app: nginx
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
terminationGracePeriodSeconds: 0
containers:
- name: nginx
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: pv-claim
mountPath: /data
volumeClaimTemplates:
- metadata:
name: pv-claim
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 100Gi
```
We have the following PersistentVolume Claims available:
```
[root@tlx241 /]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pv-claim-test-petset-0 Bound pv-datera-0 0 41m
pv-claim-test-petset-1 Bound pv-datera-1 0 41m
pv-claim-test-petset-2 Bound pv-datera-2 0 5s
pv-claim-test-petset-3 Bound pv-datera-3 0 2s
```
When this PetSet is provisioned, two pods get instantiated:
```
[root@tlx241 /]# kubectl get pods
NAMESPACE NAME READY STATUS RESTARTS AGE
default test-petset-0 1/1 Running 0 7s
default test-petset-1 1/1 Running 0 3s
```
Here is how the PetSet test-petset instantiated earlier looks:
```
[root@tlx241 /]# kubectl describe petset test-petset
Name: test-petset
Namespace: default
Image(s): gcr.io/google_containers/nginx-slim:0.8
Selector: app=nginx
Labels: app=nginx
Replicas: 2 current / 2 desired
Annotations: <none>
CreationTimestamp: Sun, 14 Aug 2016 19:46:30 -0700
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
No events.
```
Once a PetSet is instantiated, such as test-petset below, upon increasing the number of replicas (i.e. the number of pods started with that PetSet), more pods get instantiated and more PersistentVolume Claims get bound to the new pods:
```
[root@tlx241 /]# kubectl patch petset test-petset -p'{"spec":{"replicas":"3"}}'
"test-petset” patched
[root@tlx241 /]# kubectl describe petset test-petset
Name: test-petset
Namespace: default
Image(s): gcr.io/google_containers/nginx-slim:0.8
Selector: app=nginx
Labels: app=nginx
Replicas: 3 current / 3 desired
Annotations: <none>
CreationTimestamp: Sun, 14 Aug 2016 19:46:30 -0700
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
No events.
[root@tlx241 /]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test-petset-0 1/1 Running 0 29m
test-petset-1 1/1 Running 0 28m
test-petset-2 1/1 Running 0 9s
```
Now the PetSet is running 3 pods after the patch is applied.
When the above PetSet definition is patched to have one more replica, it introduces one more pod in the system. This in turn results in one more volume getting provisioned on the Datera Data Fabric. So volumes get dynamically provisioned and attached to a pod upon the PetSet scaling up.
To support the notions of durability and consistency, if a pod moves from one minion to another, volumes do get attached (mounted) to the new minion node and detached (unmounted) from the old minion to maintain persistent access to the data.
**Conclusion**
This demonstrates Kubernetes with Pet Sets orchestrating stateful and stateless workloads. While the Kubernetes community is working on expanding the FlexVolume framework's capabilities, we are excited that this solution makes it possible for Kubernetes to be run more widely in the datacenters.
Join and contribute: Kubernetes [Storage SIG](https://groups.google.com/forum/#!forum/kubernetes-sig-storage).
- [Download Kubernetes](http://get.k8s.io/)
- Get involved with the Kubernetes project on [GitHub](https://github.com/kubernetes/kubernetes)
- Post questions (or answer questions) on [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
- Connect with the community on the [k8s Slack](http://slack.k8s.io/)
- Follow us on Twitter [@Kubernetesio](https://twitter.com/kubernetesio) for the latest updates

@ -0,0 +1,290 @@
---
layout: blog
title: "OPA Gatekeeper: Policy and Governance for Kubernetes"
date: 2019-08-06
slug: OPA-Gatekeeper-Policy-and-Governance-for-Kubernetes
---
**Authors:** Rita Zhang (Microsoft), Max Smythe (Google), Craig Hooper (Commonwealth Bank AU), Tim Hinrichs (Styra), Lachie Evenson (Microsoft), Torin Sandall (Styra)
The [Open Policy Agent Gatekeeper](https://github.com/open-policy-agent/gatekeeper) project can be leveraged to help enforce policies and strengthen governance in your Kubernetes environment. In this post, we will walk through the goals, history, and current state of the project.
The following recordings from the Kubecon EU 2019 sessions are a great starting place in working with Gatekeeper:
* [Intro: Open Policy Agent Gatekeeper](https://youtu.be/Yup1FUc2Qn0)
* [Deep Dive: Open Policy Agent](https://youtu.be/n94_FNhuzy4)
## Motivations
If your organization has been operating Kubernetes, you probably have been looking for ways to control what end-users can do on the cluster and ways to ensure that clusters are in compliance with company policies. These policies may be there to meet governance and legal requirements or to enforce best practices and organizational conventions. With Kubernetes, how do you ensure compliance without sacrificing development agility and operational independence?
For example, you can enforce policies like:
* All images must be from approved repositories
* All ingress hostnames must be globally unique
* All pods must have resource limits
* All namespaces must have a label that lists a point-of-contact (a constraint sketch for this one follows below)
Kubernetes allows decoupling policy decisions from the API server by means of [admission controller webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), which intercept admission requests before they are persisted as objects in Kubernetes. [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) was created to enable users to customize admission control via configuration, not code, and to bring awareness of the cluster's state, not just the single object under evaluation at admission time. Gatekeeper is a customizable admission webhook for Kubernetes that enforces policies executed by the [Open Policy Agent (OPA)](https://www.openpolicyagent.org), a policy engine for Cloud Native environments hosted by CNCF.
## Evolution

Before we dive into the current state of Gatekeeper, let's take a look at how the Gatekeeper project has evolved.
* Gatekeeper v1.0 - Uses OPA as the admission controller with the kube-mgmt sidecar enforcing configmap-based policies. It provides validating and mutating admission control. Donated by Styra.
* Gatekeeper v2.0 - Uses Kubernetes policy controller as the admission controller, with OPA and kube-mgmt sidecars enforcing configmap-based policies. It provides validating and mutating admission control and audit functionality. Donated by Microsoft.
* Gatekeeper v3.0 - The admission controller is integrated with the [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) to enforce CRD-based policies and allow declaratively configured policies to be reliably shareable. Built with kubebuilder, it provides validating and, eventually, mutating (to be implemented) admission control and audit functionality. This enables the creation of policy templates for [Rego](https://www.openpolicyagent.org/docs/latest/how-do-i-write-policies/) policies, creation of policies as CRDs, and storage of audit results on policy CRDs. This project is a collaboration between Google, Microsoft, Red Hat, and Styra.
![](/images/blog/2019-08-06-opa-gatekeeper/v3.png)
## Gatekeeper v3.0 Features

Now let's take a closer look at the current state of Gatekeeper and how you can leverage all the latest features. Consider an organization that wants to ensure all objects in a cluster have departmental information provided as part of the object's labels. How can you do this with Gatekeeper?
### Validating Admission Control

Once all the Gatekeeper components have been [installed](https://github.com/open-policy-agent/gatekeeper) in your cluster, the API server will trigger the Gatekeeper admission webhook to process the admission request whenever a resource in the cluster is created, updated, or deleted.

During the validation process, Gatekeeper acts as a bridge between the API server and OPA. The API server will enforce all policies executed by OPA.
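For reference, admission webhooks like Gatekeeper's are registered with the API server through a `ValidatingWebhookConfiguration`. Below is a simplified sketch of what such a registration looks like; the object name, service name, and path are illustrative assumptions, and Gatekeeper's own installation manifests define the real configuration:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: gatekeeper-validating-webhook     # illustrative name
webhooks:
  - name: validation.gatekeeper.sh        # illustrative webhook name
    clientConfig:
      service:
        name: gatekeeper-webhook-service  # assumed Service fronting the Gatekeeper pods
        namespace: gatekeeper-system
        path: /v1/admit                   # assumed admission path
    rules:                                # which requests the API server sends to the webhook
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE"]
        resources: ["*"]
    failurePolicy: Ignore                 # fail open; the real installation may differ
```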
### Policies and Constraints

With the integration of the OPA Constraint Framework, a Constraint is a declaration that its author wants a system to meet a given set of requirements. Each Constraint is written with Rego, a declarative query language used by OPA to enumerate instances of data that violate the expected state of the system. All Constraints are evaluated as a logical AND. If one Constraint is not satisfied, then the whole request is rejected.
Before defining a Constraint, you need to create a Constraint Template that allows people to declare new Constraints. Each template describes both the Rego logic that enforces the Constraint and the schema for the Constraint, which includes the schema of the CRD and the parameters that can be passed into a Constraint, much like arguments to a function.

For example, here is a Constraint template CRD that requires certain labels to be present on an arbitrary object.
```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        deny[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```
Once a Constraint template has been deployed in the cluster, an admin can create individual Constraint CRDs as defined by the Constraint template. For example, here is a Constraint CRD that requires the label `hr` to be present on all namespaces.
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-hr
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["hr"]
```
Similarly, another Constraint CRD that requires the label `finance` to be present on all namespaces can easily be created from the same Constraint template.
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-finance
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["finance"]
```
As you can see, with the Constraint framework, we can reliably share Regos via the Constraint templates, define the scope of enforcement with the `match` field, and provide user-defined parameters to the Constraints to create customized behavior for each Constraint.
### Audit

The audit functionality enables periodic evaluations of replicated resources against the Constraints enforced in the cluster to detect pre-existing misconfigurations. Gatekeeper stores audit results as `violations` listed in the `status` field of the relevant Constraint.
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-hr
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["hr"]
status:
  auditTimestamp: "2019-08-06T01:46:13Z"
  byPod:
    - enforced: true
      id: gatekeeper-controller-manager-0
  violations:
    - enforcementAction: deny
      kind: Namespace
      message: 'you must provide labels: {"hr"}'
      name: default
    - enforcementAction: deny
      kind: Namespace
      message: 'you must provide labels: {"hr"}'
      name: gatekeeper-system
    - enforcementAction: deny
      kind: Namespace
      message: 'you must provide labels: {"hr"}'
      name: kube-public
    - enforcementAction: deny
      kind: Namespace
      message: 'you must provide labels: {"hr"}'
      name: kube-system
```
### Data Replication

Audit requires replication of Kubernetes resources into OPA before they can be evaluated against the enforced Constraints. Data replication is also required by Constraints that need access to objects in the cluster other than the object under evaluation. For example, a Constraint that enforces uniqueness of ingress hostnames must have access to all other ingresses in the cluster.
To configure Kubernetes data to be replicated, create a sync config resource listing the resources to be replicated into OPA. For example, the configuration below replicates all namespace and pod resources to OPA.
```yaml
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: ""
        version: "v1"
        kind: "Pod"
```
## Planned for Future

The community behind the Gatekeeper project will be focusing on providing mutating admission control to support mutation scenarios (for example, automatically annotating objects with departmental information when a new resource is created), supporting external data to inject cluster-external context into admission decisions, supporting dry runs to show the impact of a policy on existing resources before it is enforced, and adding more audit functionality.
If you are interested in learning more about the project, check out the [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) repo. If you are interested in helping define the direction of Gatekeeper, join the [#kubernetes-policy](https://openpolicyagent.slack.com/messages/CDTN970AX) channel on OPA Slack, and join our [weekly meetings](https://docs.google.com/document/d/1A1-Q-1OMw3QODs1wT6eqfLTagcGmgzAJAjJihiO3T48/edit) to discuss development, issues, use cases, etc.

@@ -19,136 +19,3 @@ The Concepts section helps you learn about the parts of the Kubernetes system an
-->
The Concepts section helps you learn about the parts of the Kubernetes system and the abstractions Kubernetes uses to represent your cluster, and helps you obtain a deeper understanding of how Kubernetes works.
<!-- body -->
## Overview
To work with Kubernetes, you use *Kubernetes API objects* to describe your cluster's *desired state*: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, `kubectl`. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.
Once you've set your desired state, the *Kubernetes Control Plane* makes the cluster's current state match the desired state via the Pod Lifecycle Event Generator ([PLEG](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/pod-lifecycle-event-generator.md)). To do so, Kubernetes performs a variety of tasks automatically, such as starting or restarting containers, scaling the number of replicas of a given application, and more. The Kubernetes Control Plane consists of a collection of processes running on your cluster:
* The **Kubernetes Master** is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) and [kube-scheduler](/docs/admin/kube-scheduler/).
* Each individual non-master node in your cluster runs two processes:
  * **[kubelet](/docs/admin/kubelet/)**, which communicates with the Kubernetes Master.
  * **[kube-proxy](/docs/admin/kube-proxy/)**, a network proxy which reflects Kubernetes networking services on each node.
## Kubernetes Objects
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are represented by objects in the Kubernetes API. See [Understanding Kubernetes Objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) for more details.
The basic Kubernetes objects include:
* [Pod](/docs/concepts/workloads/pods/pod-overview/)
* [Service](/docs/concepts/services-networking/service/)
* [Volume](/docs/concepts/storage/volumes/)
* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
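
For example, a minimal Pod manifest might look like the following sketch; the object name, container name, and image are placeholders for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # placeholder name
spec:
  containers:
    - name: app            # placeholder container name
      image: nginx:1.19    # placeholder image
```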
Kubernetes also contains higher-level abstractions that rely on [Controllers](/docs/concepts/architecture/controller/) to build upon the basic objects, and provide additional functionality and convenience features. These include:
* [Deployment](/docs/concepts/workloads/controllers/deployment/)
* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
## Kubernetes Control Plane
The various parts of the Kubernetes Control Plane, such as the Kubernetes Master and kubelet processes, govern how Kubernetes communicates with your cluster. The Control Plane maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage those objects' state. At any given time, the Control Plane's control loops will respond to changes in the cluster and work to make the actual state of all the objects in the system match the desired state that you provided.
For example, when you use the Kubernetes API to create a Deployment, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes, thus making the cluster's actual state match the desired state.
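As a sketch of what that desired state can look like (the name, labels, and image below are placeholders), a Deployment manifest declares a replica count and a container image, and the control plane then works to keep that many Pods running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # placeholder name
spec:
  replicas: 3                # desired state: three replicas of the application
  selector:
    matchLabels:
      app: example           # placeholder label
  template:
    metadata:
      labels:
        app: example         # must match the selector above
    spec:
      containers:
        - name: app          # placeholder container name
          image: nginx:1.19  # placeholder image
```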
### Kubernetes Master
The Kubernetes master is responsible for maintaining the desired state for your cluster. When you interact with Kubernetes, such as by using the `kubectl` command-line interface, you're communicating with your cluster's Kubernetes master.
> The "master" refers to a collection of processes managing the cluster state. Typically all these processes run on a single node in the cluster, and this node is also referred to as the master. The master can also be replicated for availability and redundancy.
### Kubernetes Nodes
The nodes in a cluster are the machines (VMs, physical servers, etc.) that run your applications and cloud workflows. The Kubernetes master controls each node; you'll rarely interact with nodes directly.
## {{% heading "whatsnext" %}}
If you would like to write a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/) for information about the concept page type and the concept template.

@@ -19,7 +19,6 @@ A ConfigMap does not provide secrecy or encryption. If the data you want to store
{{< /caution >}}
<!-- body -->
<!--
## Motivation
@@ -59,9 +58,11 @@ The name of a ConfigMap must be a valid
-->
## ConfigMap object

A ConfigMap is an API [object](/zh/docs/concepts/overview/working-with-objects/kubernetes-objects/)
that lets you store configuration for other objects to use.
Unlike most Kubernetes objects that have a `spec`, a ConfigMap uses a `data` block to store items (keys) and their values.

The name of a ConfigMap must be a valid
[DNS subdomain name](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
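
For example, here is a minimal sketch of a ConfigMap showing the `data` block in place of a `spec`; the name, keys, and values are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config    # must be a valid DNS subdomain name; placeholder
data:
  player_lives: "3"    # placeholder keys and values
  ui_theme: "dark"
```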
<!--
## ConfigMaps and Pods
@@ -216,15 +217,14 @@ The most common use of a ConfigMap is to configure containers running in a Pod in the same namespace
## {{% heading "whatsnext" %}}
* Read about [Secrets](/zh/docs/concepts/configuration/secret/).
* Read [Configure a Pod to Use a ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/).
* Read [The Twelve-Factor App](https://12factor.net/) to understand the motivation for
  separating code from configuration.

Some files were not shown because too many files have changed in this diff.